[jira] [Commented] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801506#comment-13801506
 ] 

Hive QA commented on HIVE-4974:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12609548/HIVE-4974-trunk.patch.txt

{color:green}SUCCESS:{color} +1 4429 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1191/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1191/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974-trunk.patch.txt
>
>
> The getConnection methods of HiveStatement and HivePreparedStatement throw a 
> "method not supported" SQLException. The constructors should instead take the 
> HiveConnection that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.
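A minimal sketch of the pattern the description asks for (stand-in classes, not Hive's real JDBC driver code): the parent hands itself to the child's constructor, and the child's getConnection() returns that reference instead of throwing.

```java
// Stand-in sketch only -- these are NOT Hive's real classes. It illustrates
// the proposed fix: a statement remembers the connection that created it,
// so getConnection() can return the parent rather than throwing a
// "method not supported" SQLException.
class SketchConnection {
    SketchStatement createStatement() {
        return new SketchStatement(this);  // hand the child its parent
    }
}

class SketchStatement {
    private final SketchConnection parent;

    SketchStatement(SketchConnection parent) {
        this.parent = parent;
    }

    SketchConnection getConnection() {
        return parent;  // previously the equivalent method just threw
    }
}

public class Main {
    public static void main(String[] args) {
        SketchConnection conn = new SketchConnection();
        SketchStatement stmt = conn.createStatement();
        System.out.println(stmt.getConnection() == conn);  // prints: true
    }
}
```

The same idea extends to result sets: HiveBaseResultSet would keep a reference to the Statement that created it and return it from getStatement().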



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5600) Fix PTest2 Maven support

2013-10-21 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801487#comment-13801487
 ] 

Edward Capriolo commented on HIVE-5600:
---

+1

> Fix PTest2 Maven support
> 
>
> Key: HIVE-5600
> URL: https://issues.apache.org/jira/browse/HIVE-5600
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5600.patch
>
>
> At present we don't download all the dependencies required in the source prep 
> phase; therefore, tests fail when the Maven repo has been cleared.





[jira] [Commented] (HIVE-5606) Default Derby metastore_db initial creation fails if hive.metastore.schema.verification=true

2013-10-21 Thread Prasad Mujumdar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801485#comment-13801485
 ] 

Prasad Mujumdar commented on HIVE-5606:
---

[~brett_s_r] {{hive.metastore.schema.verification}} is false by default. If you 
set it to true, then you need to create the schema using the metastore schema 
script or the [schema 
tool|https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool]. The 
very purpose of the schema consistency check is to avoid accidental corruption 
of the metastore schema.


> Default Derby metastore_db initial creation fails if 
> hive.metastore.schema.verification=true
> 
>
> Key: HIVE-5606
> URL: https://issues.apache.org/jira/browse/HIVE-5606
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Affects Versions: 0.12.0
> Environment: JDK 1.6.0_43, Hadoop 1.2.1
>Reporter: Brett Randall
>
> Hive cannot create the default/initial Derby metastore_db if the new 0.12 
> configuration property {{hive.metastore.schema.verification}} is set to 
> {{true}}.
> # Start with a clean 0.12 installation, or remove any existing (Derby) 
> default metastore_db directory
> # In {{hive-site.xml}}, set {{hive.metastore.schema.verification=true}}
> # Start hive CLI
> # Run {{hive> create database if not exists mydb;}}
> The following exception occurs:
> {noformat}
> 2013-10-22 15:02:59,390 WARN  bonecp.BoneCPConfig 
> (BoneCPConfig.java:sanitize(1537)) - Max Connections < 1. Setting to 20
> 2013-10-22 15:03:01,899 ERROR exec.DDLTask (DDLTask.java:execute(435)) - 
> org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: 
> Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:231)
> at 
> org.apache.hadoop.hive.ql.exec.DDLTask.createDatabase(DDLTask.java:3442)
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:227)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
> Caused by: java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1212)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:62)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2372)
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2383)
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:225)
> ... 19 more
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1210)
> ... 24 more
> Caused by: MetaException(message:Version information not found in metastore. )
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(O

[jira] [Commented] (HIVE-5606) Default Derby metastore_db initial creation fails if hive.metastore.schema.verification=true

2013-10-21 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801473#comment-13801473
 ] 

Brock Noland commented on HIVE-5606:


cc [~prasadm]

> Default Derby metastore_db initial creation fails if 
> hive.metastore.schema.verification=true
> 
>
> Key: HIVE-5606
> URL: https://issues.apache.org/jira/browse/HIVE-5606
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Affects Versions: 0.12.0
> Environment: JDK 1.6.0_43, Hadoop 1.2.1
>Reporter: Brett Randall
>
> Hive cannot create the default/initial Derby metastore_db if the new 0.12 
> configuration property {{hive.metastore.schema.verification}} is set to 
> {{true}}.
> # Start with a clean 0.12 installation, or remove any existing (Derby) 
> default metastore_db directory
> # In {{hive-site.xml}}, set {{hive.metastore.schema.verification=true}}
> # Start hive CLI
> # Run {{hive> create database if not exists mydb;}}
> The following exception occurs:
> {noformat}
> 2013-10-22 15:02:59,390 WARN  bonecp.BoneCPConfig 
> (BoneCPConfig.java:sanitize(1537)) - Max Connections < 1. Setting to 20
> 2013-10-22 15:03:01,899 ERROR exec.DDLTask (DDLTask.java:execute(435)) - 
> org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: 
> Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:231)
> at 
> org.apache.hadoop.hive.ql.exec.DDLTask.createDatabase(DDLTask.java:3442)
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:227)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
> Caused by: java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1212)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:62)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2372)
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2383)
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:225)
> ... 19 more
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1210)
> ... 24 more
> Caused by: MetaException(message:Version information not found in metastore. )
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:5638)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:5622)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(Deleg

[jira] [Commented] (HIVE-5600) Fix PTest2 Maven support

2013-10-21 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801472#comment-13801472
 ] 

Brock Noland commented on HIVE-5600:


That is a flaky test. [~appodictic], any chance I can bother you for a review?

We want to commit this to trunk.

> Fix PTest2 Maven support
> 
>
> Key: HIVE-5600
> URL: https://issues.apache.org/jira/browse/HIVE-5600
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5600.patch
>
>
> At present we don't download all the dependencies required in the source prep 
> phase; therefore, tests fail when the Maven repo has been cleared.





[jira] [Commented] (HIVE-5606) Default Derby metastore_db initial creation fails if hive.metastore.schema.verification=true

2013-10-21 Thread Brett Randall (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801456#comment-13801456
 ] 

Brett Randall commented on HIVE-5606:
-

Implementation of schema DB version check.

> Default Derby metastore_db initial creation fails if 
> hive.metastore.schema.verification=true
> 
>
> Key: HIVE-5606
> URL: https://issues.apache.org/jira/browse/HIVE-5606
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Affects Versions: 0.12.0
> Environment: JDK 1.6.0_43, Hadoop 1.2.1
>Reporter: Brett Randall
>
> Hive cannot create the default/initial Derby metastore_db if the new 0.12 
> configuration property {{hive.metastore.schema.verification}} is set to 
> {{true}}.
> # Start with a clean 0.12 installation, or remove any existing (Derby) 
> default metastore_db directory
> # In {{hive-site.xml}}, set {{hive.metastore.schema.verification=true}}
> # Start hive CLI
> # Run {{hive> create database if not exists mydb;}}
> The following exception occurs:
> {noformat}
> 2013-10-22 15:02:59,390 WARN  bonecp.BoneCPConfig 
> (BoneCPConfig.java:sanitize(1537)) - Max Connections < 1. Setting to 20
> 2013-10-22 15:03:01,899 ERROR exec.DDLTask (DDLTask.java:execute(435)) - 
> org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: 
> Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:231)
> at 
> org.apache.hadoop.hive.ql.exec.DDLTask.createDatabase(DDLTask.java:3442)
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:227)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
> Caused by: java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1212)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:62)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2372)
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2383)
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:225)
> ... 19 more
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1210)
> ... 24 more
> Caused by: MetaException(message:Version information not found in metastore. )
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:5638)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:5622)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.Delegating

[jira] [Created] (HIVE-5606) Default Derby metastore_db initial creation fails if hive.metastore.schema.verification=true

2013-10-21 Thread Brett Randall (JIRA)
Brett Randall created HIVE-5606:
---

 Summary: Default Derby metastore_db initial creation fails if 
hive.metastore.schema.verification=true
 Key: HIVE-5606
 URL: https://issues.apache.org/jira/browse/HIVE-5606
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema
Affects Versions: 0.12.0
 Environment: JDK 1.6.0_43, Hadoop 1.2.1
Reporter: Brett Randall


Hive cannot create the default/initial Derby metastore_db if the new 0.12 
configuration property {{hive.metastore.schema.verification}} is set to 
{{true}}.

# Start with a clean 0.12 installation, or remove any existing (Derby) default 
metastore_db directory
# In {{hive-site.xml}}, set {{hive.metastore.schema.verification=true}}
# Start hive CLI
# Run {{hive> create database if not exists mydb;}}

The following exception occurs:

{noformat}
2013-10-22 15:02:59,390 WARN  bonecp.BoneCPConfig 
(BoneCPConfig.java:sanitize(1537)) - Max Connections < 1. Setting to 20
2013-10-22 15:03:01,899 ERROR exec.DDLTask (DDLTask.java:execute(435)) - 
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: 
Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:231)
at 
org.apache.hadoop.hive.ql.exec.DDLTask.createDatabase(DDLTask.java:3442)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:227)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.lang.RuntimeException: Unable to instantiate 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1212)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:62)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
at 
org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2372)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2383)
at org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:225)
... 19 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1210)
... 24 more
Caused by: MetaException(message:Version information not found in metastore. )
at 
org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:5638)
at 
org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:5622)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:124)
at com.sun.proxy.$Proxy10.verifySchema(Unknown Source)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:403)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(

[jira] [Commented] (HIVE-5600) Fix PTest2 Maven support

2013-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801452#comment-13801452
 ] 

Hive QA commented on HIVE-5600:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12609448/HIVE-5600.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 4429 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1188/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1188/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

> Fix PTest2 Maven support
> 
>
> Key: HIVE-5600
> URL: https://issues.apache.org/jira/browse/HIVE-5600
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5600.patch
>
>
> At present we don't download all the dependencies required in the source prep 
> phase; therefore, tests fail when the Maven repo has been cleared.





[jira] [Created] (HIVE-5605) AddResourceOperation, DeleteResourceOperation, DfsOperation, SetOperation should be removed from org.apache.hive.service.cli.operation

2013-10-21 Thread Vaibhav Gumashta (JIRA)
Vaibhav Gumashta created HIVE-5605:
--

 Summary: AddResourceOperation, DeleteResourceOperation, 
DfsOperation, SetOperation should be removed from 
org.apache.hive.service.cli.operation 
 Key: HIVE-5605
 URL: https://issues.apache.org/jira/browse/HIVE-5605
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Affects Versions: 0.13.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
Priority: Minor
 Fix For: 0.13.0


These classes are not used, as the processing for the Add, Delete, DFS, and Set 
commands is done by HiveCommandOperation.





hive pull request: HIVE-5596:hive-default.xml.template is invalid

2013-10-21 Thread killuahzl
GitHub user killuahzl opened a pull request:

https://github.com/apache/hive/pull/12

HIVE-5596:hive-default.xml.template is invalid

Fixed HIVE-5596: hive-default.xml.template is invalid.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/killuahzl/hive trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/12.patch


commit 8e97138a5f70f9b61808ac511133ffcfab3e9c5d
Author: Killua 
Date:   2013-10-22T02:48:44Z

HIVE-5596:hive-default.xml.template is invalid 

Fixed HIVE-5596: hive-default.xml.template is invalid.





[jira] [Commented] (HIVE-5580) push down predicates with an and-operator between non-SARGable predicates will get NPE

2013-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801430#comment-13801430
 ] 

Hive QA commented on HIVE-5580:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12609043/D13533.1.patch

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1187/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1187/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests failed with: NonZeroExitCodeException: Command 'bash 
/data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and 
output '+ [[ -n '' ]]
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-1187/source-prep.txt
+ [[ true == \t\r\u\e ]]
+ rm -rf ivy maven
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'ql/src/test/results/clientpositive/orc_create.q.out'
Reverted 'ql/src/test/queries/clientpositive/orc_create.q'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf build hcatalog/build hcatalog/core/build 
hcatalog/storage-handlers/hbase/build hcatalog/server-extensions/build 
hcatalog/webhcat/svr/build hcatalog/webhcat/java-client/build 
hcatalog/hcatalog-pig-adapter/build common/src/gen 
ql/src/test/results/clientpositive/orc_create.q.out.orig 
ql/src/test/queries/clientpositive/orc_create.q.orig
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1534464.

At revision 1534464.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0 to p2
+ exit 1
'
{noformat}

This message is automatically generated.

> push down predicates with an and-operator between non-SARGable predicates 
> will get NPE
> --
>
> Key: HIVE-5580
> URL: https://issues.apache.org/jira/browse/HIVE-5580
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: D13533.1.patch
>
>
> When all of the predicates in an AND-operator in a SARG expression get 
> removed by the SARG builder, evaluation can end up with an NPE. 
> Sub-expressions are typically removed from AND-operators because they aren't 
> SARGable.
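A toy illustration of the failure mode and the guard (hypothetical types, not Hive's actual SearchArgument implementation): once the builder prunes every non-SARGable child of an AND node, evaluation must treat the now-empty node as "can't tell" instead of dereferencing a child that is no longer there.

```java
import java.util.Collections;
import java.util.List;

// Toy model, NOT Hive's real SearchArgument classes. The empty-children
// guard mirrors the kind of fix the description implies.
enum TruthValue { YES, NO, YES_NO_NULL }  // YES_NO_NULL = "can't tell"

class AndExpr {
    private final List<TruthValue> children;

    AndExpr(List<TruthValue> children) {
        this.children = children;
    }

    TruthValue evaluate() {
        // Guard: if every non-SARGable child was removed by the builder,
        // the AND carries no information -- report "can't tell" rather
        // than reading a child that does not exist (the NPE path).
        if (children.isEmpty()) {
            return TruthValue.YES_NO_NULL;
        }
        TruthValue result = TruthValue.YES;
        for (TruthValue child : children) {
            if (child == TruthValue.NO) {
                return TruthValue.NO;
            }
            if (child == TruthValue.YES_NO_NULL) {
                result = TruthValue.YES_NO_NULL;
            }
        }
        return result;
    }
}

public class Main {
    public static void main(String[] args) {
        AndExpr pruned = new AndExpr(Collections.emptyList());
        System.out.println(pruned.evaluate());  // prints: YES_NO_NULL
    }
}
```

Returning "can't tell" is safe for predicate pushdown: the worst case is reading a stripe that could have been skipped, never skipping one that matters.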





[jira] [Created] (HIVE-5604) Fix validation of nested expressions.

2013-10-21 Thread Jitendra Nath Pandey (JIRA)
Jitendra Nath Pandey created HIVE-5604:
--

 Summary: Fix validation of nested expressions.
 Key: HIVE-5604
 URL: https://issues.apache.org/jira/browse/HIVE-5604
 Project: Hive
  Issue Type: Sub-task
  Components: Vectorization
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey


There is a bug where nested expressions are not validated correctly.





[jira] [Commented] (HIVE-5568) count(*) on ORC tables with predicate pushdown on partition columns fail

2013-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801429#comment-13801429
 ] 

Hive QA commented on HIVE-5568:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12609049/D13485.3.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 4429 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_create
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_parallel_orderby
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1186/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1186/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

> count(*) on ORC tables with predicate pushdown on partition columns fail
> 
>
> Key: HIVE-5568
> URL: https://issues.apache.org/jira/browse/HIVE-5568
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 0.12.1
>
> Attachments: D13485.1.patch, D13485.2.patch, D13485.3.patch
>
>
> If the query is:
> {code}
> select count(*) from orc_table where x = 10;
> {code}
> where x is a partition column and predicate pushdown is enabled, you'll get 
> an array out of bounds exception.
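As a rough illustration (hypothetical code, not the actual OrcInputFormat fix): partition columns are not stored in the ORC file itself, so a pushed-down predicate on one refers to a column index beyond the file's column list, and a reader must detect that before indexing.

```java
// Hypothetical sketch of the failure mode. The ORC file stores only the
// data columns; a partition column like `x` exists only in the table
// schema, so its index falls outside the file's column array.
public class Main {
    static boolean columnInFile(int columnIndex, String[] fileColumns) {
        // Guard sketch: predicates on columns absent from the file
        // (e.g. partition columns) must be skipped, not evaluated,
        // to avoid indexing past the end of the array.
        return columnIndex >= 0 && columnIndex < fileColumns.length;
    }

    public static void main(String[] args) {
        String[] fileColumns = {"a", "b"};  // columns physically in the file
        int partitionColIndex = 2;          // `x` from the table schema
        System.out.println(columnInFile(partitionColIndex, fileColumns));  // prints: false
    }
}
```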





[jira] [Commented] (HIVE-5592) Add an option to convert enum as struct as of Hive 0.8

2013-10-21 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801415#comment-13801415
 ] 

Edward Capriolo commented on HIVE-5592:
---

If this is true, we need to fix this ASAP.

> Add an option to convert enum as struct as of Hive 0.8
> -
>
> Key: HIVE-5592
> URL: https://issues.apache.org/jira/browse/HIVE-5592
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.10.0, 0.11.0, 0.12.0
>Reporter: Jie Li
>
> HIVE-3323 introduced an incompatible change: Hive's handling of enum types 
> was changed to always return the string value rather than a struct. 
> But it didn't add the option "hive.data.convert.enum.to.string" as planned, 
> and thus broke all Enum usage prior to 0.10.





[jira] [Commented] (HIVE-5589) perflogger output is hard to associate with queries

2013-10-21 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801408#comment-13801408
 ] 

Sergey Shelukhin commented on HIVE-5589:


This test passed for me; I don't think it's related.
[~ashutoshc], do you want to review?

> perflogger output is hard to associate with queries
> ---
>
> Key: HIVE-5589
> URL: https://issues.apache.org/jira/browse/HIVE-5589
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Attachments: HIVE-5589.01.patch
>
>
> It would be nice to dump the query somewhere in output.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5592) Add an option to convert enum as struct as of Hive 0.8

2013-10-21 Thread Yin Huai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yin Huai updated HIVE-5592:
---

Description: 
HIVE-3323 introduced an incompatible change: Hive's handling of enum types was 
changed to always return the string value rather than a struct. 
But it didn't add the option "hive.data.convert.enum.to.string" as planned and 
thus broke all Enum usage from before 0.10.


  was:
HIVE-3222 introduced the incompatible change: Hive handling of enum types has 
been changed to always return the string value rather than struct. 
But it didn't add the option "hive.data.convert.enum.to.string"  as planned and 
thus broke all Enum usage prior to 0.10.



> Add an option to convert enum as struct as of Hive 0.8
> -
>
> Key: HIVE-5592
> URL: https://issues.apache.org/jira/browse/HIVE-5592
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.10.0, 0.11.0, 0.12.0
>Reporter: Jie Li
>
> HIVE-3323 introduced an incompatible change: Hive's handling of enum types was 
> changed to always return the string value rather than a struct. 
> But it didn't add the option "hive.data.convert.enum.to.string" as planned 
> and thus broke all Enum usage from before 0.10.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5603) several classes call initCause which masks lower level exceptions

2013-10-21 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801374#comment-13801374
 ] 

Thejas M Nair commented on HIVE-5603:
-

Similar to the AuthorizationPreEventListener class, there are other places where 
initCause is called that need to be reviewed:

{code}
 git grep initCause
hcatalog/server-extensions/src/main/java/org/apache/hcatalog/listener/NotificationListener.java:
me.initCause(e);
hcatalog/server-extensions/src/main/java/org/apache/hcatalog/listener/NotificationListener.java:
me.initCause(e);
hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/NotificationListener.java:
me.initCause(e);
hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/NotificationListener.java:
me.initCause(e);
hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/snapshot/lock/WriteLock.java:
  initCause(e);
hwi/src/java/org/apache/hadoop/hive/hwi/HWIServer.java:  ie.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 te.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java:
me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(original);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(original);
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: 
 me.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java:  
metaException.initCause(e);
metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java:  
metaException.initCause(e);

[jira] [Updated] (HIVE-5603) several classes call initCause which masks lower level exceptions

2013-10-21 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5603:


Summary: several classes call initCause which masks lower level exceptions  
(was: AuthorizationPreEventListener masks lower level exception with 
IllegalStateException)

> several classes call initCause which masks lower level exceptions
> -
>
> Key: HIVE-5603
> URL: https://issues.apache.org/jira/browse/HIVE-5603
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>
> AuthorizationPreEventListener has the following code, which will result in a 
> "Can't overwrite cause" IllegalStateException being thrown and will also mask 
> the lower-level exception.
> {code}
>   private InvalidOperationException invalidOperationException(Exception e) {
> InvalidOperationException ex = new InvalidOperationException();
> ex.initCause(e.getCause());
> return ex;
>   }
>   private MetaException metaException(HiveException e) {
> MetaException ex =  new MetaException(e.getMessage());
> ex.initCause(e);
> return ex;
>   }
> {code}
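A self-contained sketch of the Throwable contract behind this report: initCause throws IllegalStateException ("Can't overwrite cause") once a cause has already been set, including one set through the constructor. The safe pattern is to attach the original exception exactly once, so no lower-level detail is lost.

```java
// Demonstrates the Throwable.initCause pitfall: once a cause is set (even
// via the constructor), a later initCause call throws IllegalStateException.
public class InitCauseDemo {
    static boolean overwriteFails() {
        Exception root = new Exception("root");
        RuntimeException wrapper = new RuntimeException("wrapper", root); // cause set here
        try {
            wrapper.initCause(new Exception("other")); // attempt to overwrite
            return false;
        } catch (IllegalStateException expected) {
            return true; // "Can't overwrite cause"
        }
    }

    // Safe pattern: pass the original exception itself as the cause, once.
    static Throwable wrapOnce(Exception e) {
        return new RuntimeException(e.getMessage(), e);
    }

    public static void main(String[] args) {
        System.out.println("overwrite fails: " + overwriteFails());
        System.out.println("cause kept: " + (wrapOnce(new Exception("x")).getCause() != null));
    }
}
```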



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5582) Implement BETWEEN filter in vectorized mode

2013-10-21 Thread Eric Hanson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801373#comment-13801373
 ] 

Eric Hanson commented on HIVE-5582:
---

This works for integer and float families, timestamp, and string. It uses 
direct implementations based on new templates. NOT [BETWEEN] is evaluated with 
a single pass over the input vector.
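The single-pass evaluation described above can be sketched roughly like this (illustrative only, not the actual generated template code): qualifying row indices are compacted into a selection array in one scan, with NOT handled by flipping the test rather than a second pass.

```java
// Sketch of a single-pass vectorized [NOT] BETWEEN filter over a long column.
public class VectorBetweenSketch {
    // Returns how many rows survive; surviving row indices are written to sel.
    static int filterBetween(long[] col, int n, long lo, long hi, boolean not, int[] sel) {
        int newSize = 0;
        for (int i = 0; i < n; i++) {
            boolean in = col[i] >= lo && col[i] <= hi;
            if (in != not) {        // "in" for BETWEEN, "!in" for NOT BETWEEN
                sel[newSize++] = i; // one pass, no extra scan for NOT
            }
        }
        return newSize;
    }

    public static void main(String[] args) {
        long[] col = {1, 5, 10, 15, 20};
        int[] sel = new int[col.length];
        int n = filterBetween(col, col.length, 5, 15, false, sel);
        System.out.println("BETWEEN 5 AND 15 keeps " + n + " rows");
    }
}
```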

> Implement BETWEEN filter in vectorized mode
> ---
>
> Key: HIVE-5582
> URL: https://issues.apache.org/jira/browse/HIVE-5582
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor
>Reporter: Eric Hanson
>Assignee: Eric Hanson
> Attachments: hive-5582.1.patch.txt, hive-5582.3.patch.txt
>
>
> Implement optimized support for filters of the form
> column BETWEEN scalar1 AND scalar2
> in vectorized mode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5603) AuthorizationPreEventListener masks lower level exception with IllegalStateException

2013-10-21 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-5603:
---

 Summary: AuthorizationPreEventListener masks lower level exception 
with IllegalStateException
 Key: HIVE-5603
 URL: https://issues.apache.org/jira/browse/HIVE-5603
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 0.12.0
Reporter: Thejas M Nair


AuthorizationPreEventListener has the following code, which will result in a 
"Can't overwrite cause" IllegalStateException being thrown and will also mask the 
lower-level exception.

{code}
  private InvalidOperationException invalidOperationException(Exception e) {
InvalidOperationException ex = new InvalidOperationException();
ex.initCause(e.getCause());
return ex;
  }

  private MetaException metaException(HiveException e) {
MetaException ex =  new MetaException(e.getMessage());
ex.initCause(e);
return ex;
  }
{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5582) Implement BETWEEN filter in vectorized mode

2013-10-21 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-5582:
--

Attachment: hive-5582.3.patch.txt

Here's a second more or less finished patch for [NOT] BETWEEN. Needs some more 
testing.

> Implement BETWEEN filter in vectorized mode
> ---
>
> Key: HIVE-5582
> URL: https://issues.apache.org/jira/browse/HIVE-5582
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor
>Reporter: Eric Hanson
>Assignee: Eric Hanson
> Attachments: hive-5582.1.patch.txt, hive-5582.3.patch.txt
>
>
> Implement optimized support for filters of the form
> column BETWEEN scalar1 AND scalar2
> in vectorized mode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5602) Micro optimize select operator

2013-10-21 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801368#comment-13801368
 ] 

Edward Capriolo commented on HIVE-5602:
---

The SELECT operator does a try/catch inside the per-column for loop when it does 
not need to. Additionally, we make a function call for every row to check 
conf.isSelectComputeNoStart().

I micro-benchmarked before and after the change and saw a small improvement; please 
review.
{noformat}
13/10/21 20:29:29 INFO exec.FilterOperator: 0 forwarding 1 rows
13/10/21 20:29:29 INFO exec.FilterOperator: 0 forwarding 10 rows
13/10/21 20:29:29 INFO exec.FilterOperator: 0 forwarding 100 rows
13/10/21 20:29:29 INFO exec.FilterOperator: 0 forwarding 1000 rows
13/10/21 20:29:29 INFO exec.FilterOperator: 0 forwarding 1 rows
13/10/21 20:29:30 INFO exec.FilterOperator: 0 forwarding 10 rows
13/10/21 20:29:31 INFO exec.FilterOperator: 0 forwarding 100 rows
13/10/21 20:29:33 INFO exec.FilterOperator: 0 forwarding 200 rows
13/10/21 20:29:34 INFO exec.FilterOperator: 0 forwarding 300 rows
13/10/21 20:29:36 INFO exec.FilterOperator: 0 forwarding 400 rows
13/10/21 20:29:38 INFO exec.FilterOperator: 0 forwarding 500 rows
13/10/21 20:29:40 INFO exec.FilterOperator: 0 forwarding 600 rows
13/10/21 20:29:41 INFO exec.FilterOperator: 0 forwarding 700 rows
13/10/21 20:29:43 INFO exec.FilterOperator: 0 forwarding 800 rows
13/10/21 20:29:45 INFO exec.FilterOperator: 0 forwarding 900 rows
13/10/21 20:29:46 INFO exec.FilterOperator: 0 forwarding 1000 rows

13/10/21 20:31:36 INFO exec.FilterOperator: Initialization Done 0 FIL
13/10/21 20:31:36 INFO exec.FilterOperator: 0 forwarding 1 rows
13/10/21 20:31:36 INFO exec.FilterOperator: 0 forwarding 10 rows
13/10/21 20:31:36 INFO exec.FilterOperator: 0 forwarding 100 rows
13/10/21 20:31:36 INFO exec.FilterOperator: 0 forwarding 1000 rows
13/10/21 20:31:37 INFO exec.FilterOperator: 0 forwarding 1 rows
13/10/21 20:31:37 INFO exec.FilterOperator: 0 forwarding 10 rows
13/10/21 20:31:38 INFO exec.FilterOperator: 0 forwarding 100 rows
13/10/21 20:31:40 INFO exec.FilterOperator: 0 forwarding 200 rows
13/10/21 20:31:41 INFO exec.FilterOperator: 0 forwarding 300 rows
13/10/21 20:31:43 INFO exec.FilterOperator: 0 forwarding 400 rows
13/10/21 20:31:45 INFO exec.FilterOperator: 0 forwarding 500 rows
13/10/21 20:31:46 INFO exec.FilterOperator: 0 forwarding 600 rows
13/10/21 20:31:48 INFO exec.FilterOperator: 0 forwarding 700 rows
13/10/21 20:31:49 INFO exec.FilterOperator: 0 forwarding 800 rows
13/10/21 20:31:51 INFO exec.FilterOperator: 0 forwarding 900 rows
13/10/21 20:31:53 INFO exec.FilterOperator: 0 forwarding 1000 rows
{noformat}
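A rough sketch of the two micro-optimizations described above (hypothetical class and method names, not the actual SelectOperator code): the configuration check is cached in a field computed once at initialization, and the try/catch is hoisted out of the per-column loop.

```java
// Sketch: hoist the per-column try/catch out of the hot loop and cache the
// per-row configuration flag in a field set once at construction time.
public class SelectLoopSketch {
    private final boolean computeNoStart; // read once, not re-checked per row

    SelectLoopSketch(boolean computeNoStart) {
        this.computeNoStart = computeNoStart;
    }

    // Before: try/catch wrapped each column evaluation inside the loop.
    // After: one try/catch around the whole loop; same error reporting,
    // less per-column work on the happy path.
    long processRow(long[] columns) {
        if (computeNoStart) {
            return 0L;
        }
        long sum = 0;
        try {
            for (long c : columns) {
                sum += c; // stand-in for per-column expression evaluation
            }
        } catch (RuntimeException e) {
            throw new IllegalStateException("column evaluation failed", e);
        }
        return sum;
    }
}
```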

> Micro optimize select operator
> --
>
> Key: HIVE-5602
> URL: https://issues.apache.org/jira/browse/HIVE-5602
> Project: Hive
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
> Attachments: HIVE-5602.patch.1.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5602) Micro optimize select operator

2013-10-21 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated HIVE-5602:
--

Assignee: Edward Capriolo
  Status: Patch Available  (was: Open)

> Micro optimize select operator
> --
>
> Key: HIVE-5602
> URL: https://issues.apache.org/jira/browse/HIVE-5602
> Project: Hive
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
> Attachments: HIVE-5602.patch.1.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5602) Micro optimize select operator

2013-10-21 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated HIVE-5602:
--

Attachment: HIVE-5602.patch.1.txt

> Micro optimize select operator
> --
>
> Key: HIVE-5602
> URL: https://issues.apache.org/jira/browse/HIVE-5602
> Project: Hive
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Priority: Minor
> Attachments: HIVE-5602.patch.1.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5602) Micro optimize select operator

2013-10-21 Thread Edward Capriolo (JIRA)
Edward Capriolo created HIVE-5602:
-

 Summary: Micro optimize select operator
 Key: HIVE-5602
 URL: https://issues.apache.org/jira/browse/HIVE-5602
 Project: Hive
  Issue Type: Improvement
Reporter: Edward Capriolo
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5430) Refactor VectorizationContext and handle NOT expression with nulls.

2013-10-21 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801355#comment-13801355
 ] 

Jitendra Nath Pandey commented on HIVE-5430:


It is the same link:
https://reviews.apache.org/r/14576/

> Refactor VectorizationContext and handle NOT expression with nulls.
> ---
>
> Key: HIVE-5430
> URL: https://issues.apache.org/jira/browse/HIVE-5430
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5430.1.patch, HIVE-5430.2.patch, HIVE-5430.3.patch, 
> HIVE-5430.4.patch, HIVE-5430.5.patch, HIVE-5430.6.patch
>
>
> NOT expression doesn't handle nulls correctly.
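The expected null semantics can be sketched as follows (illustrative, not Hive's vectorized expression code): under SQL three-valued logic, NOT(NULL) stays NULL, so the null mask passes through unchanged while only non-null 0/1 values are flipped.

```java
// Sketch of NOT over a vector with a null mask: the isNull flag is carried
// through unchanged, and only non-null boolean values (stored as 0/1 longs)
// are flipped.
public class VectorNotSketch {
    static void vectorNot(long[] vals, boolean[] isNull, int n) {
        for (int i = 0; i < n; i++) {
            if (!isNull[i]) {
                vals[i] = vals[i] == 0 ? 1 : 0; // boolean NOT
            }
            // isNull[i] is preserved: NOT(null) is null, not true or false
        }
    }
}
```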



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5425) Provide a configuration option to control the default stripe size for ORC

2013-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801350#comment-13801350
 ] 

Hive QA commented on HIVE-5425:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12606417/D13233.1.patch

{color:green}SUCCESS:{color} +1 4429 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1185/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1185/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> Provide a configuration option to control the default stripe size for ORC
> -
>
> Key: HIVE-5425
> URL: https://issues.apache.org/jira/browse/HIVE-5425
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: D13233.1.patch
>
>
> We should provide a configuration option to control the default stripe size.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5268) HiveServer2 accumulates orphaned OperationHandle objects when a client fails while executing query

2013-10-21 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801334#comment-13801334
 ] 

Vaibhav Gumashta commented on HIVE-5268:


[~thiruvel] Thanks a lot!

> HiveServer2 accumulates orphaned OperationHandle objects when a client fails 
> while executing query
> --
>
> Key: HIVE-5268
> URL: https://issues.apache.org/jira/browse/HIVE-5268
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Assignee: Thiruvel Thirumoolan
> Fix For: 0.13.0
>
> Attachments: HIVE-5268_prototype.patch
>
>
> When queries are executed against HiveServer2, an OperationHandle object 
> is stored in the OperationManager.handleToOperation HashMap. Currently it is 
> the duty of the JDBC client to explicitly close the statement to clean up the 
> entry in the map. If the client fails to close the statement, the OperationHandle 
> object is never cleaned up and accumulates in the server.
> This can potentially cause OOM on the server over time. This also can be used 
> as a loophole by a malicious client to bring down the Hive server.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5268) HiveServer2 accumulates orphaned OperationHandle objects when a client fails while executing query

2013-10-21 Thread Thiruvel Thirumoolan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801331#comment-13801331
 ] 

Thiruvel Thirumoolan commented on HIVE-5268:


[~vgumashta] Here it is https://reviews.apache.org/r/14809/
Let me dig in and come up with a design.

> HiveServer2 accumulates orphaned OperationHandle objects when a client fails 
> while executing query
> --
>
> Key: HIVE-5268
> URL: https://issues.apache.org/jira/browse/HIVE-5268
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Assignee: Thiruvel Thirumoolan
> Fix For: 0.13.0
>
> Attachments: HIVE-5268_prototype.patch
>
>
> When queries are executed against HiveServer2, an OperationHandle object 
> is stored in the OperationManager.handleToOperation HashMap. Currently it is 
> the duty of the JDBC client to explicitly close the statement to clean up the 
> entry in the map. If the client fails to close the statement, the OperationHandle 
> object is never cleaned up and accumulates in the server.
> This can potentially cause OOM on the server over time. This also can be used 
> as a loophole by a malicious client to bring down the Hive server.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Review Request 14809: HIVE-5268: HiveServer2 accumulates orphaned OperationHandle objects when a client fails while executing query

2013-10-21 Thread Thiruvel Thirumoolan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14809/
---

Review request for hive and Vaibhav Gumashta.


Bugs: HIVE-5268
https://issues.apache.org/jira/browse/HIVE-5268


Repository: hive-git


Description
---

This is a prototype of the patch we use to clean up resources on HS2 on client 
disconnect. This has worked for Hive 0.10.

An updated patch to handle async query execution and session timeouts is on the 
way.


Diffs
-

  
common/src/java/org/apache/hadoop/hive/common/thrift/HiveThriftChainedEventHandler.java
 PRE-CREATION 
  
jdbc/src/test/org/apache/hive/service/cli/thrift/TestDisconnectCleanupEventHandler.java
 PRE-CREATION 
  service/src/java/org/apache/hive/service/cli/CLIService.java 1a7f338 
  service/src/java/org/apache/hive/service/cli/session/SessionManager.java 
f392d62 
  
service/src/java/org/apache/hive/service/cli/thrift/DisconnectCleanupEventHandler.java
 PRE-CREATION 
  
service/src/java/org/apache/hive/service/cli/thrift/ThriftBinaryCLIService.java 
9c8f5c1 
  service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java 
857e627 

Diff: https://reviews.apache.org/r/14809/diff/


Testing
---

A new test case to test preliminaries.
Manual testing: Start HS2 on a machine and launch a job through JDBC. Before 
the job is done, kill the client. The server will clean up all resources, 
scratch directory, etc. at the end of the query.
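A minimal sketch of the cleanup idea in this review request (hypothetical class and method names, not the posted patch): track which handles each client owns, and drop them all from the operation map when the connection-closed event fires, so nothing accumulates server-side.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: per-client handle tracking with cleanup on disconnect.
public class HandleCleanupSketch {
    private final Map<String, String> handleToOperation = new ConcurrentHashMap<>();
    private final Map<String, Set<String>> clientHandles = new ConcurrentHashMap<>();

    void register(String clientId, String handle, String operation) {
        handleToOperation.put(handle, operation);
        clientHandles.computeIfAbsent(clientId, k -> ConcurrentHashMap.newKeySet())
                     .add(handle);
    }

    // Invoked from a connection-closed callback (e.g. a Thrift event handler).
    void onClientDisconnect(String clientId) {
        Set<String> handles = clientHandles.remove(clientId);
        if (handles != null) {
            for (String h : handles) {
                handleToOperation.remove(h); // release the orphaned operation
            }
        }
    }

    int openOperations() {
        return handleToOperation.size();
    }
}
```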


Thanks,

Thiruvel Thirumoolan



[jira] [Updated] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-21 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4974:
--

Affects Version/s: (was: 0.10.0)

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974-trunk.patch.txt
>
>
> The getConnection methods of HiveStatement and HivePreparedStatement throw a 
> "not supported" SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.
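The fix requested in the description amounts to the following pattern (simplified sketch, not the actual Hive JDBC classes): the statement keeps a reference to the Connection that created it and returns it from getConnection instead of throwing.

```java
import java.sql.Connection;

// Sketch: a statement that remembers its parent connection.
public class ParentAwareStatementSketch {
    private final Connection parent;

    // The creating connection passes itself in, e.g. from createStatement().
    ParentAwareStatementSketch(Connection parent) {
        this.parent = parent;
    }

    public Connection getConnection() {
        return parent; // previously: throw new SQLException("Method not supported")
    }
}
```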



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5430) Refactor VectorizationContext and handle NOT expression with nulls.

2013-10-21 Thread Eric Hanson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801313#comment-13801313
 ] 

Eric Hanson commented on HIVE-5430:
---

Can you put up a Review Board entry for this to make it easier to review?

> Refactor VectorizationContext and handle NOT expression with nulls.
> ---
>
> Key: HIVE-5430
> URL: https://issues.apache.org/jira/browse/HIVE-5430
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5430.1.patch, HIVE-5430.2.patch, HIVE-5430.3.patch, 
> HIVE-5430.4.patch, HIVE-5430.5.patch, HIVE-5430.6.patch
>
>
> NOT expression doesn't handle nulls correctly.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-21 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4974:
--

Fix Version/s: 0.13.0
   Status: Patch Available  (was: Open)

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.12.0, 0.11.0, 0.10.0
>Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974-trunk.patch.txt
>
>
> The getConnection methods of HiveStatement and HivePreparedStatement throw a 
> "not supported" SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-21 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801272#comment-13801272
 ] 

Chris Drome commented on HIVE-4974:
---

Created phabricator ticket: https://reviews.facebook.net/D13611

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.10.0, 0.11.0, 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974-trunk.patch.txt
>
>
> The getConnection methods of HiveStatement and HivePreparedStatement throw a 
> "not supported" SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-21 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4974:
--

Attachment: HIVE-4974-trunk.patch.txt

Uploaded trunk patch.

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.10.0, 0.11.0, 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Attachments: HIVE-4974-trunk.patch.txt
>
>
> The getConnection methods of HiveStatement and HivePreparedStatement throw a 
> "not supported" SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5601) NPE in ORC's PPD when using select * from table with where predicate

2013-10-21 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-5601:
-

Attachment: HIVE-5601.branch-0.12.2.patch.txt
HIVE-5601.trunk.2.patch.txt

Updated the patch as per [~owen.omalley]'s comment.

> NPE in ORC's PPD  when using select * from table with where predicate
> -
>
> Key: HIVE-5601
> URL: https://issues.apache.org/jira/browse/HIVE-5601
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Prasanth J
>Assignee: Prasanth J
>Priority: Critical
>  Labels: ORC
> Attachments: HIVE-5601.branch-0.12.2.patch.txt, 
> HIVE-5601.branch-12.1.patch.txt, HIVE-5601.trunk.1.patch.txt, 
> HIVE-5601.trunk.2.patch.txt
>
>
> ORCInputFormat has a method findIncludedColumns() which returns a boolean array 
> of included columns. For the following query, 
> {code}select * from qlog_orc where id<1000 limit 10;{code}
> where all columns are selected, findIncludedColumns() returns null. This 
> will result in an NPE when PPD is enabled. The stack trace follows:
> {code}Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.planReadPartialDataStreams(RecordReaderImpl.java:2387)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readPartialDataStreams(RecordReaderImpl.java:2543)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readStripe(RecordReaderImpl.java:2200)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:2573)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:2615)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.(RecordReaderImpl.java:132)
>   at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rows(ReaderImpl.java:348)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.(OrcInputFormat.java:99)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:241)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:237)
>   ... 8 more{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5601) NPE in ORC's PPD when using select * from table with where predicate

2013-10-21 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801226#comment-13801226
 ] 

Owen O'Malley commented on HIVE-5601:
-

Actually, can you fix it lower down and make the Reader.rows replace null with 
a boolean[] of the right length?
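The suggestion above can be sketched as a small normalization at the reader boundary (hypothetical method, not the actual ORC code): a null included-columns array, which means "read everything", is replaced by an explicit all-true array sized to the schema, so downstream stream planning never dereferences null.

```java
import java.util.Arrays;

// Sketch: normalize a null "included columns" array to an explicit
// all-true array of the schema's length.
public class IncludedColumnsSketch {
    static boolean[] normalize(boolean[] included, int columnCount) {
        if (included != null) {
            return included; // caller already specified the columns to read
        }
        boolean[] all = new boolean[columnCount];
        Arrays.fill(all, true); // select * reads every column
        return all;
    }
}
```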

> NPE in ORC's PPD  when using select * from table with where predicate
> -
>
> Key: HIVE-5601
> URL: https://issues.apache.org/jira/browse/HIVE-5601
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Prasanth J
>Assignee: Prasanth J
>Priority: Critical
>  Labels: ORC
> Attachments: HIVE-5601.branch-12.1.patch.txt, 
> HIVE-5601.trunk.1.patch.txt
>
>
> ORCInputFormat has a method findIncludedColumns() which returns a boolean array 
> of included columns. For the following query, 
> {code}select * from qlog_orc where id<1000 limit 10;{code}
> where all columns are selected, findIncludedColumns() returns null. This 
> will result in an NPE when PPD is enabled. The stack trace follows:
> {code}Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.planReadPartialDataStreams(RecordReaderImpl.java:2387)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readPartialDataStreams(RecordReaderImpl.java:2543)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readStripe(RecordReaderImpl.java:2200)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:2573)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:2615)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.(RecordReaderImpl.java:132)
>   at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rows(ReaderImpl.java:348)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.<init>(OrcInputFormat.java:99)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:241)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:237)
>   ... 8 more{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5132) Can't access to hwi due to "No Java compiler available"

2013-10-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5132:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk! Thank you for your contribution, Bing, and for 
the review, Edward!

> Can't access to hwi due to "No Java compiler available"
> ---
>
> Key: HIVE-5132
> URL: https://issues.apache.org/jira/browse/HIVE-5132
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.10.0, 0.11.0
> Environment: JDK1.6, hadoop 2.0.4-alpha
>Reporter: Bing Li
>Assignee: Bing Li
>Priority: Critical
> Fix For: 0.13.0
>
> Attachments: HIVE-5132-01.patch
>
>
> I want to use hwi to submit Hive queries, but after starting hwi 
> successfully, I can't open its web page.
> I noticed that someone also met the same issue in hive-0.10.
> Reproduce steps:
> --
> 1. start hwi
> bin/hive --config $HIVE_CONF_DIR --service hwi
> 2. access http://<host>:<port>/hwi via a browser
> got the following error message:
> HTTP ERROR 500
> Problem accessing /hwi/. Reason: 
> No Java compiler available
> Caused by:
> java.lang.IllegalStateException: No Java compiler available
>   at 
> org.apache.jasper.JspCompilationContext.createCompiler(JspCompilationContext.java:225)
>   at 
> org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:560)
>   at 
> org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:299)
>   at 
> org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:315)
>   at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:265)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:327)
>   at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:126)
>   at 
> org.mortbay.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:503)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at 
> org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5425) Provide a configuration option to control the default stripe size for ORC

2013-10-21 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801173#comment-13801173
 ] 

Brock Noland commented on HIVE-5425:


DEFAULT_STRIPE_SIZE in OrcFile is redundant. We should just use:

{noformat}
conf.getLongVar(HiveConf.ConfVars.HIVE_ORC_DEFAULT_STRIPE_SIZE);
{noformat}
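The comment above argues for a single source of truth: read the default from the configuration rather than duplicating it as a constant that can drift out of sync. A minimal sketch of that pattern, using plain `java.util.Properties` instead of Hive's `HiveConf` (the key name and 256 MB fallback are assumptions for illustration):

```java
import java.util.Properties;

public class StripeSizeConfig {
    // Assumed key name, illustrative only.
    static final String KEY = "hive.exec.orc.default.stripe.size";

    // Read the stripe size from configuration; fall back to 256 MB
    // only when the property is entirely absent.
    static long stripeSize(Properties conf) {
        String v = conf.getProperty(KEY, String.valueOf(256L * 1024 * 1024));
        return Long.parseLong(v);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(stripeSize(conf)); // prints 268435456 (the fallback)
        conf.setProperty(KEY, "67108864");
        System.out.println(stripeSize(conf)); // prints 67108864
    }
}
```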

> Provide a configuration option to control the default stripe size for ORC
> -
>
> Key: HIVE-5425
> URL: https://issues.apache.org/jira/browse/HIVE-5425
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: D13233.1.patch
>
>
> We should provide a configuration option to control the default stripe size.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5601) NPE in ORC's PPD when using select * from table with where predicate

2013-10-21 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-5601:
-

Status: Patch Available  (was: Open)

> NPE in ORC's PPD  when using select * from table with where predicate
> -
>
> Key: HIVE-5601
> URL: https://issues.apache.org/jira/browse/HIVE-5601
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Prasanth J
>Assignee: Prasanth J
>Priority: Critical
>  Labels: ORC
> Attachments: HIVE-5601.branch-12.1.patch.txt, 
> HIVE-5601.trunk.1.patch.txt
>
>
> ORCInputFormat has a method findIncludedColumns() which returns a boolean 
> array of included columns. For a query such as 
> {code}select * from qlog_orc where id<1000 limit 10;{code}
> where all columns are selected, findIncludedColumns() returns null. This 
> results in an NPE when PPD is enabled. The stack trace follows:
> {code}Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.planReadPartialDataStreams(RecordReaderImpl.java:2387)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readPartialDataStreams(RecordReaderImpl.java:2543)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readStripe(RecordReaderImpl.java:2200)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:2573)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:2615)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:132)
>   at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rows(ReaderImpl.java:348)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.<init>(OrcInputFormat.java:99)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:241)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:237)
>   ... 8 more{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5572) Fails of non-sql command are not propagated to jdbc2 client

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801120#comment-13801120
 ] 

Hudson commented on HIVE-5572:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2412 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2412/])
HIVE-5572 : Fails of non-sql command are not propagated to jdbc2 client (Navis 
reviewed by Brock Noland) (navis: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534034)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java


> Fails of non-sql command are not propagated to jdbc2 client
> ---
>
> Key: HIVE-5572
> URL: https://issues.apache.org/jira/browse/HIVE-5572
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5572.1.patch.txt
>
>
> For example, after restricted configs are set, attempting to override one 
> with a set command appears to succeed but does not.
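The bug class described above (a command layer swallowing a non-zero status instead of surfacing it to the client) can be sketched as below. The method name and error string are illustrative assumptions, not HiveServer2's actual code:

```java
public class CommandStatus {
    // Propagate a failed command response instead of silently returning
    // success, so a JDBC client sees the failure.
    static void run(int responseCode, String errorMessage) {
        if (responseCode != 0) {
            throw new RuntimeException(errorMessage + " (code " + responseCode + ")");
        }
    }

    public static void main(String[] args) {
        try {
            run(1, "Cannot modify restricted config at runtime"); // assumed message
        } catch (RuntimeException e) {
            System.out.println("client sees failure");
        }
    }
}
```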



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4957) Restrict number of bit vectors, to prevent out of Java heap memory

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801123#comment-13801123
 ] 

Hudson commented on HIVE-4957:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #209 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/209/])
HIVE-4957 - Restrict number of bit vectors, to prevent out of Java heap memory 
(Shreepadma Venugopalan via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534337)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFComputeStats.java
* /hive/trunk/ql/src/test/queries/clientnegative/compute_stats_long.q
* /hive/trunk/ql/src/test/results/clientnegative/compute_stats_long.q.out


> Restrict number of bit vectors, to prevent out of Java heap memory
> --
>
> Key: HIVE-4957
> URL: https://issues.apache.org/jira/browse/HIVE-4957
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.11.0
>Reporter: Brock Noland
>Assignee: Shreepadma Venugopalan
> Fix For: 0.13.0
>
> Attachments: HIVE-4957.1.patch, HIVE-4957.2.patch
>
>
> Normally, increasing the number of bit vectors improves calculation accuracy. 
> Let's say
> {noformat}
> select compute_stats(a, 40) from test_hive;
> {noformat}
> generally gets better accuracy than
> {noformat}
> select compute_stats(a, 16) from test_hive;
> {noformat}
> But a larger number of bit vectors also makes the query run slower. Beyond 
> about 50 bit vectors, accuracy no longer improves, yet memory usage keeps 
> growing and can crash Hive if the number is huge. Currently Hive doesn't 
> prevent users from using a ridiculously large number of bit vectors in a 
> 'compute_stats' query.
> One example
> {noformat}
> select compute_stats(a, 9) from column_eight_types;
> {noformat}
> crashes Hive.
> {noformat}
> 2012-12-20 23:21:52,247 Stage-1 map = 0%,  reduce = 0%
> 2012-12-20 23:22:11,315 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.29 
> sec
> MapReduce Total cumulative CPU time: 290 msec
> Ended Job = job_1354923204155_0777 with errors
> Error during job, obtaining debugging information...
> Job Tracking URL: 
> http://cs-10-20-81-171.cloud.cloudera.com:8088/proxy/application_1354923204155_0777/
> Examining task ID: task_1354923204155_0777_m_00 (and more) from job 
> job_1354923204155_0777
> Task with the most failures(4): 
> -
> Task ID:
>   task_1354923204155_0777_m_00
> URL:
>   
> http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1354923204155_0777&tipid=task_1354923204155_0777_m_00
> -
> Diagnostic Messages for this Task:
> Error: Java heap space
> {noformat}
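The guard the issue above asks for amounts to validating the bit-vector count before allocating memory. A hedged sketch, where the cap of 1024 and the method name are assumptions for illustration (not the limit the actual patch uses):

```java
public class BitVectorGuard {
    static final int MAX_BIT_VECTORS = 1024; // assumed cap, illustrative only

    // Reject a bit-vector count outside a sane range before any allocation,
    // turning a would-be OutOfMemoryError into a clear argument error.
    static int validate(int numBitVectors) {
        if (numBitVectors < 1 || numBitVectors > MAX_BIT_VECTORS) {
            throw new IllegalArgumentException(
                "bit vectors must be in [1, " + MAX_BIT_VECTORS + "]: " + numBitVectors);
        }
        return numBitVectors;
    }

    public static void main(String[] args) {
        System.out.println(validate(40)); // a reasonable count passes through
        try {
            validate(999999999);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```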



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5559) Stats publisher fails for list bucketing when IDs are too long

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801124#comment-13801124
 ] 

Hudson commented on HIVE-5559:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2412 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2412/])
HIVE-5559 : Stats publisher fails for list bucketing when IDs are too long 
(Jason Dere via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534024)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/test/queries/clientpositive/stats_list_bucket.q
* /hive/trunk/ql/src/test/results/clientpositive/stats_list_bucket.q.out
* /hive/trunk/ql/src/test/results/compiler/plan/case_sensitivity.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input7.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input9.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_testsequencefile.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample7.q.xml


> Stats publisher fails for list bucketing when IDs are too long
> --
>
> Key: HIVE-5559
> URL: https://issues.apache.org/jira/browse/HIVE-5559
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.13.0
>
> Attachments: HIVE-5559.1.patch, HIVE-5559.2.patch
>
>
> Several of the list_bucket_* q files fail if the hive source path gets too 
> long. It looks like the numRows and rawDataSize stats aren't getting updated 
> properly in this situation.
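One common fix for "ID too long" stats keys is to hash the overlong portion so the key stays under a fixed limit. The sketch below is an assumption about the general technique, not the actual HIVE-5559 patch; the 64-character limit and `hashCode`-based hashing are illustrative choices:

```java
public class StatsKey {
    static final int MAX_LEN = 64; // assumed key-length limit, illustrative only

    // Keep short keys unchanged; for overlong keys, keep a readable prefix
    // and append a hash of the full key so distinct keys stay distinct.
    static String shorten(String key) {
        if (key.length() <= MAX_LEN) {
            return key;
        }
        String hash = Integer.toHexString(key.hashCode());
        return key.substring(0, MAX_LEN - hash.length()) + hash;
    }

    public static void main(String[] args) {
        String longKey = "warehouse/".repeat(20) + "list_bucket_dml";
        System.out.println(shorten(longKey).length() <= MAX_LEN); // prints true
    }
}
```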



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5411) Migrate expression serialization to Kryo

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801126#comment-13801126
 ] 

Hudson commented on HIVE-5411:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #209 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/209/])
HIVE-5411 : Migrate expression serialization to Kryo (Ashutosh Chauhan via 
Thejas Nair) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534023)
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeGenericFuncEvaluator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/udf/VectorUDFAdaptor.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/IndexSearchCondition.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/compact/CompactIndexHandler.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/SearchArgument.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/SearchArgumentImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteCanApplyProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/pcr/PcrExprProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartExprEvalUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionExpressionForMetastore.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeGenericFuncDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/ExprWalkerProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/metastore/TestMetastoreExpr.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestUtilities.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/vector/TestVectorSelectOperator.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/vector/TestVectorizationContext.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/sarg/TestSearchArgumentImpl.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/physical/TestVectorizer.java
* /hive/trunk/ql/src/test/results/compiler/plan/case_sensitivity.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/cast1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input20.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input8.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input9.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_part1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_testxpath.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_testxpath2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join7.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join8.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample2.q.xml
* /h

[jira] [Commented] (HIVE-5574) Unnecessary newline at the end of message of ParserException

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801116#comment-13801116
 ] 

Hudson commented on HIVE-5574:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2412 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2412/])
HIVE-5574 : Unnecessary newline at the end of message of ParserException (Navis 
via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534203)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseException.java
* 
/hive/trunk/ql/src/test/results/clientnegative/alter_partition_coltype_2columns.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/alter_partition_coltype_invalidtype.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive_partspec3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/clusterbyorderby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/column_rename3.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/columnstats_partlvl_multiple_part_clause.q.out
* /hive/trunk/ql/src/test/results/clientnegative/create_or_replace_view6.q.out
* /hive/trunk/ql/src/test/results/clientnegative/invalid_create_tbl2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/invalid_select_expression.q.out
* /hive/trunk/ql/src/test/results/clientnegative/invalid_tbl_name.q.out
* /hive/trunk/ql/src/test/results/clientnegative/lateral_view_join.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/ptf_negative_DistributeByOrderBy.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/ptf_negative_PartitionBySortBy.q.out
* /hive/trunk/ql/src/test/results/clientnegative/ptf_window_boundaries.q.out
* /hive/trunk/ql/src/test/results/clientnegative/ptf_window_boundaries2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/select_charliteral.q.out
* /hive/trunk/ql/src/test/results/clientnegative/select_udtf_alias.q.out
* /hive/trunk/ql/src/test/results/clientnegative/set_table_property.q.out
* /hive/trunk/ql/src/test/results/clientnegative/show_columns2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/show_tables_bad1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/show_tables_bad2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/uniquejoin3.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/garbage.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/invalid_create_table.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/invalid_select.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/macro_reserved_word.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/missing_overwrite.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/quoted_string.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/wrong_distinct2.q.out


> Unnecessary newline at the end of message of ParserException
> 
>
> Key: HIVE-5574
> URL: https://issues.apache.org/jira/browse/HIVE-5574
> Project: Hive
>  Issue Type: Bug
>  Components: Diagnosability
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5574.1.patch.txt
>
>
> Error messages in ParserException end with a newline, which is a little 
> annoying.
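The fix idea is simply to strip the trailing newline before surfacing the message. A minimal sketch (the method name is an assumption, not the actual ParseException code):

```java
public class TrimMessage {
    // Remove any trailing \n or \r\n the parser appended to the message.
    static String clean(String msg) {
        return msg.replaceAll("[\\r\\n]+$", "");
    }

    public static void main(String[] args) {
        System.out.println(clean("line 1:5 cannot recognize input\n"));
    }
}
```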



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5070) Implement listLocatedStatus() in ProxyFileSystem for 0.23 shim

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801117#comment-13801117
 ] 

Hudson commented on HIVE-5070:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2412 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2412/])
HIVE-5070 - Implement listLocatedStatus() in ProxyFileSystem for 0.23 shim 
(shanyu zhao via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534240)
* 
/hive/trunk/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java
* 
/hive/trunk/shims/src/0.20S/java/org/apache/hadoop/hive/shims/Hadoop20SShims.java
* 
/hive/trunk/shims/src/0.23/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java
* 
/hive/trunk/shims/src/common-secure/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java
* /hive/trunk/shims/src/common/java/org/apache/hadoop/fs/ProxyFileSystem.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


> Implement listLocatedStatus() in ProxyFileSystem for 0.23 shim
> --
>
> Key: HIVE-5070
> URL: https://issues.apache.org/jira/browse/HIVE-5070
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 0.12.0
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Fix For: 0.13.0
>
> Attachments: HIVE-5070.3.patch, HIVE-5070.4.patch, 
> HIVE-5070.patch.txt, HIVE-5070-v2.patch, HIVE-5070-v3.patch, 
> HIVE-5070-v4-trunk.patch
>
>
> MAPREDUCE-1981 introduced a new API for FileSystem - listLocatedStatus. It is 
> used in Hadoop's FileInputFormat.getSplits(). Hive's ProxyFileSystem class 
> needs to implement this API in order to make Hive unit tests work.
> Otherwise, you'll see these exceptions when running the TestCliDriver test 
> case, e.g. the results of running allcolref_in_udf.q:
> {noformat}
> [junit] Running org.apache.hadoop.hive.cli.TestCliDriver
> [junit] Begin query: allcolref_in_udf.q
> [junit] java.lang.IllegalArgumentException: Wrong FS: 
> pfile:/GitHub/Monarch/project/hive-monarch/build/ql/test/data/warehouse/src, 
> expected: file:///
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
> [junit]   at 
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:69)
> [junit]   at 
> org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:375)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1482)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1522)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem$4.<init>(FileSystem.java:1798)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1797)
> [junit]   at 
> org.apache.hadoop.fs.ChecksumFileSystem.listLocatedStatus(ChecksumFileSystem.java:579)
> [junit]   at 
> org.apache.hadoop.fs.FilterFileSystem.listLocatedStatus(FilterFileSystem.java:235)
> [junit]   at 
> org.apache.hadoop.fs.FilterFileSystem.listLocatedStatus(FilterFileSystem.java:235)
> [junit]   at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
> [junit]   at 
> org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:217)
> [junit]   at 
> org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:69)
> [junit]   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:385)
> [junit]   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:351)
> [junit]   at 
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:389)
> [junit]   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:503)
> [junit]   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:495)
> [junit]   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:390)
> [junit]   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
> [junit]   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
> [junit]   at java.security.AccessController.doPrivileged(Native Method)
> [junit]   at javax.security.auth.Subject.doAs(Subject.java:396)
> [junit]   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1481)
> [junit]   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
> [junit]   at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
> [junit]   at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:552)

[jira] [Commented] (HIVE-5572) Fails of non-sql command are not propagated to jdbc2 client

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801121#comment-13801121
 ] 

Hudson commented on HIVE-5572:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #209 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/209/])
HIVE-5572 : Fails of non-sql command are not propagated to jdbc2 client (Navis 
reviewed by Brock Noland) (navis: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534034)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java


> Fails of non-sql command are not propagated to jdbc2 client
> ---
>
> Key: HIVE-5572
> URL: https://issues.apache.org/jira/browse/HIVE-5572
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5572.1.patch.txt
>
>
> For example, after restricted configs are set, attempting to override one 
> with a set command appears to succeed but does not.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5070) Implement listLocatedStatus() in ProxyFileSystem for 0.23 shim

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801119#comment-13801119
 ] 

Hudson commented on HIVE-5070:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #209 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/209/])
HIVE-5070 - Implement listLocatedStatus() in ProxyFileSystem for 0.23 shim 
(shanyu zhao via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534240)
* 
/hive/trunk/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java
* 
/hive/trunk/shims/src/0.20S/java/org/apache/hadoop/hive/shims/Hadoop20SShims.java
* 
/hive/trunk/shims/src/0.23/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java
* 
/hive/trunk/shims/src/common-secure/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java
* /hive/trunk/shims/src/common/java/org/apache/hadoop/fs/ProxyFileSystem.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


> Implement listLocatedStatus() in ProxyFileSystem for 0.23 shim
> --
>
> Key: HIVE-5070
> URL: https://issues.apache.org/jira/browse/HIVE-5070
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 0.12.0
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Fix For: 0.13.0
>
> Attachments: HIVE-5070.3.patch, HIVE-5070.4.patch, 
> HIVE-5070.patch.txt, HIVE-5070-v2.patch, HIVE-5070-v3.patch, 
> HIVE-5070-v4-trunk.patch
>
>
> MAPREDUCE-1981 introduced a new API for FileSystem - listLocatedStatus. It is 
> used in Hadoop's FileInputFormat.getSplits(). Hive's ProxyFileSystem class 
> needs to implement this API in order to make Hive unit tests work.
> Otherwise, you'll see these exceptions when running the TestCliDriver test 
> case, e.g. the results of running allcolref_in_udf.q:
> {noformat}
> [junit] Running org.apache.hadoop.hive.cli.TestCliDriver
> [junit] Begin query: allcolref_in_udf.q
> [junit] java.lang.IllegalArgumentException: Wrong FS: 
> pfile:/GitHub/Monarch/project/hive-monarch/build/ql/test/data/warehouse/src, 
> expected: file:///
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
> [junit]   at 
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:69)
> [junit]   at 
> org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:375)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1482)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1522)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem$4.<init>(FileSystem.java:1798)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1797)
> [junit]   at 
> org.apache.hadoop.fs.ChecksumFileSystem.listLocatedStatus(ChecksumFileSystem.java:579)
> [junit]   at 
> org.apache.hadoop.fs.FilterFileSystem.listLocatedStatus(FilterFileSystem.java:235)
> [junit]   at 
> org.apache.hadoop.fs.FilterFileSystem.listLocatedStatus(FilterFileSystem.java:235)
> [junit]   at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
> [junit]   at 
> org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:217)
> [junit]   at 
> org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:69)
> [junit]   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:385)
> [junit]   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:351)
> [junit]   at 
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:389)
> [junit]   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:503)
> [junit]   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:495)
> [junit]   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:390)
> [junit]   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
> [junit]   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
> [junit]   at java.security.AccessController.doPrivileged(Native Method)
> [junit]   at javax.security.auth.Subject.doAs(Subject.java:396)
> [junit]   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1481)
> [junit]   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
> [junit]   at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
> [junit]   at org.apache.hadoop.mapred.JobClient$1.run(JobCli
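The "Wrong FS" failure above comes down to path translation: a proxy filesystem accepts paths under its own scheme (pfile:) but must forward them to the underlying filesystem under its scheme (file:), and translate results back. A minimal, self-contained sketch of that scheme-rewriting idea using only java.net.URI; the class and method names are illustrative and are not the actual ProxyFileSystem code:

```java
import java.net.URI;

// Illustrative sketch of the path "swizzling" a proxy filesystem performs:
// requests arrive with the proxy scheme (e.g. pfile:), are forwarded to the
// real filesystem with its scheme (e.g. file:), and results are translated
// back. Not the actual Hive ProxyFileSystem implementation.
public class SchemeSwizzle {
    // Rewrite a URI from one scheme to another, keeping authority and path.
    static URI swizzle(URI uri, String fromScheme, String toScheme) {
        if (!fromScheme.equals(uri.getScheme())) {
            // Mirrors the "Wrong FS: ..., expected: ..." check in the trace
            throw new IllegalArgumentException(
                "Wrong FS: " + uri + ", expected scheme: " + fromScheme);
        }
        String authority = uri.getAuthority() == null ? "" : uri.getAuthority();
        return URI.create(toScheme + "://" + authority + uri.getPath());
    }

    public static void main(String[] args) {
        URI proxy = URI.create("pfile:///warehouse/src");
        URI real = swizzle(proxy, "pfile", "file");   // forward direction
        URI back = swizzle(real, "file", "pfile");    // reverse direction
        System.out.println(real);
        System.out.println(back);
    }
}
```

A real implementation would apply this translation inside each overridden FileSystem method, including listLocatedStatus(), before delegating.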

[jira] [Commented] (HIVE-5574) Unnecessary newline at the end of message of ParserException

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801118#comment-13801118
 ] 

Hudson commented on HIVE-5574:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #209 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/209/])
HIVE-5574 : Unnecessary newline at the end of message of ParserException (Navis 
via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534203)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseException.java
* 
/hive/trunk/ql/src/test/results/clientnegative/alter_partition_coltype_2columns.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/alter_partition_coltype_invalidtype.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive_partspec3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/clusterbyorderby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/column_rename3.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/columnstats_partlvl_multiple_part_clause.q.out
* /hive/trunk/ql/src/test/results/clientnegative/create_or_replace_view6.q.out
* /hive/trunk/ql/src/test/results/clientnegative/invalid_create_tbl2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/invalid_select_expression.q.out
* /hive/trunk/ql/src/test/results/clientnegative/invalid_tbl_name.q.out
* /hive/trunk/ql/src/test/results/clientnegative/lateral_view_join.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/ptf_negative_DistributeByOrderBy.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/ptf_negative_PartitionBySortBy.q.out
* /hive/trunk/ql/src/test/results/clientnegative/ptf_window_boundaries.q.out
* /hive/trunk/ql/src/test/results/clientnegative/ptf_window_boundaries2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/select_charliteral.q.out
* /hive/trunk/ql/src/test/results/clientnegative/select_udtf_alias.q.out
* /hive/trunk/ql/src/test/results/clientnegative/set_table_property.q.out
* /hive/trunk/ql/src/test/results/clientnegative/show_columns2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/show_tables_bad1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/show_tables_bad2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/uniquejoin3.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/garbage.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/invalid_create_table.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/invalid_select.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/macro_reserved_word.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/missing_overwrite.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/quoted_string.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/wrong_distinct2.q.out


> Unnecessary newline at the end of message of ParserException
> 
>
> Key: HIVE-5574
> URL: https://issues.apache.org/jira/browse/HIVE-5574
> Project: Hive
>  Issue Type: Bug
>  Components: Diagnosability
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5574.1.patch.txt
>
>
> Error messages in ParserException end with a newline, which is a little 
> annoying.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5559) Stats publisher fails for list bucketing when IDs are too long

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801127#comment-13801127
 ] 

Hudson commented on HIVE-5559:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #209 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/209/])
HIVE-5559 : Stats publisher fails for list bucketing when IDs are too long 
(Jason Dere via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534024)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/test/queries/clientpositive/stats_list_bucket.q
* /hive/trunk/ql/src/test/results/clientpositive/stats_list_bucket.q.out
* /hive/trunk/ql/src/test/results/compiler/plan/case_sensitivity.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input7.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input9.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_testsequencefile.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample7.q.xml


> Stats publisher fails for list bucketing when IDs are too long
> --
>
> Key: HIVE-5559
> URL: https://issues.apache.org/jira/browse/HIVE-5559
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.13.0
>
> Attachments: HIVE-5559.1.patch, HIVE-5559.2.patch
>
>
> Several of the list_bucket_* q files fail if the Hive source path gets too 
> long. It looks like the numRows and rawDataSize stats aren't getting updated 
> properly in this situation.





[jira] [Commented] (HIVE-5578) hcat script doesn't include jars from HIVE_AUX_JARS_PATH

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801125#comment-13801125
 ] 

Hudson commented on HIVE-5578:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #209 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/209/])
HIVE-5578 - hcat script doesn't include jars from HIVE_AUX_JARS_PATH (Mohammad 
Kamrul Islam via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534242)
* /hive/trunk/hcatalog/bin/hcat


> hcat script doesn't include jars from HIVE_AUX_JARS_PATH
> 
>
> Key: HIVE-5578
> URL: https://issues.apache.org/jira/browse/HIVE-5578
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.5.0
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
> Fix For: 0.13.0
>
> Attachments: HIVE-5578.1.patch
>
>
> The hcat script includes jars from $HIVE_HOME/lib but not from HIVE_AUX_JARS_PATH.





[jira] [Commented] (HIVE-5578) hcat script doesn't include jars from HIVE_AUX_JARS_PATH

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801122#comment-13801122
 ] 

Hudson commented on HIVE-5578:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2412 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2412/])
HIVE-5578 - hcat script doesn't include jars from HIVE_AUX_JARS_PATH (Mohammad 
Kamrul Islam via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534242)
* /hive/trunk/hcatalog/bin/hcat


> hcat script doesn't include jars from HIVE_AUX_JARS_PATH
> 
>
> Key: HIVE-5578
> URL: https://issues.apache.org/jira/browse/HIVE-5578
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.5.0
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
> Fix For: 0.13.0
>
> Attachments: HIVE-5578.1.patch
>
>
> The hcat script includes jars from $HIVE_HOME/lib but not from HIVE_AUX_JARS_PATH.





[jira] [Updated] (HIVE-5601) NPE in ORC's PPD when using select * from table with where predicate

2013-10-21 Thread Prasanth J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth J updated HIVE-5601:
-

Attachment: HIVE-5601.trunk.1.patch.txt
HIVE-5601.branch-12.1.patch.txt

> NPE in ORC's PPD  when using select * from table with where predicate
> -
>
> Key: HIVE-5601
> URL: https://issues.apache.org/jira/browse/HIVE-5601
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Prasanth J
>Assignee: Prasanth J
>Priority: Critical
>  Labels: ORC
> Attachments: HIVE-5601.branch-12.1.patch.txt, 
> HIVE-5601.trunk.1.patch.txt
>
>
> ORCInputFormat has a method findIncludedColumns() which returns a boolean 
> array of included columns. For the following query, 
> {code}select * from qlog_orc where id<1000 limit 10;{code}
> where all columns are selected, findIncludedColumns() returns null. This 
> results in an NPE when PPD is enabled. The stack trace follows:
> {code}Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.planReadPartialDataStreams(RecordReaderImpl.java:2387)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readPartialDataStreams(RecordReaderImpl.java:2543)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readStripe(RecordReaderImpl.java:2200)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:2573)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:2615)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:132)
>   at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rows(ReaderImpl.java:348)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.<init>(OrcInputFormat.java:99)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:241)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:237)
>   ... 8 more{code}





[jira] [Created] (HIVE-5601) NPE in ORC's PPD when using select * from table with where predicate

2013-10-21 Thread Prasanth J (JIRA)
Prasanth J created HIVE-5601:


 Summary: NPE in ORC's PPD  when using select * from table with 
where predicate
 Key: HIVE-5601
 URL: https://issues.apache.org/jira/browse/HIVE-5601
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Prasanth J
Assignee: Prasanth J
Priority: Critical


ORCInputFormat has a method findIncludedColumns() which returns a boolean 
array of included columns. For the following query, 
{code}select * from qlog_orc where id<1000 limit 10;{code}
where all columns are selected, findIncludedColumns() returns null. This 
results in an NPE when PPD is enabled. The stack trace follows:
{code}Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.planReadPartialDataStreams(RecordReaderImpl.java:2387)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readPartialDataStreams(RecordReaderImpl.java:2543)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readStripe(RecordReaderImpl.java:2200)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:2573)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:2615)
at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:132)
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rows(ReaderImpl.java:348)
at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.<init>(OrcInputFormat.java:99)
at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:241)
at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:237)
... 8 more{code}
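The defensive pattern behind a fix for this kind of NPE is to treat a null "included columns" array as "all columns included" instead of dereferencing it. A hedged, self-contained sketch of that idea; the method name is illustrative, not Hive's actual code:

```java
// Sketch of the null-guard pattern for the NPE described above: a null
// "included columns" array means "select *", so every column counts as
// included rather than triggering a NullPointerException.
public class IncludedColumns {
    static boolean isIncluded(boolean[] included, int column) {
        // null means all columns were selected, so everything is included
        return included == null || included[column];
    }

    public static void main(String[] args) {
        System.out.println(isIncluded(null, 5));                       // true: select *
        System.out.println(isIncluded(new boolean[]{true, false}, 1)); // false: pruned
    }
}
```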





[jira] [Commented] (HIVE-5441) Async query execution doesn't return resultset status

2013-10-21 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801097#comment-13801097
 ] 

Vaibhav Gumashta commented on HIVE-5441:


[~prasadm] Another query: so I'm guessing that after compilation, in the use 
case you pointed to, you're planning to use the presence of a fetch task in the 
plan (like here: 
https://github.com/apache/hive/blob/trunk/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java#L110)
 to determine whether it will generate a result set before the query is run, 
rather than setting TOperationHandle#hasResultSet to true?

> Async query execution doesn't return resultset status
> -
>
> Key: HIVE-5441
> URL: https://issues.apache.org/jira/browse/HIVE-5441
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5441.1.patch, HIVE-5441.3.patch
>
>
> For synchronous statement execution (SQL as well as metadata and other 
> operations), the operation handle includes a boolean flag indicating whether 
> the statement returns a result set. For async execution, it is always set to 
> false.





[jira] [Commented] (HIVE-3976) Support specifying scale and precision with Hive decimal type

2013-10-21 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801096#comment-13801096
 ] 

Xuefu Zhang commented on HIVE-3976:
---

Functional document is updated at wiki: 
https://cwiki.apache.org/confluence/download/attachments/27362075/Hive_Decimal_Precision_Scale_Support.pdf

Highlights:
1. The maximum precision and scale are reduced to 38.
2. Emphasizes the SQL standard's rounding behavior for the multiplication operation.
3. Clarifies the system default precision/scale for unknown decimal types.

> Support specifying scale and precision with Hive decimal type
> -
>
> Key: HIVE-3976
> URL: https://issues.apache.org/jira/browse/HIVE-3976
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor, Types
>Affects Versions: 0.11.0
>Reporter: Mark Grover
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-3976.1.patch, HIVE-3976.2.patch, HIVE-3976.3.patch, 
> HIVE-3976.4.patch, HIVE-3976.5.patch, HIVE-3976.6.patch, HIVE-3976.7.patch, 
> HIVE-3976.8.patch, HIVE-3976.9.patch, HIVE-3976.patch, remove_prec_scale.diff
>
>
> HIVE-2693 introduced support for the Decimal datatype in Hive. However, the 
> current implementation has unlimited precision and provides no way to specify 
> precision and scale when creating a table.
> For example, MySQL allows users to specify scale and precision of the decimal 
> datatype when creating the table:
> {code}
> CREATE TABLE numbers (a DECIMAL(20,2));
> {code}
> Hive should support something similar too.
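A DECIMAL(precision, scale) declaration implies validation of the two parameters against the type's limits. A hedged sketch of that check, using the 38-digit ceiling from the functional document linked earlier in this thread; the constant and method names are illustrative, not Hive's actual code:

```java
// Sketch of validating a decimal(precision, scale) declaration such as
// DECIMAL(20,2): precision must be in (0, 38] and scale in [0, precision].
// MAX_PRECISION mirrors the 38-digit ceiling from the functional document;
// everything else here is illustrative.
public class DecimalSpec {
    static final int MAX_PRECISION = 38;

    static boolean isValid(int precision, int scale) {
        return precision > 0 && precision <= MAX_PRECISION
            && scale >= 0 && scale <= precision;
    }

    public static void main(String[] args) {
        System.out.println(isValid(20, 2));  // true: DECIMAL(20,2) as in the example
        System.out.println(isValid(40, 2));  // false: exceeds the 38-digit ceiling
        System.out.println(isValid(10, 12)); // false: scale exceeds precision
    }
}
```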





[jira] [Commented] (HIVE-5559) Stats publisher fails for list bucketing when IDs are too long

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801078#comment-13801078
 ] 

Hudson commented on HIVE-5559:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #514 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/514/])
HIVE-5559 : Stats publisher fails for list bucketing when IDs are too long 
(Jason Dere via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534024)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/test/queries/clientpositive/stats_list_bucket.q
* /hive/trunk/ql/src/test/results/clientpositive/stats_list_bucket.q.out
* /hive/trunk/ql/src/test/results/compiler/plan/case_sensitivity.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input7.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input9.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_testsequencefile.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample7.q.xml


> Stats publisher fails for list bucketing when IDs are too long
> --
>
> Key: HIVE-5559
> URL: https://issues.apache.org/jira/browse/HIVE-5559
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.13.0
>
> Attachments: HIVE-5559.1.patch, HIVE-5559.2.patch
>
>
> Several of the list_bucket_* q files fail if the Hive source path gets too 
> long. It looks like the numRows and rawDataSize stats aren't getting updated 
> properly in this situation.





[jira] [Commented] (HIVE-5411) Migrate expression serialization to Kryo

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801077#comment-13801077
 ] 

Hudson commented on HIVE-5411:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #514 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/514/])
HIVE-5411 : Migrate expression serialization to Kryo (Ashutosh Chauhan via 
Thejas Nair) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534023)
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeGenericFuncEvaluator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/udf/VectorUDFAdaptor.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/IndexSearchCondition.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/compact/CompactIndexHandler.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/SearchArgument.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/SearchArgumentImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteCanApplyProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/pcr/PcrExprProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartExprEvalUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionExpressionForMetastore.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeGenericFuncDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/ExprWalkerProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/metastore/TestMetastoreExpr.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestUtilities.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/vector/TestVectorSelectOperator.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/vector/TestVectorizationContext.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/sarg/TestSearchArgumentImpl.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/physical/TestVectorizer.java
* /hive/trunk/ql/src/test/results/compiler/plan/case_sensitivity.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/cast1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input20.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input8.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input9.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_part1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_testxpath.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_testxpath2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join7.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join8.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample2.q.xml
* /hive/trunk/ql

[jira] [Commented] (HIVE-5572) Fails of non-sql command are not propagated to jdbc2 client

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801076#comment-13801076
 ] 

Hudson commented on HIVE-5572:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #514 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/514/])
HIVE-5572 : Fails of non-sql command are not propagated to jdbc2 client (Navis 
reviewed by Brock Noland) (navis: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534034)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java


> Fails of non-sql command are not propagated to jdbc2 client
> ---
>
> Key: HIVE-5572
> URL: https://issues.apache.org/jira/browse/HIVE-5572
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5572.1.patch.txt
>
>
> For example, after setting restricted configs, trying to override them with a 
> set command appears to succeed but does not.





[jira] [Commented] (HIVE-4957) Restrict number of bit vectors, to prevent out of Java heap memory

2013-10-21 Thread Shreepadma Venugopalan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801070#comment-13801070
 ] 

Shreepadma Venugopalan commented on HIVE-4957:
--

Thanks, Brock!

> Restrict number of bit vectors, to prevent out of Java heap memory
> --
>
> Key: HIVE-4957
> URL: https://issues.apache.org/jira/browse/HIVE-4957
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.11.0
>Reporter: Brock Noland
>Assignee: Shreepadma Venugopalan
> Fix For: 0.13.0
>
> Attachments: HIVE-4957.1.patch, HIVE-4957.2.patch
>
>
> Normally, increasing the number of bit vectors increases calculation accuracy. 
> For example,
> {noformat}
> select compute_stats(a, 40) from test_hive;
> {noformat}
> generally gets better accuracy than
> {noformat}
> select compute_stats(a, 16) from test_hive;
> {noformat}
> But a larger number of bit vectors also makes the query run slower. Beyond 
> about 50 bit vectors, accuracy no longer improves, but memory usage still 
> grows, and a large enough number crashes Hive. Hive currently does not 
> prevent the user from specifying a ridiculously large number of bit vectors 
> in a 'compute_stats' query.
> One example
> {noformat}
> select compute_stats(a, 9) from column_eight_types;
> {noformat}
> crashes Hive.
> {noformat}
> 2012-12-20 23:21:52,247 Stage-1 map = 0%,  reduce = 0%
> 2012-12-20 23:22:11,315 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.29 
> sec
> MapReduce Total cumulative CPU time: 290 msec
> Ended Job = job_1354923204155_0777 with errors
> Error during job, obtaining debugging information...
> Job Tracking URL: 
> http://cs-10-20-81-171.cloud.cloudera.com:8088/proxy/application_1354923204155_0777/
> Examining task ID: task_1354923204155_0777_m_00 (and more) from job 
> job_1354923204155_0777
> Task with the most failures(4): 
> -
> Task ID:
>   task_1354923204155_0777_m_00
> URL:
>   
> http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1354923204155_0777&tipid=task_1354923204155_0777_m_00
> -
> Diagnostic Messages for this Task:
> Error: Java heap space
> {noformat}
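The guard the issue calls for amounts to clamping the user-supplied bit-vector count to a sane range before any memory is allocated, rather than letting an absurd value exhaust the heap. A hedged sketch of that idea; the cap of 1024 is illustrative, not the limit chosen in HIVE-4957:

```java
// Sketch of the restriction described above: clamp the requested number of
// bit vectors before allocation instead of letting a huge request cause
// "Error: Java heap space". The cap value here is illustrative only.
public class BitVectorGuard {
    static final int MAX_NUM_BIT_VECTORS = 1024;

    static int clamp(int requested) {
        if (requested < 1) {
            return 1;                     // at least one bit vector
        }
        return Math.min(requested, MAX_NUM_BIT_VECTORS);
    }

    public static void main(String[] args) {
        System.out.println(clamp(16));         // reasonable value passes through
        System.out.println(clamp(999999999));  // absurd value is capped
    }
}
```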





[jira] [Assigned] (HIVE-5268) HiveServer2 accumulates orphaned OperationHandle objects when a client fails while executing query

2013-10-21 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan reassigned HIVE-5268:
--

Assignee: Thiruvel Thirumoolan  (was: Vaibhav Gumashta)

> HiveServer2 accumulates orphaned OperationHandle objects when a client fails 
> while executing query
> --
>
> Key: HIVE-5268
> URL: https://issues.apache.org/jira/browse/HIVE-5268
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Assignee: Thiruvel Thirumoolan
> Fix For: 0.13.0
>
> Attachments: HIVE-5268_prototype.patch
>
>
> When queries are executed against HiveServer2, an OperationHandle object 
> is stored in the OperationManager.handleToOperation HashMap. Currently it is 
> the duty of the JDBC client to explicitly close the statement to clean up the 
> entry in the map. If the client fails to close the statement, the 
> OperationHandle object is never cleaned up and accumulates on the server.
> This can potentially cause an OOM on the server over time. It can also be 
> used as a loophole by a malicious client to bring down the Hive server.





[jira] [Commented] (HIVE-5268) HiveServer2 accumulates orphaned OperationHandle objects when a client fails while executing query

2013-10-21 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801063#comment-13801063
 ] 

Vaibhav Gumashta commented on HIVE-5268:


[~thiruvel] Thanks Thiruvel - yes, please go ahead and assign it to yourself. It 
would be great if you could upload the patch to Review Board as well; it is much 
easier to browse through there.

Would be keen to hear more on the newer design you are proposing.

> HiveServer2 accumulates orphaned OperationHandle objects when a client fails 
> while executing query
> --
>
> Key: HIVE-5268
> URL: https://issues.apache.org/jira/browse/HIVE-5268
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-5268_prototype.patch
>
>
> When queries are executed against HiveServer2, an OperationHandle object 
> is stored in the OperationManager.handleToOperation HashMap. Currently it is 
> the duty of the JDBC client to explicitly close the statement to clean up the 
> entry in the map. If the client fails to close the statement, the 
> OperationHandle object is never cleaned up and accumulates on the server.
> This can potentially cause an OOM on the server over time. It can also be 
> used as a loophole by a malicious client to bring down the Hive server.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5217) Add long polling to asynchronous execution in HiveServer2

2013-10-21 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-5217:
---

Attachment: HIVE-5217.D12801.3.patch

> Add long polling to asynchronous execution in HiveServer2
> -
>
> Key: HIVE-5217
> URL: https://issues.apache.org/jira/browse/HIVE-5217
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 0.13.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-5217.D12801.2.patch, HIVE-5217.D12801.3.patch
>
>
> [HIVE-4617|https://issues.apache.org/jira/browse/HIVE-4617] provides support 
> for async execution in HS2. The client gets an operation handle which it can 
> poll to check on the operation status. However, the polling frequency is 
entirely left to the client, which can be resource-inefficient. Long polling 
solves this by blocking the client request to check the operation status 
> for a configurable amount of time (a new HS2 config) if the data is not 
> available, but responding immediately if the data is available.
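The long-polling behavior described here can be sketched as a blocking status check (a simplified model, not the actual HS2 Thrift API): the server waits up to a configured maximum before answering, but returns immediately once the operation finishes.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Simplified model of a long-polling status check: poll() blocks for up to
// maxWaitMillis, but returns as soon as the operation completes.
class LongPollStatus {
    private final CountDownLatch done = new CountDownLatch(1);

    void markFinished() {
        done.countDown();
    }

    // Returns "FINISHED" immediately if the operation is done; otherwise
    // blocks up to maxWaitMillis and then reports "RUNNING".
    String poll(long maxWaitMillis) throws InterruptedException {
        return done.await(maxWaitMillis, TimeUnit.MILLISECONDS)
                ? "FINISHED" : "RUNNING";
    }

    public static void main(String[] args) throws InterruptedException {
        LongPollStatus status = new LongPollStatus();
        System.out.println(status.poll(10)); // wait expires: "RUNNING"
        status.markFinished();
        System.out.println(status.poll(0));  // returns immediately: "FINISHED"
    }
}
```

The key property is the one the issue asks for: a fast operation answers the pending request immediately, while a slow one costs the client at most one round trip per configured wait interval instead of a tight polling loop.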



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5268) HiveServer2 accumulates orphaned OperationHandle objects when a client fails while executing query

2013-10-21 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HIVE-5268:
---

Attachment: HIVE-5268_prototype.patch

Attaching a preliminary patch for branch 12. As mentioned before, this patch is 
aggressive (it was a start) in cleaning up resources on the server side. As 
soon as a client disconnects, the resources are cleaned up on HS2 (if a query 
is running during disconnection, the resources are cleaned up at the end of 
the query). This approach was designed for Hive10 and I am working on porting 
it to trunk; a patch will be available for Hive12 too. The newer approach will 
handle disconnects during async query execution and also have timeouts after 
which handles/sessions will be cleaned up, instead of the existing aggressive 
approach.

Vaibhav, can I assign this to myself if you aren't working on this? Thanks!

> HiveServer2 accumulates orphaned OperationHandle objects when a client fails 
> while executing query
> --
>
> Key: HIVE-5268
> URL: https://issues.apache.org/jira/browse/HIVE-5268
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-5268_prototype.patch
>
>
> When a query is executed against HiveServer2, an OperationHandle object is 
> stored in the OperationManager.handleToOperation HashMap. Currently it is the 
> duty of the JDBC client to explicitly close the statement in order to clean 
> up the entry in the map. If the client fails to close the statement, the 
> OperationHandle object is never cleaned up and accumulates on the server.
> This can potentially cause an OOM on the server over time, and can also be 
> exploited by a malicious client to bring down the Hive server.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5514) webhcat_server.sh foreground option does not work as expected

2013-10-21 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801025#comment-13801025
 ] 

Thejas M Nair commented on HIVE-5514:
-

+1


> webhcat_server.sh foreground option does not work as expected
> -
>
> Key: HIVE-5514
> URL: https://issues.apache.org/jira/browse/HIVE-5514
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Brock Noland
>Priority: Minor
> Attachments: HIVE-5514.patch
>
>
> Executing the webhcat_server.sh script with the foreground option calls 
> hadoop without using exec. As a result, killing the webhcat_server.sh 
> process does not kill the real WebHCat server.
> The fix is to add the word exec in webhcat_server.sh:
> {noformat}
> function foreground_webhcat() {
> exec $start_cmd
> }
> {noformat}
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5425) Provide a configuration option to control the default stripe size for ORC

2013-10-21 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-5425:


Status: Patch Available  (was: Open)

> Provide a configuration option to control the default stripe size for ORC
> -
>
> Key: HIVE-5425
> URL: https://issues.apache.org/jira/browse/HIVE-5425
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: D13233.1.patch
>
>
> We should provide a configuration option to control the default stripe size.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5568) count(*) on ORC tables with predicate pushdown on partition columns fail

2013-10-21 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-5568:


Status: Patch Available  (was: Open)

> count(*) on ORC tables with predicate pushdown on partition columns fail
> 
>
> Key: HIVE-5568
> URL: https://issues.apache.org/jira/browse/HIVE-5568
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 0.12.1
>
> Attachments: D13485.1.patch, D13485.2.patch, D13485.3.patch
>
>
> If the query is:
> {code}
> select count(*) from orc_table where x = 10;
> {code}
> where x is a partition column and predicate pushdown is enabled, you'll get 
> an array out of bounds exception.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5580) push down predicates with an and-operator between non-SARGable predicates will get NPE

2013-10-21 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-5580:


Status: Patch Available  (was: Open)

> push down predicates with an and-operator between non-SARGable predicates 
> will get NPE
> --
>
> Key: HIVE-5580
> URL: https://issues.apache.org/jira/browse/HIVE-5580
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: D13533.1.patch
>
>
> When all of the predicates in an AND-operator in a SARG expression get 
> removed by the SARG builder, evaluation can end up with an NPE. 
> Sub-expressions are typically removed from AND-operators because they aren't 
> SARGable.
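A simplified model of the failure mode (hypothetical types, not the actual SearchArgument builder): when every child of an AND node is pruned as non-SARGable, a naive builder returns null and a later evaluation dereferences it; the guard is to collapse an empty AND to an always-true leaf instead.

```java
import java.util.ArrayList;
import java.util.List;

// Toy SARG node: either a leaf predicate or an AND over children.
class SargNode {
    final boolean sargable;       // leaf: can this predicate be pushed down?
    final List<SargNode> children; // non-null only for AND nodes

    SargNode(boolean sargable) { this.sargable = sargable; this.children = null; }
    SargNode(List<SargNode> kids) { this.sargable = true; this.children = kids; }

    // Prune non-SARGable leaves. An AND whose children are all pruned must
    // collapse to an always-true leaf; returning null instead is the kind of
    // gap that produces the NPE described in the issue.
    static SargNode prune(SargNode n) {
        if (n.children == null) {
            return n.sargable ? n : null;
        }
        List<SargNode> kept = new ArrayList<>();
        for (SargNode c : n.children) {
            SargNode p = prune(c);
            if (p != null) kept.add(p);
        }
        if (kept.isEmpty()) {
            return new SargNode(true); // guard: always-true leaf, never null
        }
        return new SargNode(kept);
    }

    public static void main(String[] args) {
        SargNode and = new SargNode(java.util.Arrays.asList(
                new SargNode(false), new SargNode(false)));
        SargNode pruned = prune(and);
        System.out.println(pruned != null && pruned.children == null); // true
    }
}
```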



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5514) webhcat_server.sh foreground option does not work as expected

2013-10-21 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801006#comment-13801006
 ] 

Brock Noland commented on HIVE-5514:


[~thejas] any chance you can review this HCatalog fix?

> webhcat_server.sh foreground option does not work as expected
> -
>
> Key: HIVE-5514
> URL: https://issues.apache.org/jira/browse/HIVE-5514
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Brock Noland
>Priority: Minor
> Attachments: HIVE-5514.patch
>
>
> Executing the webhcat_server.sh script with the foreground option calls 
> hadoop without using exec. As a result, killing the webhcat_server.sh 
> process does not kill the real WebHCat server.
> The fix is to add the word exec in webhcat_server.sh:
> {noformat}
> function foreground_webhcat() {
> exec $start_cmd
> }
> {noformat}
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4395) Support TFetchOrientation.FIRST for HiveServer2 FetchResults

2013-10-21 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800998#comment-13800998
 ] 

Brock Noland commented on HIVE-4395:


[~prasadm] looks like the latest patch doesn't apply. Can you rebase?

> Support TFetchOrientation.FIRST for HiveServer2 FetchResults
> 
>
> Key: HIVE-4395
> URL: https://issues.apache.org/jira/browse/HIVE-4395
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 0.11.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-4395-1.patch, HIVE-4395.1.patch, HIVE-4395.2.patch
>
>
> Currently HiveServer2 only supports fetching the next row 
> (TFetchOrientation.NEXT). This ticket is to implement support for 
> TFetchOrientation.FIRST, which resets the fetch position to the beginning of 
> the result set. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4957) Restrict number of bit vectors, to prevent out of Java heap memory

2013-10-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-4957:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Thank you for the contribution Shreepadma! I have committed this to trunk!

> Restrict number of bit vectors, to prevent out of Java heap memory
> --
>
> Key: HIVE-4957
> URL: https://issues.apache.org/jira/browse/HIVE-4957
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.11.0
>Reporter: Brock Noland
>Assignee: Shreepadma Venugopalan
> Fix For: 0.13.0
>
> Attachments: HIVE-4957.1.patch, HIVE-4957.2.patch
>
>
> Normally, increasing the number of bit vectors improves calculation accuracy. 
> For example,
> {noformat}
> select compute_stats(a, 40) from test_hive;
> {noformat}
> generally gets better accuracy than
> {noformat}
> select compute_stats(a, 16) from test_hive;
> {noformat}
> But a larger number of bit vectors also makes the query run slower. Beyond 
> about 50 bit vectors, accuracy no longer improves, but memory usage still 
> increases and can crash Hive if the number is too large. Hive currently does 
> not prevent users from specifying a ridiculously large number of bit vectors 
> in a 'compute_stats' query.
> One example
> {noformat}
> select compute_stats(a, 9) from column_eight_types;
> {noformat}
> crashes Hive.
> {noformat}
> 2012-12-20 23:21:52,247 Stage-1 map = 0%,  reduce = 0%
> 2012-12-20 23:22:11,315 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.29 
> sec
> MapReduce Total cumulative CPU time: 290 msec
> Ended Job = job_1354923204155_0777 with errors
> Error during job, obtaining debugging information...
> Job Tracking URL: 
> http://cs-10-20-81-171.cloud.cloudera.com:8088/proxy/application_1354923204155_0777/
> Examining task ID: task_1354923204155_0777_m_00 (and more) from job 
> job_1354923204155_0777
> Task with the most failures(4): 
> -
> Task ID:
>   task_1354923204155_0777_m_00
> URL:
>   
> http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1354923204155_0777&tipid=task_1354923204155_0777_m_00
> -
> Diagnostic Messages for this Task:
> Error: Java heap space
> {noformat}
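The fix direction named in the issue title (restricting the number of bit vectors) can be sketched as a simple bounds check (the constant and method names below are hypothetical, not Hive's actual code): reject requests beyond the point where extra vectors only add memory cost.

```java
// Simplified sketch of restricting the bit-vector count accepted by
// compute_stats: values beyond MAX_BIT_VECTORS add memory cost, not accuracy.
class BitVectorLimit {
    static final int MAX_BIT_VECTORS = 1024; // hypothetical cap

    static int validate(int requested) {
        if (requested < 1 || requested > MAX_BIT_VECTORS) {
            throw new IllegalArgumentException(
                    "number of bit vectors must be in [1, " + MAX_BIT_VECTORS
                    + "], got: " + requested);
        }
        return requested;
    }

    public static void main(String[] args) {
        System.out.println(validate(40)); // within the cap: accepted
        try {
            validate(Integer.MAX_VALUE);  // absurd value: rejected up front
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Failing fast at parse/validation time turns the heap-space crash in the log above into an immediate, actionable error message.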



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5070) Implement listLocatedStatus() in ProxyFileSystem for 0.23 shim

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800974#comment-13800974
 ] 

Hudson commented on HIVE-5070:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #146 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/146/])
HIVE-5070 - Implement listLocatedStatus() in ProxyFileSystem for 0.23 shim 
(shanyu zhao via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534240)
* 
/hive/trunk/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java
* 
/hive/trunk/shims/src/0.20S/java/org/apache/hadoop/hive/shims/Hadoop20SShims.java
* 
/hive/trunk/shims/src/0.23/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java
* 
/hive/trunk/shims/src/common-secure/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java
* /hive/trunk/shims/src/common/java/org/apache/hadoop/fs/ProxyFileSystem.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


> Implement listLocatedStatus() in ProxyFileSystem for 0.23 shim
> --
>
> Key: HIVE-5070
> URL: https://issues.apache.org/jira/browse/HIVE-5070
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 0.12.0
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Fix For: 0.13.0
>
> Attachments: HIVE-5070.3.patch, HIVE-5070.4.patch, 
> HIVE-5070.patch.txt, HIVE-5070-v2.patch, HIVE-5070-v3.patch, 
> HIVE-5070-v4-trunk.patch
>
>
> MAPREDUCE-1981 introduced a new API for FileSystem - listLocatedStatus. It is 
> used in Hadoop's FileInputFormat.getSplits(). Hive's ProxyFileSystem class 
> needs to implement this API in order to make the Hive unit tests work.
> Otherwise, you'll see exceptions like the following when running the 
> TestCliDriver test case, e.g. the results of running allcolref_in_udf.q:
> {noformat}
> [junit] Running org.apache.hadoop.hive.cli.TestCliDriver
> [junit] Begin query: allcolref_in_udf.q
> [junit] java.lang.IllegalArgumentException: Wrong FS: 
> pfile:/GitHub/Monarch/project/hive-monarch/build/ql/test/data/warehouse/src, 
> expected: file:///
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
> [junit]   at 
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:69)
> [junit]   at 
> org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:375)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1482)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1522)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem$4.(FileSystem.java:1798)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1797)
> [junit]   at 
> org.apache.hadoop.fs.ChecksumFileSystem.listLocatedStatus(ChecksumFileSystem.java:579)
> [junit]   at 
> org.apache.hadoop.fs.FilterFileSystem.listLocatedStatus(FilterFileSystem.java:235)
> [junit]   at 
> org.apache.hadoop.fs.FilterFileSystem.listLocatedStatus(FilterFileSystem.java:235)
> [junit]   at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
> [junit]   at 
> org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:217)
> [junit]   at 
> org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:69)
> [junit]   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:385)
> [junit]   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:351)
> [junit]   at 
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:389)
> [junit]   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:503)
> [junit]   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:495)
> [junit]   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:390)
> [junit]   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
> [junit]   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
> [junit]   at java.security.AccessController.doPrivileged(Native Method)
> [junit]   at javax.security.auth.Subject.doAs(Subject.java:396)
> [junit]   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1481)
> [junit]   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
> [junit]   at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
> [junit]   at org.apache.hadoop.mapred.JobClient$1.run(JobCli

[jira] [Commented] (HIVE-5411) Migrate expression serialization to Kryo

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800977#comment-13800977
 ] 

Hudson commented on HIVE-5411:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #146 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/146/])
HIVE-5411 : Migrate expression serialization to Kryo (Ashutosh Chauhan via 
Thejas Nair) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534023)
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeGenericFuncEvaluator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/udf/VectorUDFAdaptor.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/IndexSearchCondition.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/compact/CompactIndexHandler.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/SearchArgument.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/SearchArgumentImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteCanApplyProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/pcr/PcrExprProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartExprEvalUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionExpressionForMetastore.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeGenericFuncDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/ExprWalkerProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/metastore/TestMetastoreExpr.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestUtilities.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/vector/TestVectorSelectOperator.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/vector/TestVectorizationContext.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/sarg/TestSearchArgumentImpl.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/physical/TestVectorizer.java
* /hive/trunk/ql/src/test/results/compiler/plan/case_sensitivity.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/cast1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input20.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input8.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input9.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_part1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_testxpath.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_testxpath2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join7.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join8.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample2.q.xml
* /h

[jira] [Commented] (HIVE-5574) Unnecessary newline at the end of message of ParserException

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800972#comment-13800972
 ] 

Hudson commented on HIVE-5574:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #146 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/146/])
HIVE-5574 : Unnecessary newline at the end of message of ParserException (Navis 
via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534203)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseException.java
* 
/hive/trunk/ql/src/test/results/clientnegative/alter_partition_coltype_2columns.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/alter_partition_coltype_invalidtype.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive_partspec3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/clusterbyorderby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/column_rename3.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/columnstats_partlvl_multiple_part_clause.q.out
* /hive/trunk/ql/src/test/results/clientnegative/create_or_replace_view6.q.out
* /hive/trunk/ql/src/test/results/clientnegative/invalid_create_tbl2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/invalid_select_expression.q.out
* /hive/trunk/ql/src/test/results/clientnegative/invalid_tbl_name.q.out
* /hive/trunk/ql/src/test/results/clientnegative/lateral_view_join.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/ptf_negative_DistributeByOrderBy.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/ptf_negative_PartitionBySortBy.q.out
* /hive/trunk/ql/src/test/results/clientnegative/ptf_window_boundaries.q.out
* /hive/trunk/ql/src/test/results/clientnegative/ptf_window_boundaries2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/select_charliteral.q.out
* /hive/trunk/ql/src/test/results/clientnegative/select_udtf_alias.q.out
* /hive/trunk/ql/src/test/results/clientnegative/set_table_property.q.out
* /hive/trunk/ql/src/test/results/clientnegative/show_columns2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/show_tables_bad1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/show_tables_bad2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/uniquejoin3.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/garbage.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/invalid_create_table.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/invalid_select.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/macro_reserved_word.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/missing_overwrite.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/quoted_string.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/wrong_distinct2.q.out


> Unnecessary newline at the end of message of ParserException
> 
>
> Key: HIVE-5574
> URL: https://issues.apache.org/jira/browse/HIVE-5574
> Project: Hive
>  Issue Type: Bug
>  Components: Diagnosability
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5574.1.patch.txt
>
>
> Error messages in ParserException end with a newline, which is a little 
> annoying.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5559) Stats publisher fails for list bucketing when IDs are too long

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800978#comment-13800978
 ] 

Hudson commented on HIVE-5559:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #146 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/146/])
HIVE-5559 : Stats publisher fails for list bucketing when IDs are too long 
(Jason Dere via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534024)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/test/queries/clientpositive/stats_list_bucket.q
* /hive/trunk/ql/src/test/results/clientpositive/stats_list_bucket.q.out
* /hive/trunk/ql/src/test/results/compiler/plan/case_sensitivity.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input7.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input9.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_testsequencefile.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample7.q.xml


> Stats publisher fails for list bucketing when IDs are too long
> --
>
> Key: HIVE-5559
> URL: https://issues.apache.org/jira/browse/HIVE-5559
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.13.0
>
> Attachments: HIVE-5559.1.patch, HIVE-5559.2.patch
>
>
> Several of the list_bucket_* q files fail if the hive source path gets too 
> long. It looks like the numRows and rawDataSize stats aren't getting updated 
> properly in this situation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5572) Fails of non-sql command are not propagated to jdbc2 client

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800975#comment-13800975
 ] 

Hudson commented on HIVE-5572:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #146 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/146/])
HIVE-5572 : Fails of non-sql command are not propagated to jdbc2 client (Navis 
reviewed by Brock Noland) (navis: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534034)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java


> Fails of non-sql command are not propagated to jdbc2 client
> ---
>
> Key: HIVE-5572
> URL: https://issues.apache.org/jira/browse/HIVE-5572
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5572.1.patch.txt
>
>
> For example, after restricted configs have been set, trying to override one 
> with a set command appears to succeed but does not.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5578) hcat script doesn't include jars from HIVE_AUX_JARS_PATH

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800976#comment-13800976
 ] 

Hudson commented on HIVE-5578:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #146 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/146/])
HIVE-5578 - hcat script doesn't include jars from HIVE_AUX_JARS_PATH (Mohammad 
Kamrul Islam via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534242)
* /hive/trunk/hcatalog/bin/hcat


> hcat script doesn't include jars from HIVE_AUX_JARS_PATH
> 
>
> Key: HIVE-5578
> URL: https://issues.apache.org/jira/browse/HIVE-5578
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.5.0
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
> Fix For: 0.13.0
>
> Attachments: HIVE-5578.1.patch
>
>
> The hcat script includes jars from $HIVE_HOME/lib but not from HIVE_AUX_JARS_PATH.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5483) use metastore statistics to optimize max/min/etc. queries

2013-10-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800952#comment-13800952
 ] 

Ashutosh Chauhan commented on HIVE-5483:


Review request at https://reviews.facebook.net/D13605

> use metastore statistics to optimize max/min/etc. queries
> -
>
> Key: HIVE-5483
> URL: https://issues.apache.org/jira/browse/HIVE-5483
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-5483.patch
>
>
> We have discussed this a little bit.
> Hive can answer queries such as select max(c1) from t purely from the 
> metastore using partition statistics, provided that we know the statistics 
> are up to date.
> All data changes (e.g. adding new partitions) currently go through the 
> metastore, so we can track up-to-date-ness. If the statistics are not 
> up-to-date, the queries will have to read data (at least for outdated 
> partitions) until someone runs analyze table. We can also analyze new 
> partitions after they are added, if that is configured/specified in the 
> command.
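The optimization can be sketched as follows (hypothetical types, not Hive's actual metastore API): answer max(c1) from per-partition column statistics only when every partition's statistics are marked current, and otherwise signal that the data must actually be scanned.

```java
import java.util.List;
import java.util.OptionalLong;

// Per-partition column statistic, as the metastore might track it.
class PartitionStat {
    final long maxValue;
    final boolean upToDate; // false once the partition changed after the last analyze

    PartitionStat(long maxValue, boolean upToDate) {
        this.maxValue = maxValue;
        this.upToDate = upToDate;
    }

    // Returns max(c1) from statistics alone, or empty if any partition's
    // stats are stale (or there are no partitions) and a real scan is needed.
    static OptionalLong maxFromStats(List<PartitionStat> parts) {
        long max = Long.MIN_VALUE;
        for (PartitionStat p : parts) {
            if (!p.upToDate) return OptionalLong.empty();
            max = Math.max(max, p.maxValue);
        }
        return parts.isEmpty() ? OptionalLong.empty() : OptionalLong.of(max);
    }

    public static void main(String[] args) {
        List<PartitionStat> fresh = java.util.Arrays.asList(
                new PartitionStat(10, true), new PartitionStat(42, true));
        System.out.println(maxFromStats(fresh)); // answered from stats alone
        List<PartitionStat> stale = java.util.Arrays.asList(
                new PartitionStat(10, true), new PartitionStat(42, false));
        System.out.println(maxFromStats(stale)); // empty: must read the data
    }
}
```

The all-or-nothing up-to-date check matches the comment above: one outdated partition is enough to force reading data, at least for that partition, until analyze table runs.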



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5317) Implement insert, update, and delete in Hive with full ACID support

2013-10-21 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800912#comment-13800912
 ] 

Owen O'Malley commented on HIVE-5317:
-

Hive already depends on the metastore being up, so it isn't adding a new SPoF. 
Zookeeper adds additional semantic complexity, especially for highly dynamic 
data.

> Implement insert, update, and delete in Hive with full ACID support
> ---
>
> Key: HIVE-5317
> URL: https://issues.apache.org/jira/browse/HIVE-5317
> Project: Hive
>  Issue Type: New Feature
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: InsertUpdatesinHive.pdf
>
>
> Many customers want to be able to insert, update, and delete rows from Hive 
> tables with full ACID support. The use cases are varied, but the forms of the 
> queries that should be supported are:
> * INSERT INTO tbl SELECT …
> * INSERT INTO tbl VALUES ...
> * UPDATE tbl SET … WHERE …
> * DELETE FROM tbl WHERE …
> * MERGE INTO tbl USING src ON … WHEN MATCHED THEN ... WHEN NOT MATCHED THEN 
> ...
> * SET TRANSACTION LEVEL …
> * BEGIN/END TRANSACTION
> Use Cases
> * Once an hour, a set of inserts and updates (up to 500k rows) for various 
> dimension tables (e.g. customer, inventory, stores) needs to be processed. The 
> dimension tables have primary keys and are typically bucketed and sorted on 
> those keys.
> * Once a day a small set (up to 100k rows) of records need to be deleted for 
> regulatory compliance.
> * Once an hour a log of transactions is exported from an RDBMS and the fact 
> tables need to be updated (up to 1m rows) to reflect the new data. The 
> transactions are a combination of inserts, updates, and deletes. The table is 
> partitioned and bucketed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5411) Migrate expression serialization to Kryo

2013-10-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800894#comment-13800894
 ] 

Hudson commented on HIVE-5411:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2411 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2411/])
HIVE-5411 : Migrate expression serialization to Kryo (Ashutosh Chauhan via 
Thejas Nair) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1534023)
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeGenericFuncEvaluator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/udf/VectorUDFAdaptor.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/IndexSearchCondition.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/compact/CompactIndexHandler.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/SearchArgument.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/SearchArgumentImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteCanApplyProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/pcr/PcrExprProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartExprEvalUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionExpressionForMetastore.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeGenericFuncDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/ExprWalkerProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/metastore/TestMetastoreExpr.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestUtilities.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/vector/TestVectorSelectOperator.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/vector/TestVectorizationContext.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/sarg/TestSearchArgumentImpl.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/optimizer/physical/TestVectorizer.java
* /hive/trunk/ql/src/test/results/compiler/plan/case_sensitivity.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/cast1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input20.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input8.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input9.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_part1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_testxpath.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_testxpath2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join7.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join8.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample2.q.xml
* /hive/trunk/ql/s

Re: Review Request 14576: NOT expression doesn't handle nulls correctly.

2013-10-21 Thread Jitendra Pandey

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14576/
---

(Updated Oct. 21, 2013, 6 p.m.)


Review request for hive, Ashutosh Chauhan and Eric Hanson.


Bugs: HIVE-5430
https://issues.apache.org/jira/browse/HIVE-5430


Repository: hive-git


Description
---

NOT expression doesn't handle nulls correctly.
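For context on the bug: SQL uses three-valued logic, so NOT applied to NULL must yield NULL. A minimal sketch of a null-preserving vectorized NOT follows; the array layout (1/0 long values plus a null mask) is invented for illustration and is not the actual VectorizedRowBatch API:

```java
public class NotNullSketch {
    // Three-valued SQL NOT over a vectorized boolean column (1/0 longs with a
    // null mask): NOT NULL must stay NULL, so null entries are left untouched
    // rather than having their values flipped blindly.
    static void vectorNot(long[] vals, boolean[] isNull, int n) {
        for (int i = 0; i < n; i++) {
            if (!isNull[i]) {
                vals[i] = vals[i] == 0 ? 1 : 0; // flip only non-null entries
            }                                   // null entries stay null
        }
    }

    public static void main(String[] args) {
        long[] v = {1, 0, 0};
        boolean[] nulls = {false, false, true}; // third entry is NULL
        vectorNot(v, nulls, 3);
        System.out.println(java.util.Arrays.toString(v));     // [0, 1, 0]
        System.out.println(java.util.Arrays.toString(nulls)); // [false, false, true]
    }
}
```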


Diffs
-

  ant/src/org/apache/hadoop/hive/ant/GenVectorTestCode.java 4065067 
  ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticColumn.txt 
2ab4aec 
  ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticScalar.txt 
35890f8 
  ql/src/gen/vectorization/ExpressionTemplates/ColumnCompareColumn.txt 5ce261f 
  ql/src/gen/vectorization/ExpressionTemplates/ColumnCompareScalar.txt e333224 
  ql/src/gen/vectorization/ExpressionTemplates/ColumnUnaryFunc.txt eed6ebe 
  ql/src/gen/vectorization/ExpressionTemplates/ColumnUnaryMinus.txt dbcee4c 
  ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareColumn.txt 
1c16816 
  ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareScalar.txt 
bf02419 
  ql/src/gen/vectorization/ExpressionTemplates/FilterScalarCompareColumn.txt 
9a1d741 
  
ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareColumn.txt
 3625f44 
  
ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareScalar.txt
 690dd3c 
  
ql/src/gen/vectorization/ExpressionTemplates/FilterStringScalarCompareColumn.txt
 5ba7703 
  ql/src/gen/vectorization/ExpressionTemplates/ScalarArithmeticColumn.txt 
d9efbe7 
  ql/src/gen/vectorization/ExpressionTemplates/ScalarCompareColumn.txt 4a29724 
  ql/src/gen/vectorization/ExpressionTemplates/StringColumnCompareColumn.txt 
401fa3c 
  ql/src/gen/vectorization/ExpressionTemplates/StringColumnCompareScalar.txt 
a441d87 
  ql/src/gen/vectorization/ExpressionTemplates/StringScalarCompareColumn.txt 
635b3e6 
  
ql/src/gen/vectorization/TestTemplates/TestColumnScalarFilterVectorExpressionEvaluation.txt
 af30490 
  ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java 1f955d4 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExpressionDescriptor.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorFilterOperator.java 
101ea28 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java 
f213ee8 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorReduceSinkOperator.java 
55e11f8 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorSelectOperator.java 
5cbf618 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java 
79437a5 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedExpressions.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/AbstractFilterStringColLikeStringScalar.java
 d1b70ab 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ColAndCol.java 
a6cde8e 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ColOrCol.java 
b57a844 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ConstantVectorExpression.java
 119b4b9 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterColAndScalar.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterColOrScalar.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterExprAndExpr.java
 e6b511d 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterExprOrExpr.java
 703096c 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterNotExpr.java
 cdf404c 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterScalarAndColumn.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterScalarOrColumn.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterStringColLikeStringScalar.java
 2b54008 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterStringColRegExpStringScalar.java
 92c46b3 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLogWithBaseDoubleToDouble.java
 214b6a5 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLogWithBaseLongToDouble.java
 42cb926 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLongToString.java
 cb9d4d1 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncPowerDoubleToDouble.java
 dca4265 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncPowerLongToDouble.java
 59e058c 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncRand.java 
1a7fa2b 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncRandNoSeed.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/IdentityExpression.java
 758cfcb 
  ql/src/java/org/apache/ha

Re: Review Request 14576: NOT expression doesn't handle nulls correctly.

2013-10-21 Thread Jitendra Pandey

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14576/
---

(Updated Oct. 21, 2013, 5:59 p.m.)


Review request for hive and Ashutosh Chauhan.


Changes
---

Updated patch to remove static mappings. The uploaded patch doesn't include a 
small change in VectorMapJoinOperator.java because of some Review Board errors. 
The exact patch to be committed should be picked from the JIRA.


Bugs: HIVE-5430
https://issues.apache.org/jira/browse/HIVE-5430


Repository: hive-git


Description
---

NOT expression doesn't handle nulls correctly.


Diffs (updated)
-

  ant/src/org/apache/hadoop/hive/ant/GenVectorTestCode.java 4065067 
  ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticColumn.txt 
2ab4aec 
  ql/src/gen/vectorization/ExpressionTemplates/ColumnArithmeticScalar.txt 
35890f8 
  ql/src/gen/vectorization/ExpressionTemplates/ColumnCompareColumn.txt 5ce261f 
  ql/src/gen/vectorization/ExpressionTemplates/ColumnCompareScalar.txt e333224 
  ql/src/gen/vectorization/ExpressionTemplates/ColumnUnaryFunc.txt eed6ebe 
  ql/src/gen/vectorization/ExpressionTemplates/ColumnUnaryMinus.txt dbcee4c 
  ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareColumn.txt 
1c16816 
  ql/src/gen/vectorization/ExpressionTemplates/FilterColumnCompareScalar.txt 
bf02419 
  ql/src/gen/vectorization/ExpressionTemplates/FilterScalarCompareColumn.txt 
9a1d741 
  
ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareColumn.txt
 3625f44 
  
ql/src/gen/vectorization/ExpressionTemplates/FilterStringColumnCompareScalar.txt
 690dd3c 
  
ql/src/gen/vectorization/ExpressionTemplates/FilterStringScalarCompareColumn.txt
 5ba7703 
  ql/src/gen/vectorization/ExpressionTemplates/ScalarArithmeticColumn.txt 
d9efbe7 
  ql/src/gen/vectorization/ExpressionTemplates/ScalarCompareColumn.txt 4a29724 
  ql/src/gen/vectorization/ExpressionTemplates/StringColumnCompareColumn.txt 
401fa3c 
  ql/src/gen/vectorization/ExpressionTemplates/StringColumnCompareScalar.txt 
a441d87 
  ql/src/gen/vectorization/ExpressionTemplates/StringScalarCompareColumn.txt 
635b3e6 
  
ql/src/gen/vectorization/TestTemplates/TestColumnScalarFilterVectorExpressionEvaluation.txt
 af30490 
  ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java 1f955d4 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExpressionDescriptor.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorFilterOperator.java 
101ea28 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java 
f213ee8 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorReduceSinkOperator.java 
55e11f8 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorSelectOperator.java 
5cbf618 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java 
79437a5 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedExpressions.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/AbstractFilterStringColLikeStringScalar.java
 d1b70ab 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ColAndCol.java 
a6cde8e 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ColOrCol.java 
b57a844 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/ConstantVectorExpression.java
 119b4b9 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterColAndScalar.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterColOrScalar.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterExprAndExpr.java
 e6b511d 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterExprOrExpr.java
 703096c 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterNotExpr.java
 cdf404c 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterScalarAndColumn.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterScalarOrColumn.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterStringColLikeStringScalar.java
 2b54008 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterStringColRegExpStringScalar.java
 92c46b3 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLogWithBaseDoubleToDouble.java
 214b6a5 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLogWithBaseLongToDouble.java
 42cb926 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncLongToString.java
 cb9d4d1 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncPowerDoubleToDouble.java
 dca4265 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncPowerLongToDouble.java
 59e058c 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncRa

[jira] [Commented] (HIVE-5599) Change default logging level to INFO

2013-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800814#comment-13800814
 ] 

Hive QA commented on HIVE-5599:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12609433/HIVE-5599.patch

{color:green}SUCCESS:{color} +1 4428 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1184/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1184/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> Change default logging level to INFO
> 
>
> Key: HIVE-5599
> URL: https://issues.apache.org/jira/browse/HIVE-5599
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5599.patch
>
>
> The default logging level is WARN:
> https://github.com/apache/hive/blob/trunk/common/src/java/conf/hive-log4j.properties#L19
> but Hive logs lots of useful information at INFO level. Additionally, most 
> Hadoop projects log at INFO by default. Let's change the default logging level 
> to INFO.
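For reference, a sketch of the proposed change in hive-log4j.properties; the property name comes from the linked file, and the appender name (DRFA) should be treated as an assumption:

```properties
# common/src/java/conf/hive-log4j.properties
# before: hive.root.logger=WARN,DRFA
hive.root.logger=INFO,DRFA
```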



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5515) Writing to an HBase table throws IllegalArgumentException, failing job submission

2013-10-21 Thread Viraj Bhat (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800799#comment-13800799
 ] 

Viraj Bhat commented on HIVE-5515:
--

Hi Nick,
Our QE tested this on a Hadoop cluster. It now populates data.
Viraj

> Writing to an HBase table throws IllegalArgumentException, failing job 
> submission
> -
>
> Key: HIVE-5515
> URL: https://issues.apache.org/jira/browse/HIVE-5515
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.12.0
> Environment: Hadoop2, Hive 0.12.0, HBase-0.96RC
>Reporter: Nick Dimiduk
>Assignee: Viraj Bhat
>  Labels: hbase
> Fix For: 0.13.0
>
> Attachments: HIVE-5515.patch
>
>
> Inserting data into an HBase table via a Hive query fails with the following 
> message:
> {noformat}
> $ hive -e "FROM pgc INSERT OVERWRITE TABLE pagecounts_hbase SELECT pgc.* 
> WHERE rowkey LIKE 'en/q%' LIMIT 10;"
> ...
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=
> java.lang.IllegalArgumentException: Property value must not be null
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:810)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:792)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.copyTableJobPropertiesToConf(Utilities.java:2002)
> at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.checkOutputSpecs(FileSinkOperator.java:947)
> at 
> org.apache.hadoop.hive.ql.io.HiveOutputFormatImpl.checkOutputSpecs(HiveOutputFormatImpl.java:67)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:342)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at 
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:425)
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:348)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:731)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Job Subm
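The IllegalArgumentException above comes from Configuration.set rejecting a null property value. A hypothetical sketch of the null-safe property copying a fix needs (plain maps stand in for Configuration; the method and key names are invented for illustration):

```java
import java.util.*;

public class CopyPropsSketch {
    // Hypothetical null-safe copy of table job properties into a
    // Configuration-like map. Configuration.set throws
    // "Property value must not be null", so null pairs are skipped.
    static void copyTableJobProperties(Map<String, String> props, Map<String, String> conf) {
        if (props == null) return;
        for (Map.Entry<String, String> e : props.entrySet()) {
            if (e.getKey() != null && e.getValue() != null) {
                conf.put(e.getKey(), e.getValue()); // only non-null pairs reach conf
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("hbase.table.name", "pagecounts_hbase");
        props.put("broken.key", null); // would previously trigger the IAE
        Map<String, String> conf = new HashMap<>();
        copyTableJobProperties(props, conf);
        System.out.println(conf.size());
        System.out.println(conf.get("hbase.table.name"));
    }
}
```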

[jira] [Commented] (HIVE-5193) Columnar Pushdown for RC/ORC File not happening in HCatLoader

2013-10-21 Thread Viraj Bhat (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800796#comment-13800796
 ] 

Viraj Bhat commented on HIVE-5193:
--

Hi Thejas and Sushanth,
This test failure has something to do with ptest. I will investigate. The test 
runs fine on my machine.
Viraj

> Columnar Pushdown for RC/ORC File not happening in HCatLoader 
> --
>
> Key: HIVE-5193
> URL: https://issues.apache.org/jira/browse/HIVE-5193
> Project: Hive
>  Issue Type: Improvement
>  Components: HCatalog
>Affects Versions: 0.10.0, 0.11.0, 0.12.0
>Reporter: Viraj Bhat
>Assignee: Viraj Bhat
>  Labels: hcatalog
> Fix For: 0.13.0
>
> Attachments: HIVE-5193.patch
>
>
> Currently the HCatLoader is not taking advantage of ColumnProjectionUtils, 
> which would let it skip columns during reads. The information is available in 
> Pig; it just needs to get to the readers.
> Viraj



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4629) HS2 should support an API to retrieve query logs

2013-10-21 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800765#comment-13800765
 ] 

Brock Noland commented on HIVE-4629:


Carl, I see. As you know, I took your comments as a suggestion for a scrollable 
log. I apologize for misunderstanding your comments.

> HS2 should support an API to retrieve query logs
> 
>
> Key: HIVE-4629
> URL: https://issues.apache.org/jira/browse/HIVE-4629
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: Shreepadma Venugopalan
>Assignee: Shreepadma Venugopalan
> Attachments: HIVE-4629.1.patch, HIVE-4629.2.patch, 
> HIVE-4629-no_thrift.1.patch
>
>
> HiveServer2 should support an API to retrieve query logs. This is 
> particularly relevant because HiveServer2 supports async execution but 
> doesn't provide a way to report progress. Providing an API to retrieve query 
> logs will help report progress to the client.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5600) Fix PTest2 Maven support

2013-10-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5600:
---

Status: Patch Available  (was: Open)

> Fix PTest2 Maven support
> 
>
> Key: HIVE-5600
> URL: https://issues.apache.org/jira/browse/HIVE-5600
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5600.patch
>
>
> At present we don't download all the dependencies required in the source prep 
> phase; therefore, tests fail when the Maven repo has been cleared.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5600) Fix PTest2 Maven support

2013-10-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5600:
---

Attachment: HIVE-5600.patch

> Fix PTest2 Maven support
> 
>
> Key: HIVE-5600
> URL: https://issues.apache.org/jira/browse/HIVE-5600
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5600.patch
>
>
> At present we don't download all the dependencies required in the source prep 
> phase; therefore, tests fail when the Maven repo has been cleared.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5600) Fix PTest2 Maven support

2013-10-21 Thread Brock Noland (JIRA)
Brock Noland created HIVE-5600:
--

 Summary: Fix PTest2 Maven support
 Key: HIVE-5600
 URL: https://issues.apache.org/jira/browse/HIVE-5600
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland


At present we don't download all the dependencies required in the source prep 
phase; therefore, tests fail when the Maven repo has been cleared.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HIVE-4523) round() function with specified decimal places not consistent with mysql

2013-10-21 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang reassigned HIVE-4523:
-

Assignee: Xuefu Zhang

> round() function with specified decimal places not consistent with mysql 
> -
>
> Key: HIVE-4523
> URL: https://issues.apache.org/jira/browse/HIVE-4523
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Affects Versions: 0.7.1
>Reporter: Fred Desing
>Assignee: Xuefu Zhang
>Priority: Minor
>
> // hive
> hive> select round(150.000, 2) from temp limit 1;
> 150.0
> hive> select round(150, 2) from temp limit 1;
> 150.0
> // mysql
> mysql> select round(150.000, 2) from DUAL limit 1;
> round(150.000, 2)
> 150.00
> mysql> select round(150, 2) from DUAL limit 1;
> round(150, 2)
> 150
> http://dev.mysql.com/doc/refman/5.1/en/mathematical-functions.html#function_round
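The MySQL behavior shown above keeps the requested number of decimal places in the result's textual form (150.000 rounded to 2 places prints as 150.00, not 150.0). A sketch of that semantics using BigDecimal — illustrative only, not Hive's actual round() implementation:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundSketch {
    // MySQL-style round(x, d): set the result's scale to d decimal places,
    // so the printed form preserves trailing zeros (150.000 -> 150.00),
    // unlike double-based rounding, which prints 150.0.
    static BigDecimal round(BigDecimal x, int d) {
        return x.setScale(d, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        System.out.println(round(new BigDecimal("150.000"), 2)); // 150.00
        System.out.println(round(new BigDecimal("3.14159"), 2)); // 3.14
    }
}
```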



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5578) hcat script doesn't include jars from HIVE_AUX_JARS_PATH

2013-10-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5578:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

I have committed this to trunk! Thank you for your contribution, Mohammad!

> hcat script doesn't include jars from HIVE_AUX_JARS_PATH
> 
>
> Key: HIVE-5578
> URL: https://issues.apache.org/jira/browse/HIVE-5578
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.5.0
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
> Fix For: 0.13.0
>
> Attachments: HIVE-5578.1.patch
>
>
> The hcat script includes jars from $HIVE_HOME/lib but not from HIVE_AUX_JARS_PATH.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5070) Implement listLocatedStatus() in ProxyFileSystem for 0.23 shim

2013-10-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5070:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you for the contribution, shanyu! I have committed this to trunk!

> Implement listLocatedStatus() in ProxyFileSystem for 0.23 shim
> --
>
> Key: HIVE-5070
> URL: https://issues.apache.org/jira/browse/HIVE-5070
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 0.12.0
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Fix For: 0.13.0
>
> Attachments: HIVE-5070.3.patch, HIVE-5070.4.patch, 
> HIVE-5070.patch.txt, HIVE-5070-v2.patch, HIVE-5070-v3.patch, 
> HIVE-5070-v4-trunk.patch
>
>
> MAPREDUCE-1981 introduced a new FileSystem API, listLocatedStatus(). It is 
> used in Hadoop's FileInputFormat.getSplits(). Hive's ProxyFileSystem class 
> needs to implement this API in order to make the Hive unit tests work.
> Otherwise, you'll see these exceptions when running TestCliDriver test case, 
> e.g. results of running allcolref_in_udf.q:
> {noformat}
> [junit] Running org.apache.hadoop.hive.cli.TestCliDriver
> [junit] Begin query: allcolref_in_udf.q
> [junit] java.lang.IllegalArgumentException: Wrong FS: 
> pfile:/GitHub/Monarch/project/hive-monarch/build/ql/test/data/warehouse/src, 
> expected: file:///
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
> [junit]   at 
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:69)
> [junit]   at 
> org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:375)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1482)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1522)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem$4.(FileSystem.java:1798)
> [junit]   at 
> org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1797)
> [junit]   at 
> org.apache.hadoop.fs.ChecksumFileSystem.listLocatedStatus(ChecksumFileSystem.java:579)
> [junit]   at 
> org.apache.hadoop.fs.FilterFileSystem.listLocatedStatus(FilterFileSystem.java:235)
> [junit]   at 
> org.apache.hadoop.fs.FilterFileSystem.listLocatedStatus(FilterFileSystem.java:235)
> [junit]   at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
> [junit]   at 
> org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:217)
> [junit]   at 
> org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:69)
> [junit]   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:385)
> [junit]   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:351)
> [junit]   at 
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:389)
> [junit]   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:503)
> [junit]   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:495)
> [junit]   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:390)
> [junit]   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
> [junit]   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
> [junit]   at java.security.AccessController.doPrivileged(Native Method)
> [junit]   at javax.security.auth.Subject.doAs(Subject.java:396)
> [junit]   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1481)
> [junit]   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
> [junit]   at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
> [junit]   at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:552)
> [junit]   at java.security.AccessController.doPrivileged(Native Method)
> [junit]   at javax.security.auth.Subject.doAs(Subject.java:396)
> [junit]   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1481)
> [junit]   at 
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:552)
> [junit]   at 
> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:543)
> [junit]   at 
> org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:448)
> [junit]   at 
> org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
> [junit]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [junit]   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.ja
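The "Wrong FS" failure in the trace above occurs because the default listLocatedStatus path delegates to the wrapped local file system while the pfile: scheme is still on the path, and RawLocalFileSystem.checkPath() rejects anything that is not file:///. Any ProxyFileSystem override therefore has to rewrite the proxy scheme to the underlying one before delegating. A minimal, self-contained sketch of that scheme rewrite using plain java.net.URI (the method name is illustrative, not Hive's actual ProxyFileSystem API):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class SchemeSwizzle {
    // Rewrite the proxy scheme (e.g. "pfile") to the underlying one
    // (e.g. "file") so the wrapped file system's checkPath() accepts it.
    public static URI swizzleToUnderlying(URI proxyPath, String underlyingScheme)
            throws URISyntaxException {
        return new URI(underlyingScheme, proxyPath.getAuthority(),
                proxyPath.getPath(), proxyPath.getQuery(), proxyPath.getFragment());
    }

    public static void main(String[] args) throws URISyntaxException {
        URI p = new URI("pfile:/build/ql/test/data/warehouse/src");
        // prints file:/build/ql/test/data/warehouse/src
        System.out.println(swizzleToUnderlying(p, "file"));
    }
}
```

An overriding listLocatedStatus implementation would apply this rewrite to the incoming path, call the delegate, and then rewrite the returned statuses back to the proxy scheme.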

[jira] [Commented] (HIVE-5276) Skip useless string encoding stage for hiveserver2

2013-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800739#comment-13800739
 ] 

Hive QA commented on HIVE-5276:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12609390/D12879.3.patch

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1183/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1183/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests failed with: NonZeroExitCodeException: Command 'bash 
/data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and 
output '+ [[ -n '' ]]
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-1183/source-prep.txt
+ [[ true == \t\r\u\e ]]
+ rm -rf ivy maven
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'conf/hive-default.xml.template'
Reverted 'common/src/java/org/apache/hadoop/hive/conf/HiveConf.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/plan/FetchWork.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/parse/MapReduceCompiler.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/parse/QB.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/FetchTask.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java'
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
+ rm -rf build hcatalog/build hcatalog/core/build 
hcatalog/storage-handlers/hbase/build hcatalog/server-extensions/build 
hcatalog/webhcat/svr/build hcatalog/webhcat/java-client/build 
hcatalog/hcatalog-pig-adapter/build common/src/gen 
ql/src/test/results/clientpositive/orderby_query_bucketing.q.out 
ql/src/test/queries/clientpositive/orderby_query_bucketing.q 
ql/src/java/org/apache/hadoop/hive/ql/exec/MergeSortingFetcher.java 
ql/src/java/org/apache/hadoop/hive/ql/exec/RowFetcher.java
+ svn update
U ql/src/test/results/clientnegative/invalid_tbl_name.q.out
U ql/src/test/results/clientnegative/ptf_window_boundaries.q.out
U ql/src/test/results/clientnegative/column_rename3.q.out
U ql/src/test/results/clientnegative/show_columns2.q.out
U ql/src/test/results/clientnegative/invalid_select_expression.q.out
U ql/src/test/results/clientnegative/uniquejoin3.q.out
U ql/src/test/results/clientnegative/alter_partition_coltype_2columns.q.out
U ql/src/test/results/clientnegative/invalid_create_tbl2.q.out
U ql/src/test/results/clientnegative/clusterbyorderby.q.out
U ql/src/test/results/clientnegative/archive_partspec3.q.out
U ql/src/test/results/clientnegative/show_tables_bad1.q.out
U ql/src/test/results/clientnegative/create_or_replace_view6.q.out
U ql/src/test/results/clientnegative/select_charliteral.q.out
U ql/src/test/results/clientnegative/ptf_negative_PartitionBySortBy.q.out
U ql/src/test/results/clientnegative/columnstats_partlvl_multiple_part_clause.q.out
U ql/src/test/results/clientnegative/ptf_window_boundaries2.q.out
U ql/src/test/results/clientnegative/select_udtf_alias.q.out
U ql/src/test/results/clientnegative/lateral_view_join.q.out
U ql/src/test/results/clientnegative/set_table_property.q.out
U ql/src/test/results/clientnegative/ptf_negative_DistributeByOrderBy.q.out
U ql/src/test/results/clientnegative/show_tables_bad2.q.out
U ql/src/test/results/clientnegative/alter_partition_coltype_invalidtype.q.out
U ql/src/test/results/compiler/errors/invalid_select.q.out
U ql/src/test/results/compiler/errors/quoted_string.q.out
U ql/src/test/results/compiler/errors/garbage.q.out
U ql/src/test/results/compiler/errors/macro_reserved_word.q.out
U ql/src/test/results/compiler/errors/wrong_distinct2.q.out
U ql/src/test/results/compiler/errors/missing_overwrite.q.out
U ql/src/test/results/compiler/errors/invalid_create_table.q.out
U ql/src/java/org/apache/hadoop/hive/ql/parse/ParseException.java

Fetching external item into 'hcatalog/src/test/e2e/harness'
Updated external to revision 1534224.

Updated to revision 1534224.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.

[jira] [Commented] (HIVE-3972) Support using multiple reducer for fetching order by results

2013-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13800736#comment-13800736
 ] 

Hive QA commented on HIVE-3972:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12609386/D8349.5.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 4429 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_fetch_aggregation
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1182/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1182/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

> Support using multiple reducer for fetching order by results
> 
>
> Key: HIVE-3972
> URL: https://issues.apache.org/jira/browse/HIVE-3972
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: D8349.5.patch, HIVE-3972.D8349.1.patch, 
> HIVE-3972.D8349.2.patch, HIVE-3972.D8349.3.patch, HIVE-3972.D8349.4.patch
>
>
> Queries that end with an "order by" clause force the final MR job to run 
> with a single reducer, which can be too much. For example, 
> {code}
> select value, sum(key) as sum from src group by value order by sum;
> {code}
> If the number of reducers is reasonable, the multiple sorted result files 
> could be merged into a single sorted stream at the fetcher level.
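The fetcher-level merge described above is a standard k-way merge: keep the head of each sorted reducer output in a min-heap and repeatedly emit the smallest element. A self-contained sketch over in-memory lists (illustrative only, not the HIVE-3972 patch itself, which operates on result files):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class SortedStreamMerger {
    // Merge k sorted runs into one sorted sequence with a min-heap,
    // mirroring a fetcher-level merge of per-reducer output files.
    public static List<Integer> merge(List<List<Integer>> sortedRuns) {
        // Heap entries: {value, runIndex, positionInRun}, ordered by value.
        PriorityQueue<int[]> heap =
                new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[0]));
        for (int r = 0; r < sortedRuns.size(); r++) {
            if (!sortedRuns.get(r).isEmpty()) {
                heap.add(new int[] { sortedRuns.get(r).get(0), r, 0 });
            }
        }
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] top = heap.poll();         // smallest head across all runs
            out.add(top[0]);
            List<Integer> run = sortedRuns.get(top[1]);
            int next = top[2] + 1;
            if (next < run.size()) {         // advance within the same run
                heap.add(new int[] { run.get(next), top[1], next });
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<List<Integer>> runs = Arrays.asList(
                Arrays.asList(1, 4, 7),
                Arrays.asList(2, 5, 8),
                Arrays.asList(3, 6, 9));
        System.out.println(merge(runs)); // prints [1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```

Each of the k runs contributes at most one entry to the heap at a time, so the merge costs O(n log k) overall, which is why a handful of reducers plus a fetcher-side merge can replace the single global reducer.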



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5574) Unnecessary newline at the end of message of ParserException

2013-10-21 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5574:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Navis!

> Unnecessary newline at the end of message of ParserException
> 
>
> Key: HIVE-5574
> URL: https://issues.apache.org/jira/browse/HIVE-5574
> Project: Hive
>  Issue Type: Bug
>  Components: Diagnosability
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.13.0
>
> Attachments: HIVE-5574.1.patch.txt
>
>
> Error messages in ParserException end with a newline, which is a little 
> annoying.





[jira] [Updated] (HIVE-5596) hive-default.xml.template is invalid

2013-10-21 Thread Killua Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Killua Huang updated HIVE-5596:
---

Tags:   (was: Hive)
  Resolution: Fixed
Release Note:   (was: Fixed invalid format in hive-default.xml.)
  Status: Resolved  (was: Patch Available)

> hive-default.xml.template is invalid 
> -
>
> Key: HIVE-5596
> URL: https://issues.apache.org/jira/browse/HIVE-5596
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 0.12.0
> Environment: OS: Oracle Linux 6
> JDK:1.6
> Hadoop: 2.2.0
>Reporter: Killua Huang
>Assignee: Killua Huang
>Priority: Critical
>  Labels: patch
> Fix For: 0.12.1
>
> Attachments: HIVE-5596.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Line 2000:16 in hive-default.xml.template is
> auth
> I think this is invalid, and it will make Hive crash if you use this 
> template. The error message is as follows:
> [Fatal Error] hive-site.xml:2000:16: The element type "value" must be 
> terminated by the matching end-tag "</value>".





[jira] [Updated] (HIVE-5596) hive-default.xml.template is invalid

2013-10-21 Thread Killua Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Killua Huang updated HIVE-5596:
---

Tags: Hive
  Labels: patch  (was: )
Release Note: Fixed invalid format in hive-default.xml.
  Status: Patch Available  (was: In Progress)

The patch is in attachments list.

> hive-default.xml.template is invalid 
> -
>
> Key: HIVE-5596
> URL: https://issues.apache.org/jira/browse/HIVE-5596
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 0.12.0
> Environment: OS: Oracle Linux 6
> JDK:1.6
> Hadoop: 2.2.0
>Reporter: Killua Huang
>Assignee: Killua Huang
>Priority: Critical
>  Labels: patch
> Fix For: 0.12.1
>
> Attachments: HIVE-5596.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Line 2000:16 in hive-default.xml.template is
> auth
> I think this is invalid, and it will make Hive crash if you use this 
> template. The error message is as follows:
> [Fatal Error] hive-site.xml:2000:16: The element type "value" must be 
> terminated by the matching end-tag "</value>".





[jira] [Updated] (HIVE-5596) hive-default.xml.template is invalid

2013-10-21 Thread Killua Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Killua Huang updated HIVE-5596:
---

Attachment: HIVE-5596.patch

The Attachment is for issue HIVE-5596.

This patch was generated from Git.

> hive-default.xml.template is invalid 
> -
>
> Key: HIVE-5596
> URL: https://issues.apache.org/jira/browse/HIVE-5596
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 0.12.0
> Environment: OS: Oracle Linux 6
> JDK:1.6
> Hadoop: 2.2.0
>Reporter: Killua Huang
>Assignee: Killua Huang
>Priority: Critical
> Fix For: 0.12.1
>
> Attachments: HIVE-5596.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Line 2000:16 in hive-default.xml.template is
> auth
> I think this is invalid, and it will make Hive crash if you use this 
> template. The error message is as follows:
> [Fatal Error] hive-site.xml:2000:16: The element type "value" must be 
> terminated by the matching end-tag "</value>".





[jira] [Updated] (HIVE-5596) hive-default.xml.template is invalid

2013-10-21 Thread Killua Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Killua Huang updated HIVE-5596:
---

Attachment: (was: 
0001-HIVE-5596-hive-default.xml.template-is-invalid.patch)

> hive-default.xml.template is invalid 
> -
>
> Key: HIVE-5596
> URL: https://issues.apache.org/jira/browse/HIVE-5596
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 0.12.0
> Environment: OS: Oracle Linux 6
> JDK:1.6
> Hadoop: 2.2.0
>Reporter: Killua Huang
>Assignee: Killua Huang
>Priority: Critical
> Fix For: 0.12.1
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Line 2000:16 in hive-default.xml.template is
> auth
> I think this is invalid, and it will make Hive crash if you use this 
> template. The error message is as follows:
> [Fatal Error] hive-site.xml:2000:16: The element type "value" must be 
> terminated by the matching end-tag "</value>".





[jira] [Updated] (HIVE-5596) hive-default.xml.template is invalid

2013-10-21 Thread Killua Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Killua Huang updated HIVE-5596:
---

Attachment: 0001-HIVE-5596-hive-default.xml.template-is-invalid.patch

This attachment is for issue HIVE-5596.



> hive-default.xml.template is invalid 
> -
>
> Key: HIVE-5596
> URL: https://issues.apache.org/jira/browse/HIVE-5596
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 0.12.0
> Environment: OS: Oracle Linux 6
> JDK:1.6
> Hadoop: 2.2.0
>Reporter: Killua Huang
>Assignee: Killua Huang
>Priority: Critical
> Fix For: 0.12.1
>
> Attachments: 0001-HIVE-5596-hive-default.xml.template-is-invalid.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Line 2000:16 in hive-default.xml.template is
> auth
> I think this is invalid, and it will make Hive crash if you use this 
> template. The error message is as follows:
> [Fatal Error] hive-site.xml:2000:16: The element type "value" must be 
> terminated by the matching end-tag "</value>".




