[jira] [Commented] (HIVE-6595) Hive 0.11.0 build failure

2014-03-08 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925142#comment-13925142
 ] 

Szehon Ho commented on HIVE-6595:
-

That one is already included in 0.12, so I'm not sure why you still get errors.

Unfortunately, Hive JIRAs are for issues found in the current trunk, not for 
maintaining old releases.  That said, feel free to search the Hive JIRA for 
other patches that might have fixed your use case, or reach out to the Hive 
user forum.  Hope that helps.

> Hive 0.11.0 build failure
> -
>
> Key: HIVE-6595
> URL: https://issues.apache.org/jira/browse/HIVE-6595
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Affects Versions: 0.11.0
> Environment: CentOS 6.5, java version "1.7.0_45", Hadoop 2.2.0
>Reporter: Amit Anand
>
> I am unable to build Hive 0.11.0 from source. I have a single-node Hadoop 
> 2.2.0 cluster, which I built from source, running. 
> I followed the steps given below:
> svn co http://svn.apache.org/repos/asf/hive/tags/release-0.11.0/ hive-0.11.0
> cd hive-0.11.0
> ant clean
> ant package
> I got the messages given below:
> compile:
>  [echo] Project: jdbc
> [javac] Compiling 28 source files to 
> /opt/apache/source/hive-0.11.0/build/jdbc/classes
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java:48:
>  error: HiveCallableStatement is not abstract and does not override abstract 
> method getObject(String,Class&lt;T&gt;) in CallableStatement
> [javac] public class HiveCallableStatement implements 
> java.sql.CallableStatement {
> [javac]^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> &lt;T&gt;getObject(String,Class&lt;T&gt;)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java:65:
>  error: HiveConnection is not abstract and does not override abstract method 
> getNetworkTimeout() in Connection
> [javac] public class HiveConnection implements java.sql.Connection {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDataSource.java:31:
>  error: HiveDataSource is not abstract and does not override abstract method 
> getParentLogger() in CommonDataSource
> [javac] public class HiveDataSource implements DataSource {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:56:
>  error: HiveDatabaseMetaData is not abstract and does not override abstract 
> method generatedKeyAlwaysReturned() in DatabaseMetaData
> [javac] public class HiveDatabaseMetaData implements DatabaseMetaData {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:707:
>  error:  is not 
> abstract and does not override abstract method getObject(String,Class&lt;T&gt;) 
> in ResultSet
> [javac] , null) {
> [javac] ^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> &lt;T&gt;getObject(String,Class&lt;T&gt;)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDriver.java:35:
>  error: HiveDriver is not abstract and does not override abstract method 
> getParentLogger() in Driver
> [javac] public class HiveDriver implements Driver {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java:56:
>  error: HivePreparedStatement is not abstract and does not override abstract 
> method isCloseOnCompletion() in Statement
> [javac] public class HivePreparedStatement implements PreparedStatement {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java:48:
>  error: HiveQueryResultSet is not abstract and does not override abstract 
> method getObject(String,Class&lt;T&gt;) in ResultSet
> [javac] public class HiveQueryResultSet extends HiveBaseResultSet {
> [javac]^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> &lt;T&gt;getObject(String,Class&lt;T&gt;)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java:42:
>  error: HiveStatement is not abstract and does not override abstract method 
> isCloseOnCompletion() in Statement
> [javac] public class HiveStatement implements java.sql.Statement {
> [javac]^
> [javac] Note: Some input files use or override a deprecated API.
> [javac] Note: Recompile with -Xlint:deprecation for details.
> [javac] Note: Some input files use unchecked or unsafe operations.
> [javac] Note: Recompile with -Xlint:unchecked for d
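
The errors above are the classic symptom of compiling a pre-JDK7 JDBC driver against JDK 7, whose JDBC 4.1 interfaces added abstract methods such as Driver.getParentLogger() and the generic getObject(String, Class&lt;T&gt;). As a hedged illustration (StubDriver is a made-up class, not Hive code), the usual minimal fix is to implement the new methods, typically by throwing SQLFeatureNotSupportedException:

```java
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.Properties;
import java.util.logging.Logger;

// Hypothetical stub (not Hive code) showing the JDK 7 fix pattern: JDBC 4.1
// added abstract methods such as Driver.getParentLogger(), which pre-JDK7
// drivers did not implement, hence the "does not override abstract method"
// errors above.
public class StubDriver implements Driver {
    public Connection connect(String url, Properties info) throws SQLException {
        throw new SQLException("not implemented in this sketch");
    }
    public boolean acceptsURL(String url) {
        return url != null && url.startsWith("jdbc:stub:");
    }
    public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
        return new DriverPropertyInfo[0];
    }
    public int getMajorVersion() { return 0; }
    public int getMinorVersion() { return 11; }
    public boolean jdbcCompliant() { return false; }
    // The JDBC 4.1 (JDK 7) addition that breaks the Hive 0.11 build:
    public Logger getParentLogger() throws SQLFeatureNotSupportedException {
        throw new SQLFeatureNotSupportedException("getParentLogger not supported");
    }
}
```

Backporting such method stubs (or building with JDK 6) is the general direction; the exact patches for Hive are the ones referenced in the JIRA discussion.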

[jira] [Commented] (HIVE-3682) when output hive table to file, users should have a separator of their own choice

2014-03-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925141#comment-13925141
 ] 

Lefty Leverenz commented on HIVE-3682:
--

[~lars_francke] added this note to the wiki:  "As of Hive 0.11.0 the separator 
used can be specified, in earlier versions it was always the ^A character 
(\001)" and [~prasadm] added the ROW FORMAT syntax.  More details and some 
examples would be helpful.

* [LanguageManual DML:  Writing data into the filesystem from queries 
|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Writingdataintothefilesystemfromqueries]



> when output hive table to file, users should have a separator of their 
> own choice
> --
>
> Key: HIVE-3682
> URL: https://issues.apache.org/jira/browse/HIVE-3682
> Project: Hive
>  Issue Type: New Feature
>  Components: CLI
>Affects Versions: 0.8.1
> Environment: Linux 3.0.0-14-generic #23-Ubuntu SMP Mon Nov 21 
> 20:34:47 UTC 2011 i686 i686 i386 GNU/Linux
> java version "1.6.0_25"
> hadoop-0.20.2-cdh3u0
> hive-0.8.1
>Reporter: caofangkun
>Assignee: Sushanth Sowmyan
> Fix For: 0.11.0
>
> Attachments: HIVE-3682-1.patch, HIVE-3682.D10275.1.patch, 
> HIVE-3682.D10275.2.patch, HIVE-3682.D10275.3.patch, HIVE-3682.D10275.4.patch, 
> HIVE-3682.D10275.4.patch.for.0.11, HIVE-3682.with.serde.patch
>
>
> By default, when a Hive table is output to a file, the columns of the table 
> are separated by the ^A character (that is, \001).
> But users should have the right to set a separator of their own choice.
> Usage Example:
> create table for_test (key string, value string);
> load data local inpath './in1.txt' into table for_test
> select * from for_test;
> UT-01: default separator is \001, line separator is \n
> insert overwrite local directory './test-01' 
> select * from src ;
> create table array_table (a array&lt;int&gt;, b array&lt;string&gt;)
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '\t'
> COLLECTION ITEMS TERMINATED BY ',';
> load data local inpath "../hive/examples/files/arraytest.txt" overwrite into 
> table table2;
> CREATE TABLE map_table (foo STRING , bar MAP&lt;STRING, INT&gt;)
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '\t'
> COLLECTION ITEMS TERMINATED BY ','
> MAP KEYS TERMINATED BY ':'
> STORED AS TEXTFILE;
> UT-02: field separator defined as ':'
> insert overwrite local directory './test-02' 
> row format delimited 
> FIELDS TERMINATED BY ':' 
> select * from src ;
> UT-03: the line separator is NOT ALLOWED to be defined as another separator 
> insert overwrite local directory './test-03' 
> row format delimited 
> FIELDS TERMINATED BY ':' 
> select * from src ;
> UT-04: define map separators 
> insert overwrite local directory './test-04' 
> row format delimited 
> FIELDS TERMINATED BY '\t'
> COLLECTION ITEMS TERMINATED BY ','
> MAP KEYS TERMINATED BY ':'
> select * from src;
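
For context on the default behavior the request addresses: Hive's text output separates columns with the non-printing Ctrl-A character (\001), which client code must split on explicitly. A minimal illustrative Java sketch (my own, not Hive code) of reading such a row:

```java
// Hive's default field delimiter in text output is Ctrl-A (\u0001); each
// output line is one row with columns joined by that character.
public class CtrlADemo {
    static String[] splitRow(String row) {
        return row.split("\u0001", -1);  // -1 keeps trailing empty columns
    }
}
```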



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6391) Use pre-warm APIs in Tez to improve hive query startup

2014-03-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925136#comment-13925136
 ] 

Lefty Leverenz commented on HIVE-6391:
--

For the record:  this adds the *hive.prewarm.enabled* and 
*hive.prewarm.numcontainers* parameters in HiveConf.java and 
hive-default.xml.template.

> Use pre-warm APIs in Tez to improve hive query startup
> --
>
> Key: HIVE-6391
> URL: https://issues.apache.org/jira/browse/HIVE-6391
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Affects Versions: tez-branch
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Minor
>  Labels: optimization
> Fix For: tez-branch
>
> Attachments: HIVE-6391.1-tez.patch, HIVE-6391.2-tez.patch, 
> HIVE-6391.3-tez.patch
>
>
> With the addition of TEZ-766, Tez supports pre-warmed containers within the 
> Tez session.
> Allow hive users to enable and use this feature from within the hive shell.





[jira] [Commented] (HIVE-5958) SQL std auth - authorize statements that work with paths

2014-03-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925131#comment-13925131
 ] 

Lefty Leverenz commented on HIVE-5958:
--

Will this be documented along with the parent jira, or does it need separate 
documentation?

> SQL std auth - authorize statements that work with paths
> 
>
> Key: HIVE-5958
> URL: https://issues.apache.org/jira/browse/HIVE-5958
> Project: Hive
>  Issue Type: Sub-task
>  Components: Authorization
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.13.0
>
> Attachments: HIVE-5958.1.patch, HIVE-5958.2.patch, HIVE-5958.3.patch, 
> HIVE-5958.4.patch, HIVE-5958.5.patch, HIVE-5958.6.patch, HIVE-5958.7.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Statements such as CREATE TABLE and ALTER TABLE that specify a path URI should 
> be allowed under the new authorization scheme only if the URI (path) specified 
> has permissions including read/write and ownership of the file/dir and its 
> children.
> Also, fix issue of database not getting set as output for create-table.





[jira] [Commented] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.

2014-03-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925130#comment-13925130
 ] 

Hive QA commented on HIVE-6486:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12632880/HIVE-6486.3.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5374 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.mapreduce.TestHCatMutablePartitioned.testHCatPartitionedTable
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1668/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1668/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12632880

> Support secure Subject.doAs() in HiveServer2 JDBC client.
> -
>
> Key: HIVE-6486
> URL: https://issues.apache.org/jira/browse/HIVE-6486
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication, HiveServer2, JDBC
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Shivaraju Gowda
>Assignee: Shivaraju Gowda
> Fix For: 0.13.0
>
> Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, HIVE-6486.3.patch, 
> Hive_011_Support-Subject_doAS.patch, TestHive_SujectDoAs.java
>
>
> HIVE-5155 addresses the problem of Kerberos authentication in a multi-user 
> middleware server using a proxy user.  In this mode the principal used by the 
> middleware server has privileges to impersonate selected users in 
> Hive/Hadoop. 
> This enhancement is to support Subject.doAs() authentication in the Hive JDBC 
> layer so that the end user's Kerberos Subject is passed through in the 
> middleware server. With this improvement there won't be any additional setup in 
> the server to grant proxy privileges to some users, and there won't be a need 
> to specify a proxy user in the JDBC client. This version should also be more 
> secure since it won't require principals with the privileges to impersonate 
> other users in the Hive/Hadoop setup.
>  
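
The pattern described above, running client calls under the end user's Subject so that the user's own Kerberos credentials flow through, can be sketched as follows. This is an illustrative stub (the class name and the work done inside doAs are placeholders, not the HIVE-6486 patch):

```java
import java.security.PrivilegedAction;
import javax.security.auth.Subject;

// In the scenario from the issue, the action would open a Hive JDBC
// connection, and the Subject would carry the end user's Kerberos
// credentials, so no proxy-user privileges are needed on the server side.
public class DoAsSketch {
    static String runAs(Subject userSubject, PrivilegedAction<String> action) {
        return Subject.doAs(userSubject, action);
    }
}
```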





[jira] [Commented] (HIVE-6047) Permanent UDFs in Hive

2014-03-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925128#comment-13925128
 ] 

Lefty Leverenz commented on HIVE-6047:
--

[~jdere] added documentation to the wiki here:

* [Language Manual DDL:  Permanent Functions 
|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-PermanentFunctions]

So I added a link to it from Hive Plugins:

* [Creating Custom UDFs 
|https://cwiki.apache.org/confluence/display/Hive/HivePlugins]

Note that all four subtasks are committed in Hive 0.13.0 so that's what the doc 
says, but this parent jira hasn't been committed yet.

> Permanent UDFs in Hive
> --
>
> Key: HIVE-6047
> URL: https://issues.apache.org/jira/browse/HIVE-6047
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: PermanentFunctionsinHive.pdf, 
> PermanentFunctionsinHive.pdf
>
>
> Currently Hive only supports temporary UDFs which must be re-registered when 
> starting up a Hive session. Provide some support to register permanent UDFs 
> with Hive. 





[jira] [Commented] (HIVE-6167) Allow user-defined functions to be qualified with database name

2014-03-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925126#comment-13925126
 ] 

Lefty Leverenz commented on HIVE-6167:
--

[~jdere] documented this in the wiki's DDL doc, so I added a brief description 
in the Hive Plugins doc and linked it to the DDL.

* [DDL:  Permanent Functions 
|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-PermanentFunctions]
* [Hive Plugins |https://cwiki.apache.org/confluence/display/Hive/HivePlugins]

But the jira description currently has things backwards:  "This task would 
allow users to define temporary UDFs (and eventually permanent UDFs) with a 
database name" -- it's permanent UDFs that can be defined with a database name, 
not temporary UDFs.

> Allow user-defined functions to be qualified with database name
> ---
>
> Key: HIVE-6167
> URL: https://issues.apache.org/jira/browse/HIVE-6167
> Project: Hive
>  Issue Type: Sub-task
>  Components: UDF
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.13.0
>
> Attachments: HIVE-6167.1.patch, HIVE-6167.2.patch, HIVE-6167.3.patch, 
> HIVE-6167.4.patch
>
>
> Function names in Hive are currently unqualified and there is a single 
> namespace for all function names. This task would allow users to define 
> temporary UDFs (and eventually permanent UDFs) with a database name, such as:
> CREATE TEMPORARY FUNCTION userdb.myfunc 'myudfclass';





[jira] [Updated] (HIVE-6597) WebHCat E2E tests doAsTests_6 and doAsTests_7 need to be updated

2014-03-08 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-6597:
-

Fix Version/s: 0.13.0
   Status: Patch Available  (was: Open)

> WebHCat E2E tests doAsTests_6 and doAsTests_7 need to be updated
> 
>
> Key: HIVE-6597
> URL: https://issues.apache.org/jira/browse/HIVE-6597
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.13.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Fix For: 0.13.0
>
> Attachments: HIVE-6597.patch
>
>
> Currently the following WebHCat doAsTests need to be fixed:
> In doAsTests_6, the REST request URL and the corresponding expected output 
> need to be updated to reflect the correct intent.
> doAsTests_7 fails because of strict error message checking.





[jira] [Updated] (HIVE-6597) WebHCat E2E tests doAsTests_6 and doAsTests_7 need to be updated

2014-03-08 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-6597:
-

Attachment: HIVE-6597.patch

Attaching the patch that fixes the two tests.

> WebHCat E2E tests doAsTests_6 and doAsTests_7 need to be updated
> 
>
> Key: HIVE-6597
> URL: https://issues.apache.org/jira/browse/HIVE-6597
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.13.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Fix For: 0.13.0
>
> Attachments: HIVE-6597.patch
>
>
> Currently the following WebHCat doAsTests need to be fixed:
> In doAsTests_6, the REST request URL and the corresponding expected output 
> need to be updated to reflect the correct intent.
> doAsTests_7 fails because of strict error message checking.





[jira] [Created] (HIVE-6597) WebHCat E2E tests doAsTests_6 and doAsTests_7 need to be updated

2014-03-08 Thread Deepesh Khandelwal (JIRA)
Deepesh Khandelwal created HIVE-6597:


 Summary: WebHCat E2E tests doAsTests_6 and doAsTests_7 need to be 
updated
 Key: HIVE-6597
 URL: https://issues.apache.org/jira/browse/HIVE-6597
 Project: Hive
  Issue Type: Bug
  Components: Tests, WebHCat
Affects Versions: 0.13.0
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal


Currently the following WebHCat doAsTests need to be fixed:
In doAsTests_6, the REST request URL and the corresponding expected output need 
to be updated to reflect the correct intent.
doAsTests_7 fails because of strict error message checking.





Re: --hiveconf vs -hiveconf

2014-03-08 Thread Lefty Leverenz
What's the difference between double-dash options and single-dash options?

-- Lefty


On Sat, Mar 8, 2014 at 9:40 AM, Edward Capriolo wrote:

> Great, thanks for following up. There might be a number of ETL processes in
> the wild saying -hiveconf, which is why it is important to keep it around for
> the CLI at least.
>
>
> On Sat, Mar 8, 2014 at 1:56 AM, Xuefu Zhang  wrote:
>
> > This is just getting more and more interesting. I never thought of the
> > -hiveconf option, and always assumed it was a typo of --hiveconf. (That's
> > why I edited the one, which triggered the discovery.) I just checked and
> > found that both work, which came as a surprise to me.
> >
> > With this assumption, Beeline has implemented only --hiveconf to mimic
> CLI.
> >
> > As to the documentation, I think we can stick to --hiveconf from now on,
> > since they are supported by both CLI and Beeline. However, -hiveconf will
> > continue to work for CLI until its death.
> >
> > Thanks,
> > Xuefu
> >
> >
> > On Fri, Mar 7, 2014 at 10:36 PM, Lefty Leverenz  > >wrote:
> >
> > > > OK, so just one of the pages in wiki has changed, and hive behavior
> has
> > > not changed
> > >
> > > That's right, and a closer look at the wiki shows that all the examples
> > are
> > > -hiveconf except the new change.  The only place --hiveconf appears is
> in
> > > duplications of help messages for the hive command, the old Hive
> server,
> > or
> > > Beeline.
> > >
> > > In a fresh export of the wiki --hiveconf occurs in these docs:
> > >
> > >- CLI<
> > >
> >
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli#LanguageManualCli-HiveCommandLineOptions
> > > >
> > > repeats
> > >what hive -H says (--hiveconf) but gives 3 examples of -hiveconf.
> > >- Admin Config<
> > >
> >
> https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration#AdminManualConfiguration-ConfiguringHive
> > > >
> > > says
> > >--hiveconf twice, in text and an example (both changed this week).
> > >- Hive Server<
> > > https://cwiki.apache.org/confluence/display/Hive/HiveServer>
> > > says
> > >--hiveconf once, but that's the Thrift server help message.
> > >- HiveServer2
> > > Clients<
> > >
> >
> https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-BeelineCommandOptions
> > > >says
> > > --hiveconf twice, but that's the Beeline option.
> > >
> > > These wikidocs say -hiveconf:
> > >
> > >- Getting Started (4 in config
> > > overview<
> > >
> >
> https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-ConfigurationManagementOverview
> > > >
> > > and
> > >2 in error logs<
> > >
> >
> https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-ErrorLogs
> > > >
> > >)
> > >- Avro SerDe<
> > >
> >
> https://cwiki.apache.org/confluence/display/Hive/AvroSerDe#AvroSerDe-SpecifyingtheAvroschemaforatable
> > > >(2
> > > in example and text)
> > >- Developer Guide<
> > >
> >
> https://cwiki.apache.org/confluence/display/Hive/DeveloperGuide#DeveloperGuide-RunningHiveWithoutaHadoopCluster
> > > >(4
> > > in "export HIVE_OPTS")
> > >- HBase Integration<
> > >
> >
> https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration#HBaseIntegration-Usage
> > > >(2
> > > in examples)
> > >- Variable Substitution<
> > >
> >
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+VariableSubstitution
> > > >(1
> > > in the "evil laugh" example)
> > >- CLI (2 in one
> > > example<
> > >
> >
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli#LanguageManualCli-Examples
> > > >,
> > >1 in logging<
> > >
> >
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli#LanguageManualCli-Logging
> > > >
> > >)
> > >
> > > (My grep hits were inflated because "-i" caught HiveConf.)
> > >
> > > So what's it supposed to be?
> > >
> > >
> > > -- Lefty
> > >
> > >
> > > On Fri, Mar 7, 2014 at 11:06 PM, Thejas Nair 
> > > wrote:
> > >
> > > > OK, so just one of the pages in wiki has changed, and hive behavior
> > > > has not changed ? (I have been using -hiveconf, but i haven't
> verified
> > > > that with the tip of the trunk as of now).
> > > >
> > > > On Fri, Mar 7, 2014 at 6:19 PM, Xuefu Zhang 
> > wrote:
> > > > > I didn't know that -hiveconf is supported. However, from hive -H,
> > > double
> > > > > dashes are seen.
> > > > >
> > > > >  -h connecting to Hive Server on
> remote
> > > > host
> > > > > --hiveconfUse value for given property
> > > > > --hivevar  Variable subsitution to apply to
> > hive
> > > > >
> > > > > Thanks,
> > > > > Xuefu
> > > > >
> > > > >
> > > > > On Fri, Mar 7, 2014 at 6:00 PM, Edward Capriolo <
> > edlinuxg...@gmail.com
> > > > >wrote:
> > > > >
> > > > >> I was not around when this change was made but I think we should
> > have
> > > > kept
> > > > >> the old - dash version. We should consider adding it back.
> > > > >>
> > > > >>
> > >
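
One way both spellings can end up working is a tolerant option scanner. This is a hypothetical sketch, not Hive's actual argument handling: it simply strips one or two leading dashes before matching the option name:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a scanner that treats "-hiveconf" and "--hiveconf"
// identically by normalizing the leading dashes before comparing names.
public class OptDemo {
    static Map<String, String> parse(String[] args) {
        Map<String, String> conf = new HashMap<>();
        for (int i = 0; i + 1 < args.length; i++) {
            String name = args[i].replaceFirst("^--?", "");  // -x and --x alike
            if (name.equals("hiveconf")) {
                String[] kv = args[++i].split("=", 2);       // key=value pair
                if (kv.length == 2) {
                    conf.put(kv[0], kv[1]);
                }
            }
        }
        return conf;
    }
}
```

Whatever the CLI actually does internally, the observable behavior reported in the thread matches this kind of tolerance, while Beeline implements only the double-dash form.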

Re: Timeline for the Hive 0.13 release?

2014-03-08 Thread Harish Butani
I added it.


On Sat, Mar 8, 2014 at 5:30 PM, Xuefu Zhang  wrote:

> Hi Harish,
>
>
> 6414 seem missing from:
> https://cwiki.apache.org/confluence/display/Hive/Hive+0.13+release+status.
>
> Thanks,
> Xuefu
>
>
> On Fri, Mar 7, 2014 at 10:41 AM, Harish Butani  >wrote:
>
> > ok adding 6414, 6572, 6574 to the list
> > On Mar 7, 2014, at 10:27 AM, Xuefu Zhang  wrote:
> >
> > > HIVE-6414 seems bad enough to be included.
> > >
> > >
> > > On Thu, Mar 6, 2014 at 3:28 PM, Sushanth Sowmyan 
> > wrote:
> > >
> > >> One more, I'm afraid. I'll add it to the wiki. I just uploaded a patch
> > >> for https://issues.apache.org/jira/browse/HIVE-6572 . This fixes one
> > >> glaring bug on usage of mapred.min.split.size.per.rack and
> > >> mapred.min.split.size.per.node, and makes sure some of the other conf
> > >> parameters we use are accessed correctly through shims. I want this to go into
> > >> 0.13 because I don't want a release of ours to go out with the
> > >> dual/old configs (and erroneous config in the per.rack/node case) and
> > >> have that be expected behaviour to support in the future.
> > >>
> > >> On Thu, Mar 6, 2014 at 11:04 AM, Harish Butani <
> hbut...@hortonworks.com
> > >
> > >> wrote:
> > >>> ok sure.
> > >>> Tracking these with the JQL below. I don’t have permission to setup a
> > >> Shared Filter; can someone help with this.
> > >>> Of the 35 issues: 11 are still open, 22 are patch available, 2 are
> > >> resolved.
> > >>>
> > >>> regards,
> > >>> Harish.
> > >>>
> > >>> JQL:
> > >>>
> > >>> id in (HIVE-5317, HIVE-5843, HIVE-6060, HIVE-6319, HIVE-6460,
> > HIVE-5687,
> > >> HIVE-5943, HIVE-5942, HIVE-6547, HIVE-5155, HIVE-6486, HIVE-6455,
> > >> HIVE-4177, HIVE-4764, HIVE-6306, HIVE-6350, HIVE-6485, HIVE-6507,
> > >> HIVE-6499, HIVE-6325, HIVE-6558, HIVE-6403, HIVE-4790, HIVE-4293,
> > >> HIVE-6551, HIVE-6359, HIVE-6314, HIVE-6241, HIVE-5768, HIVE-2752,
> > >> HIVE-6312, HIVE-6129, HIVE-6012, HIVE-6434, HIVE-6562) ORDER BY status
> > ASC,
> > >> assignee
> > >>>
> > >>> On Mar 5, 2014, at 6:50 PM, Prasanth Jayachandran <
> > >> pjayachand...@hortonworks.com> wrote:
> > >>>
> >  Can you consider HIVE-6562 as well?
> > 
> >  HIVE-6562 - Protection from exceptions in ORC predicate evaluation
> > 
> >  Thanks
> >  Prasanth Jayachandran
> > 
> >  On Mar 5, 2014, at 5:56 PM, Jason Dere 
> wrote:
> > 
> > >
> > > Would like to get these in, if possible:
> > >
> > > HIVE-6012 restore backward compatibility of arithmetic operations
> > > HIVE-6434 Restrict function create/drop to admin roles
> > >
> > > On Mar 5, 2014, at 5:41 PM, Navis류승우  wrote:
> > >
> > >> I have really big wish list(65 pending) but it would be time to
> > focus
> > >> on
> > >> finalization.
> > >>
> > >> - Small bugs
> > >> HIVE-6403 uncorrelated subquery is failing with
> > auto.convert.join=true
> > >> HIVE-4790 MapredLocalTask task does not make virtual columns
> > >> HIVE-4293 Predicates following UDTF operator are removed by PPD
> > >>
> > >> - Trivials
> > >> HIVE-6551 group by after join with skew join optimization
> references
> > >> invalid task sometimes
> > >> HIVE-6359 beeline -f fails on scripts with tabs in them.
> > >> HIVE-6314 The logging (progress reporting) is too verbose
> > >> HIVE-6241 Remove direct reference of Hadoop23Shims inQTestUtil
> > >> HIVE-5768 Beeline connection cannot be closed with !close command
> > >> HIVE-2752 Index names are case sensitive
> > >>
> > >> - Memory leakage
> > >> HIVE-6312 doAs with plain sasl auth should be session aware
> > >>
> > >> - Implementation is not accord with document
> > >> HIVE-6129 alter exchange is implemented in inverted manner
> > >>
> > >> I'll update the wiki, too.
> > >>
> > >>
> > >>
> > >>
> > >> 2014-03-05 12:18 GMT+09:00 Harish Butani  >:
> > >>
> > >>> Tracking jiras to be applied to branch 0.13 here:
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/display/Hive/Hive+0.13+release+status
> > >>>
> > >>> On Mar 4, 2014, at 5:45 PM, Harish Butani <
> hbut...@hortonworks.com
> > >
> > >> wrote:
> > >>>
> >  the branch is created.
> >  have changed the poms in both branches.
> >  Planning to setup a wikipage to track jiras that will get ported
> > to
> > >> 0.13
> > 
> >  regards,
> >  Harish.
> > 
> > 
> >  On Mar 4, 2014, at 5:05 PM, Harish Butani <
> > hbut...@hortonworks.com>
> > >>> wrote:
> > 
> > > branching now. Will be changing the pom files on trunk.
> > > Will send another email when the branch and trunk changes are
> in.
> > >
> > >
> > > On Mar 4, 2014, at 4:03 PM, Sushanth Sowmyan <
> khorg...@gmail.com
> > >
> > >>> wrote:
> > >
> > >> I have two patches still as patch-available, that have had +1s
> > as
> 

[jira] [Commented] (HIVE-6594) UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization

2014-03-08 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925097#comment-13925097
 ] 

Jitendra Nath Pandey commented on HIVE-6594:


+1

> UnsignedInt128 addition does not increase internal int array count resulting 
> in corrupted values during serialization
> -
>
> Key: HIVE-6594
> URL: https://issues.apache.org/jira/browse/HIVE-6594
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Attachments: HIVE-6594.1.patch, HIVE-6594.2.patch
>
>
> Discovered this while investigating why my fix for HIVE-6222 produced diffs. 
> I discovered that Decimal128.addDestructive does not adjust the internal 
> count when the number of relevant ints increases. Since this count is used 
> in the fast HiveDecimalWriter conversion code, the results are off. 
> The root cause is UnsignedDecimal128.differenceInternal does not do an 
> updateCount() on the result.
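
The failure mode generalizes: any multi-word big-integer representation that caches a count of significant words must refresh that count after arithmetic, or any consumer that trusts the count (here, serialization into HiveDecimal) reads a stale prefix. A toy sketch follows (TinyInt128 is illustrative, not Hive's actual UnsignedInt128):

```java
// Toy 128-bit unsigned value stored as four 32-bit words plus a cached
// count of significant words. The count must be recomputed after
// arithmetic; the issue reports that this step was missing, so code that
// trusts the count produced corrupted values during serialization.
public class TinyInt128 {
    final int[] v = new int[4];  // little-endian 32-bit words
    int count;                   // number of significant words

    void updateCount() {
        count = 4;
        while (count > 0 && v[count - 1] == 0) count--;
    }

    void addDestructive(TinyInt128 o) {
        long carry = 0;
        for (int i = 0; i < 4; i++) {
            long s = (v[i] & 0xFFFFFFFFL) + (o.v[i] & 0xFFFFFFFFL) + carry;
            v[i] = (int) s;
            carry = s >>> 32;
        }
        updateCount();  // the missing step the issue describes
    }
}
```

Adding 1 to 0xFFFFFFFF carries into a second word, so a count that stays at 1 would drop the carried word on serialization.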





[jira] [Updated] (HIVE-6583) wrong sql comments : ----... instead of -- ---...

2014-03-08 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6583:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Pierre!

> wrong sql comments : ----... instead of -- ---...
> -
>
> Key: HIVE-6583
> URL: https://issues.apache.org/jira/browse/HIVE-6583
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Affects Versions: 0.14.0
>Reporter: Pierre Nerzic
>Assignee: Pierre Nerzic
>Priority: Minor
> Fix For: 0.14.0
>
> Attachments: HIVE-6583.1.patch.txt
>
>
> In file metastore/scripts/upgrade/mysql/hive-schema-0.13.0.mysql.sql, lines 
> 799 and 801, a comment is written as "----..." (an uninterrupted line of 
> dashes) and should be "-- ---..." (a space after the 2 dashes).
> (source: https://dev.mysql.com/doc/refman/5.7/en/comments.html)
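
The MySQL rule at issue: "--" starts a comment only when followed by whitespace (or end of line), so a solid rule of dashes is parsed as SQL, not as a comment. A small illustrative checker (my own sketch, not Hive code):

```java
// Per the MySQL manual, "--" must be followed by whitespace or end of line
// to start a line comment; "#" always starts one. A bare run of dashes
// such as "--------" is therefore a syntax error, not a comment.
public class MysqlComment {
    static boolean isLineComment(String line) {
        String t = line.trim();
        if (t.startsWith("#")) return true;
        if (!t.startsWith("--")) return false;
        return t.length() == 2 || Character.isWhitespace(t.charAt(2));
    }
}
```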





[jira] [Commented] (HIVE-5901) Query cancel should stop running MR tasks

2014-03-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925091#comment-13925091
 ] 

Ashutosh Chauhan commented on HIVE-5901:


Has someone tested this with hadoop-2? Asking because, AFAIK, doing Ctrl-C on 
the Hive CLI doesn't cancel MR tasks on hadoop-2, and since we use the same API 
I would assume this would result in the same problem on hadoop-2 even for this 
issue.

> Query cancel should stop running MR tasks
> -
>
> Key: HIVE-5901
> URL: https://issues.apache.org/jira/browse/HIVE-5901
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-5901.1.patch.txt, HIVE-5901.2.patch.txt, 
> HIVE-5901.3.patch.txt, HIVE-5901.4.patch.txt, HIVE-5901.5.patch.txt, 
> HIVE-5901.6.patch.txt, HIVE-5901.7.patch.txt
>
>
> Currently, query canceling does not stop running MR job immediately.





[jira] [Commented] (HIVE-6559) sourcing txn-script from schema script results in failure for mysql & oracle

2014-03-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925090#comment-13925090
 ] 

Ashutosh Chauhan commented on HIVE-6559:


Although HIVE-6583 will help one get past the problem I pasted above, it doesn't 
help completely. The new requirement of sourcing another script means you need 
to be in the same directory for the sourcing to work correctly. Otherwise, you 
will get 
{code}
ERROR: 
Failed to open file 'hive-txn-schema-0.13.0.mysql.sql', error: 2
{code}
This is an inconvenience for users, but the real problem is for tools, because 
they now need to cd into the correct directory to invoke this script.
I believe we should not source another script, but instead inline all SQL 
statements in the existing script.

> sourcing txn-script from schema script results in failure for mysql & oracle
> 
>
> Key: HIVE-6559
> URL: https://issues.apache.org/jira/browse/HIVE-6559
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.13.0
>Reporter: Ashutosh Chauhan
>Assignee: Alan Gates
> Fix For: 0.13.0
>
>
> On mysql, I got:
> ERROR 1064 (42000): You have an error in your SQL syntax; check the manual 
> that corresponds to your MySQL server version for the right syntax to use 
> near '
> 
> SOURCE hive-txn-schem' at line 1
> On Oracle, I got:
> SP2-0310: unable to open file "hive-txn-schema-0.13.0.oracle.sql" 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6583) wrong sql comments : ----... instead of -- ---...

2014-03-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925089#comment-13925089
 ] 

Ashutosh Chauhan commented on HIVE-6583:


+1

> wrong sql comments : ----... instead of -- ---...
> -
>
> Key: HIVE-6583
> URL: https://issues.apache.org/jira/browse/HIVE-6583
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Affects Versions: 0.14.0
>Reporter: Pierre Nerzic
>Priority: Minor
> Fix For: 0.14.0
>
> Attachments: HIVE-6583.1.patch.txt
>
>
> In file metastore/scripts/upgrade/mysql/hive-schema-0.13.0.mysql.sql, lines 
> 799 and 801, a comment is written as ----... (an uninterrupted line of 
> dashes) and should be -- ---... (a space after the first 2 dashes)
> (source https://dev.mysql.com/doc/refman/5.7/en/comments.html)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6583) wrong sql comments : ----... instead of -- ---...

2014-03-08 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6583:
---

Assignee: Pierre Nerzic

> wrong sql comments : ----... instead of -- ---...
> -
>
> Key: HIVE-6583
> URL: https://issues.apache.org/jira/browse/HIVE-6583
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Affects Versions: 0.14.0
>Reporter: Pierre Nerzic
>Assignee: Pierre Nerzic
>Priority: Minor
> Fix For: 0.14.0
>
> Attachments: HIVE-6583.1.patch.txt
>
>
> In file metastore/scripts/upgrade/mysql/hive-schema-0.13.0.mysql.sql, lines 
> 799 and 801, a comment is written as ----... (an uninterrupted line of 
> dashes) and should be -- ---... (a space after the first 2 dashes)
> (source https://dev.mysql.com/doc/refman/5.7/en/comments.html)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-6596) build.xml is missing from trunk and branch 0.13

2014-03-08 Thread Amit Anand (JIRA)
Amit Anand created HIVE-6596:


 Summary: build.xml is missing from trunk and branch 0.13
 Key: HIVE-6596
 URL: https://issues.apache.org/jira/browse/HIVE-6596
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.13.0
 Environment: Hadoop 2.2.0, JDK 7, Centos 6.5
Reporter: Amit Anand






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6573) Oracle metastore doesnt come up when hive.cluster.delegation.token.store.class is set to DBTokenStore

2014-03-08 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6573:
---

Fix Version/s: (was: 0.14.0)
   0.13.0

> Oracle metastore doesnt come up when 
> hive.cluster.delegation.token.store.class is set to DBTokenStore
> -
>
> Key: HIVE-6573
> URL: https://issues.apache.org/jira/browse/HIVE-6573
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Security
>Affects Versions: 0.12.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Blocker
> Fix For: 0.13.0
>
> Attachments: HIVE-6573.patch
>
>
> This config {{hive.cluster.delegation.token.store.class}} was introduced in 
> HIVE-3255 and is useful only if oracle metastore is used in secure setup with 
> HA config.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6573) Oracle metastore doesnt come up when hive.cluster.delegation.token.store.class is set to DBTokenStore

2014-03-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925084#comment-13925084
 ] 

Ashutosh Chauhan commented on HIVE-6573:


Committed to 0.13 branch as well.

> Oracle metastore doesnt come up when 
> hive.cluster.delegation.token.store.class is set to DBTokenStore
> -
>
> Key: HIVE-6573
> URL: https://issues.apache.org/jira/browse/HIVE-6573
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Security
>Affects Versions: 0.12.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Blocker
> Fix For: 0.13.0
>
> Attachments: HIVE-6573.patch
>
>
> This config {{hive.cluster.delegation.token.store.class}} was introduced in 
> HIVE-3255 and is useful only if oracle metastore is used in secure setup with 
> HA config.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6583) wrong sql comments : ----... instead of -- ---...

2014-03-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925083#comment-13925083
 ] 

Hive QA commented on HIVE-6583:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12633382/HIVE-6583.1.patch.txt

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5374 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_dyn_part
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1667/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1667/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12633382

> wrong sql comments : ----... instead of -- ---...
> -
>
> Key: HIVE-6583
> URL: https://issues.apache.org/jira/browse/HIVE-6583
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Affects Versions: 0.14.0
>Reporter: Pierre Nerzic
>Priority: Minor
> Fix For: 0.14.0
>
> Attachments: HIVE-6583.1.patch.txt
>
>
> In file metastore/scripts/upgrade/mysql/hive-schema-0.13.0.mysql.sql, lines 
> 799 and 801, a comment is written as ----... (an uninterrupted line of 
> dashes) and should be -- ---... (a space after the first 2 dashes)
> (source https://dev.mysql.com/doc/refman/5.7/en/comments.html)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Timeline for the Hive 0.13 release?

2014-03-08 Thread Xuefu Zhang
Hi Harish,


HIVE-6414 seems to be missing from:
https://cwiki.apache.org/confluence/display/Hive/Hive+0.13+release+status.

Thanks,
Xuefu


On Fri, Mar 7, 2014 at 10:41 AM, Harish Butani wrote:

> ok adding 6414, 6572, 6574 to the list
> On Mar 7, 2014, at 10:27 AM, Xuefu Zhang  wrote:
>
> > HIVE-6414 seems bad enough to be included.
> >
> >
> > On Thu, Mar 6, 2014 at 3:28 PM, Sushanth Sowmyan 
> wrote:
> >
> >> One more, I'm afraid. I'll add it to the wiki. I just uploaded a patch
> >> for https://issues.apache.org/jira/browse/HIVE-6572 . This fixes one
> >> glaring bug on usage of mapred.min.split.size.per.rack and
> >> mapred.min.split.size.per.node, and makes sure some of the other conf
> >> parameters we use are handled correctly through shims. I want this to go into
> >> 0.13 because I don't want a release of ours to go out with the
> >> dual/old configs (and erroneous config in the per.rack/node case) and
> >> have that be expected behaviour to support in the future.
> >>
> >> On Thu, Mar 6, 2014 at 11:04 AM, Harish Butani  >
> >> wrote:
> >>> ok sure.
> >>> Tracking these with the JQL below. I don’t have permission to set up a
> >> Shared Filter; can someone help with this.
> >>> Of the 35 issues: 11 are still open, 22 are patch available, 2 are
> >> resolved.
> >>>
> >>> regards,
> >>> Harish.
> >>>
> >>> JQL:
> >>>
> >>> id in (HIVE-5317, HIVE-5843, HIVE-6060, HIVE-6319, HIVE-6460,
> HIVE-5687,
> >> HIVE-5943, HIVE-5942, HIVE-6547, HIVE-5155, HIVE-6486, HIVE-6455,
> >> HIVE-4177, HIVE-4764, HIVE-6306, HIVE-6350, HIVE-6485, HIVE-6507,
> >> HIVE-6499, HIVE-6325, HIVE-6558, HIVE-6403, HIVE-4790, HIVE-4293,
> >> HIVE-6551, HIVE-6359, HIVE-6314, HIVE-6241, HIVE-5768, HIVE-2752,
> >> HIVE-6312, HIVE-6129, HIVE-6012, HIVE-6434, HIVE-6562) ORDER BY status
> ASC,
> >> assignee
> >>>
> >>> On Mar 5, 2014, at 6:50 PM, Prasanth Jayachandran <
> >> pjayachand...@hortonworks.com> wrote:
> >>>
>  Can you consider HIVE-6562 as well?
> 
>  HIVE-6562 - Protection from exceptions in ORC predicate evaluation
> 
>  Thanks
>  Prasanth Jayachandran
> 
>  On Mar 5, 2014, at 5:56 PM, Jason Dere  wrote:
> 
> >
> > Would like to get these in, if possible:
> >
> > HIVE-6012 restore backward compatibility of arithmetic operations
> > HIVE-6434 Restrict function create/drop to admin roles
> >
> > On Mar 5, 2014, at 5:41 PM, Navis류승우  wrote:
> >
> >> I have really big wish list(65 pending) but it would be time to
> focus
> >> on
> >> finalization.
> >>
> >> - Small bugs
> >> HIVE-6403 uncorrelated subquery is failing with
> auto.convert.join=true
> >> HIVE-4790 MapredLocalTask task does not make virtual columns
> >> HIVE-4293 Predicates following UDTF operator are removed by PPD
> >>
> >> - Trivials
> >> HIVE-6551 group by after join with skew join optimization references
> >> invalid task sometimes
> >> HIVE-6359 beeline -f fails on scripts with tabs in them.
> >> HIVE-6314 The logging (progress reporting) is too verbose
> >> HIVE-6241 Remove direct reference of Hadoop23Shims inQTestUtil
> >> HIVE-5768 Beeline connection cannot be closed with !close command
> >> HIVE-2752 Index names are case sensitive
> >>
> >> - Memory leakage
> >> HIVE-6312 doAs with plain sasl auth should be session aware
> >>
> >> - Implementation is not accord with document
> >> HIVE-6129 alter exchange is implemented in inverted manner
> >>
> >> I'll update the wiki, too.
> >>
> >>
> >>
> >>
> >> 2014-03-05 12:18 GMT+09:00 Harish Butani :
> >>
> >>> Tracking jiras to be applied to branch 0.13 here:
> >>>
> >>
> https://cwiki.apache.org/confluence/display/Hive/Hive+0.13+release+status
> >>>
> >>> On Mar 4, 2014, at 5:45 PM, Harish Butani  >
> >> wrote:
> >>>
>  the branch is created.
>  have changed the poms in both branches.
>  Planning to setup a wikipage to track jiras that will get ported
> to
> >> 0.13
> 
>  regards,
>  Harish.
> 
> 
>  On Mar 4, 2014, at 5:05 PM, Harish Butani <
> hbut...@hortonworks.com>
> >>> wrote:
> 
> > branching now. Will be changing the pom files on trunk.
> > Will send another email when the branch and trunk changes are in.
> >
> >
> > On Mar 4, 2014, at 4:03 PM, Sushanth Sowmyan  >
> >>> wrote:
> >
> >> I have two patches still as patch-available, that have had +1s
> as
> >> well, but are waiting on pre-commit tests picking them up go in
> to
> >> 0.13:
> >>
> >> https://issues.apache.org/jira/browse/HIVE-6507 (refactor of
> >> table
> >> property names from string constants to an enum in OrcFile)
> >> https://issues.apache.org/jira/browse/HIVE-6499 (fixes bug
> where
> >> calls
> >> like create table and drop table can fail if metastore-side
>

[jira] [Commented] (HIVE-6457) Ensure Parquet integration has good error messages for data types not supported

2014-03-08 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925078#comment-13925078
 ] 

Xuefu Zhang commented on HIVE-6457:
---

+1 pending test result.

> Ensure Parquet integration has good error messages for data types not 
> supported
> ---
>
> Key: HIVE-6457
> URL: https://issues.apache.org/jira/browse/HIVE-6457
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 0.13.0
>Reporter: Brock Noland
>Assignee: Brock Noland
>  Labels: parquet
> Attachments: HIVE-6457.patch, HIVE-6457.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6595) Hive 0.11.0 build failure

2014-03-08 Thread Amit Anand (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925076#comment-13925076
 ] 

Amit Anand commented on HIVE-6595:
--

I tried applying the patch 
https://issues.apache.org/jira/secure/attachment/12589481/HIVE-4496-2.patch 
on release 0.12.0 and the jdbc compilation still fails.

I am not able to compile 0.11.0 or 0.12.0 with JDK 7.
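For background (my own sketch, not from the ticket): JDK 7 compiles against JDBC 4.1, which added new abstract methods such as Statement.isCloseOnCompletion() and Connection.getNetworkTimeout() to the java.sql interfaces, so drivers written against JDBC 4.0 stop compiling, matching the errors quoted below. This can be confirmed reflectively:

```java
import java.lang.reflect.Method;
import java.sql.Connection;
import java.sql.Statement;

public class Jdbc41Check {
    public static void main(String[] args) throws NoSuchMethodException {
        // These methods were added to the java.sql interfaces in JDBC 4.1
        // (bundled with JDK 7); an implementation compiled against JDBC 4.0
        // no longer overrides all abstract methods, hence the javac errors.
        Method m1 = Statement.class.getMethod("isCloseOnCompletion");
        Method m2 = Connection.class.getMethod("getNetworkTimeout");
        System.out.println(m1.getName());
        System.out.println(m2.getName());
    }
}
```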

> Hive 0.11.0 build failure
> -
>
> Key: HIVE-6595
> URL: https://issues.apache.org/jira/browse/HIVE-6595
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Affects Versions: 0.11.0
> Environment: CentOS 6.5, java version "1.7.0_45", Hadoop 2.2.0
>Reporter: Amit Anand
>
> I am unable to build Hive 0.11.0 from the source. I have a single node hadoop 
> 2.2.0, that I built from the source, running. 
> I followed steps given below:
> svn co http://svn.apache.org/repos/asf/hive/tags/release-0.11.0/ hive-0.11.0
> cd hive-0.11.0
> ant clean
> ant package
> I got messages given below 
> compile:
>  [echo] Project: jdbc
> [javac] Compiling 28 source files to 
> /opt/apache/source/hive-0.11.0/build/jdbc/classes
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java:48:
>  error: HiveCallableStatement is not abstract and does not override abstract 
> method getObject(String,Class<T>) in CallableStatement
> [javac] public class HiveCallableStatement implements 
> java.sql.CallableStatement {
> [javac]^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> <T>getObject(String,Class<T>)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java:65:
>  error: HiveConnection is not abstract and does not override abstract method 
> getNetworkTimeout() in Connection
> [javac] public class HiveConnection implements java.sql.Connection {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDataSource.java:31:
>  error: HiveDataSource is not abstract and does not override abstract method 
> getParentLogger() in CommonDataSource
> [javac] public class HiveDataSource implements DataSource {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:56:
>  error: HiveDatabaseMetaData is not abstract and does not override abstract 
> method generatedKeyAlwaysReturned() in DatabaseMetaData
> [javac] public class HiveDatabaseMetaData implements DatabaseMetaData {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:707:
>  error:  is not 
> abstract and does not override abstract method getObject(String,Class<T>) 
> in ResultSet
> [javac] , null) {
> [javac] ^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> <T>getObject(String,Class<T>)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDriver.java:35:
>  error: HiveDriver is not abstract and does not override abstract method 
> getParentLogger() in Driver
> [javac] public class HiveDriver implements Driver {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java:56:
>  error: HivePreparedStatement is not abstract and does not override abstract 
> method isCloseOnCompletion() in Statement
> [javac] public class HivePreparedStatement implements PreparedStatement {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java:48:
>  error: HiveQueryResultSet is not abstract and does not override abstract 
> method getObject(String,Class<T>) in ResultSet
> [javac] public class HiveQueryResultSet extends HiveBaseResultSet {
> [javac]^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> <T>getObject(String,Class<T>)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java:42:
>  error: HiveStatement is not abstract and does not override abstract method 
> isCloseOnCompletion() in Statement
> [javac] public class HiveStatement implements java.sql.Statement {
> [javac]^
> [javac] Note: Some input files use or override a deprecated API.
> [javac] Note: Recompile with -Xlint:deprecation for details.
> [javac] Note: Some input files use unchecked or unsafe operations.
> [javac] Note: Recompile with -Xlint:unchecked for details.
> [javac] 9 errors
> BUILD FAILED
> /opt/apache/source/hive-0.11.0/build.xml:274: The following error occurred 
>

[jira] [Commented] (HIVE-6575) select * fails on parquet table with map datatype

2014-03-08 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925075#comment-13925075
 ] 

Xuefu Zhang commented on HIVE-6575:
---

+1 pending test result.

> select * fails on parquet table with map datatype
> -
>
> Key: HIVE-6575
> URL: https://issues.apache.org/jira/browse/HIVE-6575
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 0.13.0
>Reporter: Szehon Ho
>Assignee: Szehon Ho
>  Labels: parquet
> Attachments: HIVE-6575.2.patch, HIVE-6575.3.patch, HIVE-6575.patch
>
>
> Create a parquet table with a map and run select * from parquet_table; it 
> returns the following exception:
> {noformat}
>  FAILED: RuntimeException java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.io.parquet.serde.DeepParquetHiveMapInspector cannot 
> be cast to 
> org.apache.hadoop.hive.ql.io.parquet.serde.StandardParquetHiveMapInspector
> {noformat}
> However, a select of specific columns from parquet_table seems to work, and 
> thus joins will work.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 18925: HIVE-6575 select * fails on parquet table with map datatype

2014-03-08 Thread Xuefu Zhang


> On March 8, 2014, 12:33 a.m., Xuefu Zhang wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java,
> >  line 154
> > 
> >
> > I guess I wasn't clear. It's not inappropriate, but it goes beyond its 
> > responsibility. An equals implementation operates within a context, which is 
> > the class. The instance to be checked doesn't necessarily have the runtime 
> > class info. In fact, the context shouldn't be aware of the runtime class of 
> > these instances, as child classes can be added at any time. Doing getClass() 
> > == other.getClass() goes beyond the current context.
> > 
> > What's more appropriate is to check type compatibility by calling if 
> > (other instanceof this.class). This is different from checking 
> > this.getClass() == other.getClass().
> > 
> > Take Java ArrayList.equals() method as an example. 
> > (http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/AbstractList.java#AbstractList.equals%28java.lang.Object%29).
> >  This method doesn't do runtime class check. The implementation is saying, 
> > other.getClass() doesn't have to be ArrayList, but has to be an instance of 
> > ArrayList. It could be an instance of MyArrayList as long as MyArrayList is 
> > inherited from ArrayList.
> > 
> > If we think it's more practical to do such a check, we'd expect that 
> > ArrayList.equals() would also check this.getClass() == other.getClass().
> > 
> > Btw, I don't understand how removing this check breaks transitivity.
> > 
> > I understand this check was there before your change. I missed it in my 
> > previous review.
> >
> 
> Szehon Ho wrote:
> Hm, I actually did not realize that Java's code does that for collections; 
> thanks for pointing that out. I suppose in the list case, the semantic is that 
> the user doesn't care about the list implementation, but about the contents. 
> 
> What I meant about breaking the transitive property if you allow each 
> class to decide: say we remove the check of runtime class equality. There is a 
> subclass 'A' which chooses to override equals to return true only if 
> 'other' is an A. Another subclass 'B' doesn't override .equals, and by 
> inheritance can return true if 'other' is any subclass of the parent (A or B). 
> A.equals(B) is false, B.equals(A) is true, breaking that property. Now that I 
> think about it, this argument doesn't justify having the check in the parent 
> one way or another; all I meant is that a class cannot implement .equals just 
> in its own context as you mentioned. All subclasses must choose the same 
> approach to be consistent, and I felt that having this check in the parent 
> would ensure that all the children followed it.
> 
> But coming back down to this particular issue, I still don't think it's 
> safe to remove that check. There are two subclasses of 
> AbstractParquetMapInspector, the Deep and the Standard one, depending on the 
> type of map. If we don't do this check, then Deep will be considered equal to 
> Standard, and perhaps the wrong one may be returned from the cache and used in 
> the inspection; they are not interchangeable. This is unlike Java's List and 
> Map; here the actual class matters more than the content. At least that is my 
> understanding from looking at the code.

Okay. Frankly, I don't know what the difference between the two child classes 
is; the whole parquet code is very confusing. Since the check was there before 
this change, it's fine to keep it as is.
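As an aside, the equals() symmetry concern discussed above can be sketched as follows (Base and Strict are illustrative names, not Hive classes):

```java
public class EqualsSymmetry {
    static class Base {
        final int v;
        Base(int v) { this.v = v; }
        @Override
        public boolean equals(Object o) {
            // instanceof-based: accepts any subclass of Base
            return o instanceof Base && ((Base) o).v == v;
        }
        @Override
        public int hashCode() { return v; }
    }

    static class Strict extends Base {
        Strict(int v) { super(v); }
        @Override
        public boolean equals(Object o) {
            // getClass-based: rejects the parent and sibling classes
            return o != null && getClass() == o.getClass() && ((Strict) o).v == v;
        }
        @Override
        public int hashCode() { return v; }
    }

    public static void main(String[] args) {
        Base a = new Base(1);
        Strict b = new Strict(1);
        System.out.println(a.equals(b)); // true  (instanceof accepts subclass)
        System.out.println(b.equals(a)); // false (getClass rejects parent)
        // equals is no longer symmetric, which is exactly what breaks
        // HashMap-backed caches keyed on such objects.
    }
}
```

Mixing the two styles across a class hierarchy is what produces the asymmetric behavior; picking one style for the whole hierarchy avoids it.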


- Xuefu


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18925/#review36586
---


On March 8, 2014, 12:01 a.m., Szehon Ho wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/18925/
> ---
> 
> (Updated March 8, 2014, 12:01 a.m.)
> 
> 
> Review request for hive, Brock Noland, justin coffey, and Xuefu Zhang.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> The issue is that, as part of a select * query, a DeepParquetHiveMapInspector 
> is used for one column of the overall parquet-table struct object inspector.  
> 
> The problem lies in the ObjectInspectorFactory's cache for struct object 
> inspectors.  For performance, there is a cache keyed on an array list of the 
> object inspectors of all columns.  The second time the query is run, it 
> attempts to look up the cached struct inspector.  But when the hashmap looks 
> up the part of the key consisting of the DeepParquetHiveMapInspector, Java 
> calls .equals against the existing DeepParquetHiveMapInspector.  This fails, 
> as the .equals method cast the "other" to a StandardParquetHiveMapInspector.
> 
> Regenerating the .e

[jira] [Commented] (HIVE-6595) Hive 0.11.0 build failure

2014-03-08 Thread Amit Anand (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925061#comment-13925061
 ] 

Amit Anand commented on HIVE-6595:
--

Is there a patch for 0.11.0? 

> Hive 0.11.0 build failure
> -
>
> Key: HIVE-6595
> URL: https://issues.apache.org/jira/browse/HIVE-6595
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Affects Versions: 0.11.0
> Environment: CentOS 6.5, java version "1.7.0_45", Hadoop 2.2.0
>Reporter: Amit Anand
>
> I am unable to build Hive 0.11.0 from the source. I have a single node hadoop 
> 2.2.0, that I built from the source, running. 
> I followed steps given below:
> svn co http://svn.apache.org/repos/asf/hive/tags/release-0.11.0/ hive-0.11.0
> cd hive-0.11.0
> ant clean
> ant package
> I got messages given below 
> compile:
>  [echo] Project: jdbc
> [javac] Compiling 28 source files to 
> /opt/apache/source/hive-0.11.0/build/jdbc/classes
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java:48:
>  error: HiveCallableStatement is not abstract and does not override abstract 
> method getObject(String,Class<T>) in CallableStatement
> [javac] public class HiveCallableStatement implements 
> java.sql.CallableStatement {
> [javac]^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> <T>getObject(String,Class<T>)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java:65:
>  error: HiveConnection is not abstract and does not override abstract method 
> getNetworkTimeout() in Connection
> [javac] public class HiveConnection implements java.sql.Connection {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDataSource.java:31:
>  error: HiveDataSource is not abstract and does not override abstract method 
> getParentLogger() in CommonDataSource
> [javac] public class HiveDataSource implements DataSource {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:56:
>  error: HiveDatabaseMetaData is not abstract and does not override abstract 
> method generatedKeyAlwaysReturned() in DatabaseMetaData
> [javac] public class HiveDatabaseMetaData implements DatabaseMetaData {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:707:
>  error:  is not 
> abstract and does not override abstract method getObject(String,Class<T>) 
> in ResultSet
> [javac] , null) {
> [javac] ^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> <T>getObject(String,Class<T>)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDriver.java:35:
>  error: HiveDriver is not abstract and does not override abstract method 
> getParentLogger() in Driver
> [javac] public class HiveDriver implements Driver {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java:56:
>  error: HivePreparedStatement is not abstract and does not override abstract 
> method isCloseOnCompletion() in Statement
> [javac] public class HivePreparedStatement implements PreparedStatement {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java:48:
>  error: HiveQueryResultSet is not abstract and does not override abstract 
> method getObject(String,Class<T>) in ResultSet
> [javac] public class HiveQueryResultSet extends HiveBaseResultSet {
> [javac]^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> <T>getObject(String,Class<T>)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java:42:
>  error: HiveStatement is not abstract and does not override abstract method 
> isCloseOnCompletion() in Statement
> [javac] public class HiveStatement implements java.sql.Statement {
> [javac]^
> [javac] Note: Some input files use or override a deprecated API.
> [javac] Note: Recompile with -Xlint:deprecation for details.
> [javac] Note: Some input files use unchecked or unsafe operations.
> [javac] Note: Recompile with -Xlint:unchecked for details.
> [javac] 9 errors
> BUILD FAILED
> /opt/apache/source/hive-0.11.0/build.xml:274: The following error occurred 
> while executing this line:
> /opt/apache/source/hive-0.11.0/build.xml:113: The following error occurred 
> while executing this line:
> /opt/apache/source/hive-0.11.0/build.xml:115

[jira] [Commented] (HIVE-6508) Mismatched results between vector and non-vector mode with decimal field

2014-03-08 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925059#comment-13925059
 ] 

Jitendra Nath Pandey commented on HIVE-6508:


Committed to branch-0.13

> Mismatched results between vector and non-vector mode with decimal field
> 
>
> Key: HIVE-6508
> URL: https://issues.apache.org/jira/browse/HIVE-6508
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Fix For: 0.13.0, 0.14.0
>
> Attachments: HIVE-6508.1.patch, HIVE-6508.1.patch
>
>
> Following query has a little mismatch in result as compared to the non-vector 
> mode.
> {code}
> select d_year, i_brand_id, i_brand,
>sum(ss_ext_sales_price) as sum_agg
> from date_dim
> join store_sales on date_dim.d_date_sk = store_sales.ss_sold_date_sk
> join item on store_sales.ss_item_sk = item.i_item_sk
> where i_manufact_id = 128
>   and d_moy = 11
> group by d_year, i_brand, i_brand_id
> order by d_year, sum_agg desc, i_brand_id
> limit 100;
> {code}
> This query is on tpcds data.
> The field ss_ext_sales_price is of type decimal(7,2) and everything else is 
> an integer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6595) Hive 0.11.0 build failure

2014-03-08 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925057#comment-13925057
 ] 

Szehon Ho commented on HIVE-6595:
-

This seems to have been fixed already by HIVE-4496 for 0.12.

> Hive 0.11.0 build failure
> -
>
> Key: HIVE-6595
> URL: https://issues.apache.org/jira/browse/HIVE-6595
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Affects Versions: 0.11.0
> Environment: CentOS 6.5, java version "1.7.0_45", Hadoop 2.2.0
>Reporter: Amit Anand
>
> I am unable to build Hive 0.11.0 from the source. I have a single node hadoop 
> 2.2.0, that I built from the source, running. 
> I followed steps given below:
> svn co http://svn.apache.org/repos/asf/hive/tags/release-0.11.0/ hive-0.11.0
> cd hive-0.11.0
> ant clean
> ant package
> I got messages given below 
> compile:
>  [echo] Project: jdbc
> [javac] Compiling 28 source files to 
> /opt/apache/source/hive-0.11.0/build/jdbc/classes
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java:48:
>  error: HiveCallableStatement is not abstract and does not override abstract 
> method getObject(String,Class<T>) in CallableStatement
> [javac] public class HiveCallableStatement implements 
> java.sql.CallableStatement {
> [javac]^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> <T>getObject(String,Class<T>)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java:65:
>  error: HiveConnection is not abstract and does not override abstract method 
> getNetworkTimeout() in Connection
> [javac] public class HiveConnection implements java.sql.Connection {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDataSource.java:31:
>  error: HiveDataSource is not abstract and does not override abstract method 
> getParentLogger() in CommonDataSource
> [javac] public class HiveDataSource implements DataSource {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:56:
>  error: HiveDatabaseMetaData is not abstract and does not override abstract 
> method generatedKeyAlwaysReturned() in DatabaseMetaData
> [javac] public class HiveDatabaseMetaData implements DatabaseMetaData {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:707:
>  error:  is not 
> abstract and does not override abstract method getObject(String,Class) 
> in ResultSet
> [javac] , null) {
> [javac] ^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> getObject(String,Class)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDriver.java:35:
>  error: HiveDriver is not abstract and does not override abstract method 
> getParentLogger() in Driver
> [javac] public class HiveDriver implements Driver {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java:56:
>  error: HivePreparedStatement is not abstract and does not override abstract 
> method isCloseOnCompletion() in Statement
> [javac] public class HivePreparedStatement implements PreparedStatement {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java:48:
>  error: HiveQueryResultSet is not abstract and does not override abstract 
> method getObject(String,Class) in ResultSet
> [javac] public class HiveQueryResultSet extends HiveBaseResultSet {
> [javac]^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> getObject(String,Class)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java:42:
>  error: HiveStatement is not abstract and does not override abstract method 
> isCloseOnCompletion() in Statement
> [javac] public class HiveStatement implements java.sql.Statement {
> [javac]^
> [javac] Note: Some input files use or override a deprecated API.
> [javac] Note: Recompile with -Xlint:deprecation for details.
> [javac] Note: Some input files use unchecked or unsafe operations.
> [javac] Note: Recompile with -Xlint:unchecked for details.
> [javac] 9 errors
> BUILD FAILED
> /opt/apache/source/hive-0.11.0/build.xml:274: The following error occurred 
> while executing this line:
> /opt/apache/source/hive-0.11.0/build.xml:113: The following error occurred 
> while executing this line:
> /opt/apache/source/hive-0.11.

[jira] [Commented] (HIVE-6582) missing ; in HTML entities like &lt; in conf file

2014-03-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925055#comment-13925055
 ] 

Hive QA commented on HIVE-6582:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12633369/HIVE-6582.1.patch.txt

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5373 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1666/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1666/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12633369

> missing ; in HTML entities like &lt; in conf file
> -
>
> Key: HIVE-6582
> URL: https://issues.apache.org/jira/browse/HIVE-6582
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 0.14.0
>Reporter: Pierre Nerzic
>Priority: Trivial
> Attachments: HIVE-6582.1.patch.txt
>
>
> In conf/hive-default.xml.template, line 2392, the description of the property 
> is malformed :  
> on the same line.
> (I have problems with wikification to display &lt; and not <)
> This causes an error when launching hive : org.xml.sax.SAXParseException 
> (translated from french) reference to entity "lthive.user.install.directory" 
> must end with ';'.
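The failure mode described above can be reproduced with a plain JAXP parse; a minimal sketch (the class and element names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;

// Illustrative repro of the HIVE-6582 failure mode: an entity reference
// that is not terminated with ';' makes the XML parser read the following
// text as part of the entity name and fail.
public class EntityTerminationSketch {
    static boolean parses(String xml) {
        try {
            DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Properly terminated entity: parses fine.
        System.out.println(parses("<description>&lt;value</description>"));
        // Missing ';' after &lt -- the parser reports that the reference
        // to entity "ltvalue" must end with ';', as in the bug report.
        System.out.println(parses("<description>&ltvalue</description>"));
    }
}
```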



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6595) Hive 0.11.0 build failure

2014-03-08 Thread Amit Anand (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Anand updated HIVE-6595:
-

Description: 
I am unable to build Hive 0.11.0 from source. I have a single-node Hadoop 
2.2.0 cluster, built from source, running. 

I followed the steps given below:

svn co http://svn.apache.org/repos/asf/hive/tags/release-0.11.0/ hive-0.11.0
cd hive-0.11.0
ant clean
ant package

I got the messages given below:


compile:
 [echo] Project: jdbc
[javac] Compiling 28 source files to 
/opt/apache/source/hive-0.11.0/build/jdbc/classes
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java:48:
 error: HiveCallableStatement is not abstract and does not override abstract 
method getObject(String,Class) in CallableStatement
[javac] public class HiveCallableStatement implements 
java.sql.CallableStatement {
[javac]^
[javac]   where T is a type-variable:
[javac] T extends Object declared in method 
getObject(String,Class)
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java:65:
 error: HiveConnection is not abstract and does not override abstract method 
getNetworkTimeout() in Connection
[javac] public class HiveConnection implements java.sql.Connection {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDataSource.java:31:
 error: HiveDataSource is not abstract and does not override abstract method 
getParentLogger() in CommonDataSource
[javac] public class HiveDataSource implements DataSource {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:56:
 error: HiveDatabaseMetaData is not abstract and does not override abstract 
method generatedKeyAlwaysReturned() in DatabaseMetaData
[javac] public class HiveDatabaseMetaData implements DatabaseMetaData {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:707:
 error:  is not abstract 
and does not override abstract method getObject(String,Class) in ResultSet
[javac] , null) {
[javac] ^
[javac]   where T is a type-variable:
[javac] T extends Object declared in method 
getObject(String,Class)
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDriver.java:35:
 error: HiveDriver is not abstract and does not override abstract method 
getParentLogger() in Driver
[javac] public class HiveDriver implements Driver {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java:56:
 error: HivePreparedStatement is not abstract and does not override abstract 
method isCloseOnCompletion() in Statement
[javac] public class HivePreparedStatement implements PreparedStatement {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java:48:
 error: HiveQueryResultSet is not abstract and does not override abstract 
method getObject(String,Class) in ResultSet
[javac] public class HiveQueryResultSet extends HiveBaseResultSet {
[javac]^
[javac]   where T is a type-variable:
[javac] T extends Object declared in method 
getObject(String,Class)
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java:42:
 error: HiveStatement is not abstract and does not override abstract method 
isCloseOnCompletion() in Statement
[javac] public class HiveStatement implements java.sql.Statement {
[javac]^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 9 errors

BUILD FAILED
/opt/apache/source/hive-0.11.0/build.xml:274: The following error occurred 
while executing this line:
/opt/apache/source/hive-0.11.0/build.xml:113: The following error occurred 
while executing this line:
/opt/apache/source/hive-0.11.0/build.xml:115: The following error occurred 
while executing this line:
/opt/apache/source/hive-0.11.0/jdbc/build.xml:51: Compile failed; see the 
compiler error output for details.



  was:
k n,,  l,jvdh/.bvcx,mnbbvvvkjccvvcc   x, vv   FDCc I am unable to 
build Hive 0.11.0 from the source. I have a single node hadoop 2.2.0, that I 
built from the source, running. 

I followed steps given below:

svn co http://svn.apache.org/repos/asf/hive/tags/release-0.11.0/ hive-0.11.0
cd hive-0.11.0
ant clean
ant package

I got messages given below 


compile:
 [echo] Project: jdbc
[javac] Compiling 28 source files to 
/opt/apache/source

[jira] [Updated] (HIVE-6575) select * fails on parquet table with map datatype

2014-03-08 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-6575:


Attachment: HIVE-6575.3.patch

Thanks Xuefu for the review. I responded to your latest comment and updated the 
patch; not sure if you had a chance to take a look.

> select * fails on parquet table with map datatype
> -
>
> Key: HIVE-6575
> URL: https://issues.apache.org/jira/browse/HIVE-6575
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 0.13.0
>Reporter: Szehon Ho
>Assignee: Szehon Ho
>  Labels: parquet
> Attachments: HIVE-6575.2.patch, HIVE-6575.3.patch, HIVE-6575.patch
>
>
> Create parquet table with map and run select * from parquet_table, returns 
> following exception:
> {noformat}
>  FAILED: RuntimeException java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.io.parquet.serde.DeepParquetHiveMapInspector cannot 
> be cast to 
> org.apache.hadoop.hive.ql.io.parquet.serde.StandardParquetHiveMapInspector
> {noformat}
> However select  from parquet_table seems to work, and thus joins will 
> work.





[jira] [Updated] (HIVE-6508) Mismatched results between vector and non-vector mode with decimal field

2014-03-08 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-6508:
---

Fix Version/s: 0.14.0
   0.13.0

> Mismatched results between vector and non-vector mode with decimal field
> 
>
> Key: HIVE-6508
> URL: https://issues.apache.org/jira/browse/HIVE-6508
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Fix For: 0.13.0, 0.14.0
>
> Attachments: HIVE-6508.1.patch, HIVE-6508.1.patch
>
>
> Following query has a little mismatch in result as compared to the non-vector 
> mode.
> {code}
> select d_year, i_brand_id, i_brand,
>sum(ss_ext_sales_price) as sum_agg
> from date_dim
> join store_sales on date_dim.d_date_sk = store_sales.ss_sold_date_sk
> join item on store_sales.ss_item_sk = item.i_item_sk
> where i_manufact_id = 128
>   and d_moy = 11
> group by d_year, i_brand, i_brand_id
> order by d_year, sum_agg desc, i_brand_id
> limit 100;
> {code}
> This query is on tpcds data.
> The field ss_ext_sales_price is of type decimal(7,2) and everything else is 
> an integer.





[jira] [Updated] (HIVE-6508) Mismatched results between vector and non-vector mode with decimal field

2014-03-08 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-6508:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk. It is a correctness bug, so I will port it to the 
hive-13 branch as well.

> Mismatched results between vector and non-vector mode with decimal field
> 
>
> Key: HIVE-6508
> URL: https://issues.apache.org/jira/browse/HIVE-6508
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Attachments: HIVE-6508.1.patch, HIVE-6508.1.patch
>
>
> Following query has a little mismatch in result as compared to the non-vector 
> mode.
> {code}
> select d_year, i_brand_id, i_brand,
>sum(ss_ext_sales_price) as sum_agg
> from date_dim
> join store_sales on date_dim.d_date_sk = store_sales.ss_sold_date_sk
> join item on store_sales.ss_item_sk = item.i_item_sk
> where i_manufact_id = 128
>   and d_moy = 11
> group by d_year, i_brand, i_brand_id
> order by d_year, sum_agg desc, i_brand_id
> limit 100;
> {code}
> This query is on tpcds data.
> The field ss_ext_sales_price is of type decimal(7,2) and everything else is 
> an integer.





[jira] [Commented] (HIVE-6531) Runtime errors in vectorized execution.

2014-03-08 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925042#comment-13925042
 ] 

Jitendra Nath Pandey commented on HIVE-6531:


The failed test is addressed by HIVE-6511. I have tested it after applying 
the HIVE-6511 patch.

> Runtime errors in vectorized execution.
> ---
>
> Key: HIVE-6531
> URL: https://issues.apache.org/jira/browse/HIVE-6531
> Project: Hive
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-6531.1.patch, HIVE-6531.2.patch, HIVE-6531.3.patch
>
>
> There are a few runtime errors observed in some of the tpcds queries for 
> following reasons:
> 1) VectorFileSinkOperator fails with LazyBinarySerde.
> 2) Decimal128 and Unsigned128 don't serialize correctly.





[jira] [Commented] (HIVE-6024) Load data local inpath unnecessarily creates a copy task

2014-03-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925027#comment-13925027
 ] 

Hive QA commented on HIVE-6024:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12633344/HIVE-6024.4.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5374 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_local_dir_test
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1665/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1665/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12633344

> Load data local inpath unnecessarily creates a copy task
> 
>
> Key: HIVE-6024
> URL: https://issues.apache.org/jira/browse/HIVE-6024
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Ashutosh Chauhan
>Assignee: Mohammad Kamrul Islam
> Attachments: HIVE-6024.1.patch, HIVE-6024.2.patch, HIVE-6024.3.patch, 
> HIVE-6024.4.patch
>
>
> Load data command creates an additional copy task only when it's loading from 
> {{local}}. It doesn't create this additional copy task while loading from DFS, 
> though.





Re: Proposal to switch to pull requests

2014-03-08 Thread Brock Noland
From my reading of the Apache git-GitHub integration blog post, we cannot use
pull requests as patches; we will only be notified of them and could
perhaps use them for code review.

One additional item I think we should investigate is disabling merge
commits on trunk and feature branches.
On Mar 7, 2014 7:57 PM, "Edward Capriolo"  wrote:

> We need to keep patches in Jira I feel. We have gotten better on the
> documentation front but having a patch in the jira is critical I feel. We
> must at least have a perma link to the changes.
>
>
> On Fri, Mar 7, 2014 at 8:40 PM, Sergey Shelukhin  >wrote:
>
> > +1 to git!
> >
> >
> > On Fri, Mar 7, 2014 at 12:46 PM, Xuefu Zhang 
> wrote:
> >
> > > Switching to git from svn seems to be a proposal slightly different
> from
> > > that of switching to pull request from the head of the thread.
> Personally
> > > I'm +1 to git, but I think patches are very portable and widely adopted
> > in
> > > Hadoop ecosystem and we should keep the practice. Thus, +1 to that
> also.
> > >
> > > --Xuefu
> > >
> > >
> > > On Fri, Mar 7, 2014 at 12:27 PM, Gunther Hagleitner <
> > > ghagleit...@hortonworks.com> wrote:
> > >
> > > > Once Prasad's loop finishes I'd like to add my +1 too.
> > > >
> > > >
> > > > On Fri, Mar 7, 2014 at 11:44 AM, Vaibhav Gumashta <
> > > > vgumas...@hortonworks.com
> > > > > wrote:
> > > >
> > > > > +1 for moving to git!
> > > > >
> > > > > Thanks,
> > > > > --Vaibhav
> > > > >
> > > > >
> > > > > On Fri, Mar 7, 2014 at 9:46 AM, Prasad Mujumdar <
> > pras...@cloudera.com
> > > > > >wrote:
> > > > >
> > > > > >   while (true) {
> > > > > >+1
> > > > > >   }
> > > > > >
> > > > > >   +1  // another, just in case ;)
> > > > > >
> > > > > > thanks
> > > > > > Prasad
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Fri, Mar 7, 2014 at 6:47 AM, kulkarni.swar...@gmail.com <
> > > > > > kulkarni.swar...@gmail.com> wrote:
> > > > > >
> > > > > > > +1
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Mar 7, 2014 at 1:05 AM, Thejas Nair <
> > > the...@hortonworks.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Should we start with moving our primary source code
> repository
> > > from
> > > > > > > > svn to git ? I feel git is more powerful and easy to use
> (once
> > > you
> > > > go
> > > > > > > > past the learning curve!).
> > > > > > > >
> > > > > > > >
> > > > > > > > On Wed, Mar 5, 2014 at 7:39 AM, Brock Noland <
> > br...@cloudera.com
> > > >
> > > > > > wrote:
> > > > > > > > > Personally I prefer the Github workflow, but I believe
> there
> > > have
> > > > > > been
> > > > > > > > > some challenges with that since the source for apache
> > projects
> > > > must
> > > > > > be
> > > > > > > > > stored in apache source control (git or svn).
> > > > > > > > >
> > > > > > > > > Relevent:
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://blogs.apache.org/infra/entry/improved_integration_between_apache_and
> > > > > > > > >
> > > > > > > > > On Wed, Mar 5, 2014 at 9:19 AM, kulkarni.swar...@gmail.com
> > > > > > > > >  wrote:
> > > > > > > > >> Hello,
> > > > > > > > >>
> > > > > > > > >> Since we have a nice mirrored git repository for hive[1],
> > any
> > > > > > specific
> > > > > > > > >> reason why we can't switch to doing pull requests instead
> of
> > > > > > patches?
> > > > > > > > IMHO
> > > > > > > > >> pull requests are awesome for peer review plus it is also
> > very
> > > > > easy
> > > > > > to
> > > > > > > > keep
> > > > > > > > >> track of JIRAs with open pull requests instead of looking
> > for
> > > > > JIRAs
> > > > > > > in a
> > > > > > > > >> "Patch Available" state. Also since they get updated
> > > > > automatically,
> > > > > > it
> > > > > > > > is
> > > > > > > > >> also very easy to see if a review comment made by a
> reviewer
> > > was
> > > > > > > > addressed
> > > > > > > > >> properly or not.
> > > > > > > > >>
> > > > > > > > >> Thoughts?
> > > > > > > > >>
> > > > > > > > >> Thanks,
> > > > > > > > >>
> > > > > > > > >> [1] https://github.com/apache/hive
> > > > > > > > >>
> > > > > > > > >> --
> > > > > > > > >> Swarnim
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > Apache MRUnit - Unit testing MapReduce -
> > > > http://mrunit.apache.org
> > > > > > > >
> > > > > > > > --
> > > > > > > > CONFIDENTIALITY NOTICE
> > > > > > > > NOTICE: This message is intended for the use of the
> individual
> > or
> > > > > > entity
> > > > > > > to
> > > > > > > > which it is addressed and may contain information that is
> > > > > confidential,
> > > > > > > > privileged and exempt from disclosure under applicable law.
> If
> > > the
> > > > > > reader
> > > > > > > > of this message is not the intended recipient, you are hereby
> > > > > notified
> > > > > > > that
> > > > > > > > any printing, copying, dissemination, distribution,
> disclosure
> > or
> > > > > > > > forwarding of this communication is strictly prohibited. If
> you
> > > > have
> > > > > > > 

[jira] [Commented] (HIVE-6222) Make Vector Group By operator abandon grouping if too many distinct keys

2014-03-08 Thread Remus Rusanu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925004#comment-13925004
 ] 

Remus Rusanu commented on HIVE-6222:


https://reviews.apache.org/r/18943/

> Make Vector Group By operator abandon grouping if too many distinct keys
> 
>
> Key: HIVE-6222
> URL: https://issues.apache.org/jira/browse/HIVE-6222
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
>Priority: Minor
> Attachments: HIVE-6222.1.patch
>
>
> Row mode GBY is becoming a pass-through if not enough aggregation occurs on 
> the map side, relying on the shuffle+reduce side to do the work. Have VGBY do 
> the same.





Review Request 18943: Make Vector Group By operator abandon grouping if too many distinct keys

2014-03-08 Thread Remus Rusanu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18943/
---

Review request for hive, Eric Hanson and Jitendra Pandey.


Bugs: HIVE-6222
https://issues.apache.org/jira/browse/HIVE-6222


Repository: hive-git


Description
---

See HIVE-6222


Diffs
-

  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFAvg.txt 547a60a 
  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFMinMax.txt dcc1dfb 
  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFMinMaxDecimal.txt de9a84c 
  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFMinMaxString.txt 1f8b28c 
  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFSum.txt cb0be33 
  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFVar.txt 49b0edd 
  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFVarDecimal.txt e626161 
  ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java c4c85fa 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorAggregationBufferRow.java
 7aa4b11 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java 
4568496 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorHashKeyWrapper.java 
a2a7266 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorHashKeyWrapperBatch.java
 bd6c24b 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorUtilBatchObjectPool.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorAggregateExpression.java
 1836169 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFAvgDecimal.java
 8418587 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFCount.java
 086f91f 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFCountStar.java
 4926f6c 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFSumDecimal.java
 a931887 

Diff: https://reviews.apache.org/r/18943/diff/


Testing
---

Manually tested. I plan to add test cases in TestVGBy.


Thanks,

Remus Rusanu



[jira] [Commented] (HIVE-6222) Make Vector Group By operator abandon grouping if too many distinct keys

2014-03-08 Thread Remus Rusanu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925001#comment-13925001
 ] 

Remus Rusanu commented on HIVE-6222:


The 1.patch refactors the VectorGroupByOperator to delegate the algorithm used 
to a nested processingMode object. Three processing modes are provided:

 - global aggregate. This is the trivial mode when there are no keys. All 
values are aggregated into a single row of aggregation buffers, and the values 
are emitted at operator closeOp().
 - hash aggregate. This is all of the previous VGBy operator logic, with the 
hash table and the memory-pressure flushes.
 - streaming aggregate. This mode aggregates intermediate values as keys change 
in the input and flushes at each key-value change. It relies on the MR shuffle 
and the row-mode GBy reduce phase to merge the intermediate values. Due to the 
way aggregators operate on batches, the flushing logic is not strictly 'on new 
key' but 'for all new keys in a batch, except the last'. Identical keys 
in a batch are not aggregated unless they form a contiguous run.

This patch will conflict with HIVE-6518 because the relevant code is moved into 
the new nested ProcessingModeHashAggregate class. Porting the fix is trivial. I 
will rebase either this or HIVE-6518, depending on which gets committed first.
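The delegation described above can be sketched roughly as follows (all names are illustrative, not the actual Hive code, which operates on VectorizedRowBatch and can also switch from hash to streaming at runtime when too many distinct keys accumulate):

```java
// Rough sketch of the processing-mode delegation described above.
interface ProcessingMode {
    String name();
}

class GlobalAggregate implements ProcessingMode {    // no keys: one buffer row, emit at closeOp()
    public String name() { return "global"; }
}

class HashAggregate implements ProcessingMode {      // hash table + memory-pressure flushes
    public String name() { return "hash"; }
}

class StreamingAggregate implements ProcessingMode { // flush on key change; reduce side merges
    public String name() { return "streaming"; }
}

public class VectorGroupBySketch {
    final ProcessingMode mode;

    VectorGroupBySketch(boolean hasKeys, boolean streamKeys) {
        if (!hasKeys) {
            mode = new GlobalAggregate();
        } else if (streamKeys) {
            mode = new StreamingAggregate();
        } else {
            mode = new HashAggregate();
        }
    }

    public static void main(String[] args) {
        System.out.println(new VectorGroupBySketch(false, false).mode.name()); // global
        System.out.println(new VectorGroupBySketch(true, true).mode.name());   // streaming
        System.out.println(new VectorGroupBySketch(true, false).mode.name());  // hash
    }
}
```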

> Make Vector Group By operator abandon grouping if too many distinct keys
> 
>
> Key: HIVE-6222
> URL: https://issues.apache.org/jira/browse/HIVE-6222
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
>Priority: Minor
> Attachments: HIVE-6222.1.patch
>
>
> Row mode GBY is becoming a pass-through if not enough aggregation occurs on 
> the map side, relying on the shuffle+reduce side to do the work. Have VGBY do 
> the same.





[jira] [Updated] (HIVE-6222) Make Vector Group By operator abandon grouping if too many distinct keys

2014-03-08 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-6222:
---

Attachment: HIVE-6222.1.patch

> Make Vector Group By operator abandon grouping if too many distinct keys
> 
>
> Key: HIVE-6222
> URL: https://issues.apache.org/jira/browse/HIVE-6222
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
>Priority: Minor
> Attachments: HIVE-6222.1.patch
>
>
> Row mode GBY is becoming a pass-through if not enough aggregation occurs on 
> the map side, relying on the shuffle+reduce side to do the work. Have VGBY do 
> the same.





[jira] [Updated] (HIVE-6222) Make Vector Group By operator abandon grouping if too many distinct keys

2014-03-08 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-6222:
---

Status: Patch Available  (was: Open)

> Make Vector Group By operator abandon grouping if too many distinct keys
> 
>
> Key: HIVE-6222
> URL: https://issues.apache.org/jira/browse/HIVE-6222
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
>Priority: Minor
> Attachments: HIVE-6222.1.patch
>
>
> Row mode GBY is becoming a pass-through if not enough aggregation occurs on 
> the map side, relying on the shuffle+reduce side to do the work. Have VGBY do 
> the same.





[jira] [Commented] (HIVE-6531) Runtime errors in vectorized execution.

2014-03-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924994#comment-13924994
 ] 

Hive QA commented on HIVE-6531:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12633314/HIVE-6531.3.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5373 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_expressions
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1664/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1664/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12633314

> Runtime errors in vectorized execution.
> ---
>
> Key: HIVE-6531
> URL: https://issues.apache.org/jira/browse/HIVE-6531
> Project: Hive
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-6531.1.patch, HIVE-6531.2.patch, HIVE-6531.3.patch
>
>
> There are a few runtime errors observed in some of the tpcds queries for 
> following reasons:
> 1) VectorFileSinkOperator fails with LazyBinarySerde.
> 2) Decimal128 and Unsigned128 don't serialize correctly.





[jira] [Commented] (HIVE-6593) Create a maven assembly for hive-jdbc

2014-03-08 Thread Mark Grover (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924974#comment-13924974
 ] 

Mark Grover commented on HIVE-6593:
---

Thanks Szehon for taking this up and everyone for their input. Cos or I can 
review the patch.

> Create a maven assembly for hive-jdbc
> -
>
> Key: HIVE-6593
> URL: https://issues.apache.org/jira/browse/HIVE-6593
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Affects Versions: 0.12.0
>Reporter: Mark Grover
>Assignee: Szehon Ho
> Attachments: HIVE-6593.patch
>
>
> Currently in Apache Bigtop we bundle and distribute Hive. In particular, for 
> users to not have to install the entirety of Hive on machines that are just 
> jdbc clients, we have a special package which is a subset of hive, called 
> hive-jdbc, that bundles only the jdbc driver jar and its dependencies.
> However, because Hive doesn't have an assembly for the jdbc jar, we have to 
> hack and hardcode the list of jdbc jars and their dependencies:
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/rpm/hive/SPECS/hive.spec#L361
> As Hive moves to Maven, it would be pretty fantastic if Hive could leverage 
> the maven-assembly-plugin and generate a .tar.gz assembly of what's required 
> for jdbc gateway machines. That way we can simply take that distribution and 
> build a jdbc package from it, without having to hard-code jar names and 
> dependencies. That would make the process much less error prone.
> NO PRECOMMIT TESTS
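The descriptor the request asks for could look roughly like this (a sketch only: the id, format, and include pattern are assumptions, not a tested Hive or Bigtop configuration):

```xml
<!-- Illustrative maven-assembly-plugin descriptor; ids and paths are
     assumptions, not the actual Hive build configuration. -->
<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2">
  <id>jdbc</id>
  <formats>
    <format>tar.gz</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <dependencySets>
    <dependencySet>
      <outputDirectory>lib</outputDirectory>
      <!-- Pull the jdbc driver and, transitively, everything it needs. -->
      <includes>
        <include>org.apache.hive:hive-jdbc</include>
      </includes>
      <useTransitiveDependencies>true</useTransitiveDependencies>
    </dependencySet>
  </dependencySets>
</assembly>
```

Bound to an `assembly` execution of the plugin, `mvn package` would then emit a hive-jdbc tarball that packaging scripts can consume without enumerating jars.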





[jira] [Commented] (HIVE-6579) HiveLockObjectData constructor makes too many queryStr instance causing oom

2014-03-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924972#comment-13924972
 ] 

Hive QA commented on HIVE-6579:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12633317/HIVE-6579.1.patch.txt

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1663/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1663/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n '' ]]
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-1663/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 
'hbase-handler/src/test/org/apache/hadoop/hive/hbase/HBaseTestCompositeKey.java'
Reverted 
'hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java'
Reverted 'hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java'
Reverted 
'hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java'
Reverted 'hbase-handler/src/java/org/apache/hadoop/hive/hbase/LazyHBaseRow.java'
Reverted 
'hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java'
Reverted 'hbase-handler/pom.xml'
Reverted 'itests/util/pom.xml'
Reverted 'serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObjectBase.java'
Reverted 'serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObject.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/LazySimpleStructObjectInspector.java'
Reverted 'serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/columnar/ColumnarStructBase.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryObject.java'
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryStruct.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/index/IndexSearchCondition.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java'
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20/target 
shims/0.20S/target shims/0.23/target shims/aggregator/target 
shims/common/target shims/common-secure/target packaging/target 
hbase-handler/target 
hbase-handler/src/test/results/positive/hbase_custom_key.q.out 
hbase-handler/src/test/results/positive/hbase_custom_key2.q.out 
hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory.java 
hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory2.java 
hbase-handler/src/test/queries/positive/hbase_custom_key.q 
hbase-handler/src/test/queries/positive/hbase_custom_key2.q 
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseLazyObjectFactory.java 
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseScanRange.java 
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseWritableKeyFactory.java
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseKeyFactory.java 
testutils/target jdbc/target metastore/target itests/target 
itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target 
itests/hive-unit/target itests/custom-serde/target itests/util/target 
hcatalog/target hcatalog/storage-handlers/hbase/target 
hcatalog/server-extensions/target hcatalog/core/target 
hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target 
hcatalog/hcatalog-pig-adapter/target hwi/target common/target common/src/gen 
service/target contrib/target serde/target 
serde/src/java/org/apache/hadoop/hive/serde2/StructObjectBaseInspector.java 
serde/src/java/org/apache/hadoop/hive/serde2/StructObject.java beeline/target 
odbc/target cli

[jira] [Commented] (HIVE-6411) Support more generic way of using composite key for HBaseHandler

2014-03-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924970#comment-13924970
 ] 

Hive QA commented on HIVE-6411:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12633301/HIVE-6411.6.patch.txt

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5375 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1662/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1662/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12633301

> Support more generic way of using composite key for HBaseHandler
> 
>
> Key: HIVE-6411
> URL: https://issues.apache.org/jira/browse/HIVE-6411
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: HIVE-6411.1.patch.txt, HIVE-6411.2.patch.txt, 
> HIVE-6411.3.patch.txt, HIVE-6411.4.patch.txt, HIVE-6411.5.patch.txt, 
> HIVE-6411.6.patch.txt
>
>
> HIVE-2599 introduced using a custom object for the row key, but it forces key 
> objects to extend HBaseCompositeKey, which is in turn an extension of 
> LazyStruct. If the user provides a proper Object and OI, we can replace the 
> internal key and keyOI with those. 
> Initial implementation is based on factory interface.
> {code}
> public interface HBaseKeyFactory {
>   void init(SerDeParameters parameters, Properties properties) throws 
> SerDeException;
>   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
>   LazyObjectBase createObject(ObjectInspector inspector) throws 
> SerDeException;
> }
> {code}
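The {code} block above shows only the proposed factory interface. As a rough illustration of the idea (a user-supplied factory providing both the key object and its inspector, instead of a hard-wired HBaseCompositeKey), here is a self-contained toy analogue; every type name in it (KeyFactory, Inspector, LazyKey, DelimitedKeyFactory, KeyFactoryDemo) is a hypothetical stand-in, not a Hive class, and the real interface works against Hive's SerDe and ObjectInspector types instead.

```java
import java.util.Properties;

// Stand-ins for Hive's ObjectInspector and LazyObjectBase roles.
interface Inspector { String describe(); }
interface LazyKey { void init(byte[] raw); Object get(); }

// Toy analogue of the proposed HBaseKeyFactory: the serde asks a
// user-supplied factory for both the key object and its inspector.
interface KeyFactory {
    void init(Properties props);
    Inspector createObjectInspector();
    LazyKey createObject(Inspector inspector);
}

// A sample factory that splits a row key on a delimiter -- the kind of
// custom composite-key handling the jira wants to make pluggable.
class DelimitedKeyFactory implements KeyFactory {
    private String delim;
    public void init(Properties props) {
        delim = props.getProperty("key.delim", "_");
    }
    public Inspector createObjectInspector() {
        return () -> "struct split on '" + delim + "'";
    }
    public LazyKey createObject(Inspector inspector) {
        return new LazyKey() {
            private String[] parts;
            public void init(byte[] raw) { parts = new String(raw).split(delim); }
            public Object get() { return parts; }
        };
    }
}

public class KeyFactoryDemo {
    public static void main(String[] args) {
        KeyFactory f = new DelimitedKeyFactory();
        Properties p = new Properties();
        p.setProperty("key.delim", "_");
        f.init(p);
        LazyKey key = f.createObject(f.createObjectInspector());
        key.init("2014_03_08".getBytes());
        String[] parts = (String[]) key.get();
        System.out.println(parts[0]); // prints "2014"
    }
}
```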



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6593) Create a maven assembly for hive-jdbc

2014-03-08 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-6593:


Attachment: HIVE-6593.patch

I made a sub-project of 'packaging' called 'jdbc-packaging' that produces the 
following assembly when Hive is built with the -Pdist profile.  The contents 
are:

{noformat}
./lib/commons-logging-1.1.3.jar
./lib/hive-exec-0.14.0-SNAPSHOT.jar
./lib/hive-jdbc-0.14.0-SNAPSHOT.jar
./lib/hive-metastore-0.14.0-SNAPSHOT.jar
./lib/hive-serde-0.14.0-SNAPSHOT.jar
./lib/hive-service-0.14.0-SNAPSHOT.jar
./lib/libfb303-0.9.0.jar
./lib/libthrift-0.9.0.jar
./lib/log4j-1.2.16.jar
{noformat}

Let me know if this works.
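For reference, an assembly of this shape can be produced with the maven-assembly-plugin and a descriptor along the following lines. This is an illustrative sketch only: the id, format, and dependency-set values below are assumptions, not the contents of HIVE-6593.patch.

```xml
<!-- Hypothetical assembly descriptor: bundle hive-jdbc and its transitive
     dependencies under lib/ inside a .tar.gz. Values are illustrative. -->
<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2">
  <id>jdbc</id>
  <formats>
    <format>tar.gz</format>
  </formats>
  <dependencySets>
    <dependencySet>
      <outputDirectory>lib</outputDirectory>
      <includes>
        <include>org.apache.hive:hive-jdbc</include>
      </includes>
      <!-- pulls in hive-exec, hive-service, libthrift, etc. without
           hardcoding jar names, which is the point of the jira -->
      <useTransitiveDependencies>true</useTransitiveDependencies>
    </dependencySet>
  </dependencySets>
</assembly>
```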

> Create a maven assembly for hive-jdbc
> -
>
> Key: HIVE-6593
> URL: https://issues.apache.org/jira/browse/HIVE-6593
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Affects Versions: 0.12.0
>Reporter: Mark Grover
>Assignee: Szehon Ho
> Attachments: HIVE-6593.patch
>
>
> Currently in Apache Bigtop we bundle and distribute Hive. In particular, for 
> users to not have to install the entirety of Hive on machines that are just 
> jdbc clients, we have a special package which is a subset of hive, called 
> hive-jdbc that bundles only the jdbc driver jar and its dependencies.
> However, because Hive doesn't have an assembly for the jdbc jar, we have to 
> hack and hardcode the list of jdbc jars and its dependencies:
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/rpm/hive/SPECS/hive.spec#L361
> As Hive moves to Maven, it would be pretty fantastic if Hive could leverage 
> the maven-assembly-plugin and generate a .tar.gz assembly for what's required 
> for jdbc gateway machines. That way we can simply take that distribution and 
> build a jdbc package from it without having to hard code jar names and 
> dependencies. That would make the process much less error prone.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6593) Create a maven assembly for hive-jdbc

2014-03-08 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-6593:


Description: 
Currently in Apache Bigtop we bundle and distribute Hive. In particular, for 
users to not have to install the entirety of Hive on machines that are just 
jdbc clients, we have a special package which is a subset of hive, called 
hive-jdbc that bundles only the jdbc driver jar and its dependencies.

However, because Hive doesn't have an assembly for the jdbc jar, we have to 
hack and hardcode the list of jdbc jars and its dependencies:
https://github.com/apache/bigtop/blob/master/bigtop-packages/src/rpm/hive/SPECS/hive.spec#L361

As Hive moves to Maven, it would be pretty fantastic if Hive could leverage the 
maven-assembly-plugin and generate a .tar.gz assembly for what's required for 
jdbc gateway machines. That way we can simply take that distribution and build a 
jdbc package from it without having to hard code jar names and dependencies. 
That would make the process much less error prone.

NO PRECOMMIT TESTS

  was:
Currently in Apache Bigtop we bundle and distribute Hive. In particular, for 
users to not have to install the entirety of Hive on machines that are just 
jdbc clients, we have a special package which is a subset of hive, called 
hive-jdbc that bundles only the jdbc driver jar and its dependencies.

However, because Hive doesn't have an assembly for the jdbc jar, we have to 
hack and hardcode the list of jdbc jars and its dependencies:
https://github.com/apache/bigtop/blob/master/bigtop-packages/src/rpm/hive/SPECS/hive.spec#L361

As Hive moves to Maven, it would be pretty fantastic if Hive could leverage the 
maven-assembly-plugin and generate a .tar.gz assembly for what's required for 
jdbc gateway machines. That way we can simply take that distribution and build a 
jdbc package from it without having to hard code jar names and dependencies. 
That would make the process much less error prone.


> Create a maven assembly for hive-jdbc
> -
>
> Key: HIVE-6593
> URL: https://issues.apache.org/jira/browse/HIVE-6593
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Affects Versions: 0.12.0
>Reporter: Mark Grover
>Assignee: Szehon Ho
> Attachments: HIVE-6593.patch
>
>
> Currently in Apache Bigtop we bundle and distribute Hive. In particular, for 
> users to not have to install the entirety of Hive on machines that are just 
> jdbc clients, we have a special package which is a subset of hive, called 
> hive-jdbc that bundles only the jdbc driver jar and its dependencies.
> However, because Hive doesn't have an assembly for the jdbc jar, we have to 
> hack and hardcode the list of jdbc jars and its dependencies:
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/rpm/hive/SPECS/hive.spec#L361
> As Hive moves to Maven, it would be pretty fantastic if Hive could leverage 
> the maven-assembly-plugin and generate a .tar.gz assembly for what's required 
> for jdbc gateway machines. That way we can simply take that distribution and 
> build a jdbc package from it without having to hard code jar names and 
> dependencies. That would make the process much less error prone.
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6593) Create a maven assembly for hive-jdbc

2014-03-08 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-6593:


Status: Patch Available  (was: Open)

> Create a maven assembly for hive-jdbc
> -
>
> Key: HIVE-6593
> URL: https://issues.apache.org/jira/browse/HIVE-6593
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Affects Versions: 0.12.0
>Reporter: Mark Grover
>Assignee: Szehon Ho
> Attachments: HIVE-6593.patch
>
>
> Currently in Apache Bigtop we bundle and distribute Hive. In particular, for 
> users to not have to install the entirety of Hive on machines that are just 
> jdbc clients, we have a special package which is a subset of hive, called 
> hive-jdbc that bundles only the jdbc driver jar and its dependencies.
> However, because Hive doesn't have an assembly for the jdbc jar, we have to 
> hack and hardcode the list of jdbc jars and its dependencies:
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/rpm/hive/SPECS/hive.spec#L361
> As Hive moves to Maven, it would be pretty fantastic if Hive could leverage 
> the maven-assembly-plugin and generate a .tar.gz assembly for what's required 
> for jdbc gateway machines. That way we can simply take that distribution and 
> build a jdbc package from it without having to hard code jar names and 
> dependencies. That would make the process much less error prone.
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6508) Mismatched results between vector and non-vector mode with decimal field

2014-03-08 Thread Remus Rusanu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924962#comment-13924962
 ] 

Remus Rusanu commented on HIVE-6508:


The failure is unrelated to the patch.

> Mismatched results between vector and non-vector mode with decimal field
> 
>
> Key: HIVE-6508
> URL: https://issues.apache.org/jira/browse/HIVE-6508
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Attachments: HIVE-6508.1.patch, HIVE-6508.1.patch
>
>
> The following query produces a slight mismatch in results compared to 
> non-vector mode.
> {code}
> select d_year, i_brand_id, i_brand,
>sum(ss_ext_sales_price) as sum_agg
> from date_dim
> join store_sales on date_dim.d_date_sk = store_sales.ss_sold_date_sk
> join item on store_sales.ss_item_sk = item.i_item_sk
> where i_manufact_id = 128
>   and d_moy = 11
> group by d_year, i_brand, i_brand_id
> order by d_year, sum_agg desc, i_brand_id
> limit 100;
> {code}
> This query is on tpcds data.
> The field ss_ext_sales_price is of type decimal(7,2) and everything else is 
> an integer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5901) Query cancel should stop running MR tasks

2014-03-08 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924949#comment-13924949
 ] 

Thejas M Nair commented on HIVE-5901:
-

Thanks Harish, patch committed to 0.13 branch as well.

> Query cancel should stop running MR tasks
> -
>
> Key: HIVE-5901
> URL: https://issues.apache.org/jira/browse/HIVE-5901
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-5901.1.patch.txt, HIVE-5901.2.patch.txt, 
> HIVE-5901.3.patch.txt, HIVE-5901.4.patch.txt, HIVE-5901.5.patch.txt, 
> HIVE-5901.6.patch.txt, HIVE-5901.7.patch.txt
>
>
> Currently, query canceling does not stop the running MR job immediately.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-5901) Query cancel should stop running MR tasks

2014-03-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5901:


Fix Version/s: (was: 0.14.0)
   0.13.0

> Query cancel should stop running MR tasks
> -
>
> Key: HIVE-5901
> URL: https://issues.apache.org/jira/browse/HIVE-5901
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-5901.1.patch.txt, HIVE-5901.2.patch.txt, 
> HIVE-5901.3.patch.txt, HIVE-5901.4.patch.txt, HIVE-5901.5.patch.txt, 
> HIVE-5901.6.patch.txt, HIVE-5901.7.patch.txt
>
>
> Currently, query canceling does not stop the running MR job immediately.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6595) Hive 0.11.0 build failure

2014-03-08 Thread Amit Anand (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Anand updated HIVE-6595:
-

Description: 
I am unable to build Hive 0.11.0 from the source. I have a single node hadoop 
2.2.0, that I built from the source, running. 

I followed steps given below:

svn co http://svn.apache.org/repos/asf/hive/tags/release-0.11.0/ hive-0.11.0
cd hive-0.11.0
ant clean
ant package

I got messages given below 


compile:
 [echo] Project: jdbc
[javac] Compiling 28 source files to 
/opt/apache/source/hive-0.11.0/build/jdbc/classes
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java:48:
 error: HiveCallableStatement is not abstract and does not override abstract 
method getObject(String,Class<T>) in CallableStatement
[javac] public class HiveCallableStatement implements 
java.sql.CallableStatement {
[javac]^
[javac]   where T is a type-variable:
[javac] T extends Object declared in method 
<T>getObject(String,Class<T>)
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java:65:
 error: HiveConnection is not abstract and does not override abstract method 
getNetworkTimeout() in Connection
[javac] public class HiveConnection implements java.sql.Connection {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDataSource.java:31:
 error: HiveDataSource is not abstract and does not override abstract method 
getParentLogger() in CommonDataSource
[javac] public class HiveDataSource implements DataSource {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:56:
 error: HiveDatabaseMetaData is not abstract and does not override abstract 
method generatedKeyAlwaysReturned() in DatabaseMetaData
[javac] public class HiveDatabaseMetaData implements DatabaseMetaData {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:707:
 error:  is not abstract 
and does not override abstract method getObject(String,Class<T>) in ResultSet
[javac] , null) {
[javac] ^
[javac]   where T is a type-variable:
[javac] T extends Object declared in method 
<T>getObject(String,Class<T>)
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDriver.java:35:
 error: HiveDriver is not abstract and does not override abstract method 
getParentLogger() in Driver
[javac] public class HiveDriver implements Driver {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java:56:
 error: HivePreparedStatement is not abstract and does not override abstract 
method isCloseOnCompletion() in Statement
[javac] public class HivePreparedStatement implements PreparedStatement {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java:48:
 error: HiveQueryResultSet is not abstract and does not override abstract 
method getObject(String,Class<T>) in ResultSet
[javac] public class HiveQueryResultSet extends HiveBaseResultSet {
[javac]^
[javac]   where T is a type-variable:
[javac] T extends Object declared in method 
<T>getObject(String,Class<T>)
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java:42:
 error: HiveStatement is not abstract and does not override abstract method 
isCloseOnCompletion() in Statement
[javac] public class HiveStatement implements java.sql.Statement {
[javac]^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 9 errors

BUILD FAILED
/opt/apache/source/hive-0.11.0/build.xml:274: The following error occurred 
while executing this line:
/opt/apache/source/hive-0.11.0/build.xml:113: The following error occurred 
while executing this line:
/opt/apache/source/hive-0.11.0/build.xml:115: The following error occurred 
while executing this line:
/opt/apache/source/hive-0.11.0/jdbc/build.xml:51: Compile failed; see the 
compiler error output for details.



  was:
I am unable to build Hive 0.11.0 from the source. I have a single node hadoop 
2.2.0, that I built from the source, running. 

I followed steps given below:

svn co http://svn.apache.org/repos/asf/hive/tags/release-0.11.0/ hive-0.11.0
cd hive-0.11.0
ant clean
ant package

I got messages given below 


compile:
 [echo] Project: jdbc
[javac] Compiling 28 source files to 
/opt/apache/source
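The nine compile errors in the description above share one cause: JDBC 4.1 (shipped with Java 7) added new abstract methods to the java.sql interfaces, e.g. Driver.getParentLogger(), Statement.isCloseOnCompletion(), Connection.getNetworkTimeout(), and the generic ResultSet.getObject(String, Class<T>), so JDBC 4.0-era implementations like Hive 0.11's stop compiling under a Java 7 compiler (the environment here uses 1.7.0_45). The sketch below is a hypothetical minimal driver, not Hive code, showing the pattern; building 0.11 with a Java 6 JDK is the likely workaround.

```java
import java.sql.*;
import java.util.Properties;
import java.util.logging.Logger;

// Hypothetical minimal driver: remove getParentLogger() below and a Java 7
// compiler reports "DemoDriver is not abstract and does not override
// abstract method getParentLogger() in Driver" -- the same class of error
// as in the Hive 0.11 build log.
public class DemoDriver implements Driver {
    public Connection connect(String url, Properties info) throws SQLException {
        throw new SQLException("illustration only; no real connections");
    }
    public boolean acceptsURL(String url) {
        return url != null && url.startsWith("jdbc:demo:");
    }
    public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
        return new DriverPropertyInfo[0];
    }
    public int getMajorVersion() { return 1; }
    public int getMinorVersion() { return 0; }
    public boolean jdbcCompliant() { return false; }

    // Added in JDBC 4.1 (Java 7); stubbing it out like this was a common
    // way drivers of that era regained source compatibility.
    public Logger getParentLogger() throws SQLFeatureNotSupportedException {
        throw new SQLFeatureNotSupportedException("not supported");
    }

    public static void main(String[] args) {
        DemoDriver d = new DemoDriver();
        System.out.println(d.acceptsURL("jdbc:demo:local")); // prints "true"
    }
}
```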

[jira] [Updated] (HIVE-6403) uncorrelated subquery is failing with auto.convert.join=true

2014-03-08 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6403:


Fix Version/s: 0.13.0

> uncorrelated subquery is failing with auto.convert.join=true
> 
>
> Key: HIVE-6403
> URL: https://issues.apache.org/jira/browse/HIVE-6403
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
> Fix For: 0.13.0
>
> Attachments: HIVE-6403.1.patch, HIVE-6403.2.patch, 
> HIVE-6403.3.patch.txt, HIVE-6403.4.patch.txt, HIVE-6403.5.patch.txt, 
> HIVE-6403.6.patch.txt, navis.patch, navis2.patch
>
>
> While fixing HIVE-5690, I found that the query in subquery_multiinsert.q is 
> not working with hive.auto.convert.join=true 
> {noformat}
> set hive.auto.convert.join=true;
> hive> explain
> > from src b 
> > INSERT OVERWRITE TABLE src_4 
> >   select * 
> >   where b.key in 
> >(select a.key 
> > from src a 
> > where b.value = a.value and a.key > '9'
> >) 
> > INSERT OVERWRITE TABLE src_5 
> >   select *  
> >   where b.key not in  ( select key from src s1 where s1.key > '2') 
> >   order by key 
> > ;
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
>   at java.util.ArrayList.get(ArrayList.java:411)
>   at 
> org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinLocalWork(MapJoinProcessor.java:149)
>   at 
> org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor.java:256)
>   at 
> org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinOpAndLocalWork(MapJoinProcessor.java:248)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.convertTaskToMapJoinTask(CommonJoinTaskDispatcher.java:191)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.processCurrentTask(CommonJoinTaskDispatcher.java:481)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch(AbstractJoinTaskDispatcher.java:182)
>   at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
>   at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:194)
>   at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:139)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinResolver.resolve(CommonJoinResolver.java:79)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.PhysicalOptimizer.optimize(PhysicalOptimizer.java:100)
>   at 
> org.apache.hadoop.hive.ql.parse.MapReduceCompiler.optimizeTaskPlan(MapReduceCompiler.java:290)
>   at 
> org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:216)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9167)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
>   at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:64)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:446)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:346)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1056)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1099)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:992)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:982)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:687)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:626)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
> org.apache.hadoop.hive.ql.parse.SemanticException: Failed to generate new 
> mapJoin operator by exception : Index: 0, Size: 0
>   at 
> org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor

[jira] [Updated] (HIVE-6403) uncorrelated subquery is failing with auto.convert.join=true

2014-03-08 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6403:


  Resolution: Fixed
Release Note: 
also added to 0.13
thanks Navis
  Status: Resolved  (was: Patch Available)

> uncorrelated subquery is failing with auto.convert.join=true
> 
>
> Key: HIVE-6403
> URL: https://issues.apache.org/jira/browse/HIVE-6403
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
> Attachments: HIVE-6403.1.patch, HIVE-6403.2.patch, 
> HIVE-6403.3.patch.txt, HIVE-6403.4.patch.txt, HIVE-6403.5.patch.txt, 
> HIVE-6403.6.patch.txt, navis.patch, navis2.patch
>
>
> While fixing HIVE-5690, I found that the query in subquery_multiinsert.q is 
> not working with hive.auto.convert.join=true 
> {noformat}
> set hive.auto.convert.join=true;
> hive> explain
> > from src b 
> > INSERT OVERWRITE TABLE src_4 
> >   select * 
> >   where b.key in 
> >(select a.key 
> > from src a 
> > where b.value = a.value and a.key > '9'
> >) 
> > INSERT OVERWRITE TABLE src_5 
> >   select *  
> >   where b.key not in  ( select key from src s1 where s1.key > '2') 
> >   order by key 
> > ;
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
>   at java.util.ArrayList.get(ArrayList.java:411)
>   at 
> org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinLocalWork(MapJoinProcessor.java:149)
>   at 
> org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor.java:256)
>   at 
> org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinOpAndLocalWork(MapJoinProcessor.java:248)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.convertTaskToMapJoinTask(CommonJoinTaskDispatcher.java:191)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.processCurrentTask(CommonJoinTaskDispatcher.java:481)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch(AbstractJoinTaskDispatcher.java:182)
>   at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
>   at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:194)
>   at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:139)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinResolver.resolve(CommonJoinResolver.java:79)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.PhysicalOptimizer.optimize(PhysicalOptimizer.java:100)
>   at 
> org.apache.hadoop.hive.ql.parse.MapReduceCompiler.optimizeTaskPlan(MapReduceCompiler.java:290)
>   at 
> org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:216)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9167)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
>   at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:64)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:446)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:346)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1056)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1099)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:992)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:982)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:687)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:626)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
> org.apache.hadoop.hive.ql.parse.SemanticException: Failed to generate new 
> mapJoin operator by exception : Index: 0, Size: 0
>   at 
> org.apache.hadoop.hive.q

[jira] [Commented] (HIVE-6508) Mismatched results between vector and non-vector mode with decimal field

2014-03-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924930#comment-13924930
 ] 

Hive QA commented on HIVE-6508:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12633294/HIVE-6508.1.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5374 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_parallel_orderby
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1660/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1660/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12633294

> Mismatched results between vector and non-vector mode with decimal field
> 
>
> Key: HIVE-6508
> URL: https://issues.apache.org/jira/browse/HIVE-6508
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Attachments: HIVE-6508.1.patch, HIVE-6508.1.patch
>
>
> The following query has a small mismatch in its result as compared to 
> non-vector mode.
> {code}
> select d_year, i_brand_id, i_brand,
>sum(ss_ext_sales_price) as sum_agg
> from date_dim
> join store_sales on date_dim.d_date_sk = store_sales.ss_sold_date_sk
> join item on store_sales.ss_item_sk = item.i_item_sk
> where i_manufact_id = 128
>   and d_moy = 11
> group by d_year, i_brand, i_brand_id
> order by d_year, sum_agg desc, i_brand_id
> limit 100;
> {code}
> This query runs on TPC-DS data.
> The field ss_ext_sales_price is of type decimal(7,2) and everything else is 
> an integer.
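To see how fixed-scale decimal aggregation can diverge from a floating-point code path, here is a small illustrative Python sketch (the values are hypothetical; this is an analogy for the vector/non-vector mismatch, not the Hive code):

```python
from decimal import Decimal

# Hypothetical ss_ext_sales_price values, scale 2 as in decimal(7,2).
prices = ["0.10"] * 3

# Exact decimal aggregation (what the decimal-aware path should produce).
exact = sum(Decimal(p) for p in prices)

# Binary floating-point aggregation drifts from the exact result.
approx = sum(float(p) for p in prices)

print(exact)           # prints 0.30
print(approx == 0.30)  # False: 0.1 + 0.1 + 0.1 != 0.3 in binary floats
```

Even a tiny per-row rounding difference like this accumulates over a large aggregation, which is why a sum_agg over millions of rows can come out slightly different between two code paths.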



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5901) Query cancel should stop running MR tasks

2014-03-08 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924927#comment-13924927
 ] 

Harish Butani commented on HIVE-5901:
-

+1 for 0.13

> Query cancel should stop running MR tasks
> -
>
> Key: HIVE-5901
> URL: https://issues.apache.org/jira/browse/HIVE-5901
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Fix For: 0.14.0
>
> Attachments: HIVE-5901.1.patch.txt, HIVE-5901.2.patch.txt, 
> HIVE-5901.3.patch.txt, HIVE-5901.4.patch.txt, HIVE-5901.5.patch.txt, 
> HIVE-5901.6.patch.txt, HIVE-5901.7.patch.txt
>
>
> Currently, query canceling does not stop the running MR job immediately.





[jira] [Commented] (HIVE-6573) Oracle metastore doesnt come up when hive.cluster.delegation.token.store.class is set to DBTokenStore

2014-03-08 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924898#comment-13924898
 ] 

Harish Butani commented on HIVE-6573:
-

+1 for 0.13

> Oracle metastore doesnt come up when 
> hive.cluster.delegation.token.store.class is set to DBTokenStore
> -
>
> Key: HIVE-6573
> URL: https://issues.apache.org/jira/browse/HIVE-6573
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Security
>Affects Versions: 0.12.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Blocker
> Fix For: 0.14.0
>
> Attachments: HIVE-6573.patch
>
>
> This config {{hive.cluster.delegation.token.store.class}} was introduced in 
> HIVE-3255 and is useful only if the Oracle metastore is used in a secure setup 
> with an HA config.





[jira] [Commented] (HIVE-6578) Use ORC file footer statistics for analyze command

2014-03-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924895#comment-13924895
 ] 

Hive QA commented on HIVE-6578:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12633299/HIVE-6578.1.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5374 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_parallel_orderby
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1659/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1659/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}


ATTACHMENT ID: 12633299

> Use ORC file footer statistics for analyze command
> --
>
> Key: HIVE-6578
> URL: https://issues.apache.org/jira/browse/HIVE-6578
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 0.13.0
>Reporter: Prasanth J
>Assignee: Prasanth J
>  Labels: orcfile
> Attachments: HIVE-6578.1.patch
>
>
> ORC provides file-level statistics, which can be used in analyze partialscan 
> and noscan cases to compute basic statistics like number of rows, number of 
> files, total file size and raw data size.
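A minimal sketch of the idea, assuming hypothetical per-file footer fields named numberOfRows, fileSize and rawDataSize (the real ORC field names and the Hive implementation may differ):

```python
# Illustrative sketch (not the Hive implementation): aggregating
# hypothetical per-file ORC footer statistics into the basic table
# stats that an "analyze ... noscan" style command needs, without
# reading any rows.
def aggregate_footer_stats(footers):
    """footers: list of dicts with numberOfRows, fileSize, rawDataSize."""
    return {
        "numRows": sum(f["numberOfRows"] for f in footers),
        "numFiles": len(footers),
        "totalSize": sum(f["fileSize"] for f in footers),
        "rawDataSize": sum(f["rawDataSize"] for f in footers),
    }

footers = [
    {"numberOfRows": 1000, "fileSize": 4096, "rawDataSize": 9000},
    {"numberOfRows": 500, "fileSize": 2048, "rawDataSize": 4500},
]
print(aggregate_footer_stats(footers))
```

The point is that all four basic statistics are derivable from footer metadata alone, so the analyze command can skip the data scan entirely.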





Re: --hiveconf vs -hiveconf

2014-03-08 Thread Edward Capriolo
Great, thanks for following up. There might be a number of ETL processes in
the wild saying -hiveconf, which is why it is important to keep it around for
the CLI at least.
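For illustration, one way a command-line front end can accept both spellings is to normalize known single-dash long options before parsing; this sketch is hypothetical and is not Hive's actual option handling:

```python
# Hypothetical sketch: accept both "-hiveconf" and "--hiveconf" by
# normalizing known long options to double-dash form before parsing.
LONG_OPTS = {"hiveconf", "hivevar"}

def normalize(argv):
    out = []
    for arg in argv:
        # Rewrite "-hiveconf" -> "--hiveconf"; leave short flags like
        # "-e" and already-normalized "--hiveconf" untouched.
        if arg.startswith("-") and not arg.startswith("--") and arg[1:] in LONG_OPTS:
            out.append("--" + arg[1:])
        else:
            out.append(arg)
    return out

print(normalize(["-hiveconf", "hive.root.logger=INFO,console"]))
# ['--hiveconf', 'hive.root.logger=INFO,console']
```

A wrapper like this would keep legacy scripts that pass -hiveconf working even if the underlying parser only recognizes the double-dash form.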


On Sat, Mar 8, 2014 at 1:56 AM, Xuefu Zhang  wrote:

> This is just getting more and more interesting. I never thought of
> -hiveconf option, and always assumed it was a typo of --hiveconf. (That's
> why I edited the one, which triggered the discovery.) I just checked and
> found that both work, which came as a surprise to me.
>
> With this assumption, Beeline has implemented only --hiveconf to mimic CLI.
>
> As to the documentation, I think we can stick to --hiveconf from now on,
> since they are supported by both CLI and Beeline. However, -hiveconf will
> continue to work for CLI until its death.
>
> Thanks,
> Xuefu
>
>
> On Fri, Mar 7, 2014 at 10:36 PM, Lefty Leverenz  >wrote:
>
> > > OK, so just one of the pages in wiki has changed, and hive behavior has
> > not changed
> >
> > That's right, and a closer look at the wiki shows that all the examples
> are
> > -hiveconf except the new change.  The only place --hiveconf appears is in
> > duplications of help messages for the hive command, the old Hive server,
> or
> > Beeline.
> >
> > In a fresh export of the wiki --hiveconf occurs in these docs:
> >
> >- CLI<
> >
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli#LanguageManualCli-HiveCommandLineOptions
> > >
> > repeats
> >what hive -H says (--hiveconf) but gives 3 examples of -hiveconf.
> >- Admin Config<
> >
> https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration#AdminManualConfiguration-ConfiguringHive
> > >
> > says
> >--hiveconf twice, in text and an example (both changed this week).
> >- Hive Server<
> > https://cwiki.apache.org/confluence/display/Hive/HiveServer>
> > says
> >--hiveconf once, but that's the Thrift server help message.
> >- HiveServer2
> > Clients<
> >
> https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-BeelineCommandOptions
> > >says
> > --hiveconf twice, but that's the Beeline option.
> >
> > These wikidocs say -hiveconf:
> >
> >- Getting Started (4 in config
> > overview<
> >
> https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-ConfigurationManagementOverview
> > >
> > and
> >2 in error logs<
> >
> https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-ErrorLogs
> > >
> >)
> >- Avro SerDe<
> >
> https://cwiki.apache.org/confluence/display/Hive/AvroSerDe#AvroSerDe-SpecifyingtheAvroschemaforatable
> > >(2
> > in example and text)
> >- Developer Guide<
> >
> https://cwiki.apache.org/confluence/display/Hive/DeveloperGuide#DeveloperGuide-RunningHiveWithoutaHadoopCluster
> > >(4
> > in "export HIVE_OPTS")
> >- HBase Integration<
> >
> https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration#HBaseIntegration-Usage
> > >(2
> > in examples)
> >- Variable Substitution<
> >
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+VariableSubstitution
> > >(1
> > in the "evil laugh" example)
> >- CLI (2 in one
> > example<
> >
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli#LanguageManualCli-Examples
> > >,
> >1 in logging<
> >
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli#LanguageManualCli-Logging
> > >
> >)
> >
> > (My grep hits were inflated because "-i" caught HiveConf.)
> >
> > So what's it supposed to be?
> >
> >
> > -- Lefty
> >
> >
> > On Fri, Mar 7, 2014 at 11:06 PM, Thejas Nair 
> > wrote:
> >
> > > OK, so just one of the pages in wiki has changed, and hive behavior
> > > has not changed ? (I have been using -hiveconf, but i haven't verified
> > > that with the tip of the trunk as of now).
> > >
> > > On Fri, Mar 7, 2014 at 6:19 PM, Xuefu Zhang 
> wrote:
> > > > I didn't know that -hiveconf is supported. However, from hive -H,
> > double
> > > > dashes are seen.
> > > >
> > > >  -h connecting to Hive Server on remote
> > > host
> > > > --hiveconfUse value for given property
> > > > --hivevar  Variable subsitution to apply to
> hive
> > > >
> > > > Thanks,
> > > > Xuefu
> > > >
> > > >
> > > > On Fri, Mar 7, 2014 at 6:00 PM, Edward Capriolo <
> edlinuxg...@gmail.com
> > > >wrote:
> > > >
> > > >> I was not around when this change was made but I think we should
> have
> > > kept
> > > >> the old - dash version. We should consider adding it back.
> > > >>
> > > >>
> > > >> On Fri, Mar 7, 2014 at 8:56 PM, Lefty Leverenz <
> > leftylever...@gmail.com
> > > >> >wrote:
> > > >>
> > > >> > Xuefu just fixed the AdminManual Configuration
> > > >> > wiki<
> > > >> >
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration
> > > >> > >,
> > > >> > changing bin/hive -hiveconf ... to --hiveconf, so I grepped the
> wiki
> > > >> > archive and found many more cases of single-dash hiveconf than
> > >

[jira] [Updated] (HIVE-6594) UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization

2014-03-08 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-6594:
---

Attachment: HIVE-6594.2.patch

Patch .2 contains updated expected results (now correct)

> UnsignedInt128 addition does not increase internal int array count resulting 
> in corrupted values during serialization
> -
>
> Key: HIVE-6594
> URL: https://issues.apache.org/jira/browse/HIVE-6594
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Attachments: HIVE-6594.1.patch, HIVE-6594.2.patch
>
>
> Discovered this while investigating why my fix for HIVE-6222 produced diffs. 
> I discovered that Decimal128.addDestructive does not adjust the internal 
> count when the number of relevant ints increases. Since this count is used 
> in the fast HiveDecimalWriter conversion code, the results are off. 
> The root cause is UnsignedDecimal128.differenceInternal does not do an 
> updateCount() on the result.
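The failure mode described above can be illustrated with a toy multi-word integer (this is an analogy, not the actual Decimal128/UnsignedInt128 code):

```python
# Toy analogue (not the Hive code): a little-endian multi-word integer
# that tracks how many words are significant.  Serialization reads only
# `count` words, so forgetting to update the count after an arithmetic
# operation corrupts the serialized value, as in the bug above.
class ToyInt128:
    def __init__(self, words):
        self.v = list(words) + [0] * (4 - len(words))
        self.count = len(words)

    def update_count(self):
        # Index of the highest non-zero word, plus one.
        self.count = max((i + 1 for i, w in enumerate(self.v) if w), default=0)

    def add(self, other, fix_count=True):
        carry = 0
        for i in range(4):
            s = self.v[i] + other.v[i] + carry
            self.v[i] = s & 0xFFFFFFFF
            carry = s >> 32
        if fix_count:
            self.update_count()  # the analogue of the missing updateCount()

    def serialize(self):
        return self.v[:self.count]

a = ToyInt128([0xFFFFFFFF])    # count == 1
a.add(ToyInt128([1]), fix_count=False)  # result needs 2 words; count stays 1
print(a.serialize())           # [0] -- the carry word is silently lost
```

With fix_count=True the count becomes 2 and both words survive serialization, which mirrors the fix of calling updateCount() on the result.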





[jira] [Updated] (HIVE-6594) UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization

2014-03-08 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-6594:
---

Status: Patch Available  (was: Open)

> UnsignedInt128 addition does not increase internal int array count resulting 
> in corrupted values during serialization
> -
>
> Key: HIVE-6594
> URL: https://issues.apache.org/jira/browse/HIVE-6594
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Attachments: HIVE-6594.1.patch, HIVE-6594.2.patch
>
>
> Discovered this while investigating why my fix for HIVE-6222 produced diffs. 
> I discovered that Decimal128.addDestructive does not adjust the internal 
> count when the number of relevant ints increases. Since this count is used 
> in the fast HiveDecimalWriter conversion code, the results are off. 
> The root cause is UnsignedDecimal128.differenceInternal does not do an 
> updateCount() on the result.





[jira] [Work stopped] (HIVE-6594) UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization

2014-03-08 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-6594 stopped by Remus Rusanu.

> UnsignedInt128 addition does not increase internal int array count resulting 
> in corrupted values during serialization
> -
>
> Key: HIVE-6594
> URL: https://issues.apache.org/jira/browse/HIVE-6594
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Attachments: HIVE-6594.1.patch
>
>
> Discovered this while investigating why my fix for HIVE-6222 produced diffs. 
> I discovered that Decimal128.addDestructive does not adjust the internal 
> count when the number of relevant ints increases. Since this count is used 
> in the fast HiveDecimalWriter conversion code, the results are off. 
> The root cause is UnsignedDecimal128.differenceInternal does not do an 
> updateCount() on the result.





[jira] [Commented] (HIVE-6562) Protection from exceptions in ORC predicate evaluation

2014-03-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924862#comment-13924862
 ] 

Hive QA commented on HIVE-6562:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12633017/HIVE-6562.1.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5374 tests executed
*Failed tests:*
{noformat}
org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementAsync
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1658/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1658/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}


ATTACHMENT ID: 12633017

> Protection from exceptions in ORC predicate evaluation
> --
>
> Key: HIVE-6562
> URL: https://issues.apache.org/jira/browse/HIVE-6562
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0
>Reporter: Prasanth J
>Assignee: Prasanth J
>  Labels: orcfile
> Attachments: HIVE-6562.1.patch
>
>
> ORC evaluates predicate expressions to select row groups that satisfy the 
> predicate condition. There can be exceptions (mostly ClassCastException) when 
> the data types of the predicate constant and the min/max values differ. 
> To avoid this, the patch catches any such exception and provides a default 
> behaviour, i.e., selecting the row group.
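The defensive pattern the patch describes can be sketched as follows (illustrative only; function and variable names are hypothetical, not Hive's API):

```python
# Illustrative sketch of the defensive pattern above: if predicate
# evaluation against a row group's min/max statistics throws (for
# example on a type mismatch), fall back to selecting the row group
# so that no rows are wrongly skipped.
def safe_row_group_select(predicate, min_val, max_val):
    try:
        return predicate(min_val, max_val)
    except Exception:
        return True  # default behaviour: keep the row group

# Predicate: does the row group's [min, max] range possibly contain 10?
overlaps_10 = lambda lo, hi: lo <= 10 <= hi

print(safe_row_group_select(overlaps_10, 0, 5))       # False: safe to skip
print(safe_row_group_select(overlaps_10, 0, "oops"))  # True: comparison failed
```

Defaulting to "select" on error trades a little I/O for correctness: a skipped row group can silently drop matching rows, whereas a selected one is merely re-filtered later.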





[jira] [Updated] (HIVE-6595) Hive 0.11.0 build failure

2014-03-08 Thread Amit Anand (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Anand updated HIVE-6595:
-

Summary: Hive 0.11.0 build failure  (was: Hive 0.11.0 build failes)

> Hive 0.11.0 build failure
> -
>
> Key: HIVE-6595
> URL: https://issues.apache.org/jira/browse/HIVE-6595
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Affects Versions: 0.11.0
> Environment: CentOS 6.5, java version "1.7.0_45", Hadoop 2.2.0
>Reporter: Amit Anand
>
> I am unable to build Hive 0.11.0 from source. I have a single-node Hadoop 
> 2.2.0 cluster, built from source, running. 
> I followed the steps given below:
> svn co http://svn.apache.org/repos/asf/hive/tags/release-0.11.0/ hive-0.11.0
> cd hive-0.11.0
> ant clean
> ant package
> I got the messages given below:
> compile:
>  [echo] Project: jdbc
> [javac] Compiling 28 source files to 
> /opt/apache/source/hive-0.11.0/build/jdbc/classes
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java:48:
>  error: HiveCallableStatement is not abstract and does not override abstract 
> method getObject(String,Class) in CallableStatement
> [javac] public class HiveCallableStatement implements 
> java.sql.CallableStatement {
> [javac]^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> getObject(String,Class)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java:65:
>  error: HiveConnection is not abstract and does not override abstract method 
> getNetworkTimeout() in Connection
> [javac] public class HiveConnection implements java.sql.Connection {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDataSource.java:31:
>  error: HiveDataSource is not abstract and does not override abstract method 
> getParentLogger() in CommonDataSource
> [javac] public class HiveDataSource implements DataSource {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:56:
>  error: HiveDatabaseMetaData is not abstract and does not override abstract 
> method generatedKeyAlwaysReturned() in DatabaseMetaData
> [javac] public class HiveDatabaseMetaData implements DatabaseMetaData {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:707:
>  error:  is not 
> abstract and does not override abstract method getObject(String,Class) 
> in ResultSet
> [javac] , null) {
> [javac] ^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> getObject(String,Class)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDriver.java:35:
>  error: HiveDriver is not abstract and does not override abstract method 
> getParentLogger() in Driver
> [javac] public class HiveDriver implements Driver {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java:56:
>  error: HivePreparedStatement is not abstract and does not override abstract 
> method isCloseOnCompletion() in Statement
> [javac] public class HivePreparedStatement implements PreparedStatement {
> [javac]^
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java:48:
>  error: HiveQueryResultSet is not abstract and does not override abstract 
> method getObject(String,Class) in ResultSet
> [javac] public class HiveQueryResultSet extends HiveBaseResultSet {
> [javac]^
> [javac]   where T is a type-variable:
> [javac] T extends Object declared in method 
> getObject(String,Class)
> [javac] 
> /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java:42:
>  error: HiveStatement is not abstract and does not override abstract method 
> isCloseOnCompletion() in Statement
> [javac] public class HiveStatement implements java.sql.Statement {
> [javac]^
> [javac] Note: Some input files use or override a deprecated API.
> [javac] Note: Recompile with -Xlint:deprecation for details.
> [javac] Note: Some input files use unchecked or unsafe operations.
> [javac] Note: Recompile with -Xlint:unchecked for details.
> [javac] 9 errors
> BUILD FAILED
> /opt/apache/source/hive-0.11.0/build.xml:274: The following error occurred 
> while executing this line:
> /opt/apache/source/hive-0.11.0/build.xml:113: The following error occurred 
> while executing this line:
> /opt/apache/source/hive-0.11.0/build.xml:115: The following

[jira] [Created] (HIVE-6595) Hive 0.11.0 build failes

2014-03-08 Thread Amit Anand (JIRA)
Amit Anand created HIVE-6595:


 Summary: Hive 0.11.0 build failes
 Key: HIVE-6595
 URL: https://issues.apache.org/jira/browse/HIVE-6595
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.11.0
 Environment: CentOS 6.5, java version "1.7.0_45", Hadoop 2.2.0

Reporter: Amit Anand


I am unable to build Hive 0.11.0 from source. I have a single-node Hadoop 
2.2.0 cluster, built from source, running. 

I followed the steps given below:

svn co http://svn.apache.org/repos/asf/hive/tags/release-0.11.0/ hive-0.11.0
cd hive-0.11.0
ant clean
ant package

I got the messages given below:


compile:
 [echo] Project: jdbc
[javac] Compiling 28 source files to 
/opt/apache/source/hive-0.11.0/build/jdbc/classes
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java:48:
 error: HiveCallableStatement is not abstract and does not override abstract 
method getObject(String,Class) in CallableStatement
[javac] public class HiveCallableStatement implements 
java.sql.CallableStatement {
[javac]^
[javac]   where T is a type-variable:
[javac] T extends Object declared in method 
getObject(String,Class)
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java:65:
 error: HiveConnection is not abstract and does not override abstract method 
getNetworkTimeout() in Connection
[javac] public class HiveConnection implements java.sql.Connection {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDataSource.java:31:
 error: HiveDataSource is not abstract and does not override abstract method 
getParentLogger() in CommonDataSource
[javac] public class HiveDataSource implements DataSource {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:56:
 error: HiveDatabaseMetaData is not abstract and does not override abstract 
method generatedKeyAlwaysReturned() in DatabaseMetaData
[javac] public class HiveDatabaseMetaData implements DatabaseMetaData {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:707:
 error:  is not abstract 
and does not override abstract method getObject(String,Class) in ResultSet
[javac] , null) {
[javac] ^
[javac]   where T is a type-variable:
[javac] T extends Object declared in method 
getObject(String,Class)
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDriver.java:35:
 error: HiveDriver is not abstract and does not override abstract method 
getParentLogger() in Driver
[javac] public class HiveDriver implements Driver {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java:56:
 error: HivePreparedStatement is not abstract and does not override abstract 
method isCloseOnCompletion() in Statement
[javac] public class HivePreparedStatement implements PreparedStatement {
[javac]^
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java:48:
 error: HiveQueryResultSet is not abstract and does not override abstract 
method getObject(String,Class) in ResultSet
[javac] public class HiveQueryResultSet extends HiveBaseResultSet {
[javac]^
[javac]   where T is a type-variable:
[javac] T extends Object declared in method 
getObject(String,Class)
[javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java:42:
 error: HiveStatement is not abstract and does not override abstract method 
isCloseOnCompletion() in Statement
[javac] public class HiveStatement implements java.sql.Statement {
[javac]^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 9 errors

BUILD FAILED
/opt/apache/source/hive-0.11.0/build.xml:274: The following error occurred 
while executing this line:
/opt/apache/source/hive-0.11.0/build.xml:113: The following error occurred 
while executing this line:
/opt/apache/source/hive-0.11.0/build.xml:115: The following error occurred 
while executing this line:
/opt/apache/source/hive-0.11.0/jdbc/build.xml:51: Compile failed; see the 
compiler error output for details.
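The errors above are characteristic of compiling the 0.11 JDBC driver against JDK 7, whose java.sql interfaces added abstract methods such as isCloseOnCompletion() and getNetworkTimeout(). A small Python analogue of the failure mode (illustrative only, not a fix):

```python
# Python analogue of the build failure: when an interface/ABC gains a
# new abstract method, every implementation that does not override it
# becomes unusable -- at compile time in Java, at instantiation time
# in Python.
from abc import ABC, abstractmethod

class Statement(ABC):
    @abstractmethod
    def execute(self, sql): ...

    @abstractmethod
    def is_close_on_completion(self): ...  # the "new" method, like JDK 7's

class HiveStatement(Statement):  # implements only the older method
    def execute(self, sql):
        return "ok"

try:
    HiveStatement()
except TypeError as e:
    print("cannot instantiate:", e)
```

The Java compiler rejects the equivalent situation up front, which is exactly the "is not abstract and does not override abstract method" wall of errors shown above.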







[jira] [Updated] (HIVE-6594) UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization

2014-03-08 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-6594:
---

Description: 
Discovered this while investigating why my fix for HIVE-6222 produced diffs. I 
discovered that Decimal128.addDestructive does not adjust the internal count 
when the number of relevant ints increases. Since this count is used in the 
fast HiveDecimalWriter conversion code, the results are off. 

The root cause is UnsignedDecimal128.differenceInternal does not do an 
updateCount() on the result.

  was:
Discovered this while investigating why my fix for HIVE-6222 produced diffs. I 
discovered that Decimal128.addDestructive does not adjust the internal count 
when an the number of relevant ints increases. Since this count i use din the 
fast HiveDecimal conversion code, the results are off. 

The root cause is UnsignedDecimal128.differenceInternal does not do an 
updateCount() on the result.


> UnsignedInt128 addition does not increase internal int array count resulting 
> in corrupted values during serialization
> -
>
> Key: HIVE-6594
> URL: https://issues.apache.org/jira/browse/HIVE-6594
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Attachments: HIVE-6594.1.patch
>
>
> Discovered this while investigating why my fix for HIVE-6222 produced diffs. 
> I discovered that Decimal128.addDestructive does not adjust the internal 
> count when the number of relevant ints increases. Since this count is used 
> in the fast HiveDecimalWriter conversion code, the results are off. 
> The root cause is UnsignedDecimal128.differenceInternal does not do an 
> updateCount() on the result.





[jira] [Updated] (HIVE-6594) UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization

2014-03-08 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-6594:
---

Attachment: HIVE-6594.1.patch

> UnsignedInt128 addition does not increase internal int array count resulting 
> in corrupted values during serialization
> -
>
> Key: HIVE-6594
> URL: https://issues.apache.org/jira/browse/HIVE-6594
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Attachments: HIVE-6594.1.patch
>
>
> Discovered this while investigating why my fix for HIVE-6222 produced diffs. 
> I discovered that Decimal128.addDestructive does not adjust the internal 
> count when an the number of relevant ints increases. Since this count i use 
> din the fast HiveDecimal conversion code, the results are off. 
> The root cause is UnsignedDecimal128.differenceInternal does not do an 
> updateCount() on the result.





[jira] [Work started] (HIVE-6594) UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization

2014-03-08 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-6594 started by Remus Rusanu.

> UnsignedInt128 addition does not increase internal int array count resulting 
> in corrupted values during serialization
> -
>
> Key: HIVE-6594
> URL: https://issues.apache.org/jira/browse/HIVE-6594
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Attachments: HIVE-6594.1.patch
>
>
> Discovered this while investigating why my fix for HIVE-6222 produced diffs. 
> I discovered that Decimal128.addDestructive does not adjust the internal 
> count when an the number of relevant ints increases. Since this count i use 
> din the fast HiveDecimal conversion code, the results are off. 
> The root cause is UnsignedDecimal128.differenceInternal does not do an 
> updateCount() on the result.





[jira] [Created] (HIVE-6594) UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization

2014-03-08 Thread Remus Rusanu (JIRA)
Remus Rusanu created HIVE-6594:
--

 Summary: UnsignedInt128 addition does not increase internal int 
array count resulting in corrupted values during serialization
 Key: HIVE-6594
 URL: https://issues.apache.org/jira/browse/HIVE-6594
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.0
Reporter: Remus Rusanu
Assignee: Remus Rusanu


Discovered this while investigating why my fix for HIVE-6222 produced diffs. I 
discovered that Decimal128.addDestructive does not adjust the internal count 
when an the number of relevant ints increases. Since this count i use din the 
fast HiveDecimal conversion code, the results are off. 

The root cause is UnsignedDecimal128.differenceInternal does not do an 
updateCount() on the result.





[jira] [Resolved] (HIVE-6491) ClassCastException in AbstractParquetMapInspector

2014-03-08 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland resolved HIVE-6491.


Resolution: Duplicate

> ClassCastException in AbstractParquetMapInspector
> -
>
> Key: HIVE-6491
> URL: https://issues.apache.org/jira/browse/HIVE-6491
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
> Environment: cdh5-beta2, trunk
>Reporter: Andrey Stepachev
>
> AbstractParquetMapInspector uses the wrong class cast 
> https://github.com/apache/hive/blob/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java#L144
> The cast should be to AbstractParquetMapInspector rather than the current:
> {code:java}
> final StandardParquetHiveMapInspector other = 
> (StandardParquetHiveMapInspector) obj;
> {code}
> Such a cast leads to a ClassCastException in the case of 
> DeepParquetHiveMapInspector.
> {code}
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.io.parquet.serde.DeepParquetHiveMapInspector cannot 
> be cast to 
> org.apache.hadoop.hive.ql.io.parquet.serde.StandardParquetHiveMapInspector
> at 
> org.apache.hadoop.hive.ql.io.parquet.serde.AbstractParquetMapInspector.equals(AbstractParquetMapInspector.java:131)
> at java.util.AbstractList.equals(AbstractList.java:523)
> at java.util.AbstractList.equals(AbstractList.java:523)
> at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:996)
> at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory.getStandardStructObjectInspector(ObjectInspectorFactory.java:281)
> at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory.getStandardStructObjectInspector(ObjectInspectorFactory.java:268)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:1022)
> at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:65)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:377)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:453)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:409)
> at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:188)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:377)
> at 
> org.apache.hadoop.hive.ql.exec.FetchTask.initialize(FetchTask.java:80)
> ... 31 more
> {code}
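The correct pattern can be sketched as follows. This is a hypothetical minimal reproduction (class names are simplified stand-ins, not Hive's actual inspectors): when equals() lives in an abstract base class, it must cast the argument to the base type, so that every concrete subclass can safely compare its instances.

```java
// Hypothetical miniature of the fix: equals() in the abstract base casts
// to the base type, never to one concrete subclass.
abstract class AbstractMapInspector {
    protected final String keyType;

    AbstractMapInspector(String keyType) {
        this.keyType = keyType;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == null || getClass() != obj.getClass()) {
            return false;
        }
        // Correct: cast to the abstract base type. Casting to a concrete
        // subclass here (the reported bug) throws ClassCastException when
        // a sibling subclass instance is compared.
        final AbstractMapInspector other = (AbstractMapInspector) obj;
        return keyType.equals(other.keyType);
    }

    @Override
    public int hashCode() {
        return keyType.hashCode();
    }
}

class StandardMapInspector extends AbstractMapInspector {
    StandardMapInspector(String keyType) { super(keyType); }
}

class DeepMapInspector extends AbstractMapInspector {
    DeepMapInspector(String keyType) { super(keyType); }
}
```

With the base-type cast, comparing two DeepMapInspector instances works, and comparing across subclasses returns false instead of throwing.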





[jira] [Commented] (HIVE-6491) ClassCastException in AbstractParquetMapInspector

2014-03-08 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924843#comment-13924843
 ] 

Brock Noland commented on HIVE-6491:


Being taken forward in HIVE-6575.

> ClassCastException in AbstractParquetMapInspector
> -
>
> Key: HIVE-6491
> URL: https://issues.apache.org/jira/browse/HIVE-6491
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
> Environment: cdh5-beta2, trunk
>Reporter: Andrey Stepachev
>
> AbstractParquetMapInspector uses the wrong class cast: 
> https://github.com/apache/hive/blob/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java#L144
> The cast should be to AbstractParquetMapInspector.
> {code:java}
> final StandardParquetHiveMapInspector other = 
> (StandardParquetHiveMapInspector) obj;
> {code}
> Such a conversion leads to a ClassCastException in the case of 
> DeepParquetHiveMapInspector.
> {code}
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.ql.io.parquet.serde.DeepParquetHiveMapInspector cannot 
> be cast to 
> org.apache.hadoop.hive.ql.io.parquet.serde.StandardParquetHiveMapInspector
> at 
> org.apache.hadoop.hive.ql.io.parquet.serde.AbstractParquetMapInspector.equals(AbstractParquetMapInspector.java:131)
> at java.util.AbstractList.equals(AbstractList.java:523)
> at java.util.AbstractList.equals(AbstractList.java:523)
> at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:996)
> at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory.getStandardStructObjectInspector(ObjectInspectorFactory.java:281)
> at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory.getStandardStructObjectInspector(ObjectInspectorFactory.java:268)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:1022)
> at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:65)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:377)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:453)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:409)
> at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:188)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:377)
> at 
> org.apache.hadoop.hive.ql.exec.FetchTask.initialize(FetchTask.java:80)
> ... 31 more
> {code}





[jira] [Commented] (HIVE-6576) sending user.name as a form parameter in POST doesn't work post HADOOP-10193

2014-03-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924812#comment-13924812
 ] 

Hive QA commented on HIVE-6576:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12633293/HIVE-6576.patch

{color:green}SUCCESS:{color} +1 5373 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1657/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1657/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12633293

> sending user.name as a form parameter in POST doesn't work post HADOOP-10193
> 
>
> Key: HIVE-6576
> URL: https://issues.apache.org/jira/browse/HIVE-6576
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-6576.patch
>
>
> WebHCat uses AuthFilter to handle authentication.  In simple mode that means 
> using PseudoAuthenticationHandler.  Prior to HADOOP-10193, the latter handled 
> user.name as a form parameter in a POST request.  Now it only handles it as a 
> query parameter.  
> To maintain WebHCat backwards compatibility, we need to make WebHCat still 
> extract it from the form param.  This will be deprecated immediately and 
> removed in 0.15.
> Also, all examples in the WebHCat reference manual should be updated to use 
> user.name in the query string instead of the current form param (curl -d 
> user.name=foo).
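The compatibility shim described above amounts to a lookup-order rule: take user.name from the query string if present, otherwise fall back to the urlencoded POST body. A hedged sketch of that logic follows (a hypothetical helper for illustration, not WebHCat's actual code):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper illustrating the described fallback, not WebHCat code.
final class UserNameResolver {
    // Parse an application/x-www-form-urlencoded string into a map.
    static Map<String, String> parse(String encoded) {
        Map<String, String> params = new HashMap<>();
        if (encoded == null || encoded.isEmpty()) {
            return params;
        }
        for (String pair : encoded.split("&")) {
            int eq = pair.indexOf('=');
            if (eq < 0) {
                continue;
            }
            try {
                params.put(URLDecoder.decode(pair.substring(0, eq), "UTF-8"),
                           URLDecoder.decode(pair.substring(eq + 1), "UTF-8"));
            } catch (UnsupportedEncodingException e) {
                throw new IllegalStateException(e); // UTF-8 always exists
            }
        }
        return params;
    }

    // Query string wins; the form body is only a deprecated fallback, which
    // keeps old clients that send "curl -d user.name=foo" working.
    static String resolveUserName(String queryString, String formBody) {
        String fromQuery = parse(queryString).get("user.name");
        return fromQuery != null ? fromQuery : parse(formBody).get("user.name");
    }
}
```

Preferring the query parameter means clients migrated to the HADOOP-10193 behavior are unaffected, while unmigrated POST clients keep working until the fallback is removed.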





[jira] [Commented] (HIVE-5931) SQL std auth - add metastore get_role_participants api - to support DESCRIBE ROLE

2014-03-08 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924810#comment-13924810
 ] 

Thejas M Nair commented on HIVE-5931:
-

The reason why 'describe function <function name>' works is that, for some 
historic reason, the keyword FUNCTION was not marked as a non-reserved keyword 
in IdentifiersParser.g. This is not the case with the ROLE keyword.

> SQL std auth - add metastore get_role_participants api - to support DESCRIBE 
> ROLE
> -
>
> Key: HIVE-5931
> URL: https://issues.apache.org/jira/browse/HIVE-5931
> Project: Hive
>  Issue Type: Sub-task
>  Components: Authorization
>Reporter: Thejas M Nair
> Attachments: HIVE-5931.thriftapi.2.patch, 
> HIVE-5931.thriftapi.3.patch, HIVE-5931.thriftapi.followup.patch, 
> HIVE-5931.thriftapi.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> This is necessary for DESCRIBE ROLE role statement. This will list
> all users and roles that participate in a role. 





[jira] [Updated] (HIVE-5901) Query cancel should stop running MR tasks

2014-03-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5901:


   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Patch committed to trunk. Thanks Navis!

[~rhbutani] I think we should include this in 0.13 as well. It is a very 
useful feature. I had mentioned this jira on the mailing list as well.


> Query cancel should stop running MR tasks
> -
>
> Key: HIVE-5901
> URL: https://issues.apache.org/jira/browse/HIVE-5901
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Fix For: 0.14.0
>
> Attachments: HIVE-5901.1.patch.txt, HIVE-5901.2.patch.txt, 
> HIVE-5901.3.patch.txt, HIVE-5901.4.patch.txt, HIVE-5901.5.patch.txt, 
> HIVE-5901.6.patch.txt, HIVE-5901.7.patch.txt
>
>
> Currently, query canceling does not stop running MR job immediately.





[jira] [Commented] (HIVE-5901) Query cancel should stop running MR tasks

2014-03-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924785#comment-13924785
 ] 

Hive QA commented on HIVE-5901:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12633277/HIVE-5901.7.patch.txt

{color:green}SUCCESS:{color} +1 5373 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1656/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1656/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12633277

> Query cancel should stop running MR tasks
> -
>
> Key: HIVE-5901
> URL: https://issues.apache.org/jira/browse/HIVE-5901
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: HIVE-5901.1.patch.txt, HIVE-5901.2.patch.txt, 
> HIVE-5901.3.patch.txt, HIVE-5901.4.patch.txt, HIVE-5901.5.patch.txt, 
> HIVE-5901.6.patch.txt, HIVE-5901.7.patch.txt
>
>
> Currently, query canceling does not stop running MR job immediately.





[jira] [Commented] (HIVE-5931) SQL std auth - add metastore get_role_participants api - to support DESCRIBE ROLE

2014-03-08 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924779#comment-13924779
 ] 

Thejas M Nair commented on HIVE-5931:
-

I am working on adding support for the describe role syntax as well, as part of 
this jira.

But there is a problem: the current describe table syntax allows the following -

{code}
DESCRIBE t1 key1;
DESCRIBE EXTENDED t1 key1;
{code}
In Hive, almost all keywords are also identifiers. So "describe role 
<rolename>" also gets translated to a describe table command, with "role" as 
the table name and "<rolename>" as a column.

This is not a documented syntax AFAIK, but we do have .q tests for it and it 
would break backward compatibility. The documented syntax requires a dot: 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-Describe
 . The dot became optional with HIVE-1977. Since tests that use this format 
were added as part of that patch, it looks like the change was intentional.

I will look at syntax alternatives to 'describe role'. 




> SQL std auth - add metastore get_role_participants api - to support DESCRIBE 
> ROLE
> -
>
> Key: HIVE-5931
> URL: https://issues.apache.org/jira/browse/HIVE-5931
> Project: Hive
>  Issue Type: Sub-task
>  Components: Authorization
>Reporter: Thejas M Nair
> Attachments: HIVE-5931.thriftapi.2.patch, 
> HIVE-5931.thriftapi.3.patch, HIVE-5931.thriftapi.followup.patch, 
> HIVE-5931.thriftapi.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> This is necessary for DESCRIBE ROLE role statement. This will list
> all users and roles that participate in a role. 





[jira] [Commented] (HIVE-6508) Mismatched results between vector and non-vector mode with decimal field

2014-03-08 Thread Remus Rusanu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924776#comment-13924776
 ] 

Remus Rusanu commented on HIVE-6508:


[~sershe] There is a new test case, testSumDecimalHive6508, that covers the 
possible regression.

> Mismatched results between vector and non-vector mode with decimal field
> 
>
> Key: HIVE-6508
> URL: https://issues.apache.org/jira/browse/HIVE-6508
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.13.0
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
> Attachments: HIVE-6508.1.patch, HIVE-6508.1.patch
>
>
> Following query has a little mismatch in result as compared to the non-vector 
> mode.
> {code}
> select d_year, i_brand_id, i_brand,
>sum(ss_ext_sales_price) as sum_agg
> from date_dim
> join store_sales on date_dim.d_date_sk = store_sales.ss_sold_date_sk
> join item on store_sales.ss_item_sk = item.i_item_sk
> where i_manufact_id = 128
>   and d_moy = 11
> group by d_year, i_brand, i_brand_id
> order by d_year, sum_agg desc, i_brand_id
> limit 100;
> {code}
> This query is on tpcds data.
> The field ss_ext_sales_price is of type decimal(7,2) and everything else is 
> an integer.





[jira] [Commented] (HIVE-6403) uncorrelated subquery is failing with auto.convert.join=true

2014-03-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924775#comment-13924775
 ] 

Hive QA commented on HIVE-6403:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12633001/HIVE-6403.6.patch.txt

{color:green}SUCCESS:{color} +1 5373 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1655/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1655/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12633001

> uncorrelated subquery is failing with auto.convert.join=true
> 
>
> Key: HIVE-6403
> URL: https://issues.apache.org/jira/browse/HIVE-6403
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
> Attachments: HIVE-6403.1.patch, HIVE-6403.2.patch, 
> HIVE-6403.3.patch.txt, HIVE-6403.4.patch.txt, HIVE-6403.5.patch.txt, 
> HIVE-6403.6.patch.txt, navis.patch, navis2.patch
>
>
> Fixing HIVE-5690, I've found that the query in subquery_multiinsert.q is not 
> working with hive.auto.convert.join=true 
> {noformat}
> set hive.auto.convert.join=true;
> hive> explain
> > from src b 
> > INSERT OVERWRITE TABLE src_4 
> >   select * 
> >   where b.key in 
> >(select a.key 
> > from src a 
> > where b.value = a.value and a.key > '9'
> >) 
> > INSERT OVERWRITE TABLE src_5 
> >   select *  
> >   where b.key not in  ( select key from src s1 where s1.key > '2') 
> >   order by key 
> > ;
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
>   at java.util.ArrayList.get(ArrayList.java:411)
>   at 
> org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinLocalWork(MapJoinProcessor.java:149)
>   at 
> org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor.java:256)
>   at 
> org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinOpAndLocalWork(MapJoinProcessor.java:248)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.convertTaskToMapJoinTask(CommonJoinTaskDispatcher.java:191)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.processCurrentTask(CommonJoinTaskDispatcher.java:481)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch(AbstractJoinTaskDispatcher.java:182)
>   at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
>   at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:194)
>   at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:139)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinResolver.resolve(CommonJoinResolver.java:79)
>   at 
> org.apache.hadoop.hive.ql.optimizer.physical.PhysicalOptimizer.optimize(PhysicalOptimizer.java:100)
>   at 
> org.apache.hadoop.hive.ql.parse.MapReduceCompiler.optimizeTaskPlan(MapReduceCompiler.java:290)
>   at 
> org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:216)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9167)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
>   at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:64)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:446)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:346)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1056)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1099)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:992)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:982)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:687)
>   at org.apache.ha

[jira] [Commented] (HIVE-6593) Create a maven assembly for hive-jdbc

2014-03-08 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924773#comment-13924773
 ] 

Szehon Ho commented on HIVE-6593:
-

Yea, I was just checking whether those translated to any other special 
requirements on the hive side as to structure, file permissions (if any), etc. 
for the jdbc tarball.  I guess I'll work on making a tarball of the lib/*.jar 
files I listed above.  

Just as an FYI, I dug a little and it seems slf4j has been removed from lib by 
HIVE-6162; it's still in the bigtop spec but no longer effective.

> Create a maven assembly for hive-jdbc
> -
>
> Key: HIVE-6593
> URL: https://issues.apache.org/jira/browse/HIVE-6593
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Affects Versions: 0.12.0
>Reporter: Mark Grover
>Assignee: Szehon Ho
>
> Currently in Apache Bigtop we bundle and distribute Hive. In particular, so 
> that users do not have to install the entirety of Hive on machines that are 
> just jdbc clients, we have a special package which is a subset of hive, called 
> hive-jdbc, that bundles only the jdbc driver jar and its dependencies.
> However, because Hive doesn't have an assembly for the jdbc jar, we have to 
> hack and hardcode the list of jdbc jars and their dependencies:
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/rpm/hive/SPECS/hive.spec#L361
> As Hive moves to Maven, it would be pretty fantastic if Hive could leverage 
> the maven-assembly-plugin and generate a .tar.gz assembly of what's required 
> for jdbc gateway machines. That way we can simply take that distribution and 
> build a jdbc package from it without having to hard-code jar names and 
> dependencies. That would make the process much less error prone.





[jira] [Commented] (HIVE-6593) Create a maven assembly for hive-jdbc

2014-03-08 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924769#comment-13924769
 ] 

Konstantin Boudnik commented on HIVE-6593:
--

Actually, we don't ask to create a JDBC tarball. What we need for successful 
integration of hive-jdbc is for the build to produce an assembly with the 
things that should be shipped to a JDBC user. Hopefully, that outlines the 
difference.

> Create a maven assembly for hive-jdbc
> -
>
> Key: HIVE-6593
> URL: https://issues.apache.org/jira/browse/HIVE-6593
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Affects Versions: 0.12.0
>Reporter: Mark Grover
>Assignee: Szehon Ho
>
> Currently in Apache Bigtop we bundle and distribute Hive. In particular, so 
> that users do not have to install the entirety of Hive on machines that are 
> just jdbc clients, we have a special package which is a subset of hive, called 
> hive-jdbc, that bundles only the jdbc driver jar and its dependencies.
> However, because Hive doesn't have an assembly for the jdbc jar, we have to 
> hack and hardcode the list of jdbc jars and their dependencies:
> https://github.com/apache/bigtop/blob/master/bigtop-packages/src/rpm/hive/SPECS/hive.spec#L361
> As Hive moves to Maven, it would be pretty fantastic if Hive could leverage 
> the maven-assembly-plugin and generate a .tar.gz assembly of what's required 
> for jdbc gateway machines. That way we can simply take that distribution and 
> build a jdbc package from it without having to hard-code jar names and 
> dependencies. That would make the process much less error prone.


