[jira] [Commented] (HIVE-6339) Implement new JDK7 schema management APIs in java.sql.Connection

2014-02-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902661#comment-13902661
 ] 

Hive QA commented on HIVE-6339:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12629120/HIVE-6339.4.patch

{color:green}SUCCESS:{color} +1 5121 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1336/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1336/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12629120

 Implement new JDK7 schema management APIs in java.sql.Connection 
 -

 Key: HIVE-6339
 URL: https://issues.apache.org/jira/browse/HIVE-6339
 Project: Hive
  Issue Type: Improvement
  Components: JDBC
Affects Versions: 0.13.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Attachments: HIVE-6339.1.patch, HIVE-6339.2.patch, HIVE-6339.4.patch


 JDK7 has added a few metadata methods in 
 [java.sql.Connection|http://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html]
  
 {noformat}
 getSchema()
 setSchema()
 getCatalog()
 setCatalog()
 {noformat}
 Currently the Hive JDBC driver just has stub implementations for all these methods 
 that throw an unsupported-operation exception. This needs to be fixed.
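
A minimal sketch of how these methods could be backed by plain HiveQL, assuming the connected HiveServer2 understands USE and the current_database() UDF; the wrapper class below is illustrative and is not the code in the attached patch:

{noformat}
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative wrapper: maps the JDK7 schema methods onto HiveQL statements
// issued through an existing java.sql.Connection.
public class SchemaAwareConnection {
  private final Connection conn;

  public SchemaAwareConnection(Connection conn) {
    this.conn = conn;
  }

  // setSchema(String) can be expressed as a USE statement.
  public void setSchema(String schema) throws SQLException {
    try (Statement stmt = conn.createStatement()) {
      stmt.execute("USE " + schema);
    }
  }

  // getSchema() can read the current database back from Hive.
  public String getSchema() throws SQLException {
    try (Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT current_database()")) {
      return rs.next() ? rs.getString(1) : null;
    }
  }
}
{noformat}

Since Hive has no catalog concept, one reasonable choice for getCatalog()/setCatalog() is to make them harmless no-ops rather than have them throw.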



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6362) Support union all on tez

2014-02-16 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-6362:
-

Attachment: HIVE-6362.8.patch

.8 is functional. Follow-up things to fix: there are two stats aggregation tasks in 
some cases, and auto-merge is off with this patch.

 Support union all on tez
 

 Key: HIVE-6362
 URL: https://issues.apache.org/jira/browse/HIVE-6362
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: tez-branch

 Attachments: HIVE-6362.1.patch, HIVE-6362.2.patch, HIVE-6362.3.patch, 
 HIVE-6362.4.patch, HIVE-6362.5.patch, HIVE-6362.8.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6362) Support union all on tez

2014-02-16 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-6362:
-

Attachment: HIVE-6362.9.patch

.9 has updated golden files

 Support union all on tez
 

 Key: HIVE-6362
 URL: https://issues.apache.org/jira/browse/HIVE-6362
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: tez-branch

 Attachments: HIVE-6362.1.patch, HIVE-6362.2.patch, HIVE-6362.3.patch, 
 HIVE-6362.4.patch, HIVE-6362.5.patch, HIVE-6362.8.patch, HIVE-6362.9.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Timeline for the Hive 0.13 release?

2014-02-16 Thread Lefty Leverenz
I'll try to catch up on the wikidocs backlog for 0.13.0 patches in time for
the release.  It's a long and growing list, though, so no promises.

Feel free to do your own documentation, or hand it off to a friendly
in-house writer.

-- Lefty, self-appointed Hive docs maven



On Sat, Feb 15, 2014 at 1:28 PM, Thejas Nair the...@hortonworks.com wrote:

 Sounds good to me.


 On Fri, Feb 14, 2014 at 7:29 PM, Harish Butani hbut...@hortonworks.com
 wrote:

  Hi,
 
   It's mid-February. Wanted to check if the community is ready to cut a branch.
   Could we cut the branch in a week, say 5pm PST 2/21/14?
  The goal is to keep the release cycle short: couple of weeks; so after
 the
  branch we go into stabilizing mode for hive 0.13, checking in only
  blocker/critical bug fixes.
 
  regards,
  Harish.
 
 
  On Jan 20, 2014, at 9:25 AM, Brock Noland br...@cloudera.com wrote:
 
   Hi,
  
   I agree that picking a date to branch and then restricting commits to
  that
   branch would be a less time intensive plan for the RM.
  
   Brock
  
  
   On Sat, Jan 18, 2014 at 4:21 PM, Harish Butani 
 hbut...@hortonworks.com
  wrote:
  
   Yes agree it is time to start planning for the next release.
   I would like to volunteer to do the release management duties for this
    release (will be a great experience for me).
   Will be happy to do it, if the community is fine with this.
  
   regards,
   Harish.
  
   On Jan 17, 2014, at 7:05 PM, Thejas Nair the...@hortonworks.com
  wrote:
  
   Yes, I think it is time to start planning for the next release.
    For the 0.12 release I created a branch and then accepted patches that
    people asked to be included for some time, before moving to a phase of
    accepting only critical bug fixes. This turned out to be laborious.
   I think we should instead give everyone a few weeks to get any
 patches
   they are working on to be ready, cut the branch, and take in only
   critical bug fixes to the branch after that.
    How about cutting the branch around mid-February and aiming to
    release a week or two after that?
  
   Thanks,
   Thejas
  
  
   On Fri, Jan 17, 2014 at 4:39 PM, Carl Steinbach c...@apache.org
  wrote:
   I was wondering what people think about setting a tentative date for
  the
   Hive 0.13 release? At an old Hive Contrib meeting we agreed that
 Hive
   should follow a time-based release model with new releases every
 four
   months. If we follow that schedule we're due for the next release in
   mid-February.
  
   Thoughts?
  
   Thanks.
  
   Carl
  
   --
   Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org
 
 
[jira] [Updated] (HIVE-5958) SQL std auth - authorize statements that work with paths

2014-02-16 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5958:


Description: 
Statements such as create table and alter table that specify a path URI should be 
allowed under the new authorization scheme only if the user owns the specified 
file/dir and its children and has read/write permission on them.
Also, fix the issue of the database not getting set as an output for create-table.

  was:
In the first pass, statements such as create table and alter table that specify a 
path URI will get an authorization error under SQL std auth.



 SQL std auth - authorize statements that work with paths
 

 Key: HIVE-5958
 URL: https://issues.apache.org/jira/browse/HIVE-5958
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5958.1.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 Statements such as create table and alter table that specify a path URI should 
 be allowed under the new authorization scheme only if the user owns the 
 specified file/dir and its children and has read/write permission on them.
 Also, fix the issue of the database not getting set as an output for 
 create-table.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-5958) SQL std auth - authorize statements that work with paths

2014-02-16 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902694#comment-13902694
 ] 

Thejas M Nair commented on HIVE-5958:
-

I am following Alan's suggestions for allowing the use of URIs, but it will be 
restricted to files that are owned by the user, and the permissions on the file 
must also be permissive enough.
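
A rough sketch of that kind of check against the Hadoop FileSystem API; the helper name and the exact policy below are illustrative assumptions, not the code in HIVE-5958.1.patch:

{noformat}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;

// Illustrative check: a URI is authorized only if the given user owns the
// file/dir (and, recursively, its children) and the owner bits allow read+write.
public class UriOwnershipCheck {
  public static boolean isAuthorized(Configuration conf, Path path, String user)
      throws IOException {
    FileSystem fs = path.getFileSystem(conf);
    FileStatus status = fs.getFileStatus(path);
    if (!status.getOwner().equals(user)
        || !status.getPermission().getUserAction().implies(FsAction.READ_WRITE)) {
      return false;
    }
    if (status.isDirectory()) {
      for (FileStatus child : fs.listStatus(path)) {
        if (!isAuthorized(conf, child.getPath(), user)) {
          return false;
        }
      }
    }
    return true;
  }
}
{noformat}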


 SQL std auth - authorize statements that work with paths
 

 Key: HIVE-5958
 URL: https://issues.apache.org/jira/browse/HIVE-5958
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5958.1.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 Statements such as create table and alter table that specify a path URI should 
 be allowed under the new authorization scheme only if the user owns the 
 specified file/dir and its children and has read/write permission on them.
 Also, fix the issue of the database not getting set as an output for 
 create-table.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Review Request 18168: SQL std auth - authorize statements that work with paths

2014-02-16 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18168/
---

Review request for hive and Ashutosh Chauhan.


Bugs: HIVE-5958
https://issues.apache.org/jira/browse/HIVE-5958


Repository: hive-git


Description
---

Statements such as create table and alter table that specify a path URI should be 
allowed under the new authorization scheme only if the user owns the specified 
file/dir and its children and has read/write permission on them.
Also, fix the issue of the database not getting set as an output for create-table.


Diffs
-

  common/src/java/org/apache/hadoop/hive/common/FileUtils.java c1f8842 
  ql/src/java/org/apache/hadoop/hive/ql/Driver.java 83d5bfc 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/ReadEntity.java c9a 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/WriteEntity.java 0493302 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 0b7c128 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 1f539ef 
  ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java a22a15f 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 77388dd 
  ql/src/java/org/apache/hadoop/hive/ql/plan/HiveOperation.java 93c89de 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java
 fae6844 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/RequiredPrivileges.java
 10a582b 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLAuthorizationUtils.java
 4a9149f 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java
 40461f7 
  ql/src/test/queries/clientnegative/authorization_uri_add_partition.q 
PRE-CREATION 
  ql/src/test/queries/clientnegative/authorization_uri_create_table1.q 
PRE-CREATION 
  ql/src/test/queries/clientnegative/authorization_uri_create_table_ext.q 
PRE-CREATION 
  ql/src/test/queries/clientnegative/authorization_uri_load_data.q PRE-CREATION 
  ql/src/test/results/clientnegative/authorization_addpartition.q.out f4d3b4f 
  ql/src/test/results/clientnegative/authorization_createview.q.out cb81b83 
  ql/src/test/results/clientnegative/authorization_ctas.q.out 1070468 
  ql/src/test/results/clientnegative/authorization_droppartition.q.out 7de553b 
  ql/src/test/results/clientnegative/authorization_fail_1.q.out ab1abe2 
  ql/src/test/results/clientnegative/authorization_fail_2.q.out 2c03b65 
  ql/src/test/results/clientnegative/authorization_fail_3.q.out bfba08a 
  ql/src/test/results/clientnegative/authorization_fail_4.q.out 34ad4ef 
  ql/src/test/results/clientnegative/authorization_fail_5.q.out a0289fb 
  ql/src/test/results/clientnegative/authorization_fail_6.q.out 47f8bd1 
  ql/src/test/results/clientnegative/authorization_fail_7.q.out a9bf0cc 
  ql/src/test/results/clientnegative/authorization_grant_table_allpriv.q.out 
0e17c94 
  ql/src/test/results/clientnegative/authorization_grant_table_fail1.q.out 
0c83849 
  
ql/src/test/results/clientnegative/authorization_grant_table_fail_nogrant.q.out 
129b5fa 
  ql/src/test/results/clientnegative/authorization_insert_noinspriv.q.out 
6d510f1 
  ql/src/test/results/clientnegative/authorization_insert_noselectpriv.q.out 
5b9b93a 
  ql/src/test/results/clientnegative/authorization_invalid_priv_v1.q.out 
10d1ca8 
  ql/src/test/results/clientnegative/authorization_invalid_priv_v2.q.out 
62aa8da 
  
ql/src/test/results/clientnegative/authorization_not_owner_alter_tab_rename.q.out
 e41702a 
  
ql/src/test/results/clientnegative/authorization_not_owner_alter_tab_serdeprop.q.out
 e41702a 
  ql/src/test/results/clientnegative/authorization_not_owner_drop_tab.q.out 
b456aca 
  ql/src/test/results/clientnegative/authorization_not_owner_drop_view.q.out 
2433846 
  ql/src/test/results/clientnegative/authorization_part.q.out 31dfda9 
  ql/src/test/results/clientnegative/authorization_priv_current_role_neg.q.out 
f932a3d 
  ql/src/test/results/clientnegative/authorization_revoke_table_fail1.q.out 
0f4c966 
  ql/src/test/results/clientnegative/authorization_revoke_table_fail2.q.out 
c671c8a 
  ql/src/test/results/clientnegative/authorization_select.q.out 1070468 
  ql/src/test/results/clientnegative/authorization_select_view.q.out e70a79c 
  ql/src/test/results/clientnegative/authorization_truncate.q.out c188831 
  ql/src/test/results/clientnegative/authorization_uri_add_partition.q.out 
PRE-CREATION 
  ql/src/test/results/clientnegative/authorization_uri_create_table1.q.out 
PRE-CREATION 
  ql/src/test/results/clientnegative/authorization_uri_create_table_ext.q.out 
PRE-CREATION 
  ql/src/test/results/clientnegative/authorization_uri_load_data.q.out 
PRE-CREATION 
  ql/src/test/results/clientnegative/exim_22_export_authfail.q.out 1339bbc 
  

[jira] [Commented] (HIVE-5958) SQL std auth - authorize statements that work with paths

2014-02-16 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902696#comment-13902696
 ] 

Thejas M Nair commented on HIVE-5958:
-

[~navis] This patch includes the change in HIVE-2818 for adding database to 
create-table outputs. I decided to add it here as there are failures in 
HIVE-2818 that still need to be investigated. I will help by rebasing that 
patch and re-generating the q.out files in HIVE-2818 if this goes in first.


 SQL std auth - authorize statements that work with paths
 

 Key: HIVE-5958
 URL: https://issues.apache.org/jira/browse/HIVE-5958
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5958.1.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 Statements such as create table and alter table that specify a path URI should 
 be allowed under the new authorization scheme only if the user owns the 
 specified file/dir and its children and has read/write permission on them.
 Also, fix the issue of the database not getting set as an output for 
 create-table.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Timeline for the Hive 0.13 release?

2014-02-16 Thread 杨卓荦
We are looking forward to Hive 0.13. For Tez and other cool features, we'd
like to try it on our Yarn cluster.

Thanks,
Zhuoluo (Clark) Yang


2014-02-16 19:38 GMT+08:00 Lefty Leverenz leftylever...@gmail.com:

 I'll try to catch up on the wikidocs backlog for 0.13.0 patches in time for
 the release.  It's a long and growing list, though, so no promises.

 Feel free to do your own documentation, or hand it off to a friendly
 in-house writer.

 -- Lefty, self-appointed Hive docs maven



 On Sat, Feb 15, 2014 at 1:28 PM, Thejas Nair the...@hortonworks.com
 wrote:

  Sounds good to me.
 
 
  On Fri, Feb 14, 2014 at 7:29 PM, Harish Butani hbut...@hortonworks.com
  wrote:
 
   Hi,
  
    It's mid-February. Wanted to check if the community is ready to cut a branch.
    Could we cut the branch in a week, say 5pm PST 2/21/14?
   The goal is to keep the release cycle short: couple of weeks; so after
  the
   branch we go into stabilizing mode for hive 0.13, checking in only
   blocker/critical bug fixes.
  
   regards,
   Harish.
  
  
   On Jan 20, 2014, at 9:25 AM, Brock Noland br...@cloudera.com wrote:
  
Hi,
   
I agree that picking a date to branch and then restricting commits to
   that
branch would be a less time intensive plan for the RM.
   
Brock
   
   
On Sat, Jan 18, 2014 at 4:21 PM, Harish Butani 
  hbut...@hortonworks.com
   wrote:
   
Yes agree it is time to start planning for the next release.
I would like to volunteer to do the release management duties for
 this
 release (will be a great experience for me).
Will be happy to do it, if the community is fine with this.
   
regards,
Harish.
   
On Jan 17, 2014, at 7:05 PM, Thejas Nair the...@hortonworks.com
   wrote:
   
Yes, I think it is time to start planning for the next release.
 For the 0.12 release I created a branch and then accepted patches that
 people asked to be included for some time, before moving to a phase of
 accepting only critical bug fixes. This turned out to be laborious.
I think we should instead give everyone a few weeks to get any
  patches
they are working on to be ready, cut the branch, and take in only
critical bug fixes to the branch after that.
 How about cutting the branch around mid-February and aiming to
 release a week or two after that?
   
Thanks,
Thejas
   
   
On Fri, Jan 17, 2014 at 4:39 PM, Carl Steinbach c...@apache.org
   wrote:
I was wondering what people think about setting a tentative date
 for
   the
Hive 0.13 release? At an old Hive Contrib meeting we agreed that
  Hive
should follow a time-based release model with new releases every
  four
months. If we follow that schedule we're due for the next release
 in
mid-February.
   
Thoughts?
   
Thanks.
   
Carl
   
--
Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org
  
  

[jira] [Commented] (HIVE-6406) Introduce immutable-table table property and if set, disallow insert-into

2014-02-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902736#comment-13902736
 ] 

Hive QA commented on HIVE-6406:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12629134/HIVE-6406.3.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5122 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucketmapjoin6
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testNegativeCliDriver_mapreduce_stack_trace_hadoop20
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1341/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1341/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12629134

 Introduce immutable-table table property and if set, disallow insert-into
 -

 Key: HIVE-6406
 URL: https://issues.apache.org/jira/browse/HIVE-6406
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog, Metastore, Query Processor, Thrift API
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6406.2.patch, HIVE-6406.3.patch, HIVE-6406.patch


 As part of HIVE-6405's attempt to make HCatalog and Hive behave in similar 
 ways with regard to immutable tables, this is a companion task to introduce 
 the notion of an immutable table as a table property; tables are not immutable 
 by default. If this property is set for a table and we attempt to write to a 
 table (or partition) that already has data, disallow INSERT INTO into it from 
 Hive (i.e. if the destination directory is non-empty). Setting this property 
 will allow Hive to mimic HCatalog's current immutable-table behavior.
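
A hedged sketch of the kind of guard the description implies; the helper and how it is wired into the insert path are assumptions for illustration, not the attached patch:

{noformat}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative guard: for an immutable table, INSERT INTO is rejected when the
// destination table/partition directory already contains data.
public class ImmutableTableGuard {
  public static void checkAppendAllowed(Configuration conf, Path destDir,
      boolean immutable) throws IOException {
    if (!immutable) {
      return; // mutable tables keep the existing behavior
    }
    FileSystem fs = destDir.getFileSystem(conf);
    if (fs.exists(destDir)) {
      FileStatus[] contents = fs.listStatus(destDir);
      if (contents != null && contents.length > 0) {
        throw new IOException("INSERT INTO is not allowed: immutable destination "
            + destDir + " already contains data");
      }
    }
  }
}
{noformat}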



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6386) sql std auth - database should have an owner

2014-02-16 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6386:
---

Status: Patch Available  (was: Open)

 sql std auth - database should have an owner
 

 Key: HIVE-6386
 URL: https://issues.apache.org/jira/browse/HIVE-6386
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization, Metastore
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6386.1.patch, HIVE-6386.2.patch, HIVE-6386.3.patch, 
 HIVE-6386.4.patch, HIVE-6386.patch


 A database in the metastore does not have an owner associated with it. A 
 database owner is needed for the SQL std authorization rules.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6326) Split generation in ORC may generate wrong split boundaries because of unaccounted padded bytes

2014-02-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902760#comment-13902760
 ] 

Hive QA commented on HIVE-6326:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12629175/HIVE-6326.4.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5120 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucketmapjoin6
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_truncate_column_buckets
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1342/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1342/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12629175

 Split generation in ORC may generate wrong split boundaries because of 
 unaccounted padded bytes
 ---

 Key: HIVE-6326
 URL: https://issues.apache.org/jira/browse/HIVE-6326
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.13.0
Reporter: Prasanth J
Assignee: Prasanth J
  Labels: orcfile
 Attachments: HIVE-6326.1.patch, HIVE-6326.2.patch, HIVE-6326.3.patch, 
 HIVE-6326.4.patch


 HIVE-5091 added padding to ORC files to avoid ORC stripes straddling HDFS 
 blocks. The length of these padded bytes is not stored in the stripe information. 
 OrcInputFormat.getSplits() uses stripeInformation.getLength() for split 
 computation. stripeInformation.getLength() is the sum of the index length, data 
 length and stripe footer length. It does not account for the length of the padded 
 bytes, which may result in wrong split boundaries.
 The fix is to derive the length of the current stripe from the offset of the next 
 stripe, so that the padded bytes are included as well.
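
A small sketch of the proposed computation; the helper and its fileContentLength parameter are illustrative, while the real change lives inside OrcInputFormat.getSplits():

{noformat}
import java.util.List;

import org.apache.hadoop.hive.ql.io.orc.StripeInformation;

// Illustrative split-length computation: derive each stripe's on-disk extent
// from the next stripe's offset so that trailing padding is included.
public class StripeExtent {
  public static long lengthWithPadding(List<StripeInformation> stripes, int i,
      long fileContentLength) {
    StripeInformation current = stripes.get(i);
    long end = (i + 1 < stripes.size())
        ? stripes.get(i + 1).getOffset()  // the next stripe starts right after any padding
        : fileContentLength;              // the last stripe runs to the end of the stripe data
    return end - current.getOffset();     // unlike getLength(), this counts padded bytes
  }
}
{noformat}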



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-5783) Native Parquet Support in Hive

2014-02-16 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HIVE-5783:
--

Release Note: Added support for 'STORED AS PARQUET' and for setting parquet 
as the default storage engine.  (was: adds stored as parquet and setting 
parquet as the default storage engine.)

 Native Parquet Support in Hive
 --

 Key: HIVE-5783
 URL: https://issues.apache.org/jira/browse/HIVE-5783
 Project: Hive
  Issue Type: New Feature
  Components: Serializers/Deserializers
Reporter: Justin Coffey
Assignee: Justin Coffey
Priority: Minor
  Labels: Parquet
 Fix For: 0.13.0

 Attachments: HIVE-5783.noprefix.patch, HIVE-5783.noprefix.patch, 
 HIVE-5783.patch, HIVE-5783.patch, HIVE-5783.patch, HIVE-5783.patch, 
 HIVE-5783.patch, HIVE-5783.patch, HIVE-5783.patch, HIVE-5783.patch, 
 HIVE-5783.patch, HIVE-5783.patch, HIVE-5783.patch, HIVE-5783.patch, 
 HIVE-5783.patch, HIVE-5783.patch, HIVE-5783.patch, HIVE-5783.patch, 
 HIVE-5783.patch


 Problem Statement:
 Hive would be easier to use if it had native Parquet support. Our 
 organization, Criteo, uses Hive extensively. Therefore we built the Parquet 
 Hive integration and would like to now contribute that integration to Hive.
 About Parquet:
 Parquet is a columnar storage format for Hadoop and integrates with many 
 Hadoop ecosystem tools such as Thrift, Avro, Hadoop MapReduce, Cascading, 
 Pig, Drill, Crunch, and Hive. Pig, Crunch, and Drill all contain native 
 Parquet integration.
 Change Details:
 Parquet was built with dependency management in mind and therefore only a 
 single Parquet jar will be added as a dependency.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-5636) Introduce getPartitionColumns() functionality from HCatInputFormat

2014-02-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902787#comment-13902787
 ] 

Hive QA commented on HIVE-5636:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12629157/HIVE-5636.2.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5121 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucketmapjoin6
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1343/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1343/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12629157

 Introduce getPartitionColumns() functionality from HCatInputFormat
 --

 Key: HIVE-5636
 URL: https://issues.apache.org/jira/browse/HIVE-5636
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-5636.2.patch, HIVE-5636.patch


 As of HCat 0.5, we made the class InputJobInfo private for HCatalog use only, 
 and we made it so that setInput would not modify the InputJobInfo being 
 passed in.
 However, if a user of HCatInputFormat wants to know which partition columns 
 or data columns exist for the job, they are not able to do so directly from 
 HCatInputFormat and are forced to use InputJobInfo, which currently does not 
 work. Thus, we need to expose this functionality.
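
A hedged sketch of how a caller might use such accessors if HCatInputFormat exposed them; getPartitionColumns()/getDataColumns() here illustrate the proposal and are not a documented API:

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hive.hcatalog.data.schema.HCatSchema;
import org.apache.hive.hcatalog.mapreduce.HCatInputFormat;

public class HCatColumnsExample {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration());
    HCatInputFormat.setInput(job, "default", "my_table");

    // Hypothetical accessors proposed in this issue, so callers no longer need
    // to reach into the (now private) InputJobInfo:
    HCatSchema partitionCols = HCatInputFormat.getPartitionColumns(job.getConfiguration());
    HCatSchema dataCols = HCatInputFormat.getDataColumns(job.getConfiguration());
    System.out.println("partition columns: " + partitionCols.getFieldNames());
    System.out.println("data columns: " + dataCols.getFieldNames());
  }
}
{noformat}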



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6347) ZeroCopy read path for ORC RecordReader

2014-02-16 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902800#comment-13902800
 ] 

Gopal V commented on HIVE-6347:
---

munmap() is asynchronous and a delayed action.

 ZeroCopy read path for ORC RecordReader
 ---

 Key: HIVE-6347
 URL: https://issues.apache.org/jira/browse/HIVE-6347
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Affects Versions: tez-branch
Reporter: Gopal V
Assignee: Gopal V
 Attachments: HIVE-6347.1.patch, HIVE-6347.2-tez.patch, 
 HIVE-6347.3-tez.patch


 ORC can use the new HDFS Caching APIs and the ZeroCopy readers to avoid extra 
 data copies into memory while scanning files.
 Implement an ORC zero-copy read (zcr) codepath and a hive.orc.zerocopy flag.
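
For context, a minimal sketch of the HDFS zero-copy read API referred to above (the path and read size are placeholders; this is not the ORC reader code in the patches):

{noformat}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.ReadOption;
import org.apache.hadoop.io.ByteBufferPool;
import org.apache.hadoop.io.ElasticByteBufferPool;

public class ZeroCopyReadSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    Path file = new Path("/tmp/example.orc");   // placeholder path
    ByteBufferPool pool = new ElasticByteBufferPool();

    try (FSDataInputStream in = FileSystem.get(conf).open(file)) {
      // Zero-copy read: HDFS hands back a (possibly mmap-ed) buffer instead of
      // copying bytes into a caller-supplied array.
      ByteBuffer buf = in.read(pool, 1 << 20, EnumSet.of(ReadOption.SKIP_CHECKSUMS));
      if (buf != null) {
        try {
          // ... decode ORC stream bytes from buf ...
        } finally {
          in.releaseBuffer(buf);                // hand the buffer back when done
        }
      }
    }
  }
}
{noformat}

The munmap() remark above presumably refers to how such mmap-backed buffers are unmapped lazily after they are released.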



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6439) Introduce CBO step in Semantic Analyzer

2014-02-16 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6439:


Status: Open  (was: Patch Available)

 Introduce CBO step in Semantic Analyzer
 ---

 Key: HIVE-6439
 URL: https://issues.apache.org/jira/browse/HIVE-6439
 Project: Hive
  Issue Type: Sub-task
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-6439.1.patch


 This patch introduces a CBO step in the SemanticAnalyzer. For now the 
 CostBasedOptimizer is an empty shell. 
 The contract between the SemAly and the CBO is:
 - The CBO step is controlled by the 'hive.enable.cbo.flag' setting. 
 - When true, the Hive SemAly will hand the CBO a Hive operator tree (with 
 operators annotated with stats). If it can, the CBO will return a better plan 
 in Hive AST form.
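
A hedged sketch of what that contract could look like as a Java interface; the names are taken from the description or invented for illustration and may differ from the attached patch:

{noformat}
import org.apache.hadoop.conf.Configuration;

// Illustrative contract between the SemanticAnalyzer and the (for now empty)
// CostBasedOptimizer: the SemAly hands over a stats-annotated operator tree and
// gets back a rewritten AST only when the CBO finds a better plan.
public interface CostBasedOptimizerContract<OpTree, AstNode> {
  // Flag named in the description; checked by the SemAly before invoking the CBO.
  String CBO_FLAG = "hive.enable.cbo.flag";

  // Returns a better plan as a Hive AST, or null to fall back to the normal path.
  AstNode optimize(OpTree statsAnnotatedOperatorTree, Configuration conf);
}
{noformat}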



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Review Request 18172: HIVE-6439 Introduce CBO step in Semantic Analyzer

2014-02-16 Thread Harish Butani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18172/
---

Review request for hive.


Bugs: HIVE-6439
https://issues.apache.org/jira/browse/HIVE-6439


Repository: hive-git


Description
---

This patch introduces a CBO step in the SemanticAnalyzer. For now the 
CostBasedOptimizer is an empty shell. 
The contract between the SemAly and the CBO is:
The CBO step is controlled by the 'hive.enable.cbo.flag' setting.
When true, the Hive SemAly will hand the CBO a Hive operator tree (with operators 
annotated with stats). If it can, the CBO will return a better plan in Hive AST form.


Diffs
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java a182cd7 
  conf/hive-default.xml.template 0d08aa2 
  ql/src/java/org/apache/hadoop/hive/ql/QueryProperties.java 1ba5654 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/PreCBOOptimizer.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/optiq/CostBasedOptimizer.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java 52c39c0 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 77388dd 

Diff: https://reviews.apache.org/r/18172/diff/


Testing
---


Thanks,

Harish Butani



[jira] [Updated] (HIVE-6439) Introduce CBO step in Semantic Analyzer

2014-02-16 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6439:


Status: Patch Available  (was: Open)

 Introduce CBO step in Semantic Analyzer
 ---

 Key: HIVE-6439
 URL: https://issues.apache.org/jira/browse/HIVE-6439
 Project: Hive
  Issue Type: Sub-task
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-6439.1.patch, HIVE-6439.2.patch


 This patch introduces a CBO step in the SemanticAnalyzer. For now the 
 CostBasedOptimizer is an empty shell. 
 The contract between the SemAly and the CBO is:
 - The CBO step is controlled by the 'hive.enable.cbo.flag' setting. 
 - When true, the Hive SemAly will hand the CBO a Hive operator tree (with 
 operators annotated with stats). If it can, the CBO will return a better plan 
 in Hive AST form.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6439) Introduce CBO step in Semantic Analyzer

2014-02-16 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6439:


Attachment: HIVE-6439.2.patch

 Introduce CBO step in Semantic Analyzer
 ---

 Key: HIVE-6439
 URL: https://issues.apache.org/jira/browse/HIVE-6439
 Project: Hive
  Issue Type: Sub-task
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-6439.1.patch, HIVE-6439.2.patch


 This patch introduces a CBO step in the SemanticAnalyzer. For now the 
 CostBasedOptimizer is an empty shell. 
 The contract between the SemAly and the CBO is:
 - The CBO step is controlled by the 'hive.enable.cbo.flag' setting. 
 - When true, the Hive SemAly will hand the CBO a Hive operator tree (with 
 operators annotated with stats). If it can, the CBO will return a better plan 
 in Hive AST form.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6439) Introduce CBO step in Semantic Analyzer

2014-02-16 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902828#comment-13902828
 ] 

Harish Butani commented on HIVE-6439:
-

Review at https://reviews.apache.org/r/18172/

 Introduce CBO step in Semantic Analyzer
 ---

 Key: HIVE-6439
 URL: https://issues.apache.org/jira/browse/HIVE-6439
 Project: Hive
  Issue Type: Sub-task
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-6439.1.patch, HIVE-6439.2.patch


 This patch introduces a CBO step in the SemanticAnalyzer. For now the 
 CostBasedOptimizer is an empty shell. 
 The contract between the SemAly and the CBO is:
 - The CBO step is controlled by the 'hive.enable.cbo.flag' setting. 
 - When true, the Hive SemAly will hand the CBO a Hive operator tree (with 
 operators annotated with stats). If it can, the CBO will return a better plan 
 in Hive AST form.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6345) Add DECIMAL support to vectorized JOIN operators

2014-02-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902835#comment-13902835
 ] 

Hive QA commented on HIVE-6345:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12629212/HIVE-6345.3.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 5134 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_vectorization_ppd
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_aggregate
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_mapjoin
org.apache.hadoop.hive.ql.exec.vector.TestVectorizationContext.testBetweenFilters
org.apache.hadoop.hive.ql.exec.vector.TestVectorizationContext.testInFiltersAndExprs
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1347/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1347/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12629212

 Add DECIMAL support to vectorized JOIN operators
 

 Key: HIVE-6345
 URL: https://issues.apache.org/jira/browse/HIVE-6345
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor
Reporter: Remus Rusanu
Assignee: Remus Rusanu
  Labels: vectorization
 Attachments: HIVE-6345.2.patch, HIVE-6345.3.patch, HIVE-6345.3.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6325) Enable using multiple concurrent sessions in tez

2014-02-16 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902839#comment-13902839
 ] 

Vikram Dixit K commented on HIVE-6325:
--

One more try. Face palm. Made a mistake in the last version.

 Enable using multiple concurrent sessions in tez
 

 Key: HIVE-6325
 URL: https://issues.apache.org/jira/browse/HIVE-6325
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-6325.1.patch, HIVE-6325.2.patch, HIVE-6325.3.patch, 
 HIVE-6325.4.patch, HIVE-6325.5.patch, HIVE-6325.6.patch


 We would like to enable multiple concurrent sessions in tez via hive server 
 2. This will enable users to make efficient use of the cluster when it has 
 been partitioned using yarn queues.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6325) Enable using multiple concurrent sessions in tez

2014-02-16 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-6325:
-

Status: Open  (was: Patch Available)

 Enable using multiple concurrent sessions in tez
 

 Key: HIVE-6325
 URL: https://issues.apache.org/jira/browse/HIVE-6325
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-6325.1.patch, HIVE-6325.2.patch, HIVE-6325.3.patch, 
 HIVE-6325.4.patch, HIVE-6325.5.patch, HIVE-6325.6.patch


 We would like to enable multiple concurrent sessions in tez via hive server 
 2. This will enable users to make efficient use of the cluster when it has 
 been partitioned using yarn queues.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6325) Enable using multiple concurrent sessions in tez

2014-02-16 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-6325:
-

Attachment: HIVE-6325.7.patch

 Enable using multiple concurrent sessions in tez
 

 Key: HIVE-6325
 URL: https://issues.apache.org/jira/browse/HIVE-6325
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-6325.1.patch, HIVE-6325.2.patch, HIVE-6325.3.patch, 
 HIVE-6325.4.patch, HIVE-6325.5.patch, HIVE-6325.6.patch, HIVE-6325.7.patch


 We would like to enable multiple concurrent sessions in tez via hive server 
 2. This will enable users to make efficient use of the cluster when it has 
 been partitioned using yarn queues.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6325) Enable using multiple concurrent sessions in tez

2014-02-16 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-6325:
-

Status: Patch Available  (was: Open)

 Enable using multiple concurrent sessions in tez
 

 Key: HIVE-6325
 URL: https://issues.apache.org/jira/browse/HIVE-6325
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-6325.1.patch, HIVE-6325.2.patch, HIVE-6325.3.patch, 
 HIVE-6325.4.patch, HIVE-6325.5.patch, HIVE-6325.6.patch, HIVE-6325.7.patch


 We would like to enable multiple concurrent sessions in tez via hive server 
 2. This will enable users to make efficient use of the cluster when it has 
 been partitioned using yarn queues.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Review Request 18172: HIVE-6439 Introduce CBO step in Semantic Analyzer

2014-02-16 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18172/#review34599
---



ql/src/java/org/apache/hadoop/hive/ql/QueryProperties.java
https://reviews.apache.org/r/18172/#comment64759

indentation needs fixing (2 spaces)



ql/src/java/org/apache/hadoop/hive/ql/QueryProperties.java
https://reviews.apache.org/r/18172/#comment64758

The coding conventions followed by Hive require braces with if statements.

As Hive follows the Sun/Java code conventions (except for an indentation of 2 
chars and a line limit of 100 chars), you can select the Java code convention 
in the Eclipse formatter, edit the Java profile for these two settings, and 
save it as a Hive profile.

Then highlight your section of new/edited code and right-click, Source > Format.
I will add these instructions to HowToContribute once the wiki is working 
again.




ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
https://reviews.apache.org/r/18172/#comment64760

indentation issues



ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
https://reviews.apache.org/r/18172/#comment64761

braces needed for if



- Thejas Nair


On Feb. 16, 2014, 8:39 p.m., Harish Butani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/18172/
 ---
 
 (Updated Feb. 16, 2014, 8:39 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6439
 https://issues.apache.org/jira/browse/HIVE-6439
 
 
 Repository: hive-git
 
 
 Description
 ---
 
  This patch introduces a CBO step in the SemanticAnalyzer. For now the 
  CostBasedOptimizer is an empty shell. 
  The contract between the SemAly and the CBO is:
  The CBO step is controlled by the 'hive.enable.cbo.flag' setting.
  When true, the Hive SemAly will hand the CBO a Hive operator tree (with 
  operators annotated with stats). If it can, the CBO will return a better plan 
  in Hive AST form.
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java a182cd7 
   conf/hive-default.xml.template 0d08aa2 
   ql/src/java/org/apache/hadoop/hive/ql/QueryProperties.java 1ba5654 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/PreCBOOptimizer.java 
 PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/optimizer/optiq/CostBasedOptimizer.java 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java 52c39c0 
   ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 77388dd 
 
 Diff: https://reviews.apache.org/r/18172/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Harish Butani
 




Re: Review Request 18172: HIVE-6439 Introduce CBO step in Semantic Analyzer

2014-02-16 Thread Gunther Hagleitner

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18172/#review34602
---



conf/hive-default.xml.template
https://reviews.apache.org/r/18172/#comment64764

max joins is missing.



ql/src/java/org/apache/hadoop/hive/ql/optimizer/optiq/CostBasedOptimizer.java
https://reviews.apache.org/r/18172/#comment64765

if this is meant to be a generic contract this shouldn't be in the optiq 
package.



ql/src/java/org/apache/hadoop/hive/ql/optimizer/optiq/CostBasedOptimizer.java
https://reviews.apache.org/r/18172/#comment64763

Why can't we use a generic type here?


- Gunther Hagleitner


On Feb. 16, 2014, 8:39 p.m., Harish Butani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/18172/
 ---
 
 (Updated Feb. 16, 2014, 8:39 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6439
 https://issues.apache.org/jira/browse/HIVE-6439
 
 
 Repository: hive-git
 
 
 Description
 ---
 
  This patch introduces a CBO step in the SemanticAnalyzer. For now the 
  CostBasedOptimizer is an empty shell. 
  The contract between the SemAly and the CBO is:
  The CBO step is controlled by the 'hive.enable.cbo.flag' setting.
  When true, the Hive SemAly will hand the CBO a Hive operator tree (with 
  operators annotated with stats). If it can, the CBO will return a better plan 
  in Hive AST form.
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java a182cd7 
   conf/hive-default.xml.template 0d08aa2 
   ql/src/java/org/apache/hadoop/hive/ql/QueryProperties.java 1ba5654 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/PreCBOOptimizer.java 
 PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/optimizer/optiq/CostBasedOptimizer.java 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java 52c39c0 
   ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 77388dd 
 
 Diff: https://reviews.apache.org/r/18172/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Harish Butani
 




Re: Review Request 18172: HIVE-6439 Introduce CBO step in Semantic Analyzer

2014-02-16 Thread Gunther Hagleitner

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18172/#review34603
---



common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
https://reviews.apache.org/r/18172/#comment64766

this shouldn't be part of the contract should it?


- Gunther Hagleitner


On Feb. 16, 2014, 8:39 p.m., Harish Butani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/18172/
 ---
 
 (Updated Feb. 16, 2014, 8:39 p.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6439
 https://issues.apache.org/jira/browse/HIVE-6439
 
 
 Repository: hive-git
 
 
 Description
 ---
 
  This patch introduces a CBO step in the SemanticAnalyzer. For now the 
  CostBasedOptimizer is an empty shell. 
  The contract between the SemAly and the CBO is:
  The CBO step is controlled by the 'hive.enable.cbo.flag' setting.
  When true, the Hive SemAly will hand the CBO a Hive operator tree (with 
  operators annotated with stats). If it can, the CBO will return a better plan 
  in Hive AST form.
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java a182cd7 
   conf/hive-default.xml.template 0d08aa2 
   ql/src/java/org/apache/hadoop/hive/ql/QueryProperties.java 1ba5654 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/PreCBOOptimizer.java 
 PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/optimizer/optiq/CostBasedOptimizer.java 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java 52c39c0 
   ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 77388dd 
 
 Diff: https://reviews.apache.org/r/18172/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Harish Butani
 




[jira] [Commented] (HIVE-6439) Introduce CBO step in Semantic Analyzer

2014-02-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902860#comment-13902860
 ] 

Hive QA commented on HIVE-6439:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12629280/HIVE-6439.2.patch

{color:green}SUCCESS:{color} +1 5120 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1348/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1348/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12629280

 Introduce CBO step in Semantic Analyzer
 ---

 Key: HIVE-6439
 URL: https://issues.apache.org/jira/browse/HIVE-6439
 Project: Hive
  Issue Type: Sub-task
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-6439.1.patch, HIVE-6439.2.patch


 This patch introduces a CBO step in the SemanticAnalyzer. For now the 
 CostBasedOptimizer is an empty shell. 
 The contract between the SemAly and the CBO is:
 - The CBO step is controlled by the 'hive.enable.cbo.flag' setting. 
 - When true, the Hive SemAly will hand the CBO a Hive operator tree (with 
 operators annotated with stats). If it can, the CBO will return a better plan 
 in Hive AST form.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6325) Enable using multiple concurrent sessions in tez

2014-02-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902884#comment-13902884
 ] 

Hive QA commented on HIVE-6325:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12629282/HIVE-6325.7.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5123 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket5
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1349/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1349/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12629282

 Enable using multiple concurrent sessions in tez
 

 Key: HIVE-6325
 URL: https://issues.apache.org/jira/browse/HIVE-6325
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-6325.1.patch, HIVE-6325.2.patch, HIVE-6325.3.patch, 
 HIVE-6325.4.patch, HIVE-6325.5.patch, HIVE-6325.6.patch, HIVE-6325.7.patch


 We would like to enable multiple concurrent sessions in tez via hive server 
 2. This will enable users to make efficient use of the cluster when it has 
 been partitioned using yarn queues.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6386) sql std auth - database should have an owner

2014-02-16 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902885#comment-13902885
 ] 

Lefty Leverenz commented on HIVE-6386:
--

Can the owner be changed?

 sql std auth - database should have an owner
 

 Key: HIVE-6386
 URL: https://issues.apache.org/jira/browse/HIVE-6386
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization, Metastore
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6386.1.patch, HIVE-6386.2.patch, HIVE-6386.3.patch, 
 HIVE-6386.4.patch, HIVE-6386.patch


 A database in the metastore does not have an owner associated with it. A 
 database owner is needed for the SQL std authorization rules.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6386) sql std auth - database should have an owner

2014-02-16 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902894#comment-13902894
 ] 

Thejas M Nair commented on HIVE-6386:
-

bq. Can the owner be changed?
Not with the changes in this jira. I have created another jira for that feature 
- HIVE-6440

 sql std auth - database should have an owner
 

 Key: HIVE-6386
 URL: https://issues.apache.org/jira/browse/HIVE-6386
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization, Metastore
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6386.1.patch, HIVE-6386.2.patch, HIVE-6386.3.patch, 
 HIVE-6386.4.patch, HIVE-6386.patch


 A database in the metastore does not have an owner associated with it. A 
 database owner is needed for the SQL std authorization rules.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6440) sql std auth - add command to change owner of database

2014-02-16 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-6440:
---

 Summary: sql std auth - add command to change owner of database
 Key: HIVE-6440
 URL: https://issues.apache.org/jira/browse/HIVE-6440
 Project: Hive
  Issue Type: Sub-task
Reporter: Thejas M Nair


It should be possible to change the owner of a database once it is created.




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Review Request 17887: Support subquery for single sourced multi query

2014-02-16 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/17887/
---

(Updated Feb. 17, 2014, 1:17 a.m.)


Review request for hive.


Changes
---

Rebased to trunk. 

TOK_QUERY ^(TOK_FROM TOK_INSERT) is changed to TOK_QUERY ^(TOK_INSERT TOK_FROM) 
to simplify replacing the INSERT clause (see the top-level UNION_ALL cases).


Bugs: HIVE-5690
https://issues.apache.org/jira/browse/HIVE-5690


Repository: hive-git


Description
---

Single-sourced multi-insert queries are very useful for various ETL processes, 
but they do not allow subqueries to be included. For example, 
{noformat}
explain from src 
insert overwrite table x1 select * from (select distinct key,value) b order by 
key
insert overwrite table x2 select * from (select distinct key,value) c order by 
value;
{noformat}


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/FromClauseParser.g 97ce484 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 4d58f96 
  ql/src/java/org/apache/hadoop/hive/ql/parse/QB.java a8b436e 
  ql/src/java/org/apache/hadoop/hive/ql/parse/QBParseInfo.java a7cec5d 
  ql/src/java/org/apache/hadoop/hive/ql/parse/QBSubQuery.java 92ccbea 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 77388dd 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SubQueryUtils.java 8ffbe07 
  ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java 9a947ec 
  ql/src/test/org/apache/hadoop/hive/ql/parse/TestQBSubQuery.java 7e57471 
  ql/src/test/queries/clientpositive/multi_insert_subquery.q PRE-CREATION 
  ql/src/test/results/clientnegative/create_view_failure3.q.out 5ddbdb6 
  ql/src/test/results/clientnegative/subquery_exists_implicit_gby.q.out 4830c00 
  ql/src/test/results/clientnegative/subquery_in_groupby.q.out 809bb0a 
  ql/src/test/results/clientnegative/subquery_in_select.q.out 3d74132 
  ql/src/test/results/clientnegative/subquery_multiple_cols_in_select.q.out 
7a16bae 
  ql/src/test/results/clientnegative/subquery_nested_subquery.q.out 68a3a98 
  ql/src/test/results/clientnegative/subquery_notexists_implicit_gby.q.out 
74422af 
  ql/src/test/results/clientnegative/subquery_subquery_chain.q.out 448bfb2 
  ql/src/test/results/clientnegative/subquery_windowing_corr.q.out 3cc2fa4 
  ql/src/test/results/clientnegative/uniquejoin3.q.out e10a47b 
  ql/src/test/results/clientpositive/alter_partition_coltype.q.out 49c1051 
  ql/src/test/results/clientpositive/annotate_stats_filter.q.out e6eae8a 
  ql/src/test/results/clientpositive/annotate_stats_groupby.q.out e55c35b 
  ql/src/test/results/clientpositive/annotate_stats_join.q.out 523d386 
  ql/src/test/results/clientpositive/annotate_stats_limit.q.out e6db870 
  ql/src/test/results/clientpositive/annotate_stats_part.q.out 2a56d6e 
  ql/src/test/results/clientpositive/annotate_stats_select.q.out 023b1c3 
  ql/src/test/results/clientpositive/annotate_stats_table.q.out 89fa6b1 
  ql/src/test/results/clientpositive/annotate_stats_union.q.out df1e386 
  ql/src/test/results/clientpositive/auto_join_reordering_values.q.out 48ca65f 
  ql/src/test/results/clientpositive/auto_sortmerge_join_1.q.out e84e7b2 
  ql/src/test/results/clientpositive/auto_sortmerge_join_11.q.out 8ac2c06 
  ql/src/test/results/clientpositive/auto_sortmerge_join_12.q.out d462218 
  ql/src/test/results/clientpositive/auto_sortmerge_join_2.q.out 0488485 
  ql/src/test/results/clientpositive/auto_sortmerge_join_3.q.out 1537f65 
  ql/src/test/results/clientpositive/auto_sortmerge_join_4.q.out 6dd49c4 
  ql/src/test/results/clientpositive/auto_sortmerge_join_5.q.out 0f4f59f 
  ql/src/test/results/clientpositive/auto_sortmerge_join_7.q.out b176c55 
  ql/src/test/results/clientpositive/auto_sortmerge_join_8.q.out 5d2342c 
  ql/src/test/results/clientpositive/binary_output_format.q.out bcfb8eb 
  ql/src/test/results/clientpositive/bucket1.q.out 5ade5f8 
  ql/src/test/results/clientpositive/bucket2.q.out 672903d 
  ql/src/test/results/clientpositive/bucket3.q.out 9232f6b 
  ql/src/test/results/clientpositive/bucket4.q.out fb2f619 
  ql/src/test/results/clientpositive/bucket5.q.out 8a49352 
  ql/src/test/results/clientpositive/bucket_map_join_1.q.out 75bcda8 
  ql/src/test/results/clientpositive/bucket_map_join_2.q.out a737f82 
  ql/src/test/results/clientpositive/bucketcontext_1.q.out 930be79 
  ql/src/test/results/clientpositive/bucketcontext_2.q.out 88f747a 
  ql/src/test/results/clientpositive/bucketcontext_3.q.out 3da1cc9 
  ql/src/test/results/clientpositive/bucketcontext_4.q.out 33dee62 
  ql/src/test/results/clientpositive/bucketcontext_5.q.out eb751f3 
  ql/src/test/results/clientpositive/bucketcontext_6.q.out 320b8b9 
  ql/src/test/results/clientpositive/bucketcontext_7.q.out ef4f295 
  ql/src/test/results/clientpositive/bucketcontext_8.q.out f9e6835 
  ql/src/test/results/clientpositive/bucketmapjoin1.q.out 81ca8a7 
  

Re: Review Request 17887: Support subquery for single sourced multi query

2014-02-16 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/17887/
---

(Updated Feb. 17, 2014, 1:18 a.m.)


Review request for hive.


Bugs: HIVE-5690
https://issues.apache.org/jira/browse/HIVE-5690


Repository: hive-git


Description
---

Single-sourced multi-insert queries are very useful for various ETL processes, 
but they do not allow subqueries to be included. For example, 
{noformat}
explain from src 
insert overwrite table x1 select * from (select distinct key,value) b order by 
key
insert overwrite table x2 select * from (select distinct key,value) c order by 
value;
{noformat}


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/FromClauseParser.g 97ce484 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 4d58f96 
  ql/src/java/org/apache/hadoop/hive/ql/parse/QB.java a8b436e 
  ql/src/java/org/apache/hadoop/hive/ql/parse/QBParseInfo.java a7cec5d 
  ql/src/java/org/apache/hadoop/hive/ql/parse/QBSubQuery.java 92ccbea 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 77388dd 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SubQueryUtils.java 8ffbe07 
  ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java 9a947ec 
  ql/src/test/org/apache/hadoop/hive/ql/parse/TestQBSubQuery.java 7e57471 
  ql/src/test/queries/clientpositive/multi_insert_subquery.q PRE-CREATION 
  ql/src/test/results/clientnegative/create_view_failure3.q.out 5ddbdb6 
  ql/src/test/results/clientnegative/subquery_exists_implicit_gby.q.out 4830c00 
  ql/src/test/results/clientnegative/subquery_in_groupby.q.out 809bb0a 
  ql/src/test/results/clientnegative/subquery_in_select.q.out 3d74132 
  ql/src/test/results/clientnegative/subquery_multiple_cols_in_select.q.out 
7a16bae 
  ql/src/test/results/clientnegative/subquery_nested_subquery.q.out 68a3a98 
  ql/src/test/results/clientnegative/subquery_notexists_implicit_gby.q.out 
74422af 
  ql/src/test/results/clientnegative/subquery_subquery_chain.q.out 448bfb2 
  ql/src/test/results/clientnegative/subquery_windowing_corr.q.out 3cc2fa4 
  ql/src/test/results/clientnegative/uniquejoin3.q.out e10a47b 
  ql/src/test/results/clientpositive/alter_partition_coltype.q.out 49c1051 
  ql/src/test/results/clientpositive/annotate_stats_filter.q.out e6eae8a 
  ql/src/test/results/clientpositive/annotate_stats_groupby.q.out e55c35b 
  ql/src/test/results/clientpositive/annotate_stats_join.q.out 523d386 
  ql/src/test/results/clientpositive/annotate_stats_limit.q.out e6db870 
  ql/src/test/results/clientpositive/annotate_stats_part.q.out 2a56d6e 
  ql/src/test/results/clientpositive/annotate_stats_select.q.out 023b1c3 
  ql/src/test/results/clientpositive/annotate_stats_table.q.out 89fa6b1 
  ql/src/test/results/clientpositive/annotate_stats_union.q.out df1e386 
  ql/src/test/results/clientpositive/auto_join_reordering_values.q.out 48ca65f 
  ql/src/test/results/clientpositive/auto_sortmerge_join_1.q.out e84e7b2 
  ql/src/test/results/clientpositive/auto_sortmerge_join_11.q.out 8ac2c06 
  ql/src/test/results/clientpositive/auto_sortmerge_join_12.q.out d462218 
  ql/src/test/results/clientpositive/auto_sortmerge_join_2.q.out 0488485 
  ql/src/test/results/clientpositive/auto_sortmerge_join_3.q.out 1537f65 
  ql/src/test/results/clientpositive/auto_sortmerge_join_4.q.out 6dd49c4 
  ql/src/test/results/clientpositive/auto_sortmerge_join_5.q.out 0f4f59f 
  ql/src/test/results/clientpositive/auto_sortmerge_join_7.q.out b176c55 
  ql/src/test/results/clientpositive/auto_sortmerge_join_8.q.out 5d2342c 
  ql/src/test/results/clientpositive/binary_output_format.q.out bcfb8eb 
  ql/src/test/results/clientpositive/bucket1.q.out 5ade5f8 
  ql/src/test/results/clientpositive/bucket2.q.out 672903d 
  ql/src/test/results/clientpositive/bucket3.q.out 9232f6b 
  ql/src/test/results/clientpositive/bucket4.q.out fb2f619 
  ql/src/test/results/clientpositive/bucket5.q.out 8a49352 
  ql/src/test/results/clientpositive/bucket_map_join_1.q.out 75bcda8 
  ql/src/test/results/clientpositive/bucket_map_join_2.q.out a737f82 
  ql/src/test/results/clientpositive/bucketcontext_1.q.out 930be79 
  ql/src/test/results/clientpositive/bucketcontext_2.q.out 88f747a 
  ql/src/test/results/clientpositive/bucketcontext_3.q.out 3da1cc9 
  ql/src/test/results/clientpositive/bucketcontext_4.q.out 33dee62 
  ql/src/test/results/clientpositive/bucketcontext_5.q.out eb751f3 
  ql/src/test/results/clientpositive/bucketcontext_6.q.out 320b8b9 
  ql/src/test/results/clientpositive/bucketcontext_7.q.out ef4f295 
  ql/src/test/results/clientpositive/bucketcontext_8.q.out f9e6835 
  ql/src/test/results/clientpositive/bucketmapjoin1.q.out 81ca8a7 
  ql/src/test/results/clientpositive/bucketmapjoin10.q.out 60c66ea 
  ql/src/test/results/clientpositive/bucketmapjoin11.q.out 2cc2bd4 
  ql/src/test/results/clientpositive/bucketmapjoin12.q.out 2da135e 
  

[jira] [Commented] (HIVE-5690) Support subquery for single sourced multi query

2014-02-16 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13902899#comment-13902899
 ] 

Navis commented on HIVE-5690:
-

The test failures seem unrelated to this.

Could anyone review this? This patch extends the versatility of single-sourced 
multi-insert queries. It is really useful for tables with complex-type columns.

 Support subquery for single sourced multi query
 ---

 Key: HIVE-5690
 URL: https://issues.apache.org/jira/browse/HIVE-5690
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: D13791.1.patch, HIVE-5690.2.patch.txt, 
 HIVE-5690.3.patch.txt, HIVE-5690.4.patch.txt, HIVE-5690.5.patch.txt


 Single-sourced multi-insert queries are very useful for various ETL processes, 
 but they do not allow subqueries to be included. For example, 
 {noformat}
 explain from src 
 insert overwrite table x1 select * from (select distinct key,value) b order 
 by key
 insert overwrite table x2 select * from (select distinct key,value) c order 
 by value;
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Review Request 17887: Support subquery for single sourced multi query

2014-02-16 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/17887/
---

(Updated Feb. 17, 2014, 1:17 a.m.)


Review request for hive.


Bugs: HIVE-5690
https://issues.apache.org/jira/browse/HIVE-5690


Repository: hive-git


Description
---

Single-sourced multi-insert queries are very useful for various ETL processes, 
but they do not allow subqueries to be included. For example, 
{noformat}
explain from src 
insert overwrite table x1 select * from (select distinct key,value) b order by 
key
insert overwrite table x2 select * from (select distinct key,value) c order by 
value;
{noformat}


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/parse/FromClauseParser.g 97ce484 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 4d58f96 
  ql/src/java/org/apache/hadoop/hive/ql/parse/QB.java a8b436e 
  ql/src/java/org/apache/hadoop/hive/ql/parse/QBParseInfo.java a7cec5d 
  ql/src/java/org/apache/hadoop/hive/ql/parse/QBSubQuery.java 92ccbea 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 77388dd 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SubQueryUtils.java 8ffbe07 
  ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java 9a947ec 
  ql/src/test/org/apache/hadoop/hive/ql/parse/TestQBSubQuery.java 7e57471 
  ql/src/test/queries/clientpositive/multi_insert_subquery.q PRE-CREATION 
  ql/src/test/results/clientnegative/create_view_failure3.q.out 5ddbdb6 
  ql/src/test/results/clientnegative/subquery_exists_implicit_gby.q.out 4830c00 
  ql/src/test/results/clientnegative/subquery_in_groupby.q.out 809bb0a 
  ql/src/test/results/clientnegative/subquery_in_select.q.out 3d74132 
  ql/src/test/results/clientnegative/subquery_multiple_cols_in_select.q.out 
7a16bae 
  ql/src/test/results/clientnegative/subquery_nested_subquery.q.out 68a3a98 
  ql/src/test/results/clientnegative/subquery_notexists_implicit_gby.q.out 
74422af 
  ql/src/test/results/clientnegative/subquery_subquery_chain.q.out 448bfb2 
  ql/src/test/results/clientnegative/subquery_windowing_corr.q.out 3cc2fa4 
  ql/src/test/results/clientnegative/uniquejoin3.q.out e10a47b 
  ql/src/test/results/clientpositive/alter_partition_coltype.q.out 49c1051 
  ql/src/test/results/clientpositive/annotate_stats_filter.q.out e6eae8a 
  ql/src/test/results/clientpositive/annotate_stats_groupby.q.out e55c35b 
  ql/src/test/results/clientpositive/annotate_stats_join.q.out 523d386 
  ql/src/test/results/clientpositive/annotate_stats_limit.q.out e6db870 
  ql/src/test/results/clientpositive/annotate_stats_part.q.out 2a56d6e 
  ql/src/test/results/clientpositive/annotate_stats_select.q.out 023b1c3 
  ql/src/test/results/clientpositive/annotate_stats_table.q.out 89fa6b1 
  ql/src/test/results/clientpositive/annotate_stats_union.q.out df1e386 
  ql/src/test/results/clientpositive/auto_join_reordering_values.q.out 48ca65f 
  ql/src/test/results/clientpositive/auto_sortmerge_join_1.q.out e84e7b2 
  ql/src/test/results/clientpositive/auto_sortmerge_join_11.q.out 8ac2c06 
  ql/src/test/results/clientpositive/auto_sortmerge_join_12.q.out d462218 
  ql/src/test/results/clientpositive/auto_sortmerge_join_2.q.out 0488485 
  ql/src/test/results/clientpositive/auto_sortmerge_join_3.q.out 1537f65 
  ql/src/test/results/clientpositive/auto_sortmerge_join_4.q.out 6dd49c4 
  ql/src/test/results/clientpositive/auto_sortmerge_join_5.q.out 0f4f59f 
  ql/src/test/results/clientpositive/auto_sortmerge_join_7.q.out b176c55 
  ql/src/test/results/clientpositive/auto_sortmerge_join_8.q.out 5d2342c 
  ql/src/test/results/clientpositive/binary_output_format.q.out bcfb8eb 
  ql/src/test/results/clientpositive/bucket1.q.out 5ade5f8 
  ql/src/test/results/clientpositive/bucket2.q.out 672903d 
  ql/src/test/results/clientpositive/bucket3.q.out 9232f6b 
  ql/src/test/results/clientpositive/bucket4.q.out fb2f619 
  ql/src/test/results/clientpositive/bucket5.q.out 8a49352 
  ql/src/test/results/clientpositive/bucket_map_join_1.q.out 75bcda8 
  ql/src/test/results/clientpositive/bucket_map_join_2.q.out a737f82 
  ql/src/test/results/clientpositive/bucketcontext_1.q.out 930be79 
  ql/src/test/results/clientpositive/bucketcontext_2.q.out 88f747a 
  ql/src/test/results/clientpositive/bucketcontext_3.q.out 3da1cc9 
  ql/src/test/results/clientpositive/bucketcontext_4.q.out 33dee62 
  ql/src/test/results/clientpositive/bucketcontext_5.q.out eb751f3 
  ql/src/test/results/clientpositive/bucketcontext_6.q.out 320b8b9 
  ql/src/test/results/clientpositive/bucketcontext_7.q.out ef4f295 
  ql/src/test/results/clientpositive/bucketcontext_8.q.out f9e6835 
  ql/src/test/results/clientpositive/bucketmapjoin1.q.out 81ca8a7 
  ql/src/test/results/clientpositive/bucketmapjoin10.q.out 60c66ea 
  ql/src/test/results/clientpositive/bucketmapjoin11.q.out 2cc2bd4 
  ql/src/test/results/clientpositive/bucketmapjoin12.q.out 2da135e 
  

Re: Timeline for the Hive 0.13 release?

2014-02-16 Thread Navis류승우
HIVE-6037 is for generating hive-default.template file from HiveConf. Could
it be included in this release? If it's not, I'll suspend further rebasing
of it till next release (conflicts too frequently).


2014-02-16 20:38 GMT+09:00 Lefty Leverenz leftylever...@gmail.com:

 I'll try to catch up on the wikidocs backlog for 0.13.0 patches in time for
 the release.  It's a long and growing list, though, so no promises.

 Feel free to do your own documentation, or hand it off to a friendly
 in-house writer.

 -- Lefty, self-appointed Hive docs maven



 On Sat, Feb 15, 2014 at 1:28 PM, Thejas Nair the...@hortonworks.com
 wrote:

  Sounds good to me.
 
 
  On Fri, Feb 14, 2014 at 7:29 PM, Harish Butani hbut...@hortonworks.com
  wrote:
 
   Hi,
  
   Its mid feb. Wanted to check if the community is ready to cut a branch.
   Could we cut the branch in a week , say 5pm PST 2/21/14?
   The goal is to keep the release cycle short: couple of weeks; so after
  the
   branch we go into stabilizing mode for hive 0.13, checking in only
   blocker/critical bug fixes.
  
   regards,
   Harish.
  
  
   On Jan 20, 2014, at 9:25 AM, Brock Noland br...@cloudera.com wrote:
  
Hi,
   
I agree that picking a date to branch and then restricting commits to
   that
branch would be a less time intensive plan for the RM.
   
Brock
   
   
On Sat, Jan 18, 2014 at 4:21 PM, Harish Butani 
  hbut...@hortonworks.com
   wrote:
   
Yes agree it is time to start planning for the next release.
I would like to volunteer to do the release management duties for
 this
release(will be a great experience for me)
Will be happy to do it, if the community is fine with this.
   
regards,
Harish.
   
On Jan 17, 2014, at 7:05 PM, Thejas Nair the...@hortonworks.com
   wrote:
   
Yes, I think it is time to start planning for the next release.
For 0.12 release I created a branch and then accepted patches that
people asked to be included for sometime, before moving a phase of
accepting only critical bug fixes. This turned out to be laborious.
I think we should instead give everyone a few weeks to get any
  patches
they are working on to be ready, cut the branch, and take in only
critical bug fixes to the branch after that.
How about cutting the branch around mid-February and targeting to
release in a week or two after that.
   
Thanks,
Thejas
   
   
On Fri, Jan 17, 2014 at 4:39 PM, Carl Steinbach c...@apache.org
   wrote:
I was wondering what people think about setting a tentative date
 for
   the
Hive 0.13 release? At an old Hive Contrib meeting we agreed that
  Hive
should follow a time-based release model with new releases every
  four
months. If we follow that schedule we're due for the next release
 in
mid-February.
   
Thoughts?
   
Thanks.
   
Carl
   
   
   
   
   
   
   
--
Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org
  
  
  
 
  

[jira] [Commented] (HIVE-5958) SQL std auth - authorize statements that work with paths

2014-02-16 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13902901#comment-13902901
 ] 

Navis commented on HIVE-5958:
-

[~thejas] I've been a little frustrated with regenerating q.out files for too many 
issues. I would really appreciate it if anyone could help with this process.

 SQL std auth - authorize statements that work with paths
 

 Key: HIVE-5958
 URL: https://issues.apache.org/jira/browse/HIVE-5958
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5958.1.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 Statements such as create table and alter table that specify a path URI should 
 be allowed under the new authorization scheme only if the specified URI (path) has 
 the required permissions, including read/write access to and ownership of the file/dir and its 
 children.
 Also, fix the issue of the database not getting set as an output for create-table.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6037) Synchronize HiveConf with hive-default.xml.template and support show conf

2014-02-16 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13902902#comment-13902902
 ] 

Brock Noland commented on HIVE-6037:


+1 pending tests

 Synchronize HiveConf with hive-default.xml.template and support show conf
 -

 Key: HIVE-6037
 URL: https://issues.apache.org/jira/browse/HIVE-6037
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: CHIVE-6037.3.patch.txt, HIVE-6037.1.patch.txt, 
 HIVE-6037.10.patch.txt, HIVE-6037.2.patch.txt, HIVE-6037.4.patch.txt, 
 HIVE-6037.5.patch.txt, HIVE-6037.6.patch.txt, HIVE-6037.7.patch.txt, 
 HIVE-6037.8.patch.txt, HIVE-6037.9.patch.txt


 see HIVE-5879



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Timeline for the Hive 0.13 release?

2014-02-16 Thread Brock Noland
I'd love to see HIVE-6037 in the 0.13 release. I have +1'ed it pending tests.

Brock

On Sun, Feb 16, 2014 at 7:23 PM, Navis류승우 navis@nexr.com wrote:
 HIVE-6037 is for generating hive-default.template file from HiveConf. Could
 it be included in this release? If it's not, I'll suspend further rebasing
 of it till next release (conflicts too frequently).


 2014-02-16 20:38 GMT+09:00 Lefty Leverenz leftylever...@gmail.com:

 I'll try to catch up on the wikidocs backlog for 0.13.0 patches in time for
 the release.  It's a long and growing list, though, so no promises.

 Feel free to do your own documentation, or hand it off to a friendly
 in-house writer.

 -- Lefty, self-appointed Hive docs maven



 On Sat, Feb 15, 2014 at 1:28 PM, Thejas Nair the...@hortonworks.com
 wrote:

  Sounds good to me.
 
 
  On Fri, Feb 14, 2014 at 7:29 PM, Harish Butani hbut...@hortonworks.com
  wrote:
 
   Hi,
  
   Its mid feb. Wanted to check if the community is ready to cut a branch.
   Could we cut the branch in a week , say 5pm PST 2/21/14?
   The goal is to keep the release cycle short: couple of weeks; so after
  the
   branch we go into stabilizing mode for hive 0.13, checking in only
   blocker/critical bug fixes.
  
   regards,
   Harish.
  
  
   On Jan 20, 2014, at 9:25 AM, Brock Noland br...@cloudera.com wrote:
  
Hi,
   
I agree that picking a date to branch and then restricting commits to
   that
branch would be a less time intensive plan for the RM.
   
Brock
   
   
On Sat, Jan 18, 2014 at 4:21 PM, Harish Butani 
  hbut...@hortonworks.com
   wrote:
   
Yes agree it is time to start planning for the next release.
I would like to volunteer to do the release management duties for
 this
release(will be a great experience for me)
Will be happy to do it, if the community is fine with this.
   
regards,
Harish.
   
On Jan 17, 2014, at 7:05 PM, Thejas Nair the...@hortonworks.com
   wrote:
   
Yes, I think it is time to start planning for the next release.
For 0.12 release I created a branch and then accepted patches that
people asked to be included for sometime, before moving a phase of
accepting only critical bug fixes. This turned out to be laborious.
I think we should instead give everyone a few weeks to get any
  patches
they are working on to be ready, cut the branch, and take in only
critical bug fixes to the branch after that.
How about cutting the branch around mid-February and targeting to
release in a week or two after that.
   
Thanks,
Thejas
   
   
On Fri, Jan 17, 2014 at 4:39 PM, Carl Steinbach c...@apache.org
   wrote:
I was wondering what people think about setting a tentative date
 for
   the
Hive 0.13 release? At an old Hive Contrib meeting we agreed that
  Hive
should follow a time-based release model with new releases every
  four
months. If we follow that schedule we're due for the next release
 in
mid-February.
   
Thoughts?
   
Thanks.
   
Carl
   
   
   
   
   
   
   
--
Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org
  
  
   

[jira] [Commented] (HIVE-6439) Introduce CBO step in Semantic Analyzer

2014-02-16 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13902903#comment-13902903
 ] 

Brock Noland commented on HIVE-6439:


Hi Laljo,

Essentially that is correct. The problem with that code is that it catches 
Throwable and logs at debug level. This means nasty errors such as OOM or internal 
JVM errors will not be logged, since most users do not log at the debug level in 
production. I'd suggest catching Exception instead of Throwable. Catching Exception 
is still a poor coding practice, as you will catch all kinds of runtime errors that 
you do not expect. I am guessing the author is looking to try CBO and fall back if a 
bug is hit. Even in that case, when catching Exception, we should be logging at the 
ERROR level unless there is a very good reason not to.
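
To make the suggestion concrete, here is a minimal sketch of that fallback pattern, assuming a 
hypothetical Plan type and a hypothetical optimizeWithCBO() entry point; this is not the actual 
HIVE-6439 code, just an illustration of catching Exception (not Throwable) and logging at ERROR:

{noformat}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Illustrative sketch only; Plan and optimizeWithCBO() are hypothetical placeholders.
public class CboFallbackSketch {
  private static final Log LOG = LogFactory.getLog(CboFallbackSketch.class);

  static class Plan { }

  Plan optimize(Plan original) {
    try {
      // Try the cost-based plan first.
      return optimizeWithCBO(original);
    } catch (Exception e) {
      // Catch Exception, not Throwable, so OOM and internal JVM errors still propagate,
      // and log the failure at ERROR so it is visible in production logs.
      LOG.error("CBO failed, falling back to the non-CBO plan", e);
      return original;
    }
  }

  private Plan optimizeWithCBO(Plan plan) {
    // Hypothetical placeholder for the CBO entry point.
    return plan;
  }
}
{noformat}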

 Introduce CBO step in Semantic Analyzer
 ---

 Key: HIVE-6439
 URL: https://issues.apache.org/jira/browse/HIVE-6439
 Project: Hive
  Issue Type: Sub-task
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-6439.1.patch, HIVE-6439.2.patch


 This patch introduces a CBO step in the SemanticAnalyzer. For now the 
 CostBasedOptimizer is an empty shell. 
 The contract between SemAly and CBO is:
 - The CBO step is controlled by the 'hive.enable.cbo.flag'. 
 - When true, Hive SemAly will hand CBO a Hive operator tree (with operators 
 annotated with stats). If it can, CBO will return a better plan in Hive AST 
 form.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6339) Implement new JDK7 schema management APIs in java.sql.Connection

2014-02-16 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6339:


   Resolution: Fixed
Fix Version/s: 0.13.0
 Release Note: Now supports getSchema()/setSchema() in jdbc for hiveserver2
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks Prasad!
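
For reference, a minimal usage sketch of the newly supported calls against HiveServer2; the JDBC 
URL, database name, and credentials below are placeholders, and the Hive JDBC driver must be on 
the classpath:

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class SchemaApiExample {
  public static void main(String[] args) throws SQLException {
    // Placeholder connection details; adjust host, port, database and credentials.
    Connection conn = DriverManager.getConnection(
        "jdbc:hive2://localhost:10000/default", "hive", "");
    try {
      conn.setSchema("sales_db");                            // JDK7 java.sql.Connection API
      System.out.println("Current schema: " + conn.getSchema());
    } finally {
      conn.close();
    }
  }
}
{noformat}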

 Implement new JDK7 schema management APIs in java.sql.Connection 
 -

 Key: HIVE-6339
 URL: https://issues.apache.org/jira/browse/HIVE-6339
 Project: Hive
  Issue Type: Improvement
  Components: JDBC
Affects Versions: 0.13.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.13.0

 Attachments: HIVE-6339.1.patch, HIVE-6339.2.patch, HIVE-6339.4.patch


 JDK7 has added a few metadata methods in 
 [java.sql.Connection|http://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html]
  
 {noformat}
 getSchema()
 setSchema()
 getCatalog()
 setCatalog()
 {noformat}
 Currently Hive JDBC just has stub implementations for all these methods that throw 
 an unsupported-operation exception. This needs to be fixed.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6386) sql std auth - database should have an owner

2014-02-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13902908#comment-13902908
 ] 

Hive QA commented on HIVE-6386:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12629139/HIVE-6386.4.patch

{color:green}SUCCESS:{color} +1 5097 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1350/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1350/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12629139

 sql std auth - database should have an owner
 

 Key: HIVE-6386
 URL: https://issues.apache.org/jira/browse/HIVE-6386
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization, Metastore
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6386.1.patch, HIVE-6386.2.patch, HIVE-6386.3.patch, 
 HIVE-6386.4.patch, HIVE-6386.patch


 Database in the metastore does not have an owner associated with it. A database owner 
 is needed for SQL std authorization rules.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6203) Privileges of role granted indirectly to user are not applied

2014-02-16 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6203:


Attachment: HIVE-6203.3.patch.txt

Rebased to trunk
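
For readers following along, the admin -> r2 -> r1 chain in the description below requires a 
transitive expansion of role grants. A rough, self-contained sketch of that idea (illustrative 
only, not the metastore code in this patch):

{noformat}
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

// Illustrative sketch only: expands roles granted directly or indirectly to a principal.
public class RoleExpansionSketch {
  // principal (user or role) -> roles granted directly to it
  private final Map<String, Set<String>> directGrants = new HashMap<String, Set<String>>();

  public void grantRole(String role, String grantee) {
    Set<String> roles = directGrants.get(grantee);
    if (roles == null) {
      roles = new HashSet<String>();
      directGrants.put(grantee, roles);
    }
    roles.add(role);
  }

  // Returns every role reachable from the user through any chain of role grants.
  public Set<String> rolesOf(String user) {
    Set<String> result = new HashSet<String>();
    Queue<String> pending = new LinkedList<String>();
    Set<String> direct = directGrants.get(user);
    if (direct != null) {
      pending.addAll(direct);
    }
    while (!pending.isEmpty()) {
      String role = pending.poll();
      if (result.add(role)) {                 // expand each role only once
        Set<String> inherited = directGrants.get(role);
        if (inherited != null) {
          pending.addAll(inherited);          // roles granted to a role are inherited
        }
      }
    }
    return result;
  }

  public static void main(String[] args) {
    RoleExpansionSketch grants = new RoleExpansionSketch();
    grants.grantRole("r1", "r2");             // grant role r1 to role r2
    grants.grantRole("r2", "admin");          // grant role r2 to user admin
    // Prints r1 and r2: SELECT granted to r1 should therefore apply to admin.
    System.out.println(grants.rolesOf("admin"));
  }
}
{noformat}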

 Privileges of role granted indirectly to user are not applied
 

 Key: HIVE-6203
 URL: https://issues.apache.org/jira/browse/HIVE-6203
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Navis
Assignee: Navis
 Attachments: HIVE-6203.1.patch.txt, HIVE-6203.2.patch.txt, 
 HIVE-6203.3.patch.txt


 For example, 
 {noformat}
 create role r1;
 create role r2;
 grant select on table eq to role r1;
 grant role r1 to role r2;
 grant role r2 to user admin;
 select * from eq limit 5;
 {noformat}
 admin -> r2 -> r1 -> SEL on table eq
 but user admin fails to access table eq



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Timeline for the Hive 0.13 release?

2014-02-16 Thread Lefty Leverenz
I can focus on reviewing HIVE-6037 for the 0.13 release if it's going in.
 Some doc fixes might get pushed back, but that's not too bad because the
wiki is independent of the release.

-- Lefty


On Sun, Feb 16, 2014 at 5:32 PM, Brock Noland br...@cloudera.com wrote:

 I'd love to see HIVE-6037 in the 0.13 release. I have +1'ed it pending
 tests.

 Brock

 On Sun, Feb 16, 2014 at 7:23 PM, Navis류승우 navis@nexr.com wrote:
  HIVE-6037 is for generating hive-default.template file from HiveConf.
 Could
  it be included in this release? If it's not, I'll suspend further
 rebasing
  of it till next release (conflicts too frequently).
 
 
  2014-02-16 20:38 GMT+09:00 Lefty Leverenz leftylever...@gmail.com:
 
  I'll try to catch up on the wikidocs backlog for 0.13.0 patches in time
 for
  the release.  It's a long and growing list, though, so no promises.
 
  Feel free to do your own documentation, or hand it off to a friendly
  in-house writer.
 
  -- Lefty, self-appointed Hive docs maven
 
 
 
  On Sat, Feb 15, 2014 at 1:28 PM, Thejas Nair the...@hortonworks.com
  wrote:
 
   Sounds good to me.
  
  
   On Fri, Feb 14, 2014 at 7:29 PM, Harish Butani 
 hbut...@hortonworks.com
   wrote:
  
Hi,
   
Its mid feb. Wanted to check if the community is ready to cut a
 branch.
Could we cut the branch in a week , say 5pm PST 2/21/14?
The goal is to keep the release cycle short: couple of weeks; so
 after
   the
branch we go into stabilizing mode for hive 0.13, checking in only
blocker/critical bug fixes.
   
regards,
Harish.
   
   
On Jan 20, 2014, at 9:25 AM, Brock Noland br...@cloudera.com
 wrote:
   
 Hi,

 I agree that picking a date to branch and then restricting
 commits to
that
 branch would be a less time intensive plan for the RM.

 Brock


 On Sat, Jan 18, 2014 at 4:21 PM, Harish Butani 
   hbut...@hortonworks.com
wrote:

 Yes agree it is time to start planning for the next release.
 I would like to volunteer to do the release management duties for
  this
 release(will be a great experience for me)
 Will be happy to do it, if the community is fine with this.

 regards,
 Harish.

 On Jan 17, 2014, at 7:05 PM, Thejas Nair the...@hortonworks.com
 
wrote:

 Yes, I think it is time to start planning for the next release.
 For 0.12 release I created a branch and then accepted patches
 that
 people asked to be included for sometime, before moving a phase
 of
 accepting only critical bug fixes. This turned out to be
 laborious.
 I think we should instead give everyone a few weeks to get any
   patches
 they are working on to be ready, cut the branch, and take in
 only
 critical bug fixes to the branch after that.
 How about cutting the branch around mid-February and targeting
 to
 release in a week or two after that.

 Thanks,
 Thejas


 On Fri, Jan 17, 2014 at 4:39 PM, Carl Steinbach c...@apache.org
 
wrote:
 I was wondering what people think about setting a tentative
 date
  for
the
 Hive 0.13 release? At an old Hive Contrib meeting we agreed
 that
   Hive
 should follow a time-based release model with new releases
 every
   four
 months. If we follow that schedule we're due for the next
 release
  in
 mid-February.

 Thoughts?

 Thanks.

 Carl







 --
 Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org
   
   

[jira] [Commented] (HIVE-6403) uncorrelated subquery is failing with auto.convert.join=true

2014-02-16 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13902912#comment-13902912
 ] 

Navis commented on HIVE-6403:
-

[~rhbutani] 
bq. i don't see a union
Ah, it's auto_join27.q, not auto_join17.q. Sorry for that.
bq. should favor the right alias as the big table
It seems to be a bug that was not fixed properly in HIVE-5945. I'll check that, too.
bq. my contribution is very tiny
I would never have thought that multiInsertBigTableCheck() would be needed. Most 
of the code I've suggested is just the easier part. I'll get this issue done. Thanks.

 uncorrelated subquery is failing with auto.convert.join=true
 

 Key: HIVE-6403
 URL: https://issues.apache.org/jira/browse/HIVE-6403
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Navis
Assignee: Navis
 Attachments: HIVE-6403.1.patch, HIVE-6403.2.patch, navis.patch, 
 navis2.patch


 While fixing HIVE-5690, I found that the query in subquery_multiinsert.q is not working 
 with hive.auto.convert.join=true 
 {noformat}
 set hive.auto.convert.join=true;
 hive explain
  from src b 
  INSERT OVERWRITE TABLE src_4 
select * 
where b.key in 
 (select a.key 
  from src a 
  where b.value = a.value and a.key  '9'
 ) 
  INSERT OVERWRITE TABLE src_5 
select *  
where b.key not in  ( select key from src s1 where s1.key  '2') 
order by key 
  ;
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
   at java.util.ArrayList.get(ArrayList.java:411)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinLocalWork(MapJoinProcessor.java:149)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor.java:256)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinOpAndLocalWork(MapJoinProcessor.java:248)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.convertTaskToMapJoinTask(CommonJoinTaskDispatcher.java:191)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.processCurrentTask(CommonJoinTaskDispatcher.java:481)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch(AbstractJoinTaskDispatcher.java:182)
   at 
 org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
   at 
 org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:194)
   at 
 org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:139)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinResolver.resolve(CommonJoinResolver.java:79)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.PhysicalOptimizer.optimize(PhysicalOptimizer.java:100)
   at 
 org.apache.hadoop.hive.ql.parse.MapReduceCompiler.optimizeTaskPlan(MapReduceCompiler.java:290)
   at 
 org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:216)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9167)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
   at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:64)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:446)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:346)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1056)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1099)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:992)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:982)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
   at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:687)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:626)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
 

[jira] [Updated] (HIVE-6403) uncorrelated subquery is failing with auto.convert.join=true

2014-02-16 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6403:


Attachment: HIVE-6403.3.patch.txt

 uncorrelated subquery is failing with auto.convert.join=true
 

 Key: HIVE-6403
 URL: https://issues.apache.org/jira/browse/HIVE-6403
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Navis
Assignee: Navis
 Attachments: HIVE-6403.1.patch, HIVE-6403.2.patch, 
 HIVE-6403.3.patch.txt, navis.patch, navis2.patch


 While fixing HIVE-5690, I found that the query in subquery_multiinsert.q is not working 
 with hive.auto.convert.join=true 
 {noformat}
 set hive.auto.convert.join=true;
 hive explain
  from src b 
  INSERT OVERWRITE TABLE src_4 
select * 
where b.key in 
 (select a.key 
  from src a 
  where b.value = a.value and a.key  '9'
 ) 
  INSERT OVERWRITE TABLE src_5 
select *  
where b.key not in  ( select key from src s1 where s1.key  '2') 
order by key 
  ;
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
   at java.util.ArrayList.get(ArrayList.java:411)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinLocalWork(MapJoinProcessor.java:149)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor.java:256)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinOpAndLocalWork(MapJoinProcessor.java:248)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.convertTaskToMapJoinTask(CommonJoinTaskDispatcher.java:191)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.processCurrentTask(CommonJoinTaskDispatcher.java:481)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch(AbstractJoinTaskDispatcher.java:182)
   at 
 org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
   at 
 org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:194)
   at 
 org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:139)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinResolver.resolve(CommonJoinResolver.java:79)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.PhysicalOptimizer.optimize(PhysicalOptimizer.java:100)
   at 
 org.apache.hadoop.hive.ql.parse.MapReduceCompiler.optimizeTaskPlan(MapReduceCompiler.java:290)
   at 
 org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:216)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9167)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
   at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:64)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:446)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:346)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1056)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1099)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:992)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:982)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
   at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:687)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:626)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
 org.apache.hadoop.hive.ql.parse.SemanticException: Failed to generate new 
 mapJoin operator by exception : Index: 0, Size: 0
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor.java:266)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinOpAndLocalWork(MapJoinProcessor.java:248)
   at 
 

Review Request 18177: uncorrelated subquery is failing with auto.convert.join=true

2014-02-16 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18177/
---

Review request for hive.


Bugs: HIVE-6403
https://issues.apache.org/jira/browse/HIVE-6403


Repository: hive-git


Description
---

While fixing HIVE-5690, I found that the query in subquery_multiinsert.q is not working 
with hive.auto.convert.join=true 
{noformat}
set hive.auto.convert.join=true;
hive explain
 from src b 
 INSERT OVERWRITE TABLE src_4 
   select * 
   where b.key in 
(select a.key 
 from src a 
 where b.value = a.value and a.key  '9'
) 
 INSERT OVERWRITE TABLE src_5 
   select *  
   where b.key not in  ( select key from src s1 where s1.key  '2') 
   order by key 
 ;
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at 
org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinLocalWork(MapJoinProcessor.java:149)
at 
org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor.java:256)
at 
org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinOpAndLocalWork(MapJoinProcessor.java:248)
at 
org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.convertTaskToMapJoinTask(CommonJoinTaskDispatcher.java:191)
at 
org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.processCurrentTask(CommonJoinTaskDispatcher.java:481)
at 
org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch(AbstractJoinTaskDispatcher.java:182)
at 
org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
at 
org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:194)
at 
org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:139)
at 
org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinResolver.resolve(CommonJoinResolver.java:79)
at 
org.apache.hadoop.hive.ql.optimizer.physical.PhysicalOptimizer.optimize(PhysicalOptimizer.java:100)
at 
org.apache.hadoop.hive.ql.parse.MapReduceCompiler.optimizeTaskPlan(MapReduceCompiler.java:290)
at 
org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:216)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9167)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
at 
org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:64)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:446)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:346)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1056)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1099)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:992)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:982)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:687)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:626)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
org.apache.hadoop.hive.ql.parse.SemanticException: Failed to generate new 
mapJoin operator by exception : Index: 0, Size: 0
at 
org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor.java:266)
at 
org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinOpAndLocalWork(MapJoinProcessor.java:248)
at 
org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.convertTaskToMapJoinTask(CommonJoinTaskDispatcher.java:191)
at 
org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.processCurrentTask(CommonJoinTaskDispatcher.java:481)
at 
org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch(AbstractJoinTaskDispatcher.java:182)
at 

[jira] [Commented] (HIVE-6037) Synchronize HiveConf with hive-default.xml.template and support show conf

2014-02-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13902928#comment-13902928
 ] 

Hive QA commented on HIVE-6037:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12629293/HIVE-6037.10.patch.txt

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5122 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16
org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveOperationType.checkHiveOperationTypeMatch
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1354/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1354/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12629293

 Synchronize HiveConf with hive-default.xml.template and support show conf
 -

 Key: HIVE-6037
 URL: https://issues.apache.org/jira/browse/HIVE-6037
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: CHIVE-6037.3.patch.txt, HIVE-6037.1.patch.txt, 
 HIVE-6037.10.patch.txt, HIVE-6037.2.patch.txt, HIVE-6037.4.patch.txt, 
 HIVE-6037.5.patch.txt, HIVE-6037.6.patch.txt, HIVE-6037.7.patch.txt, 
 HIVE-6037.8.patch.txt, HIVE-6037.9.patch.txt


 see HIVE-5879



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6037) Synchronize HiveConf with hive-default.xml.template and support show conf

2014-02-16 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6037:


Attachment: HIVE-6037.11.patch.txt

Missed one file. Sorry.

 Synchronize HiveConf with hive-default.xml.template and support show conf
 -

 Key: HIVE-6037
 URL: https://issues.apache.org/jira/browse/HIVE-6037
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: CHIVE-6037.3.patch.txt, HIVE-6037.1.patch.txt, 
 HIVE-6037.10.patch.txt, HIVE-6037.11.patch.txt, HIVE-6037.2.patch.txt, 
 HIVE-6037.4.patch.txt, HIVE-6037.5.patch.txt, HIVE-6037.6.patch.txt, 
 HIVE-6037.7.patch.txt, HIVE-6037.8.patch.txt, HIVE-6037.9.patch.txt


 see HIVE-5879



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Parquet support (HIVE-5783)

2014-02-16 Thread Brock Noland
Hi Gunther,

Please find my response inline.

On Sat, Feb 15, 2014 at 5:52 PM, Gunther Hagleitner gunt...@apache.org wrote:
 I read through the ticket, patch and documentation

Thank you very much for reading through these items!

 and would like to
 suggest some changes.

There was ample time to suggest these changes prior to commit. The
JIRA was created three months ago, and the title you object to and the
patch were up there over two months ago.

 As far as I can tell this basically adds parquet SerDes to hive, but the
 file format remains external to hive. There is no way for hive devs to
 make changes, fix bugs, add or change datatypes, or add features to parquet
 itself.

As stated in many locations, including the JIRA discussed here, we
shouldn't be picking winning and losing file formats. We use many external
libraries, not all of which every Hive developer has the ability to
modify. For example, most Hive developers do not have the ability to
modify SequenceFile. Tez is also an external library which few Hive
developers can change.

 So:

 - I suggest we document it as one of the built-in SerDes and not as a
 native format like here:
 https://cwiki.apache.org/confluence/display/Hive/Parquet (and here:
 https://cwiki.apache.org/confluence/display/Hive/LanguageManual)
 - I vote for the jira to say Add parquet SerDes to Hive and not Native
 support

The change provides the ability to create a parquet table with Hive,
natively. Therefore I don't see the issue you have with the word
native.

 - I think we should revert the change to the grammar to allow STORED AS
 PARQUET until we have a mechanism to do that for all SerDes, i.e.: someone
 picks up: HIVE-5976. (I also don't think this actually works properly
 unless we bundle parquet in hive-exec, which I don't think we want.)

Again, you could have provided this feedback many moons ago. I am
personally interested in HIVE-5976 but it's orthogonal to this issue.
That change just makes it easier and cleaner to add STORED AS
keywords. The contributors of the Parquet integration are not required
to fix Hive. That is our job.
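
For what it's worth, here is a minimal sketch of what the grammar change enables from a client's 
point of view; the connection string and table name are placeholders, not anything taken from the 
patch:

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ParquetTableSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder connection details; adjust for your HiveServer2 instance.
    Connection conn = DriverManager.getConnection(
        "jdbc:hive2://localhost:10000/default", "hive", "");
    Statement stmt = conn.createStatement();
    try {
      // STORED AS PARQUET is the shorthand the grammar change adds.
      stmt.execute("CREATE TABLE parquet_demo (id INT, name STRING) STORED AS PARQUET");
    } finally {
      stmt.close();
      conn.close();
    }
  }
}
{noformat}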

 - We should revert the deprecated classes (At least I don't understand how
 a first drop needs to add deprecated stuff)

The deprecated classes are shells (no actual code) to support existing
users of Parquet, of which there are many. I see no justification for
impacting existing users when the workaround is trivial and
non-impacting to any other user.

 In general though, I'm also confused about why adding this SerDe to the hive
 code base is beneficial. It seems to me that it just makes upgrading
 Parquet, bug fixing, etc. more difficult by tying a SerDe release to a Hive
 release. To me that outweighs the benefit of a slightly more involved setup
 of Hive + SerDe in the cluster.

The Hive APIs, which are not clearly defined, have changed often in
the past few releases, making maintaining a file format extremely
difficult. For example, 0.12 and 0.13 break most, if not all, external
code bases.

However, beyond that, the community felt it was beneficial to make
Parquet easier to use. If you are not interested in Parquet then
ignore it as this change does not impact you. Tez integration is
something which does not interest myself and many other Hive
developers. Indeed other than a few cursory reviews and a few times
where I championed the refactoring you guys were doing in order to
support Tez, I have ignored the Tez work.

Sincerely,
Brock


Re: Review Request 18168: SQL std auth - authorize statements that work with paths

2014-02-16 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18168/
---

(Updated Feb. 17, 2014, 4:11 a.m.)


Review request for hive and Ashutosh Chauhan.


Changes
---

HIVE-5958.2.patch - more test cases, NPE fixes


Bugs: HIVE-5958
https://issues.apache.org/jira/browse/HIVE-5958


Repository: hive-git


Description
---

Statements such as create table and alter table that specify a path URI should be
allowed under the new authorization scheme only if the specified URI (path) has the
required permissions, including read/write access and ownership of the file/dir and
its children.
Also, fix the issue of the database not getting set as an output for create-table.
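
For the path checks, a rough sketch of the kind of test involved (not the actual
patch; the class and method names are illustrative, and only standard Hadoop
FileSystem/FsPermission calls are used) could look like this:

{code}
// Hedged sketch only -- not the HIVE-5958 implementation. Checks that the given
// user owns a path, that the owner permission bits imply the requested action,
// and that the same holds recursively for children.
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;

public final class PathPrivilegeSketch {
  private PathPrivilegeSketch() {
  }

  public static boolean ownsWithAction(FileSystem fs, Path path, String user,
      FsAction action) throws IOException {
    FileStatus status = fs.getFileStatus(path);
    if (!user.equals(status.getOwner())) {
      return false; // must own the file/dir
    }
    if (!status.getPermission().getUserAction().implies(action)) {
      return false; // owner permission bits must imply the requested action
    }
    if (status.isDir()) {
      for (FileStatus child : fs.listStatus(path)) {
        // every child must satisfy the same ownership + permission requirement
        if (!ownsWithAction(fs, child.getPath(), user, action)) {
          return false;
        }
      }
    }
    return true;
  }
}
{code}

A caller would then require something like ownsWithAction(fs, tablePath,
currentUser, FsAction.READ_WRITE) before letting the statement proceed.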


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/common/FileUtils.java c1f8842 
  ql/src/java/org/apache/hadoop/hive/ql/Driver.java 83d5bfc 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/ReadEntity.java c9a 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/WriteEntity.java 0493302 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 0b7c128 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 1f539ef 
  ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java a22a15f 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 77388dd 
  ql/src/java/org/apache/hadoop/hive/ql/plan/HiveOperation.java 93c89de 
  
ql/src/java/org/apache/hadoop/hive/ql/security/SessionStateConfigUserAuthenticator.java
 812105c 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java
 fae6844 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/RequiredPrivileges.java
 10a582b 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLAuthorizationUtils.java
 4a9149f 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java
 40461f7 
  ql/src/test/queries/clientnegative/authorization_addpartition.q 64d8a3d 
  ql/src/test/queries/clientnegative/authorization_droppartition.q 45ed99b 
  ql/src/test/queries/clientnegative/authorization_uri_add_partition.q 
PRE-CREATION 
  ql/src/test/queries/clientnegative/authorization_uri_alterpart_loc.q 
PRE-CREATION 
  ql/src/test/queries/clientnegative/authorization_uri_altertab_setloc.q 
PRE-CREATION 
  ql/src/test/queries/clientnegative/authorization_uri_create_table1.q 
PRE-CREATION 
  ql/src/test/queries/clientnegative/authorization_uri_create_table_ext.q 
PRE-CREATION 
  ql/src/test/queries/clientnegative/authorization_uri_createdb.q PRE-CREATION 
  ql/src/test/queries/clientnegative/authorization_uri_index.q PRE-CREATION 
  ql/src/test/queries/clientnegative/authorization_uri_insert.q PRE-CREATION 
  ql/src/test/queries/clientnegative/authorization_uri_insert_local.q 
PRE-CREATION 
  ql/src/test/queries/clientnegative/authorization_uri_load_data.q PRE-CREATION 
  ql/src/test/results/clientnegative/authorization_addpartition.q.out f4d3b4f 
  ql/src/test/results/clientnegative/authorization_createview.q.out cb81b83 
  ql/src/test/results/clientnegative/authorization_ctas.q.out 1070468 
  ql/src/test/results/clientnegative/authorization_droppartition.q.out 7de553b 
  ql/src/test/results/clientnegative/authorization_fail_1.q.out ab1abe2 
  ql/src/test/results/clientnegative/authorization_fail_2.q.out 2c03b65 
  ql/src/test/results/clientnegative/authorization_fail_3.q.out bfba08a 
  ql/src/test/results/clientnegative/authorization_fail_4.q.out 34ad4ef 
  ql/src/test/results/clientnegative/authorization_fail_5.q.out a0289fb 
  ql/src/test/results/clientnegative/authorization_fail_6.q.out 47f8bd1 
  ql/src/test/results/clientnegative/authorization_fail_7.q.out a9bf0cc 
  ql/src/test/results/clientnegative/authorization_grant_table_allpriv.q.out 
0e17c94 
  ql/src/test/results/clientnegative/authorization_grant_table_fail1.q.out 
0c83849 
  
ql/src/test/results/clientnegative/authorization_grant_table_fail_nogrant.q.out 
129b5fa 
  ql/src/test/results/clientnegative/authorization_insert_noinspriv.q.out 
6d510f1 
  ql/src/test/results/clientnegative/authorization_insert_noselectpriv.q.out 
5b9b93a 
  ql/src/test/results/clientnegative/authorization_invalid_priv_v1.q.out 
10d1ca8 
  ql/src/test/results/clientnegative/authorization_invalid_priv_v2.q.out 
62aa8da 
  
ql/src/test/results/clientnegative/authorization_not_owner_alter_tab_rename.q.out
 e41702a 
  
ql/src/test/results/clientnegative/authorization_not_owner_alter_tab_serdeprop.q.out
 e41702a 
  ql/src/test/results/clientnegative/authorization_not_owner_drop_tab.q.out 
b456aca 
  ql/src/test/results/clientnegative/authorization_not_owner_drop_view.q.out 
2433846 
  ql/src/test/results/clientnegative/authorization_part.q.out 31dfda9 
  ql/src/test/results/clientnegative/authorization_priv_current_role_neg.q.out 
f932a3d 
  

[jira] [Updated] (HIVE-5958) SQL std auth - authorize statements that work with paths

2014-02-16 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5958:


Attachment: HIVE-5958.2.patch

HIVE-5958.2.patch - more test cases, NPE fixes

 SQL std auth - authorize statements that work with paths
 

 Key: HIVE-5958
 URL: https://issues.apache.org/jira/browse/HIVE-5958
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5958.1.patch, HIVE-5958.2.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 Statement such as create table, alter table that specify an path uri should 
 be allowed under the new authorization scheme only if URI(Path) specified has 
 permissions including read/write and ownership of the file/dir and its 
 children.
 Also, fix issue of database not getting set as output for create-table.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Review Request 18177: uncorrelated subquery is failing with auto.convert.join=true

2014-02-16 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18177/
---

(Updated Feb. 17, 2014, 4:15 a.m.)


Review request for hive.


Changes
---

Add/fix comments and minor refactorings


Bugs: HIVE-6403
https://issues.apache.org/jira/browse/HIVE-6403


Repository: hive-git


Description
---

Fixing HIVE-5690, I've found query in subquery_multiinsert.q is not working 
with hive.auto.convert.join=true 
{noformat}
set hive.auto.convert.join=true;
hive> explain
 from src b 
 INSERT OVERWRITE TABLE src_4 
   select * 
   where b.key in 
(select a.key 
 from src a 
 where b.value = a.value and a.key > '9'
) 
 INSERT OVERWRITE TABLE src_5 
   select *  
   where b.key not in  ( select key from src s1 where s1.key > '2')
   order by key 
 ;
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at 
org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinLocalWork(MapJoinProcessor.java:149)
at 
org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor.java:256)
at 
org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinOpAndLocalWork(MapJoinProcessor.java:248)
at 
org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.convertTaskToMapJoinTask(CommonJoinTaskDispatcher.java:191)
at 
org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.processCurrentTask(CommonJoinTaskDispatcher.java:481)
at 
org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch(AbstractJoinTaskDispatcher.java:182)
at 
org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
at 
org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:194)
at 
org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:139)
at 
org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinResolver.resolve(CommonJoinResolver.java:79)
at 
org.apache.hadoop.hive.ql.optimizer.physical.PhysicalOptimizer.optimize(PhysicalOptimizer.java:100)
at 
org.apache.hadoop.hive.ql.parse.MapReduceCompiler.optimizeTaskPlan(MapReduceCompiler.java:290)
at 
org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:216)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9167)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
at 
org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:64)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:446)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:346)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1056)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1099)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:992)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:982)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:687)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:626)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
org.apache.hadoop.hive.ql.parse.SemanticException: Failed to generate new 
mapJoin operator by exception : Index: 0, Size: 0
at 
org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor.java:266)
at 
org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinOpAndLocalWork(MapJoinProcessor.java:248)
at 
org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.convertTaskToMapJoinTask(CommonJoinTaskDispatcher.java:191)
at 
org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.processCurrentTask(CommonJoinTaskDispatcher.java:481)
at 

[jira] [Updated] (HIVE-6403) uncorrelated subquery is failing with auto.convert.join=true

2014-02-16 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6403:


Attachment: HIVE-6403.4.patch.txt

Add/fix comments and minor refactorings

 uncorrelated subquery is failing with auto.convert.join=true
 

 Key: HIVE-6403
 URL: https://issues.apache.org/jira/browse/HIVE-6403
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Navis
Assignee: Navis
 Attachments: HIVE-6403.1.patch, HIVE-6403.2.patch, 
 HIVE-6403.3.patch.txt, HIVE-6403.4.patch.txt, navis.patch, navis2.patch


 Fixing HIVE-5690, I've found query in subquery_multiinsert.q is not working 
 with hive.auto.convert.join=true 
 {noformat}
 set hive.auto.convert.join=true;
 hive explain
  from src b 
  INSERT OVERWRITE TABLE src_4 
select * 
where b.key in 
 (select a.key 
  from src a 
  where b.value = a.value and a.key  '9'
 ) 
  INSERT OVERWRITE TABLE src_5 
select *  
where b.key not in  ( select key from src s1 where s1.key  '2') 
order by key 
  ;
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
   at java.util.ArrayList.get(ArrayList.java:411)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinLocalWork(MapJoinProcessor.java:149)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor.java:256)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinOpAndLocalWork(MapJoinProcessor.java:248)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.convertTaskToMapJoinTask(CommonJoinTaskDispatcher.java:191)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.processCurrentTask(CommonJoinTaskDispatcher.java:481)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch(AbstractJoinTaskDispatcher.java:182)
   at 
 org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
   at 
 org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:194)
   at 
 org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:139)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinResolver.resolve(CommonJoinResolver.java:79)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.PhysicalOptimizer.optimize(PhysicalOptimizer.java:100)
   at 
 org.apache.hadoop.hive.ql.parse.MapReduceCompiler.optimizeTaskPlan(MapReduceCompiler.java:290)
   at 
 org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:216)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9167)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
   at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:64)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:446)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:346)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1056)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1099)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:992)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:982)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
   at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:687)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:626)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
 org.apache.hadoop.hive.ql.parse.SemanticException: Failed to generate new 
 mapJoin operator by exception : Index: 0, Size: 0
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor.java:266)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinOpAndLocalWork(MapJoinProcessor.java:248)
   at 
 

Re: Review Request 18168: SQL std auth - authorize statements that work with paths

2014-02-16 Thread Brock Noland

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18168/#review34608
---



common/src/java/org/apache/hadoop/hive/common/FileUtils.java
https://reviews.apache.org/r/18168/#comment64773

standard is private static final



common/src/java/org/apache/hadoop/hive/common/FileUtils.java
https://reviews.apache.org/r/18168/#comment64774

since we return in both the true and false case, this should just be:

return permissions.getGroupAction().implies(action);



ql/src/java/org/apache/hadoop/hive/ql/security/SessionStateConfigUserAuthenticator.java
https://reviews.apache.org/r/18168/#comment64775

why use negation in an if with an else? That makes the code confusing.

It should be

if (newUserName == null || newUserName.trim().isEmpty()) {
  return System...
} else {
  return newUserName;
}

or even cleaner:

String newUserName = ... get("user.name", "").trim();
if(newUserName.isEmpty()) 
...


- Brock Noland


On Feb. 17, 2014, 4:11 a.m., Thejas Nair wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/18168/
 ---
 
 (Updated Feb. 17, 2014, 4:11 a.m.)
 
 
 Review request for hive and Ashutosh Chauhan.
 
 
 Bugs: HIVE-5958
 https://issues.apache.org/jira/browse/HIVE-5958
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Statement such as create table, alter table that specify an path uri should 
 be allowed under the new authorization scheme only if URI(Path) specified has 
 permissions including read/write and ownership of the file/dir and its 
 children.
 Also, fix issue of database not getting set as output for create-table.
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/common/FileUtils.java c1f8842 
   ql/src/java/org/apache/hadoop/hive/ql/Driver.java 83d5bfc 
   ql/src/java/org/apache/hadoop/hive/ql/hooks/ReadEntity.java c9a 
   ql/src/java/org/apache/hadoop/hive/ql/hooks/WriteEntity.java 0493302 
   ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 0b7c128 
   ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
 1f539ef 
   ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java 
 a22a15f 
   ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 77388dd 
   ql/src/java/org/apache/hadoop/hive/ql/plan/HiveOperation.java 93c89de 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/SessionStateConfigUserAuthenticator.java
  812105c 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java
  fae6844 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/RequiredPrivileges.java
  10a582b 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLAuthorizationUtils.java
  4a9149f 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java
  40461f7 
   ql/src/test/queries/clientnegative/authorization_addpartition.q 64d8a3d 
   ql/src/test/queries/clientnegative/authorization_droppartition.q 45ed99b 
   ql/src/test/queries/clientnegative/authorization_uri_add_partition.q 
 PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_alterpart_loc.q 
 PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_altertab_setloc.q 
 PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_create_table1.q 
 PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_create_table_ext.q 
 PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_createdb.q 
 PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_index.q PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_insert.q PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_insert_local.q 
 PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_load_data.q 
 PRE-CREATION 
   ql/src/test/results/clientnegative/authorization_addpartition.q.out f4d3b4f 
   ql/src/test/results/clientnegative/authorization_createview.q.out cb81b83 
   ql/src/test/results/clientnegative/authorization_ctas.q.out 1070468 
   ql/src/test/results/clientnegative/authorization_droppartition.q.out 
 7de553b 
   ql/src/test/results/clientnegative/authorization_fail_1.q.out ab1abe2 
   ql/src/test/results/clientnegative/authorization_fail_2.q.out 2c03b65 
   ql/src/test/results/clientnegative/authorization_fail_3.q.out bfba08a 
   ql/src/test/results/clientnegative/authorization_fail_4.q.out 34ad4ef 
   ql/src/test/results/clientnegative/authorization_fail_5.q.out a0289fb 
   ql/src/test/results/clientnegative/authorization_fail_6.q.out 47f8bd1 
   

[jira] [Assigned] (HIVE-5976) Decouple input formats from STORED as keywords

2014-02-16 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland reassigned HIVE-5976:
--

Assignee: Brock Noland

I have a patch for this using the ServiceLoader facility[1].

http://docs.oracle.com/javase/6/docs/api/java/util/ServiceLoader.html
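
As a rough illustration of the mechanism (the StorageFormatDescriptor interface and
the registry below are placeholders I'm using for the sake of example, not the
actual patch), a ServiceLoader-based registration could look like this:

{code}
// Hedged sketch of ServiceLoader-based registration. The interface and class
// names here are illustrative only.
import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

interface StorageFormatDescriptor {
  String getName();          // keyword used after STORED AS, e.g. "TEXTFILE"
  String getInputFormat();   // fully qualified InputFormat class name
  String getOutputFormat();  // fully qualified OutputFormat class name
}

// Providers list their implementations in a META-INF/services file named after
// the interface; the parser then resolves STORED AS keywords from a registry.
final class StorageFormatRegistry {
  private final Map<String, StorageFormatDescriptor> byName =
      new HashMap<String, StorageFormatDescriptor>();

  StorageFormatRegistry() {
    for (StorageFormatDescriptor descriptor :
        ServiceLoader.load(StorageFormatDescriptor.class)) {
      byName.put(descriptor.getName().toUpperCase(), descriptor);
    }
  }

  StorageFormatDescriptor get(String storedAsKeyword) {
    return byName.get(storedAsKeyword.toUpperCase());
  }
}
{code}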

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland

 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (HIVE-5976) Decouple input formats from STORED as keywords

2014-02-16 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13902946#comment-13902946
 ] 

Brock Noland edited comment on HIVE-5976 at 2/17/14 4:34 AM:
-

I have a patch for this using the ServiceLoader facility:

http://docs.oracle.com/javase/6/docs/api/java/util/ServiceLoader.html

I will post it tomorrow.


was (Author: brocknoland):
I have a patch for this using the ServiceLoader facility:

http://docs.oracle.com/javase/6/docs/api/java/util/ServiceLoader.html

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland

 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (HIVE-5976) Decouple input formats from STORED as keywords

2014-02-16 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13902946#comment-13902946
 ] 

Brock Noland edited comment on HIVE-5976 at 2/17/14 4:33 AM:
-

I have a patch for this using the ServiceLoader facility:

http://docs.oracle.com/javase/6/docs/api/java/util/ServiceLoader.html


was (Author: brocknoland):
I have a patch for this using the ServiceLoader facility[1].

http://docs.oracle.com/javase/6/docs/api/java/util/ServiceLoader.html

 Decouple input formats from STORED as keywords
 --

 Key: HIVE-5976
 URL: https://issues.apache.org/jira/browse/HIVE-5976
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland

 As noted in HIVE-5783, we hard code the input formats mapped to keywords. 
 It'd be nice if there was a registration system so we didn't need to do that.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6441) Unmappable character for encoding UTF-8 in Operation2Privilege

2014-02-16 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6441:


Status: Patch Available  (was: Open)

 Unmappable character for encoding UTF-8 in Operation2Privilege
 --

 Key: HIVE-6441
 URL: https://issues.apache.org/jira/browse/HIVE-6441
 Project: Hive
  Issue Type: Task
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-6441.1.patch.txt


 NO PRECOMMIT TESTS
 [WARNING] 
 /home/navis/apache/oss-hive/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java:[85,94]
  warning: unmappable character for encoding UTF-8



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6441) Unmappable character for encoding UTF-8 in Operation2Privilege

2014-02-16 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6441:


Description: 
NO PRECOMMIT TESTS

[WARNING] 
/home/navis/apache/oss-hive/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java:[85,94]
 warning: unmappable character for encoding UTF-8

  was:[WARNING] 
/home/navis/apache/oss-hive/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java:[85,94]
 warning: unmappable character for encoding UTF-8


 Unmappable character for encoding UTF-8 in Operation2Privilege
 --

 Key: HIVE-6441
 URL: https://issues.apache.org/jira/browse/HIVE-6441
 Project: Hive
  Issue Type: Task
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-6441.1.patch.txt


 NO PRECOMMIT TESTS
 [WARNING] 
 /home/navis/apache/oss-hive/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java:[85,94]
  warning: unmappable character for encoding UTF-8



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6441) Unmappable character for encoding UTF-8 in Operation2Privilege

2014-02-16 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6441:


Attachment: HIVE-6441.1.patch.txt

 Unmappable character for encoding UTF-8 in Operation2Privilege
 --

 Key: HIVE-6441
 URL: https://issues.apache.org/jira/browse/HIVE-6441
 Project: Hive
  Issue Type: Task
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-6441.1.patch.txt


 NO PRECOMMIT TESTS
 [WARNING] 
 /home/navis/apache/oss-hive/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java:[85,94]
  warning: unmappable character for encoding UTF-8



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-860) Persistent distributed cache

2014-02-16 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13902950#comment-13902950
 ] 

Brock Noland commented on HIVE-860:
---

I think we can take a similar approach to 
https://issues.apache.org/jira/browse/PIG-2672.
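
To make the content-addressed idea concrete, a rough sketch (not a patch; the cache
directory layout and names are assumptions for illustration) could look like this:

{code}
// Hedged sketch of a content-addressed cache location: the same local file
// always maps to the same HDFS path and is uploaded only if it is missing.
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class SharedCacheSketch {
  private SharedCacheSketch() {
  }

  public static Path ensureCached(FileSystem fs, String localFile, Path cacheRoot)
      throws IOException, NoSuchAlgorithmException {
    // Key the remote name by the file's md5 so identical content collapses to a
    // single HDFS copy across jobs and sessions.
    Path target = new Path(cacheRoot,
        md5Hex(localFile) + "-" + new Path(localFile).getName());
    if (!fs.exists(target)) {
      fs.copyFromLocalFile(new Path(localFile), target);
    }
    return target;
  }

  private static String md5Hex(String localFile)
      throws IOException, NoSuchAlgorithmException {
    MessageDigest digest = MessageDigest.getInstance("MD5");
    InputStream in = new FileInputStream(localFile);
    try {
      byte[] buffer = new byte[8192];
      int read;
      while ((read = in.read(buffer)) != -1) {
        digest.update(buffer, 0, read);
      }
    } finally {
      in.close();
    }
    StringBuilder hex = new StringBuilder();
    for (byte b : digest.digest()) {
      hex.append(String.format("%02x", b));
    }
    return hex.toString();
  }
}
{code}

The open question from the description remains when (or whether) such files should
be cleaned up in HDFS.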

 Persistent distributed cache
 

 Key: HIVE-860
 URL: https://issues.apache.org/jira/browse/HIVE-860
 Project: Hive
  Issue Type: Improvement
Reporter: Zheng Shao

 DistributedCache is shared across multiple jobs, if the hdfs file name is the 
 same.
 We need to make sure Hive put the same file into the same location every time 
 and do not overwrite if the file content is the same.
 We can achieve 2 different results:
 A1. Files added with the same name, timestamp, and md5 in the same session 
 will have a single copy in distributed cache.
 A2. Files added with the same name, timestamp, and md5 will have a single 
 copy in distributed cache.
 A2 has a bigger benefit in sharing but may raise a question on when Hive 
 should clean it up in hdfs.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6441) Unmappable character for encoding UTF-8 in Operation2Privilege

2014-02-16 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13902951#comment-13902951
 ] 

Brock Noland commented on HIVE-6441:


+1

 Unmappable character for encoding UTF-8 in Operation2Privilege
 --

 Key: HIVE-6441
 URL: https://issues.apache.org/jira/browse/HIVE-6441
 Project: Hive
  Issue Type: Task
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-6441.1.patch.txt


 NO PRECOMMIT TESTS
 [WARNING] 
 /home/navis/apache/oss-hive/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java:[85,94]
  warning: unmappable character for encoding UTF-8



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HIVE-860) Persistent distributed cache

2014-02-16 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland reassigned HIVE-860:
-

Assignee: Brock Noland

 Persistent distributed cache
 

 Key: HIVE-860
 URL: https://issues.apache.org/jira/browse/HIVE-860
 Project: Hive
  Issue Type: Improvement
Reporter: Zheng Shao
Assignee: Brock Noland

 DistributedCache is shared across multiple jobs, if the hdfs file name is the 
 same.
 We need to make sure Hive put the same file into the same location every time 
 and do not overwrite if the file content is the same.
 We can achieve 2 different results:
 A1. Files added with the same name, timestamp, and md5 in the same session 
 will have a single copy in distributed cache.
 A2. Files added with the same name, timestamp, and md5 will have a single 
 copy in distributed cache.
 A2 has a bigger benefit in sharing but may raise a question on when Hive 
 should clean it up in hdfs.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-5958) SQL std auth - authorize statements that work with paths

2014-02-16 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5958:


Attachment: HIVE-5958.3.patch

HIVE-5958.3.patch - patch with q.out file updates

 SQL std auth - authorize statements that work with paths
 

 Key: HIVE-5958
 URL: https://issues.apache.org/jira/browse/HIVE-5958
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5958.1.patch, HIVE-5958.2.patch, HIVE-5958.3.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 Statement such as create table, alter table that specify an path uri should 
 be allowed under the new authorization scheme only if URI(Path) specified has 
 permissions including read/write and ownership of the file/dir and its 
 children.
 Also, fix issue of database not getting set as output for create-table.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-5958) SQL std auth - authorize statements that work with paths

2014-02-16 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5958:


Attachment: (was: HIVE-5958.3.patch)

 SQL std auth - authorize statements that work with paths
 

 Key: HIVE-5958
 URL: https://issues.apache.org/jira/browse/HIVE-5958
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5958.1.patch, HIVE-5958.2.patch, HIVE-5958.3.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 Statement such as create table, alter table that specify an path uri should 
 be allowed under the new authorization scheme only if URI(Path) specified has 
 permissions including read/write and ownership of the file/dir and its 
 children.
 Also, fix issue of database not getting set as output for create-table.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-5922) In orc.InStream.CompressedStream, the desired position passed to seek can equal offsets[i] + bytes[i].remaining() when ORC predicate pushdown is enabled

2014-02-16 Thread Puneet Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13902956#comment-13902956
 ] 

Puneet Gupta commented on HIVE-5922:


From what I know, 0.12.0 does not have vectorization support, so that cannot be
the issue. Also, this happens only on seeking while predicate push-down is
enabled; normal iteration is fine.
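
To illustrate the boundary condition from the title (purely a sketch with made-up
names, not the ORC code itself):

{code}
// Hedged illustration: a desired seek position can equal
// offsets[i] + bytes[i].remaining(), i.e. the exclusive end of range i. A lookup
// that only accepts strictly-inside positions rejects it, and the caller then
// reports "Seek outside of data in compressed stream".
import java.nio.ByteBuffer;

public final class RangeSeekSketch {
  private RangeSeekSketch() {
  }

  public static int findRange(long[] offsets, ByteBuffer[] bytes, long desired,
      boolean acceptRangeEnd) {
    for (int i = 0; i < offsets.length; i++) {
      long end = offsets[i] + bytes[i].remaining();
      boolean inside = desired >= offsets[i]
          && (acceptRangeEnd ? desired <= end : desired < end);
      if (inside) {
        return i; // range that should serve the seek
      }
    }
    return -1; // no range found: this is where the reported failure surfaces
  }
}
{code}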

 In orc.InStream.CompressedStream, the desired position passed to seek can 
 equal offsets[i] + bytes[i].remaining() when ORC predicate pushdown is enabled
 

 Key: HIVE-5922
 URL: https://issues.apache.org/jira/browse/HIVE-5922
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Reporter: Yin Huai

 Two stack traces ...
 {code}
 java.io.IOException: IO error in map input file 
 hdfs://10.38.55.204:8020/user/hive/warehouse/ssdb_bin_compress_orc_large_0_13.db/cycle/04_0
   at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:236)
   at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:210)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
   at org.apache.hadoop.mapred.Child.main(Child.java:249)
 Caused by: java.io.IOException: java.io.IOException: Seek outside of data in 
 compressed stream Stream for column 9 kind DATA position: 21496054 length: 
 33790900 range: 2 offset: 1048588 limit: 1048588 range 0 = 13893791 to 
 1048588;  range 1 = 17039555 to 1310735;  range 2 = 20447466 to 1048588;  
 range 3 = 23855377 to 1048588;  range 4 = 27263288 to 1048588;  range 5 = 
 30409052 to 1310735 uncompressed: 262144 to 262144 to 21496054
   at 
 org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
   at 
 org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
   at 
 org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:276)
   at 
 org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:79)
   at 
 org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:33)
   at 
 org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
   at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:230)
   ... 9 more
 Caused by: java.io.IOException: Seek outside of data in compressed stream 
 Stream for column 9 kind DATA position: 21496054 length: 33790900 range: 2 
 offset: 1048588 limit: 1048588 range 0 = 13893791 to 1048588;  range 1 = 
 17039555 to 1310735;  range 2 = 20447466 to 1048588;  range 3 = 23855377 to 
 1048588;  range 4 = 27263288 to 1048588;  range 5 = 30409052 to 1310735 
 uncompressed: 262144 to 262144 to 21496054
   at 
 org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.seek(InStream.java:328)
   at 
 org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.readHeader(InStream.java:161)
   at 
 org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.read(InStream.java:205)
   at 
 org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readInts(SerializationUtils.java:450)
   at 
 org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readDirectValues(RunLengthIntegerReaderV2.java:240)
   at 
 org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readValues(RunLengthIntegerReaderV2.java:53)
   at 
 org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.next(RunLengthIntegerReaderV2.java:288)
   at 
 org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$IntTreeReader.next(RecordReaderImpl.java:510)
   at 
 org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.next(RecordReaderImpl.java:1581)
   at 
 org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:2707)
   at 
 org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:110)
   at 
 org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:86)
   at 
 org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
   ... 13 more
 {code}
 {code}
 java.io.IOException: 

[jira] [Updated] (HIVE-5958) SQL std auth - authorize statements that work with paths

2014-02-16 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5958:


Attachment: HIVE-5958.3.patch

 SQL std auth - authorize statements that work with paths
 

 Key: HIVE-5958
 URL: https://issues.apache.org/jira/browse/HIVE-5958
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5958.1.patch, HIVE-5958.2.patch, HIVE-5958.3.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 Statement such as create table, alter table that specify an path uri should 
 be allowed under the new authorization scheme only if URI(Path) specified has 
 permissions including read/write and ownership of the file/dir and its 
 children.
 Also, fix issue of database not getting set as output for create-table.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Review Request 18122: Support more generic way of using composite key for HBaseHandler

2014-02-16 Thread Navis Ryu


 On Feb. 14, 2014, 4:35 p.m., Swarnim Kulkarni wrote:
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java, 
  line 106
  https://reviews.apache.org/r/18122/diff/1/?file=485256#file485256line106
 
  Javadoc on this factory class would be very helpful for consumers.

sure


 On Feb. 14, 2014, 4:35 p.m., Swarnim Kulkarni wrote:
  serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObjectBase.java, line 
  21
  https://reviews.apache.org/r/18122/diff/1/?file=485270#file485270line21
 
  Are we not breaking our consumers with this non-passive change? 
  
  If we want to go this route, may be we should deprecate out the 
  existing abstract class.

LazyObjectBase is an internal class and users are not supposed to use it directly.
We could create another interface, but that felt like a bit of a waste to me.


 On Feb. 14, 2014, 4:35 p.m., Swarnim Kulkarni wrote:
  serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java, line 182
  https://reviews.apache.org/r/18122/diff/1/?file=485271#file485271line182
 
  Nit: Could change this to SerDeException to catch that specific checked 
  exception

ok.


- Navis


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18122/#review34498
---


On Feb. 14, 2014, 3:19 p.m., Swarnim Kulkarni wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/18122/
 ---
 
 (Updated Feb. 14, 2014, 3:19 p.m.)
 
 
 Review request for hive, Brock Noland, Navis Ryu, and Swarnim Kulkarni.
 
 
 Bugs: HIVE-6411
 https://issues.apache.org/jira/browse/HIVE-6411
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Refer to description on HIVE-6411.
 
 
 Diffs
 -
 
   hbase-handler/pom.xml 7c3524c 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java 
 5008f15 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseKeyFactory.java 
 PRE-CREATION 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseLazyObjectFactory.java
  PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java 2cd65cb 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java 
 8cd594b 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/LazyHBaseRow.java 
 fc40195 
   
 hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory.java 
 PRE-CREATION 
   hbase-handler/src/test/queries/positive/hbase_custom_key.q PRE-CREATION 
   hbase-handler/src/test/results/positive/hbase_custom_key.q.out PRE-CREATION 
   itests/util/pom.xml 9885c53 
   serde/src/java/org/apache/hadoop/hive/serde2/StructObject.java PRE-CREATION 
   serde/src/java/org/apache/hadoop/hive/serde2/StructObjectBaseInspector.java 
 PRE-CREATION 
   
 serde/src/java/org/apache/hadoop/hive/serde2/columnar/ColumnarStructBase.java 
 1fd6853 
   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObject.java 10f4c05 
   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObjectBase.java 
 3334dff 
   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java 8a1ea46 
   
 serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/LazySimpleStructObjectInspector.java
  8a5386a 
   
 serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryObject.java 
 598683f 
   
 serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryStruct.java 
 caf3517 
 
 Diff: https://reviews.apache.org/r/18122/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Swarnim Kulkarni
 




Review Request 18179: Support more generic way of using composite key for HBaseHandler

2014-02-16 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18179/
---

Review request for hive.


Bugs: HIVE-6411
https://issues.apache.org/jira/browse/HIVE-6411


Repository: hive-git


Description
---

HIVE-2599 introduced using a custom object for the row key, but it forces key
objects to extend HBaseCompositeKey, which is in turn an extension of LazyStruct.
If the user provides a proper Object and ObjectInspector (OI), we can replace the
internal key and keyOI with those.

The initial implementation is based on a factory interface:
{code}
public interface HBaseKeyFactory {
  void init(SerDeParameters parameters, Properties properties) throws 
SerDeException;
  ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
  LazyObjectBase createObject(ObjectInspector inspector) throws SerDeException;
}
{code}
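
As an example of how a storage handler could pick up a user-supplied factory, a
rough sketch (the table property name and the no-arg-constructor convention are
assumptions for illustration, not the final patch):

{code}
// Hedged sketch only: resolve a pluggable HBaseKeyFactory implementation named
// in a table property.
import java.util.Properties;

public final class KeyFactoryResolver {
  private KeyFactoryResolver() {
  }

  public static HBaseKeyFactory resolve(Properties tableProperties) throws Exception {
    // The property name here is an assumption for illustration only.
    String className = tableProperties.getProperty("hbase.composite.key.factory");
    if (className == null || className.trim().isEmpty()) {
      return null; // caller falls back to the existing HBaseCompositeKey behaviour
    }
    // The configured factory is loaded by name and must expose a no-arg constructor.
    return Class.forName(className).asSubclass(HBaseKeyFactory.class).newInstance();
  }
}
{code}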


Diffs
-

  hbase-handler/pom.xml 7c3524c 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java 
5008f15 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseKeyFactory.java 
PRE-CREATION 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseLazyObjectFactory.java 
PRE-CREATION 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java 2cd65cb 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java 
8cd594b 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/LazyHBaseRow.java fc40195 
  hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory.java 
PRE-CREATION 
  hbase-handler/src/test/queries/positive/hbase_custom_key.q PRE-CREATION 
  hbase-handler/src/test/results/positive/hbase_custom_key.q.out PRE-CREATION 
  itests/util/pom.xml 9885c53 
  serde/src/java/org/apache/hadoop/hive/serde2/StructObject.java PRE-CREATION 
  serde/src/java/org/apache/hadoop/hive/serde2/StructObjectBaseInspector.java 
PRE-CREATION 
  serde/src/java/org/apache/hadoop/hive/serde2/columnar/ColumnarStructBase.java 
1fd6853 
  serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObject.java 10f4c05 
  serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObjectBase.java 3334dff 
  serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java 8a1ea46 
  
serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/LazySimpleStructObjectInspector.java
 8a5386a 
  serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryObject.java 
598683f 
  serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryStruct.java 
caf3517 

Diff: https://reviews.apache.org/r/18179/diff/


Testing
---


Thanks,

Navis Ryu



Re: Review Request 18168: SQL std auth - authorize statements that work with paths

2014-02-16 Thread Thejas Nair


 On Feb. 17, 2014, 4:23 a.m., Brock Noland wrote:
  common/src/java/org/apache/hadoop/hive/common/FileUtils.java, line 350
  https://reviews.apache.org/r/18168/diff/2/?file=487389#file487389line350
 
  since we return in both the true and false case, this should just be:
  
  return permissions.getGroupAction().implies(action);

good point, not sure how that if-else came to be!


 On Feb. 17, 2014, 4:23 a.m., Brock Noland wrote:
  ql/src/java/org/apache/hadoop/hive/ql/security/SessionStateConfigUserAuthenticator.java,
   line 48
  https://reviews.apache.org/r/18168/diff/2/?file=487398#file487398line48
 
  why use negation in an if with an else? That makes the code confusing.
  
  It should be
  
  if (newUserName == null || newUserName.trim().isEmpty()) {
return System...
  } else {
return newUserName;
  }
  
  or even cleaner:
  
  String newUserName = ... get(user.name, ).trim();
  if(newUserName.isEmpty()) 
  ...

I guess I tend to think of the if as checking for the expected case (i.e. not null
etc.), and the else as taking care of the invalid/unexpected case, but that is
probably just me. Thanks for the suggestion to use get("user.name", ""). That's
much cleaner.


- Thejas


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18168/#review34608
---


On Feb. 17, 2014, 4:11 a.m., Thejas Nair wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/18168/
 ---
 
 (Updated Feb. 17, 2014, 4:11 a.m.)
 
 
 Review request for hive and Ashutosh Chauhan.
 
 
 Bugs: HIVE-5958
 https://issues.apache.org/jira/browse/HIVE-5958
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Statement such as create table, alter table that specify an path uri should 
 be allowed under the new authorization scheme only if URI(Path) specified has 
 permissions including read/write and ownership of the file/dir and its 
 children.
 Also, fix issue of database not getting set as output for create-table.
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/common/FileUtils.java c1f8842 
   ql/src/java/org/apache/hadoop/hive/ql/Driver.java 83d5bfc 
   ql/src/java/org/apache/hadoop/hive/ql/hooks/ReadEntity.java c9a 
   ql/src/java/org/apache/hadoop/hive/ql/hooks/WriteEntity.java 0493302 
   ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 0b7c128 
   ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
 1f539ef 
   ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java 
 a22a15f 
   ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 77388dd 
   ql/src/java/org/apache/hadoop/hive/ql/plan/HiveOperation.java 93c89de 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/SessionStateConfigUserAuthenticator.java
  812105c 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java
  fae6844 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/RequiredPrivileges.java
  10a582b 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLAuthorizationUtils.java
  4a9149f 
   
 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java
  40461f7 
   ql/src/test/queries/clientnegative/authorization_addpartition.q 64d8a3d 
   ql/src/test/queries/clientnegative/authorization_droppartition.q 45ed99b 
   ql/src/test/queries/clientnegative/authorization_uri_add_partition.q 
 PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_alterpart_loc.q 
 PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_altertab_setloc.q 
 PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_create_table1.q 
 PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_create_table_ext.q 
 PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_createdb.q 
 PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_index.q PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_insert.q PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_insert_local.q 
 PRE-CREATION 
   ql/src/test/queries/clientnegative/authorization_uri_load_data.q 
 PRE-CREATION 
   ql/src/test/results/clientnegative/authorization_addpartition.q.out f4d3b4f 
   ql/src/test/results/clientnegative/authorization_createview.q.out cb81b83 
   ql/src/test/results/clientnegative/authorization_ctas.q.out 1070468 
   ql/src/test/results/clientnegative/authorization_droppartition.q.out 
 7de553b 
   ql/src/test/results/clientnegative/authorization_fail_1.q.out ab1abe2 
   

[jira] [Updated] (HIVE-5958) SQL std auth - authorize statements that work with paths

2014-02-16 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5958:


Attachment: HIVE-5958.4.patch

HIVE-5958.4.patch - addressing Brock's review comments


 SQL std auth - authorize statements that work with paths
 

 Key: HIVE-5958
 URL: https://issues.apache.org/jira/browse/HIVE-5958
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5958.1.patch, HIVE-5958.2.patch, HIVE-5958.3.patch, 
 HIVE-5958.4.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 Statement such as create table, alter table that specify an path uri should 
 be allowed under the new authorization scheme only if URI(Path) specified has 
 permissions including read/write and ownership of the file/dir and its 
 children.
 Also, fix issue of database not getting set as output for create-table.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-5958) SQL std auth - authorize statements that work with paths

2014-02-16 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5958:


Attachment: (was: HIVE-5958.4.patch)

 SQL std auth - authorize statements that work with paths
 

 Key: HIVE-5958
 URL: https://issues.apache.org/jira/browse/HIVE-5958
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5958.1.patch, HIVE-5958.2.patch, HIVE-5958.3.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 Statement such as create table, alter table that specify an path uri should 
 be allowed under the new authorization scheme only if URI(Path) specified has 
 permissions including read/write and ownership of the file/dir and its 
 children.
 Also, fix issue of database not getting set as output for create-table.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: hive precommit tests on bigtop jenkins

2014-02-16 Thread Navis류승우
bq. even if a JIRA is in the queue twice it will only be tested once.
Good to know!

bq. removing order-by clauses just for conforming purpose (my comment)

I've tested it in https://issues.apache.org/jira/browse/HIVE-6438, making
556 sec -> 418 sec for join_filters.q. Would it be worthwhile to rewrite
and update so many tests/results?



2014-02-14 15:58 GMT+09:00 Brock Noland br...@cloudera.com:

 Hi,

 The pre-commit tests:

 1) only test the latest attachment
 2) post the attachment id to the JIRA
 3) Verify the attachment id has not been tested before running

 This means that even if a JIRA is in the queue twice it will only be tested
 once.

 Below are relevant portions of the script:

 curl -s -S --location --retry 3 ${JIRA_ROOT_URL}/jira/browse/${JIRA_NAME} > $JIRA_TEXT
 ...
 PATCH_URL=$(grep -o '/jira/secure/attachment/[0-9]*/[^"]*' $JIRA_TEXT | \
   grep -v -e 'htm[l]*$' | sort | tail -1 | \
   grep -o '/jira/secure/attachment/[0-9]*/[^"]*')
 ...
 # ensure attachment has not already been tested
 ATTACHMENT_ID=$(basename $(dirname $PATCH_URL))
 if grep -q "ATTACHMENT ID: $ATTACHMENT_ID" $JIRA_TEXT
 then
   echo "Attachment $ATTACHMENT_ID is already tested for $JIRA_NAME"
   exit 1
 fi





 On Fri, Feb 14, 2014 at 12:51 AM, Navis류승우 navis@nexr.com wrote:

  Recently, precommit test takes more than 1 day (including queue time).
 
  Deduping work queue (currently, HIVE-6403 and HIVE-6418 is queued twice)
  can make this better. Rewriting some test queries simpler (I'm thinking
 of
  removing order-by clauses just for conforming purpose). Any other ideas?
 
 
  2014-02-14 6:46 GMT+09:00 Thejas Nair the...@hortonworks.com:
 
   I see a new job now running there. Maybe there is nothing wrong with
 the
   infra and builds actually finished (except for the 3 aborted ones).
   Can't complain about a shorter queue ! :)
  
  
  
   On Thu, Feb 13, 2014 at 1:30 PM, Thejas Nair the...@hortonworks.com
   wrote:
  
Is the jenkins infra used for hive precommit tests under maintenance
   ? I
see that the long queue has suddenly disappeared. The last few test
   builds
have been aborted.
   
The jenkins used for hive precommit tests -
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/
   
Thanks,
Thejas
   
   
   
   
  
   --
   CONFIDENTIALITY NOTICE
   NOTICE: This message is intended for the use of the individual or
 entity
  to
   which it is addressed and may contain information that is confidential,
   privileged and exempt from disclosure under applicable law. If the
 reader
   of this message is not the intended recipient, you are hereby notified
  that
   any printing, copying, dissemination, distribution, disclosure or
   forwarding of this communication is strictly prohibited. If you have
   received this communication in error, please contact the sender
  immediately
   and delete it from your system. Thank You.
  
 



 --
 Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org



[jira] [Updated] (HIVE-5958) SQL std auth - authorize statements that work with paths

2014-02-16 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5958:


Status: Patch Available  (was: Open)

 SQL std auth - authorize statements that work with paths
 

 Key: HIVE-5958
 URL: https://issues.apache.org/jira/browse/HIVE-5958
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5958.1.patch, HIVE-5958.2.patch, HIVE-5958.3.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 Statement such as create table, alter table that specify an path uri should 
 be allowed under the new authorization scheme only if URI(Path) specified has 
 permissions including read/write and ownership of the file/dir and its 
 children.
 Also, fix issue of database not getting set as output for create-table.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: hive precommit tests on bigtop jenkins

2014-02-16 Thread Brock Noland
On Sun, Feb 16, 2014 at 11:11 PM, Navis류승우 navis@nexr.com wrote:
 bq. even if a JIRA is in the queue twice it will only be tested once.
 Good to know!

 bq. removing order-by clauses just for conforming purpose (my comment)

 I've tested it in https://issues.apache.org/jira/browse/HIVE-6438, making
 556 sec - 418 sec for join_filters.q. Would it be worthwhile to rewrite
 and update so many tests/results?

Faster is always better :)  I'll look at 6438 tomorrow.




 2014-02-14 15:58 GMT+09:00 Brock Noland br...@cloudera.com:

 Hi,

 The pre-commit tests:

 1) only test the latest attachment
 2) post the attachment id to the JIRA
 3) Verify the attachment id has not been tested before running

 This means that even if a JIRA is in the queue twice it will only be tested
 once.

 Below are relevant portions of the script:

 curl -s -S --location --retry 3 ${JIRA_ROOT_URL}/jira/browse/${JIRA_NAME} > $JIRA_TEXT
 ...
 PATCH_URL=$(grep -o '/jira/secure/attachment/[0-9]*/[^"]*' $JIRA_TEXT | \
   grep -v -e 'htm[l]*$' | sort | tail -1 | \
   grep -o '/jira/secure/attachment/[0-9]*/[^"]*')
 ...
 # ensure attachment has not already been tested
 ATTACHMENT_ID=$(basename $(dirname $PATCH_URL))
 if grep -q "ATTACHMENT ID: $ATTACHMENT_ID" $JIRA_TEXT
 then
   echo "Attachment $ATTACHMENT_ID is already tested for $JIRA_NAME"
   exit 1
 fi





 On Fri, Feb 14, 2014 at 12:51 AM, Navis류승우 navis@nexr.com wrote:

  Recently, a precommit test takes more than 1 day (including queue time).
 
  Deduping the work queue (currently HIVE-6403 and HIVE-6418 are queued twice)
  can make this better. Rewriting some test queries to be simpler could also help
  (I'm thinking of removing order-by clauses that are there just to make results
  deterministic). Any other ideas?
 
 
  2014-02-14 6:46 GMT+09:00 Thejas Nair the...@hortonworks.com:
 
   I see a new job now running there. Maybe there is nothing wrong with
 the
   infra and builds actually finished (except for the 3 aborted ones).
   Can't complain about a shorter queue ! :)
  
  
  
   On Thu, Feb 13, 2014 at 1:30 PM, Thejas Nair the...@hortonworks.com
   wrote:
  
Is the jenkins infra used for hive precommit tests under maintenance
   ? I
see that the long queue has suddenly disappeared. The last few test
   builds
have been aborted.
   
The jenkins used for hive precommit tests -
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/
   
Thanks,
Thejas
   
   
   
   
  
  
 



 --
 Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org




-- 
Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org


[jira] [Updated] (HIVE-6386) sql std auth - database should have an owner

2014-02-16 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6386:


Fix Version/s: 0.13.0

 sql std auth - database should have an owner
 

 Key: HIVE-6386
 URL: https://issues.apache.org/jira/browse/HIVE-6386
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization, Metastore
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Fix For: 0.13.0

 Attachments: HIVE-6386.1.patch, HIVE-6386.2.patch, HIVE-6386.3.patch, 
 HIVE-6386.4.patch, HIVE-6386.patch


 Database in the metastore does not have an owner associated with it. A database 
 owner is needed for SQL std authorization rules.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6386) sql std auth - database should have an owner

2014-02-16 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6386:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch committed to trunk. Thanks Ashutosh!


 sql std auth - database should have an owner
 

 Key: HIVE-6386
 URL: https://issues.apache.org/jira/browse/HIVE-6386
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization, Metastore
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6386.1.patch, HIVE-6386.2.patch, HIVE-6386.3.patch, 
 HIVE-6386.4.patch, HIVE-6386.patch


 Database in the metastore does not have an owner associated with it. A database 
 owner is needed for SQL std authorization rules.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-5958) SQL std auth - authorize statements that work with paths

2014-02-16 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5958:


Attachment: HIVE-5958.4.patch

 SQL std auth - authorize statements that work with paths
 

 Key: HIVE-5958
 URL: https://issues.apache.org/jira/browse/HIVE-5958
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5958.1.patch, HIVE-5958.2.patch, HIVE-5958.3.patch, 
 HIVE-5958.4.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 Statements such as create table and alter table that specify a path URI should 
 be allowed under the new authorization scheme only if the user has the required 
 permissions on the specified URI (path), including read/write access and 
 ownership of the file/dir and its children.
 Also, fix the issue of the database not getting set as an output for create-table.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: hive precommit tests on bigtop jenkins

2014-02-16 Thread Thejas Nair
In theory, if there are 10 pending patches, we would combine the 10 into
one patch (or find a group that applies together) and then run the tests
once. Assuming the failures are only a fraction of these tests, the
failed tests can then be run with each of the patches one at a time. If
the number of failures is very small, we could even use a binary-search
style method to find the patch that caused the failure (see the sketch
below).
(More heuristics can be added, such as first running a smoke-test suite
before including a patch in a group.)
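
A minimal sketch of that binary-search step, assuming only that the failed tests
can be re-run against the first n patches applied together. None of this is
existing ptest code; the class, the CombinedRun callback and firstBadPatch() are
hypothetical names used for illustration.

public class PatchBisect {

  /** Callback that re-runs the failed tests with the first n patches applied together. */
  interface CombinedRun {
    boolean failsWithFirst(int n) throws Exception;
  }

  /**
   * Binary search for the first patch whose inclusion makes the combined run fail,
   * assuming the failure is monotone in the prefix of applied patches.
   */
  static int firstBadPatch(int patchCount, CombinedRun run) throws Exception {
    int lo = 1, hi = patchCount;      // run.failsWithFirst(patchCount) is known to be true
    while (lo < hi) {
      int mid = (lo + hi) / 2;
      if (run.failsWithFirst(mid)) {
        hi = mid;                     // offender is within the first mid patches
      } else {
        lo = mid + 1;                 // first mid patches are clean; look later
      }
    }
    return lo;                        // 1-based index of the offending patch
  }
}

With 10 pending patches this is at most 4 combined runs per failing test instead
of re-running against each patch individually.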




On Sun, Feb 16, 2014 at 9:16 PM, Brock Noland br...@cloudera.com wrote:

 On Sun, Feb 16, 2014 at 11:11 PM, Navis류승우 navis@nexr.com wrote:
  bq. even if a JIRA is in the queue twice it will only be tested once.
  Good to know!
 
  bq. removing order-by clauses just for conforming purpose (my comment)
 
  I've tested it in https://issues.apache.org/jira/browse/HIVE-6438, bringing
  join_filters.q from 556 sec down to 418 sec. Would it be worthwhile to rewrite
  and update so many tests/results?

 Faster is always better :)  I'll look at 6438 tomorrow.

 
 
 
  2014-02-14 15:58 GMT+09:00 Brock Noland br...@cloudera.com:
 
  Hi,
 
  The pre-commit tests:
 
  1) only test the latest attachment
  2) post the attachment id to the JIRA
  3) Verify the attachment id has not been tested before running
 
  This means that even if a JIRA is in the queue twice it will only be
 tested
  once.
 
  Below are relevant portions of the script:
 
  curl -s -S --location --retry 3 "${JIRA_ROOT_URL}/jira/browse/${JIRA_NAME}" > $JIRA_TEXT
  ...
  PATCH_URL=$(grep -o '/jira/secure/attachment/[0-9]*/[^"]*' $JIRA_TEXT | \
    grep -v -e 'htm[l]*$' | sort | tail -1 | \
    grep -o '/jira/secure/attachment/[0-9]*/[^"]*')
  ...
  # ensure attachment has not already been tested
  ATTACHMENT_ID=$(basename $(dirname $PATCH_URL))
  if grep -q "ATTACHMENT ID: $ATTACHMENT_ID" $JIRA_TEXT
  then
    echo "Attachment $ATTACHMENT_ID is already tested for $JIRA_NAME"
    exit 1
  fi
 
 
 
 
 
  On Fri, Feb 14, 2014 at 12:51 AM, Navis류승우 navis@nexr.com wrote:
 
   Recently, a precommit test takes more than 1 day (including queue time).
  
   Deduping the work queue (currently HIVE-6403 and HIVE-6418 are queued twice)
   can make this better. Rewriting some test queries to be simpler could also help
   (I'm thinking of removing order-by clauses that are there just to make results
   deterministic). Any other ideas?
  
  
   2014-02-14 6:46 GMT+09:00 Thejas Nair the...@hortonworks.com:
  
I see a new job now running there. Maybe there is nothing wrong with
  the
infra and builds actually finished (except for the 3 aborted ones).
Can't complain about a shorter queue ! :)
   
   
   
On Thu, Feb 13, 2014 at 1:30 PM, Thejas Nair 
 the...@hortonworks.com
wrote:
   
 Is the jenkins infra used for hive precommit tests under
 maintenance
? I
 see that the long queue has suddenly disappeared. The last few
 test
builds
 have been aborted.

 The jenkins used for hive precommit tests -
 http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/

 Thanks,
 Thejas




   
   
  
 
 
 
  --
  Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org
 



 --
 Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org




Re: Review Request 18172: HIVE-6439 Introduce CBO step in Semantic Analyzer

2014-02-16 Thread Harish Butani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18172/
---

(Updated Feb. 17, 2014, 6:49 a.m.)


Review request for hive.


Changes
---

John,
I agree with Gunther's points; I propose:
- we take out the max joins parameter from HiveConf for now; at least until it 
becomes clearer that this is one of the ways we want to control CBO use.
- we move CostBasedOptimizer to the ql.optimizer package.

Do you agree?


Bugs: HIVE-6439
https://issues.apache.org/jira/browse/HIVE-6439


Repository: hive-git


Description
---

This patch introduces a CBO step in SemanticAnalyzer. For now the 
CostBasedOptimizer is an empty shell. 
The contract between SemAly and CBO is: the CBO step is controlled by 
'hive.enable.cbo.flag'. 
When true, Hive SemAly will hand CBO a Hive Operator tree (with operators 
annotated with stats). If it can, CBO will return a better plan in Hive AST form.
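
As a rough sketch of this contract (the config flag is the one above, but every
class and method name below is a stand-in invented for illustration, not part of
this diff):

public class CboHandOff {

  /** Stand-ins for Hive's real stats-annotated operator tree and AST types. */
  interface OperatorTreeWithStats {}
  interface AstNode {}

  /** The optimizer may return null when it cannot produce a better plan. */
  interface Optimizer {
    AstNode tryOptimize(OperatorTreeWithStats opTree);
  }

  /** Returns the AST that semantic analysis should continue with. */
  static AstNode maybeRunCbo(boolean cboEnabled,           // value of hive.enable.cbo.flag
                             OperatorTreeWithStats opTree, // operator tree annotated with stats
                             AstNode originalAst,
                             Optimizer cbo) {
    if (!cboEnabled) {
      return originalAst;                                  // CBO step skipped entirely
    }
    AstNode rewritten = cbo.tryOptimize(opTree);
    return rewritten != null ? rewritten : originalAst;    // fall back if CBO declines
  }
}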


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java a182cd7 
  conf/hive-default.xml.template 0d08aa2 
  ql/src/java/org/apache/hadoop/hive/ql/QueryProperties.java 1ba5654 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/PreCBOOptimizer.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/optiq/CostBasedOptimizer.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java 52c39c0 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 77388dd 

Diff: https://reviews.apache.org/r/18172/diff/


Testing
---


Thanks,

Harish Butani



Re: Review Request 18172: HIVE-6439 Introduce CBO step in Semantic Analyzer

2014-02-16 Thread John Pullokkaran

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18172/#review34624
---


1. In most query engines, 20-way joins and beyond are hard to reorder. So I would 
imagine that we would need a similar flag (hive.cbo.max.joins.supported) to 
control the length of the join graph that is considered for reordering (a sketch 
of such a guard follows below).
2. It's reasonable to introduce ql.optimizer.CostBasedOptimizer, which then calls 
into the Optiq-based optimizer. 
   One thing to keep in mind: the Optiq-based optimizer would have both rule-based 
and cost-based portions.

- John Pullokkaran
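
Regarding point 1, a guard along these lines is one way such a flag could be
consulted before attempting a reorder. This is a hypothetical sketch, not code
from the patch; hive.cbo.max.joins.supported does not exist yet, and the default
of 20 just echoes the observation above.

import org.apache.hadoop.conf.Configuration;

public class CboJoinGuard {

  /** True when the query's join graph is small enough to hand to the cost-based optimizer. */
  static boolean joinGraphSmallEnough(Configuration conf, int joinCountInQuery) {
    int maxJoins = conf.getInt("hive.cbo.max.joins.supported", 20);
    return joinCountInQuery <= maxJoins;
  }
}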


On Feb. 17, 2014, 6:49 a.m., Harish Butani wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/18172/
 ---
 
 (Updated Feb. 17, 2014, 6:49 a.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6439
 https://issues.apache.org/jira/browse/HIVE-6439
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 This patch introduces a CBO step in SemanticAnalyzer. For now the 
 CostBasedOptimizer is an empty shell. 
 The contract between SemAly and CBO is: the CBO step is controlled by 
 'hive.enable.cbo.flag'. 
 When true, Hive SemAly will hand CBO a Hive Operator tree (with operators 
 annotated with stats). If it can, CBO will return a better plan in Hive AST 
 form.
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java a182cd7 
   conf/hive-default.xml.template 0d08aa2 
   ql/src/java/org/apache/hadoop/hive/ql/QueryProperties.java 1ba5654 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/PreCBOOptimizer.java 
 PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/optimizer/optiq/CostBasedOptimizer.java 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java 52c39c0 
   ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 77388dd 
 
 Diff: https://reviews.apache.org/r/18172/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Harish Butani
 




Re: Review Request 18172: HIVE-6439 Introduce CBO step in Semantic Analyzer

2014-02-16 Thread Harish Butani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18172/
---

(Updated Feb. 17, 2014, 7:35 a.m.)


Review request for hive.


Changes
---

OK, added max.joins to conf; moved CBO to ql.optimizer.


Bugs: HIVE-6439
https://issues.apache.org/jira/browse/HIVE-6439


Repository: hive-git


Description
---

This patch introduces a CBO step in SemanticAnalyzer. For now the 
CostBasedOptimizer is an empty shell. 
The contract between SemAly and CBO is: the CBO step is controlled by 
'hive.enable.cbo.flag'. 
When true, Hive SemAly will hand CBO a Hive Operator tree (with operators 
annotated with stats). If it can, CBO will return a better plan in Hive AST form.


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java a182cd7 
  conf/hive-default.xml.template 0d08aa2 
  ql/src/java/org/apache/hadoop/hive/ql/QueryProperties.java 1ba5654 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/CostBasedOptimizer.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/PreCBOOptimizer.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java 52c39c0 
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 77388dd 

Diff: https://reviews.apache.org/r/18172/diff/


Testing
---


Thanks,

Harish Butani



[jira] [Updated] (HIVE-6439) Introduce CBO step in Semantic Analyzer

2014-02-16 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6439:


Attachment: HIVE-6439.4.patch

 Introduce CBO step in Semantic Analyzer
 ---

 Key: HIVE-6439
 URL: https://issues.apache.org/jira/browse/HIVE-6439
 Project: Hive
  Issue Type: Sub-task
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-6439.1.patch, HIVE-6439.2.patch, HIVE-6439.4.patch


 This patch introduces a CBO step in SemanticAnalyzer. For now the 
 CostBasedOptimizer is an empty shell. 
 The contract between SemAly and CBO is:
 - The CBO step is controlled by 'hive.enable.cbo.flag'. 
 - When true, Hive SemAly will hand CBO a Hive Operator tree (with operators 
 annotated with stats). If it can, CBO will return a better plan in Hive AST 
 form.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)