[jira] [Commented] (HIVE-5252) Add ql syntax for inline java code creation

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793603#comment-13793603
 ] 

Hudson commented on HIVE-5252:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #498 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/498/])
HIVE-5252 - Add ql syntax for inline java code creation (Edward Capriolo via 
Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531549)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/ivy/libraries.properties
* /hive/trunk/ql/ivy.xml
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/processors/CommandProcessorFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/processors/CompileProcessor.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/processors/HiveCommand.java
* /hive/trunk/ql/src/test/queries/clientnegative/compile_processor.q
* /hive/trunk/ql/src/test/queries/clientpositive/compile_processor.q
* /hive/trunk/ql/src/test/results/clientnegative/compile_processor.q.out
* /hive/trunk/ql/src/test/results/clientpositive/compile_processor.q.out


 Add ql syntax for inline java code creation
 ---

 Key: HIVE-5252
 URL: https://issues.apache.org/jira/browse/HIVE-5252
 Project: Hive
  Issue Type: Sub-task
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Fix For: 0.13.0

 Attachments: HIVE-5252.1.patch.txt, HIVE-5252.2.patch.txt


 Something to the effect of compile 'my code here' using 'groovycompiler'.
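 The effect described (compiling an inline source string at runtime) can be sketched with the JDK's built-in javax.tools API. This is a hypothetical illustration only, not the CompileProcessor the patch actually adds:

```java
import javax.tools.*;
import java.net.URI;
import java.util.Arrays;

public class InlineCompile {
    // Wraps an in-memory source string as a JavaFileObject the compiler can read.
    static class StringSource extends SimpleJavaFileObject {
        private final String code;
        StringSource(String className, String code) {
            super(URI.create("string:///" + className + Kind.SOURCE.extension), Kind.SOURCE);
            this.code = code;
        }
        @Override
        public CharSequence getCharContent(boolean ignoreEncodingErrors) {
            return code;
        }
    }

    // Compiles the given source string; returns true on success.
    static boolean compile(String className, String source) {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        JavaCompiler.CompilationTask task = compiler.getTask(
                null, null, null,
                Arrays.asList("-d", System.getProperty("java.io.tmpdir")),
                null,
                Arrays.asList(new StringSource(className, source)));
        return task.call();
    }

    public static void main(String[] args) {
        System.out.println(compile("HelloUdf",
                "public class HelloUdf { public String evaluate() { return \"hi\"; } }"));
    }
}
```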



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5512) metastore filter pushdown should support between

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793602#comment-13793602
 ] 

Hudson commented on HIVE-5512:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #498 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/498/])
HIVE-5512 : metastore filter pushdown should support between (Sergey Shelukhin 
via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531555)
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/parser/Filter.g
* /hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/metastore/TestMetastoreExpr.java
* /hive/trunk/ql/src/test/queries/clientpositive/filter_numeric.q
* /hive/trunk/ql/src/test/results/clientpositive/filter_numeric.q.out


 metastore filter pushdown should support between
 --

 Key: HIVE-5512
 URL: https://issues.apache.org/jira/browse/HIVE-5512
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.13.0

 Attachments: HIVE-5512.01.patch


 Currently, metastore filter pushdown supports compare operators, "and", and 
 "or". BETWEEN is just >= and <=, so it should be easy to add through changes 
 to Filter.g or even a client-side modification in the partition pruner.
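 The rewrite described above is mechanical; a minimal sketch of the equivalence (helper name is hypothetical):

```java
public class BetweenDesugar {
    // col BETWEEN lo AND hi is equivalent to col >= lo AND col <= hi,
    // which the metastore filter grammar already supports.
    static boolean between(long col, long lo, long hi) {
        return col >= lo && col <= hi;
    }

    public static void main(String[] args) {
        System.out.println(between(5, 1, 10));  // true
        System.out.println(between(0, 1, 10));  // false
    }
}
```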





[jira] [Commented] (HIVE-5513) Set the short version directly via build script

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793601#comment-13793601
 ] 

Hudson commented on HIVE-5513:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #498 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/498/])
HIVE-5513 - Set the short version directly via build script (Prasad Mujumdar 
via Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531550)
* /hive/trunk/build.properties
* /hive/trunk/common/build.xml


 Set the short version directly via build script
 ---

 Key: HIVE-5513
 URL: https://issues.apache.org/jira/browse/HIVE-5513
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure, Diagnosability
Affects Versions: 0.13.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-5513.1.patch


 This is a followup to HIVE-5484. The short version should be configurable 
 directly from the build script.





[jira] [Commented] (HIVE-4662) first_value can't have more than one order by column

2013-10-13 Thread N Campbell (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793651#comment-13793651
 ] 

N Campbell commented on HIVE-4662:
--

That is a fairly significant limitation.

 first_value can't have more than one order by column
 

 Key: HIVE-4662
 URL: https://issues.apache.org/jira/browse/HIVE-4662
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.11.0
Reporter: Frans Drijver

 In the current implementation of the first_value function, it's not allowed 
 to have more than one (1) order-by column, like so:
 {quote}
 select distinct 
 first_value(kastr.DEWNKNR) over ( partition by kastr.DEKTRNR order by 
 kastr.DETRADT, kastr.DEVPDNR )
 from RTAVP_DRKASTR kastr
 ;
 {quote}
 Error given:
 {quote}
 FAILED: SemanticException Range based Window Frame can have only 1 Sort Key
 {quote}





[jira] [Commented] (HIVE-4662) first_value can't have more than one order by column

2013-10-13 Thread N Campbell (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793652#comment-13793652
 ] 

N Campbell commented on HIVE-4662:
--

Not just first_value:

select c1, c2, sum ( c3 ) over ( partition by c1 order by c2, c3 ) from t

 first_value can't have more than one order by column
 

 Key: HIVE-4662
 URL: https://issues.apache.org/jira/browse/HIVE-4662
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.11.0
Reporter: Frans Drijver

 In the current implementation of the first_value function, it's not allowed 
 to have more than one (1) order-by column, like so:
 {quote}
 select distinct 
 first_value(kastr.DEWNKNR) over ( partition by kastr.DEKTRNR order by 
 kastr.DETRADT, kastr.DEVPDNR )
 from RTAVP_DRKASTR kastr
 ;
 {quote}
 Error given:
 {quote}
 FAILED: SemanticException Range based Window Frame can have only 1 Sort Key
 {quote}





[jira] [Commented] (HIVE-3745) Hive does improper = based string comparisons for strings with trailing whitespaces

2013-10-13 Thread N Campbell (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793658#comment-13793658
 ] 

N Campbell commented on HIVE-3745:
--

The ISO SQL standard is very clear about what a vendor may choose to do 
regarding blank-padding semantics, and similarly about how operations such as 
min, max, and distinct operate on a variable-length character type. Anyone 
comparing Hive to another RDBMS needs to check what ISO states, where it 
allows vendor implementation, and what a given vendor claims.

For example, if you were to use Postgres and other vendors derived from it, 
you would find various differences with respect to:

length( char(n) ) vs varchar(n)
group by
min
distinct/union

To some users trailing spaces are of no interest, and they may assume that one 
general string type will 'ignore' spaces. Others may state to their business 
application that trailing spaces are significant. That is distinct from what a 
given standard states, or what a vendor chooses to implement irrespective of 
any given standard.

Either way, it would help if the Hive QL documentation stated the intent of a 
given construct/feature.



 Hive does improper = based string comparisons for strings with trailing 
 whitespaces
 -

 Key: HIVE-3745
 URL: https://issues.apache.org/jira/browse/HIVE-3745
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.9.0
Reporter: Harsh J
Assignee: Kevin Wilfong

 Compared to other systems such as DB2, MySQL, etc., which disregard trailing 
 whitespace when comparing two strings with the {{=}} relational operator, 
 Hive does not do this.
 For example, note the following line from the MySQL manual: 
 http://dev.mysql.com/doc/refman/5.1/en/char.html
 {quote}
 All MySQL collations are of type PADSPACE. This means that all CHAR and 
 VARCHAR values in MySQL are compared without regard to any trailing spaces. 
 {quote}
 Hive is still whitespace-sensitive and treats trailing spaces as significant 
 when comparing. Ideally {{LIKE}} should consider them significant, but {{=}} 
 should not.
 Is there a specific reason behind this difference of implementation in Hive's 
 SQL?
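 The difference can be demonstrated in plain Java. This is a sketch; `padSpaceEquals` is a hypothetical name emulating PADSPACE-style semantics, not a Hive or MySQL API:

```java
public class PadSpaceCompare {
    // Strips only trailing spaces, as PADSPACE collations effectively do
    // before comparing two values.
    static String rtrim(String s) {
        int end = s.length();
        while (end > 0 && s.charAt(end - 1) == ' ') {
            end--;
        }
        return s.substring(0, end);
    }

    static boolean padSpaceEquals(String a, String b) {
        return rtrim(a).equals(rtrim(b));
    }

    public static void main(String[] args) {
        System.out.println("abc".equals("abc "));          // false: strict, Hive-style equality
        System.out.println(padSpaceEquals("abc", "abc ")); // true: PADSPACE-style equality
    }
}
```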





[jira] [Updated] (HIVE-5474) drop table hangs when concurrency=true

2013-10-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5474:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Jason!

 drop table hangs when concurrency=true
 --

 Key: HIVE-5474
 URL: https://issues.apache.org/jira/browse/HIVE-5474
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, Locking
Reporter: Thejas M Nair
Assignee: Jason Dere
 Fix For: 0.13.0

 Attachments: HIVE-5474.1.patch, HIVE-5474.2.patch


 This is seen in hive 0.12 branch sequential test run. 
 TestThriftHttpCLIService.testExecuteStatement
 https://builds.apache.org/job/Hive-branch-0.12-hadoop1/13/testReport/org.apache.hive.service.cli.thrift/TestThriftHttpCLIService/testExecuteStatement/
 stderr has "FAILED: Error in acquiring locks: Locks on the underlying 
 objects cannot be acquired. retry after some time"





[jira] [Updated] (HIVE-5496) hcat -e "drop database if exists" fails on authorizing non-existent null db

2013-10-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5496:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Sushanth!

 hcat -e "drop database if exists" fails on authorizing non-existent null db
 ---

 Key: HIVE-5496
 URL: https://issues.apache.org/jira/browse/HIVE-5496
 Project: Hive
  Issue Type: Bug
  Components: Authorization, HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.13.0

 Attachments: HIVE-5496.patch


 When running a "drop database if exists" call on the hcat command line, it 
 fails authorization with an NPE because it tries to authorize access to a 
 null database. This should be changed to not call authorize if the db for 
 the DropDatabaseDesc is null.
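 The fix described amounts to a null guard before the authorize call. A self-contained sketch, where the types are stand-ins rather than Hive's actual classes:

```java
public class DropDbAuthGuard {
    // Stand-in for Hive's DropDatabaseDesc.
    static class DropDatabaseDesc {
        final String databaseName;  // null when the database does not exist
        DropDatabaseDesc(String databaseName) { this.databaseName = databaseName; }
    }

    // Skip authorization entirely when the database resolved to null,
    // instead of passing null through and hitting an NPE.
    static String authorizeDrop(DropDatabaseDesc desc) {
        if (desc.databaseName == null) {
            return "skipped";
        }
        return "authorized:" + desc.databaseName;
    }

    public static void main(String[] args) {
        System.out.println(authorizeDrop(new DropDatabaseDesc(null)));  // skipped
        System.out.println(authorizeDrop(new DropDatabaseDesc("db1"))); // authorized:db1
    }
}
```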





[jira] [Updated] (HIVE-5485) SBAP errors on null partition being passed into partition level authorization

2013-10-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5485:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Sushanth!

 SBAP errors on null partition being passed into partition level authorization
 -

 Key: HIVE-5485
 URL: https://issues.apache.org/jira/browse/HIVE-5485
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.13.0

 Attachments: HIVE-5485.patch


 SBAP causes an NPE when null is passed in as a partition for partition-level 
 or column-level authorization.
 In my opinion this is not an SBAP bug, but incorrect usage of 
 AuthorizationProviders: one should not call the column-level authorize 
 function (given that column-level is more basic than partition-level) and 
 pass in null as the partition value. However, that happens in code 
 introduced by HIVE-1887, and unless we rewrite that (and possibly a whole 
 bunch more, which will need evaluation), we have to accommodate that null 
 and appropriately fall back to table-level authorization in that case.
 The offending code section is in Driver.java:685:
 {code}
  678 // if we reach here, it means it needs to do a table authorization
  679 // check, and the table authorization may already happened because of other
  680 // partitions
  681 if (tbl != null && !tableAuthChecked.contains(tbl.getTableName()) &&
  682     !(tableUsePartLevelAuth.get(tbl.getTableName()) == Boolean.TRUE)) {
  683   List<String> cols = tab2Cols.get(tbl);
  684   if (cols != null && cols.size() > 0) {
  685     ss.getAuthorizer().authorize(tbl, null, cols,
  686         op.getInputRequiredPrivileges(), null);
  687   } else {
  688     ss.getAuthorizer().authorize(tbl, op.getInputRequiredPrivileges(),
  689         null);
  690   }
  691   tableAuthChecked.add(tbl.getTableName());
  692 }
 {code}
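 The fallback described above (degrade to a table-level check when the partition is null) can be sketched with stand-in types; this is a hypothetical illustration, not the actual SBAP code:

```java
public class PartitionAuthFallback {
    static class Table {
        final String name;
        Table(String name) { this.name = name; }
    }

    static class Partition {
        final Table table;
        Partition(Table table) { this.table = table; }
    }

    // If the partition is null, fall back to a table-level check rather
    // than dereferencing the null partition and throwing an NPE.
    static String authorize(Table tbl, Partition part) {
        if (part == null) {
            return "table-level:" + tbl.name;
        }
        return "partition-level:" + part.table.name;
    }

    public static void main(String[] args) {
        Table t = new Table("sales");
        System.out.println(authorize(t, null));             // table-level:sales
        System.out.println(authorize(t, new Partition(t))); // partition-level:sales
    }
}
```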





[jira] [Updated] (HIVE-5479) SBAP restricts hcat -e 'show databases'

2013-10-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5479:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Sushanth!

 SBAP restricts hcat -e 'show databases'
 ---

 Key: HIVE-5479
 URL: https://issues.apache.org/jira/browse/HIVE-5479
 Project: Hive
  Issue Type: Bug
  Components: Authorization, HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.13.0

 Attachments: HIVE-5479.patch


 During testing for 0.12, it was found that if someone tries to use the SBAP 
 as a client-side authorization provider and runs hcat -e "show databases;", 
 SBAP denies permission to the user.
 Looking at the SBAP code, why it does so is self-evident from this section:
 {code}
   @Override
   public void authorize(Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)
       throws HiveException, AuthorizationException {
     // Currently not used in hive code-base, but intended to authorize actions
     // that are directly user-level. As there's no storage based aspect to this,
     // we can follow one of two routes:
     // a) We can allow by default - that way, this call stays out of the way
     // b) We can deny by default - that way, no privileges are authorized that
     // is not understood and explicitly allowed.
     // Both approaches have merit, but given that things like grants and revokes
     // that are user-level do not make sense from the context of storage-permission
     // based auth, denying seems to be more canonical here.
     throw new AuthorizationException(
         StorageBasedAuthorizationProvider.class.getName() +
         " does not allow user-level authorization");
   }
 {code}
 Thus, this deny-by-default behaviour affects the "show databases" call from 
 the hcat CLI, which uses user-level privileges to determine whether a user 
 can perform it.





[jira] [Commented] (HIVE-5485) SBAP errors on null partition being passed into partition level authorization

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793738#comment-13793738
 ] 

Hudson commented on HIVE-5485:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #203 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/203/])
HIVE-5485 : SBAP errors on null partition being passed into partition level 
authorization (Sushanth Sowmyan via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531707)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/StorageBasedAuthorizationProvider.java


 SBAP errors on null partition being passed into partition level authorization
 -

 Key: HIVE-5485
 URL: https://issues.apache.org/jira/browse/HIVE-5485
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.13.0

 Attachments: HIVE-5485.patch


 SBAP causes an NPE when null is passed in as a partition for partition-level 
 or column-level authorization.
 In my opinion this is not an SBAP bug, but incorrect usage of 
 AuthorizationProviders: one should not call the column-level authorize 
 function (given that column-level is more basic than partition-level) and 
 pass in null as the partition value. However, that happens in code 
 introduced by HIVE-1887, and unless we rewrite that (and possibly a whole 
 bunch more, which will need evaluation), we have to accommodate that null 
 and appropriately fall back to table-level authorization in that case.
 The offending code section is in Driver.java:685:
 {code}
  678 // if we reach here, it means it needs to do a table authorization
  679 // check, and the table authorization may already happened because of other
  680 // partitions
  681 if (tbl != null && !tableAuthChecked.contains(tbl.getTableName()) &&
  682     !(tableUsePartLevelAuth.get(tbl.getTableName()) == Boolean.TRUE)) {
  683   List<String> cols = tab2Cols.get(tbl);
  684   if (cols != null && cols.size() > 0) {
  685     ss.getAuthorizer().authorize(tbl, null, cols,
  686         op.getInputRequiredPrivileges(), null);
  687   } else {
  688     ss.getAuthorizer().authorize(tbl, op.getInputRequiredPrivileges(),
  689         null);
  690   }
  691   tableAuthChecked.add(tbl.getTableName());
  692 }
 {code}





[jira] [Commented] (HIVE-5474) drop table hangs when concurrency=true

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793740#comment-13793740
 ] 

Hudson commented on HIVE-5474:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #203 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/203/])
HIVE-5474 : drop table hangs when concurrency=true (Jason Dere via Ashutosh 
Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531704)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/TestDriver.java
* /hive/trunk/ql/src/test/queries/clientpositive/drop_with_concurrency.q
* /hive/trunk/ql/src/test/results/clientpositive/drop_with_concurrency.q.out
* /hive/trunk/service/src/test/org/apache/hive/service/cli/thrift/ThriftCLIServiceTest.java


 drop table hangs when concurrency=true
 --

 Key: HIVE-5474
 URL: https://issues.apache.org/jira/browse/HIVE-5474
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, Locking
Reporter: Thejas M Nair
Assignee: Jason Dere
 Fix For: 0.13.0

 Attachments: HIVE-5474.1.patch, HIVE-5474.2.patch


 This is seen in hive 0.12 branch sequential test run. 
 TestThriftHttpCLIService.testExecuteStatement
 https://builds.apache.org/job/Hive-branch-0.12-hadoop1/13/testReport/org.apache.hive.service.cli.thrift/TestThriftHttpCLIService/testExecuteStatement/
 stderr has "FAILED: Error in acquiring locks: Locks on the underlying 
 objects cannot be acquired. retry after some time"





[jira] [Commented] (HIVE-5479) SBAP restricts hcat -e 'show databases'

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793739#comment-13793739
 ] 

Hudson commented on HIVE-5479:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #203 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/203/])
HIVE-5479 : SBAP restricts hcat -e "show databases" (Sushanth Sowmyan via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531708)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/StorageBasedAuthorizationProvider.java


 SBAP restricts hcat -e 'show databases'
 ---

 Key: HIVE-5479
 URL: https://issues.apache.org/jira/browse/HIVE-5479
 Project: Hive
  Issue Type: Bug
  Components: Authorization, HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.13.0

 Attachments: HIVE-5479.patch


 During testing for 0.12, it was found that if someone tries to use the SBAP 
 as a client-side authorization provider and runs hcat -e "show databases;", 
 SBAP denies permission to the user.
 Looking at the SBAP code, why it does so is self-evident from this section:
 {code}
   @Override
   public void authorize(Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)
       throws HiveException, AuthorizationException {
     // Currently not used in hive code-base, but intended to authorize actions
     // that are directly user-level. As there's no storage based aspect to this,
     // we can follow one of two routes:
     // a) We can allow by default - that way, this call stays out of the way
     // b) We can deny by default - that way, no privileges are authorized that
     // is not understood and explicitly allowed.
     // Both approaches have merit, but given that things like grants and revokes
     // that are user-level do not make sense from the context of storage-permission
     // based auth, denying seems to be more canonical here.
     throw new AuthorizationException(
         StorageBasedAuthorizationProvider.class.getName() +
         " does not allow user-level authorization");
   }
 {code}
 Thus, this deny-by-default behaviour affects the "show databases" call from 
 the hcat CLI, which uses user-level privileges to determine whether a user 
 can perform it.





[jira] [Commented] (HIVE-5496) hcat -e "drop database if exists" fails on authorizing non-existent null db

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793737#comment-13793737
 ] 

Hudson commented on HIVE-5496:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #203 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/203/])
HIVE-5496 : hcat -e "drop database if exists" fails on authorizing non-existent 
null db (Sushanth Sowmyan via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531706)
* /hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java


 hcat -e "drop database if exists" fails on authorizing non-existent null db
 ---

 Key: HIVE-5496
 URL: https://issues.apache.org/jira/browse/HIVE-5496
 Project: Hive
  Issue Type: Bug
  Components: Authorization, HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.13.0

 Attachments: HIVE-5496.patch


 When running a "drop database if exists" call on the hcat command line, it 
 fails authorization with an NPE because it tries to authorize access to a 
 null database. This should be changed to not call authorize if the db for 
 the DropDatabaseDesc is null.





Should we turn off the apache jenkins/hudson builds?

2013-10-13 Thread Edward Capriolo
They seem very unreliable at this point. It seems they almost never pass:

FAILURE: Integrated in Hive-trunk-hadoop2 #498 (See [
https://builds.apache.org/job/Hive-trunk-hadoop2/498/])
HIVE-5252 - Add ql syntax for inline java code creation (Edward Capriolo
via Brock Noland) (brock:
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531549)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/ivy/libraries.properties
* /hive/trunk/ql/ivy.xml
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/processors/CommandProcessorFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/processors/CompileProcessor.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/processors/HiveCommand.java
* /hive/trunk/ql/src/test/queries/clientnegative/compile_processor.q
* /hive/trunk/ql/src/test/queries/clientpositive/compile_processor.q
* /hive/trunk/ql/src/test/results/clientnegative/compile_processor.q.out
* /hive/trunk/ql/src/test/results/clientpositive/compile_processor.q.out

It is also very annoying that they post back to the ticket what is almost 
surely a false-negative test result.


[jira] [Commented] (HIVE-5512) metastore filter pushdown should support between

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793766#comment-13793766
 ] 

Hudson commented on HIVE-5512:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2397 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2397/])
HIVE-5512 : metastore filter pushdown should support between (Sergey Shelukhin 
via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531555)
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/parser/Filter.g
* /hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/metastore/TestMetastoreExpr.java
* /hive/trunk/ql/src/test/queries/clientpositive/filter_numeric.q
* /hive/trunk/ql/src/test/results/clientpositive/filter_numeric.q.out


 metastore filter pushdown should support between
 --

 Key: HIVE-5512
 URL: https://issues.apache.org/jira/browse/HIVE-5512
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.13.0

 Attachments: HIVE-5512.01.patch


 Currently, metastore filter pushdown supports compare operators, "and", and 
 "or". BETWEEN is just >= and <=, so it should be easy to add through changes 
 to Filter.g or even a client-side modification in the partition pruner.





[jira] [Commented] (HIVE-5474) drop table hangs when concurrency=true

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793770#comment-13793770
 ] 

Hudson commented on HIVE-5474:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2398 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2398/])
HIVE-5474 : drop table hangs when concurrency=true (Jason Dere via Ashutosh 
Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531704)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/TestDriver.java
* /hive/trunk/ql/src/test/queries/clientpositive/drop_with_concurrency.q
* /hive/trunk/ql/src/test/results/clientpositive/drop_with_concurrency.q.out
* /hive/trunk/service/src/test/org/apache/hive/service/cli/thrift/ThriftCLIServiceTest.java


 drop table hangs when concurrency=true
 --

 Key: HIVE-5474
 URL: https://issues.apache.org/jira/browse/HIVE-5474
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, Locking
Reporter: Thejas M Nair
Assignee: Jason Dere
 Fix For: 0.13.0

 Attachments: HIVE-5474.1.patch, HIVE-5474.2.patch


 This is seen in hive 0.12 branch sequential test run. 
 TestThriftHttpCLIService.testExecuteStatement
 https://builds.apache.org/job/Hive-branch-0.12-hadoop1/13/testReport/org.apache.hive.service.cli.thrift/TestThriftHttpCLIService/testExecuteStatement/
 stderr has "FAILED: Error in acquiring locks: Locks on the underlying 
 objects cannot be acquired. retry after some time"





[jira] [Commented] (HIVE-5479) SBAP restricts hcat -e 'show databases'

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793769#comment-13793769
 ] 

Hudson commented on HIVE-5479:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2398 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2398/])
HIVE-5479 : SBAP restricts hcat -e "show databases" (Sushanth Sowmyan via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531708)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/StorageBasedAuthorizationProvider.java


 SBAP restricts hcat -e 'show databases'
 ---

 Key: HIVE-5479
 URL: https://issues.apache.org/jira/browse/HIVE-5479
 Project: Hive
  Issue Type: Bug
  Components: Authorization, HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.13.0

 Attachments: HIVE-5479.patch


 During testing for 0.12, it was found that if someone tries to use the SBAP 
 as a client-side authorization provider and runs hcat -e "show databases;", 
 SBAP denies permission to the user.
 Looking at the SBAP code, why it does so is self-evident from this section:
 {code}
   @Override
   public void authorize(Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)
       throws HiveException, AuthorizationException {
     // Currently not used in hive code-base, but intended to authorize actions
     // that are directly user-level. As there's no storage based aspect to this,
     // we can follow one of two routes:
     // a) We can allow by default - that way, this call stays out of the way
     // b) We can deny by default - that way, no privileges are authorized that
     // is not understood and explicitly allowed.
     // Both approaches have merit, but given that things like grants and revokes
     // that are user-level do not make sense from the context of storage-permission
     // based auth, denying seems to be more canonical here.
     throw new AuthorizationException(
         StorageBasedAuthorizationProvider.class.getName() +
         " does not allow user-level authorization");
   }
 {code}
 Thus, this deny-by-default behaviour affects the "show databases" call from 
 the hcat CLI, which uses user-level privileges to determine whether a user 
 can perform it.





[jira] [Commented] (HIVE-5485) SBAP errors on null partition being passed into partition level authorization

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793768#comment-13793768
 ] 

Hudson commented on HIVE-5485:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2398 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2398/])
HIVE-5485 : SBAP errors on null partition being passed into partition level 
authorization (Sushanth Sowmyan via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531707)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/StorageBasedAuthorizationProvider.java


 SBAP errors on null partition being passed into partition level authorization
 -

 Key: HIVE-5485
 URL: https://issues.apache.org/jira/browse/HIVE-5485
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.13.0

 Attachments: HIVE-5485.patch


 SBAP causes an NPE when null is passed in as a partition for partition-level 
 or column-level authorization.
 In my opinion, this is not an SBAP bug but incorrect usage of 
 AuthorizationProviders - one should not call the column-level authorize 
 function (given that column-level is more basic than partition-level) and 
 pass in null as the partition value. However, that happens in code 
 introduced by HIVE-1887, and unless we rewrite that (and possibly a good 
 deal more, which will need evaluation), we have to accommodate that null and 
 appropriately fall back to table-level authorization in that case.
 The offending code section is in Driver.java:685
 {code}
  678 // if we reach here, it means it needs to do a table authorization
  679 // check, and the table authorization may already happened because of other
  680 // partitions
  681 if (tbl != null && !tableAuthChecked.contains(tbl.getTableName()) &&
  682     !(tableUsePartLevelAuth.get(tbl.getTableName()) == Boolean.TRUE)) {
  683   List<String> cols = tab2Cols.get(tbl);
  684   if (cols != null && cols.size() > 0) {
  685     ss.getAuthorizer().authorize(tbl, null, cols,
  686         op.getInputRequiredPrivileges(), null);
  687   } else {
  688     ss.getAuthorizer().authorize(tbl, op.getInputRequiredPrivileges(),
  689         null);
  690   }
  691   tableAuthChecked.add(tbl.getTableName());
  692 }
 {code}
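The fix direction described above, falling back to a table-level check when no partition is supplied, can be sketched as follows. This is an illustrative sketch only; the class and method names are invented for the example and are not Hive's actual API:

```java
// Illustrative sketch only: fall back to a table-level check when the
// partition is null, instead of dereferencing it (the NPE in HIVE-5485).
// All names here are invented for the example; this is not Hive's API.
import java.util.List;

public class AuthFallbackSketch {

    static String authorize(String table, String partition, List<String> cols) {
        if (partition == null) {
            // No partition supplied: authorize at the table level rather
            // than touching the (null) partition object.
            return "table-level:" + table;
        }
        return "partition-level:" + table + "/" + partition;
    }

    public static void main(String[] args) {
        System.out.println(authorize("t1", null, List.of("c1")));
        System.out.println(authorize("t1", "ds=2013-10-13", List.of("c1")));
    }
}
```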





[jira] [Commented] (HIVE-5496) hcat -e drop database if exists fails on authorizing non-existent null db

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793767#comment-13793767
 ] 

Hudson commented on HIVE-5496:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2398 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2398/])
HIVE-5496 : hcat -e drop database if exists fails on authorizing non-existent 
null db (Sushanth Sowmyan via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531706)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java


 hcat -e drop database if exists fails on authorizing non-existent null db
 ---

 Key: HIVE-5496
 URL: https://issues.apache.org/jira/browse/HIVE-5496
 Project: Hive
  Issue Type: Bug
  Components: Authorization, HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.13.0

 Attachments: HIVE-5496.patch


 When running a "drop database if exists" call on the hcat command line, it 
 fails authorization with an NPE because it tries to authorize access to a null 
 database. This should be changed to not call authorize if the db for the 
 DropDatabaseDesc is null.





[jira] [Commented] (HIVE-5496) hcat -e drop database if exists fails on authorizing non-existent null db

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793804#comment-13793804
 ] 

Hudson commented on HIVE-5496:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/138/])
HIVE-5496 : hcat -e drop database if exists fails on authorizing non-existent 
null db (Sushanth Sowmyan via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531706)
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java


 hcat -e drop database if exists fails on authorizing non-existent null db
 ---

 Key: HIVE-5496
 URL: https://issues.apache.org/jira/browse/HIVE-5496
 Project: Hive
  Issue Type: Bug
  Components: Authorization, HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.13.0

 Attachments: HIVE-5496.patch


 When running a "drop database if exists" call on the hcat command line, it 
 fails authorization with an NPE because it tries to authorize access to a null 
 database. This should be changed to not call authorize if the db for the 
 DropDatabaseDesc is null.





[jira] [Commented] (HIVE-5485) SBAP errors on null partition being passed into partition level authorization

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793805#comment-13793805
 ] 

Hudson commented on HIVE-5485:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/138/])
HIVE-5485 : SBAP errors on null partition being passed into partition level 
authorization (Sushanth Sowmyan via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531707)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/StorageBasedAuthorizationProvider.java


 SBAP errors on null partition being passed into partition level authorization
 -

 Key: HIVE-5485
 URL: https://issues.apache.org/jira/browse/HIVE-5485
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.13.0

 Attachments: HIVE-5485.patch


 SBAP causes an NPE when null is passed in as a partition for partition-level 
 or column-level authorization.
 In my opinion, this is not an SBAP bug but incorrect usage of 
 AuthorizationProviders - one should not call the column-level authorize 
 function (given that column-level is more basic than partition-level) and 
 pass in null as the partition value. However, that happens in code 
 introduced by HIVE-1887, and unless we rewrite that (and possibly a good 
 deal more, which will need evaluation), we have to accommodate that null and 
 appropriately fall back to table-level authorization in that case.
 The offending code section is in Driver.java:685
 {code}
  678 // if we reach here, it means it needs to do a table authorization
  679 // check, and the table authorization may already happened because of other
  680 // partitions
  681 if (tbl != null && !tableAuthChecked.contains(tbl.getTableName()) &&
  682     !(tableUsePartLevelAuth.get(tbl.getTableName()) == Boolean.TRUE)) {
  683   List<String> cols = tab2Cols.get(tbl);
  684   if (cols != null && cols.size() > 0) {
  685     ss.getAuthorizer().authorize(tbl, null, cols,
  686         op.getInputRequiredPrivileges(), null);
  687   } else {
  688     ss.getAuthorizer().authorize(tbl, op.getInputRequiredPrivileges(),
  689         null);
  690   }
  691   tableAuthChecked.add(tbl.getTableName());
  692 }
 {code}





[jira] [Commented] (HIVE-5479) SBAP restricts hcat -e 'show databases'

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793806#comment-13793806
 ] 

Hudson commented on HIVE-5479:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/138/])
HIVE-5479 : SBAP restricts hcat -e show databases (Sushanth Sowmyan via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531708)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/StorageBasedAuthorizationProvider.java


 SBAP restricts hcat -e 'show databases'
 ---

 Key: HIVE-5479
 URL: https://issues.apache.org/jira/browse/HIVE-5479
 Project: Hive
  Issue Type: Bug
  Components: Authorization, HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.13.0

 Attachments: HIVE-5479.patch


 During testing for 0.12, it was found that if someone tries to use SBAP 
 as a client-side authorization provider and runs hcat -e "show databases;", 
 SBAP denies permission to the user.
 Looking at the SBAP code, why it does so is self-evident from this section:
 {code}
   @Override
   public void authorize(Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)
       throws HiveException, AuthorizationException {
     // Currently not used in hive code-base, but intended to authorize actions
     // that are directly user-level. As there's no storage based aspect to this,
     // we can follow one of two routes:
     // a) We can allow by default - that way, this call stays out of the way
     // b) We can deny by default - that way, no privileges are authorized that
     // are not understood and explicitly allowed.
     // Both approaches have merit, but given that things like grants and revokes
     // that are user-level do not make sense from the context of storage-permission
     // based auth, denying seems to be more canonical here.
     throw new AuthorizationException(StorageBasedAuthorizationProvider.class.getName() +
         " does not allow user-level authorization");
   }
 {code}
 Thus, this deny-by-default behaviour affects the "show databases" call from 
 the hcat CLI, which uses user-level privileges to determine whether a user may 
 perform that action.





[jira] [Commented] (HIVE-5474) drop table hangs when concurrency=true

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793807#comment-13793807
 ] 

Hudson commented on HIVE-5474:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/138/])
HIVE-5474 : drop table hangs when concurrency=true (Jason Dere via Ashutosh 
Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531704)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/TestDriver.java
* /hive/trunk/ql/src/test/queries/clientpositive/drop_with_concurrency.q
* /hive/trunk/ql/src/test/results/clientpositive/drop_with_concurrency.q.out
* 
/hive/trunk/service/src/test/org/apache/hive/service/cli/thrift/ThriftCLIServiceTest.java


 drop table hangs when concurrency=true
 --

 Key: HIVE-5474
 URL: https://issues.apache.org/jira/browse/HIVE-5474
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, Locking
Reporter: Thejas M Nair
Assignee: Jason Dere
 Fix For: 0.13.0

 Attachments: HIVE-5474.1.patch, HIVE-5474.2.patch


 This is seen in hive 0.12 branch sequential test run. 
 TestThriftHttpCLIService.testExecuteStatement
 https://builds.apache.org/job/Hive-branch-0.12-hadoop1/13/testReport/org.apache.hive.service.cli.thrift/TestThriftHttpCLIService/testExecuteStatement/
 stderr has "FAILED: Error in acquiring locks: Locks on the underlying
 objects cannot be acquired. retry after some time"





[jira] [Created] (HIVE-5531) HiveServer2 doesn't honor command line argument when initializing log4j

2013-10-13 Thread Shuaishuai Nie (JIRA)
Shuaishuai Nie created HIVE-5531:


 Summary: HiveServer2 doesn't honor command line argument when 
initializing log4j
 Key: HIVE-5531
 URL: https://issues.apache.org/jira/browse/HIVE-5531
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie








[jira] [Updated] (HIVE-5531) HiveServer2 doesn't honor command line argument when initializing log4j

2013-10-13 Thread Shuaishuai Nie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuaishuai Nie updated HIVE-5531:
-

Description: The reason is that in HiveServer2's main function, log4j is 
initialized before the command-line arguments are processed.

 HiveServer2 doesn't honor command line argument when initializing log4j
 

 Key: HIVE-5531
 URL: https://issues.apache.org/jira/browse/HIVE-5531
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
 Attachments: HIVE-5531.1.patch


 The reason is that in HiveServer2's main function, log4j is initialized 
 before the command-line arguments are processed.
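The ordering problem can be sketched minimally. This is not HiveServer2's actual code; the class and method names below are invented for illustration of the fix direction, namely processing the command line before initializing logging:

```java
// Illustrative sketch only: parse CLI options first, then initialize logging,
// so a user-supplied log4j configuration can actually take effect.
// All names here are invented; this is not HiveServer2's real code.
public class StartupOrderSketch {
    static String log4jConfig = "default.properties";

    // Remember a hypothetical user-supplied log4j config file, if given.
    static void parseArgs(String[] args) {
        for (int i = 0; i + 1 < args.length; i++) {
            if ("--log4j".equals(args[i])) {
                log4jConfig = args[i + 1];
            }
        }
    }

    // In real code this would hand log4jConfig to the logging framework's
    // initializer; here we just report which file would be used.
    static String initLogging() {
        return "initialized from " + log4jConfig;
    }

    public static void main(String[] args) {
        parseArgs(args);           // 1) honor the command line first
        String s = initLogging();  // 2) only then initialize logging
        System.out.println(s);
    }
}
```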





Re: [VOTE] Apache Hive 0.12.0 Release Candidate 1

2013-10-13 Thread Carl Steinbach
+1 (binding)


 Regarding the 3 day deadline for voting, that is what is in the hive
 bylaws. I also see that has been followed in last few releases I
 checked.


3 days is the minimum length of the voting period, not the maximum.

Thanks.

Carl


[jira] [Updated] (HIVE-5531) HiveServer2 doesn't honor command line argument when initializing log4j

2013-10-13 Thread Shuaishuai Nie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuaishuai Nie updated HIVE-5531:
-

Status: Patch Available  (was: Open)

 HiveServer2 doesn't honor command line argument when initializing log4j
 

 Key: HIVE-5531
 URL: https://issues.apache.org/jira/browse/HIVE-5531
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
 Attachments: HIVE-5531.1.patch


 The reason is that in HiveServer2's main function, log4j is initialized 
 before the command-line arguments are processed.





[jira] [Updated] (HIVE-5531) HiveServer2 doesn't honor command line argument when initializing log4j

2013-10-13 Thread Shuaishuai Nie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuaishuai Nie updated HIVE-5531:
-

Attachment: HIVE-5531.1.patch

 HiveServer2 doesn't honor command line argument when initializing log4j
 

 Key: HIVE-5531
 URL: https://issues.apache.org/jira/browse/HIVE-5531
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
 Attachments: HIVE-5531.1.patch


 The reason is that in HiveServer2's main function, log4j is initialized 
 before the command-line arguments are processed.





[jira] [Created] (HIVE-5532) In Windows, MapredLocalTask need to remove quotes on environment variable before launch ExecDriver in another JVM

2013-10-13 Thread Shuaishuai Nie (JIRA)
Shuaishuai Nie created HIVE-5532:


 Summary: In Windows, MapredLocalTask need to remove quotes on 
environment variable before launch ExecDriver in another JVM
 Key: HIVE-5532
 URL: https://issues.apache.org/jira/browse/HIVE-5532
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie








[jira] [Updated] (HIVE-5532) In Windows, MapredLocalTask need to remove quotes on environment variable before launch ExecDriver in another JVM

2013-10-13 Thread Shuaishuai Nie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuaishuai Nie updated HIVE-5532:
-

Description: 
In a Windows environment, environment variable values are quoted to preserve 
special characters such as spaces. However, since MapredLocalTask calls 
hadoop.cmd to launch ExecDriver, and hadoop.cmd reconstructs environment 
variables like 
{code}
set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
{code}
the quotes around HADOOP_CLIENT_OPTS become part of HADOOP_OPTS and nullify 
the variables in HADOOP_CLIENT_OPTS after it is appended to HADOOP_OPTS. 
Example HADOOP_OPTS:
{code}
-Dhadoop.log.dir=C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\logs 
-Dhadoop.log.file=hadoop.log 
-Dhadoop.home.dir=C:\apps\dist\hadoop-1.2.0.1.3.1.0-06 
-Dhadoop.root.logger=INFO,TLA 
-Djava.library.path=;C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\lib\native\Windows_NT-amd64-64;C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\lib\native
 -Dhadoop.policy.file=hadoop-policy.xml 
-Dhadoop.tasklog.taskid=attempt_201310092133_0006_m_00_0 
-Dhadoop.tasklog.iscleanup=false -Dhadoop.tasklog.totalLogFileSize=0 
{code}
One failure scenario is when a map-side join is launched by Oozie: the job 
fails because it cannot find hadoop.tasklog.taskid and throws an exception 
when initializing log4j.
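A minimal sketch of the kind of fix implied here, stripping one surrounding pair of double quotes from an environment-variable value before handing it to the child JVM. The class and method names are invented for illustration, not MapredLocalTask's real code:

```java
// Illustrative sketch only: remove a single pair of surrounding double quotes
// from an environment-variable value, so the quotes don't end up embedded
// when hadoop.cmd re-expands %HADOOP_CLIENT_OPTS% into %HADOOP_OPTS%.
// Names are invented; this is not Hive's actual implementation.
public class QuoteStripSketch {

    static String stripOuterQuotes(String v) {
        if (v != null && v.length() >= 2
                && v.startsWith("\"") && v.endsWith("\"")) {
            return v.substring(1, v.length() - 1);  // drop the outer quotes
        }
        return v;  // unquoted values pass through unchanged
    }

    public static void main(String[] args) {
        System.out.println(stripOuterQuotes("\"-Dhadoop.log.file=hadoop.log\""));
        System.out.println(stripOuterQuotes("-Dhadoop.root.logger=INFO,TLA"));
    }
}
```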

 In Windows, MapredLocalTask need to remove quotes on environment variable 
 before launch ExecDriver in another JVM
 -

 Key: HIVE-5532
 URL: https://issues.apache.org/jira/browse/HIVE-5532
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie

 In a Windows environment, environment variable values are quoted to preserve 
 special characters such as spaces. However, since MapredLocalTask calls 
 hadoop.cmd to launch ExecDriver, and hadoop.cmd reconstructs environment 
 variables like 
 {code}
 set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
 {code}
 the quotes around HADOOP_CLIENT_OPTS become part of HADOOP_OPTS and nullify 
 the variables in HADOOP_CLIENT_OPTS after it is appended to HADOOP_OPTS. 
 Example HADOOP_OPTS:
 {code}
 -Dhadoop.log.dir=C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\logs 
 -Dhadoop.log.file=hadoop.log 
 -Dhadoop.home.dir=C:\apps\dist\hadoop-1.2.0.1.3.1.0-06 
 -Dhadoop.root.logger=INFO,TLA 
 -Djava.library.path=;C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\lib\native\Windows_NT-amd64-64;C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\lib\native
  -Dhadoop.policy.file=hadoop-policy.xml 
 -Dhadoop.tasklog.taskid=attempt_201310092133_0006_m_00_0 
 -Dhadoop.tasklog.iscleanup=false -Dhadoop.tasklog.totalLogFileSize=0 
 {code}
 One failure scenario is when a map-side join is launched by Oozie: the job 
 fails because it cannot find hadoop.tasklog.taskid and throws an exception 
 when initializing log4j.





[jira] [Updated] (HIVE-5532) In Windows, MapredLocalTask need to remove quotes on environment variable before launch ExecDriver in another JVM

2013-10-13 Thread Shuaishuai Nie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuaishuai Nie updated HIVE-5532:
-

Attachment: HIVE-5532.1.patch

 In Windows, MapredLocalTask need to remove quotes on environment variable 
 before launch ExecDriver in another JVM
 -

 Key: HIVE-5532
 URL: https://issues.apache.org/jira/browse/HIVE-5532
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
 Attachments: HIVE-5532.1.patch


 In a Windows environment, environment variable values are quoted to preserve 
 special characters such as spaces. However, since MapredLocalTask calls 
 hadoop.cmd to launch ExecDriver, and hadoop.cmd reconstructs environment 
 variables like 
 {code}
 set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
 {code}
 the quotes around HADOOP_CLIENT_OPTS become part of HADOOP_OPTS and nullify 
 the variables in HADOOP_CLIENT_OPTS after it is appended to HADOOP_OPTS. 
 Example HADOOP_OPTS:
 {code}
 -Dhadoop.log.dir=C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\logs 
 -Dhadoop.log.file=hadoop.log 
 -Dhadoop.home.dir=C:\apps\dist\hadoop-1.2.0.1.3.1.0-06 
 -Dhadoop.root.logger=INFO,TLA 
 -Djava.library.path=;C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\lib\native\Windows_NT-amd64-64;C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\lib\native
  -Dhadoop.policy.file=hadoop-policy.xml 
 -Dhadoop.tasklog.taskid=attempt_201310092133_0006_m_00_0 
 -Dhadoop.tasklog.iscleanup=false -Dhadoop.tasklog.totalLogFileSize=0 
 {code}
 One failure scenario is when a map-side join is launched by Oozie: the job 
 fails because it cannot find hadoop.tasklog.taskid and throws an exception 
 when initializing log4j.





[jira] [Updated] (HIVE-5532) In Windows, MapredLocalTask need to remove quotes on environment variable before launch ExecDriver in another JVM

2013-10-13 Thread Shuaishuai Nie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuaishuai Nie updated HIVE-5532:
-

Status: Patch Available  (was: Open)

 In Windows, MapredLocalTask need to remove quotes on environment variable 
 before launch ExecDriver in another JVM
 -

 Key: HIVE-5532
 URL: https://issues.apache.org/jira/browse/HIVE-5532
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
 Attachments: HIVE-5532.1.patch


 In a Windows environment, environment variable values are quoted to preserve 
 special characters such as spaces. However, since MapredLocalTask calls 
 hadoop.cmd to launch ExecDriver, and hadoop.cmd reconstructs environment 
 variables like 
 {code}
 set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
 {code}
 the quotes around HADOOP_CLIENT_OPTS become part of HADOOP_OPTS and nullify 
 the variables in HADOOP_CLIENT_OPTS after it is appended to HADOOP_OPTS. 
 Example HADOOP_OPTS:
 {code}
 -Dhadoop.log.dir=C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\logs 
 -Dhadoop.log.file=hadoop.log 
 -Dhadoop.home.dir=C:\apps\dist\hadoop-1.2.0.1.3.1.0-06 
 -Dhadoop.root.logger=INFO,TLA 
 -Djava.library.path=;C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\lib\native\Windows_NT-amd64-64;C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\lib\native
  -Dhadoop.policy.file=hadoop-policy.xml 
 -Dhadoop.tasklog.taskid=attempt_201310092133_0006_m_00_0 
 -Dhadoop.tasklog.iscleanup=false -Dhadoop.tasklog.totalLogFileSize=0 
 {code}
 One failure scenario is when a map-side join is launched by Oozie: the job 
 fails because it cannot find hadoop.tasklog.taskid and throws an exception 
 when initializing log4j.





[jira] [Updated] (HIVE-4773) Templeton intermittently fail to commit output to file system

2013-10-13 Thread Shuaishuai Nie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuaishuai Nie updated HIVE-4773:
-

Status: Patch Available  (was: Open)

 Templeton intermittently fail to commit output to file system
 -

 Key: HIVE-4773
 URL: https://issues.apache.org/jira/browse/HIVE-4773
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
 Attachments: HIVE-4773.1.patch, HIVE-4773.2.patch, HIVE-4773.3.patch


 With ASV as a default FS, we saw instances where output is not fully flushed 
 to storage before the Templeton controller process exits. This results in 
 stdout and stderr being empty even though the job completed successfully.





[jira] [Commented] (HIVE-5531) HiveServer2 doesn't honor command line argument when initializing log4j

2013-10-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793870#comment-13793870
 ] 

Hive QA commented on HIVE-5531:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12608226/HIVE-5531.1.patch

{color:green}SUCCESS:{color} +1 4398 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1116/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1116/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

 HiveServer2 doesn't honor command line argument when initializing log4j
 

 Key: HIVE-5531
 URL: https://issues.apache.org/jira/browse/HIVE-5531
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
 Attachments: HIVE-5531.1.patch


 The reason is that in HiveServer2's main function, log4j is initialized 
 before the command-line arguments are processed.





[jira] [Updated] (HIVE-5520) Use factory methods to instantiate HiveDecimal instead of constructors

2013-10-13 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5520:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Xuefu!

 Use factory methods to instantiate HiveDecimal instead of constructors
 --

 Key: HIVE-5520
 URL: https://issues.apache.org/jira/browse/HIVE-5520
 Project: Hive
  Issue Type: Improvement
  Components: Types
Affects Versions: 0.11.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5520.1.patch, HIVE-5520.patch


 Currently the HiveDecimal class provides a number of constructors that, 
 unfortunately, can also throw a runtime exception. For example,
 {code}
  public HiveDecimal(BigInteger unscaled, int scale) {
    bd = this.normalize(new BigDecimal(unscaled, scale), MAX_PRECISION, false);
    if (bd == null) {
      throw new NumberFormatException("Assignment would result in truncation");
    }
  }
 {code}
 As a result, it's hard for the caller to detect error occurrences, and the 
 error handling is complicated. In many cases, the error handling is 
 omitted or missed. For instance,
 {code}
  HiveDecimalWritable result = new HiveDecimalWritable(HiveDecimal.ZERO);
  try {
    result.set(aggregation.sum.divide(new HiveDecimal(aggregation.count)));
  } catch (NumberFormatException e) {
    result = null;
  }
 {code} 
 Throwing a runtime exception that the caller is expected to catch is an 
 anti-pattern. In place of constructors, a factory class or factory methods 
 are more appropriate. With such a change, the APIs are cleaner and the error 
 handling is simplified.
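A sketch of the factory-method pattern this issue advocates. The class and names below are invented for illustration (not the actual API this patch introduced); the factory returns null on truncation so callers can branch on null instead of catching a runtime exception:

```java
import java.math.BigDecimal;
import java.math.BigInteger;

// Illustrative sketch of the factory-method pattern: a private constructor
// plus a static create() that returns null on failure, instead of a public
// constructor that throws. Names are invented; this is not Hive's code.
public class DecimalFactorySketch {
    private final BigDecimal bd;

    private DecimalFactorySketch(BigDecimal bd) {
        this.bd = bd;
    }

    // Returns null when the value would require truncation, letting callers
    // branch on null rather than catch NumberFormatException.
    static DecimalFactorySketch create(BigInteger unscaled, int scale,
                                       int maxPrecision) {
        BigDecimal bd = new BigDecimal(unscaled, scale);
        if (bd.precision() > maxPrecision) {
            return null;  // assignment would result in truncation
        }
        return new DecimalFactorySketch(bd);
    }

    BigDecimal value() {
        return bd;
    }

    public static void main(String[] args) {
        DecimalFactorySketch ok =
            create(BigInteger.valueOf(123), 2, 38);   // 1.23, fits
        DecimalFactorySketch bad =
            create(new BigInteger("1".repeat(40)), 0, 38);  // 40 digits > 38
        System.out.println(ok != null);
        System.out.println(bad == null);
    }
}
```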





[jira] [Commented] (HIVE-5532) In Windows, MapredLocalTask need to remove quotes on environment variable before launch ExecDriver in another JVM

2013-10-13 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793892#comment-13793892
 ] 

Hive QA commented on HIVE-5532:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12608227/HIVE-5532.1.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 4398 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_parallel_orderby
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1117/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1117/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

 In Windows, MapredLocalTask need to remove quotes on environment variable 
 before launch ExecDriver in another JVM
 -

 Key: HIVE-5532
 URL: https://issues.apache.org/jira/browse/HIVE-5532
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
 Attachments: HIVE-5532.1.patch


 In a Windows environment, environment variable values are quoted to preserve 
 special characters such as spaces. However, since MapredLocalTask calls 
 hadoop.cmd to launch ExecDriver, and hadoop.cmd reconstructs environment 
 variables like 
 {code}
 set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
 {code}
 the quotes around HADOOP_CLIENT_OPTS become part of HADOOP_OPTS and nullify 
 the variables in HADOOP_CLIENT_OPTS after it is appended to HADOOP_OPTS. 
 Example HADOOP_OPTS:
 {code}
 -Dhadoop.log.dir=C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\logs 
 -Dhadoop.log.file=hadoop.log 
 -Dhadoop.home.dir=C:\apps\dist\hadoop-1.2.0.1.3.1.0-06 
 -Dhadoop.root.logger=INFO,TLA 
 -Djava.library.path=;C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\lib\native\Windows_NT-amd64-64;C:\apps\dist\hadoop-1.2.0.1.3.1.0-06\lib\native
  -Dhadoop.policy.file=hadoop-policy.xml 
 -Dhadoop.tasklog.taskid=attempt_201310092133_0006_m_00_0 
 -Dhadoop.tasklog.iscleanup=false -Dhadoop.tasklog.totalLogFileSize=0 
 {code}
 One failure scenario is when a map-side join is launched by Oozie: the job 
 fails because it cannot find hadoop.tasklog.taskid and throws an exception 
 when initializing log4j.





[jira] [Commented] (HIVE-5207) Support data encryption for Hive tables

2013-10-13 Thread Jerry Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793897#comment-13793897
 ] 

Jerry Chen commented on HIVE-5207:
--

Hi Larry, thanks for pointing out the docs. Yes, we will add more 
javadocs and documentation as our next step.
 
{quote}1. TwoTieredKey - exactly the purpose, how it's used what the tiers are, 
etc{quote}
TwoTieredKey is used for the case where the table key is stored in the Hive 
metastore. The table key is encrypted with the master key, which is 
provided externally. In this case, the user maintains and manages only the 
master key externally, rather than managing all the table keys externally. 
This is useful when no full-fledged key management system is available.
 
{quote}2. External KeyManagement integration - where and what is the expected 
contract for this integration{quote}
To integrate with an external key management system, we use the KeyProvider 
interface from HADOOP-9331. An implementation of the KeyProvider interface 
for a specific key management system can be set as the KeyProvider for 
retrieving keys.
 
{quote}3. A specific usecase description for exporting keys into an external 
keystore and who has the authority to initiate the export and where the 
password comes from{quote}
Exporting of the internal keys is done via the Hive command line. As the 
internal table keys are encrypted with the master key, the master key must be 
available in the environment, which is controlled by the user, when 
performing the export. If the master key is not available, the encrypted 
table keys cannot be decrypted and thus cannot be exported. The 
KeyProvider implementation for retrieving the master key can provide its own 
authentication and authorization for deciding whether the current user has 
access to a specific key.
 
{quote}4. An explanation as to why we should ever store the key with the data 
which seems like a bad idea. I understand that it is encrypted with the master 
secret - which takes me to the next question.  {quote}
Strictly speaking, it is not stored with the data. The table key is stored in 
the Hive metastore. I see your point here. As mentioned, it is useful for use 
cases where no full-fledged, ready-to-use key management system is available. 
We provide several alternatives for managing keys. When creating an encrypted 
table, the user can specify whether the key is managed externally or 
internally. For externally managed keys, only the key name (alias) is 
stored in the Hive metastore, and the key is retrieved through the 
KeyProvider set in the configuration.
 
{quote}5. Where is the master secret established and stored and how is it 
protected{quote}
Currently, we assume that the user manages the master key. For example, for 
simple use cases, they can store the master key in a Java KeyStore, protected 
by a password and stored in a folder that is readable only by a specific user 
or group. The user can also store the master key in another key management 
system, as the master key is retrieved through the KeyProvider.
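The wrap-with-a-master-key scheme described above can be illustrated with the JCE key-wrapping API. This is a generic envelope-encryption sketch under assumed names, not Hive's or HADOOP-9331's code: a per-table data key is wrapped with the master key, and only the wrapped form would be stored in the metastore.

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Illustrative envelope-encryption sketch: wrap a per-table data key with a
// master key so only the wrapped bytes need to be persisted. Class and
// method names are invented; this is not Hive's implementation.
public class KeyWrapSketch {

    // Encrypt (wrap) the table key with the master key.
    static byte[] wrap(SecretKey master, SecretKey tableKey) throws Exception {
        Cipher c = Cipher.getInstance("AES");
        c.init(Cipher.WRAP_MODE, master);
        return c.wrap(tableKey);
    }

    // Recover the table key from its wrapped form using the master key.
    static SecretKey unwrap(SecretKey master, byte[] wrapped) throws Exception {
        Cipher c = Cipher.getInstance("AES");
        c.init(Cipher.UNWRAP_MODE, master);
        return (SecretKey) c.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey master = kg.generateKey();    // managed by the user
        SecretKey tableKey = kg.generateKey();  // per-table data key

        byte[] wrapped = wrap(master, tableKey);       // store this form
        SecretKey recovered = unwrap(master, wrapped); // recover at query time
        System.out.println(
            Arrays.equals(tableKey.getEncoded(), recovered.getEncoded()));
    }
}
```

Without the master key, the wrapped bytes are useless, which matches the export behaviour described above: decryption (and thus export) fails unless the master key is available in the environment.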
 
Really appreciate your time reviewing this.
Thanks

 Support data encryption for Hive tables
 ---

 Key: HIVE-5207
 URL: https://issues.apache.org/jira/browse/HIVE-5207
 Project: Hive
  Issue Type: New Feature
Affects Versions: 0.12.0
Reporter: Jerry Chen
  Labels: Rhino
 Attachments: HIVE-5207.patch, HIVE-5207.patch

   Original Estimate: 504h
  Remaining Estimate: 504h

 For sensitive and legally protected data such as personal information, it is 
 a common practice to store the data encrypted in the file system. Enabling 
 Hive to store and query encrypted data is crucial for Hive data analysis in 
 the enterprise. 
  
 When creating a table, the user can specify whether it is an encrypted table 
 by setting a property in TBLPROPERTIES. Once an encrypted table is created, 
 querying it is transparent as long as the corresponding key management 
 facilities are set up in the query's running environment. We can use the 
 hadoop crypto framework provided by HADOOP-9331 for the underlying data 
 encryption and decryption. 
  
 As to key management, we would support several common key management use 
 cases. First, the table key (data key) can be stored in the Hive metastore, 
 associated with the table through its properties. The table key can be 
 explicitly specified or auto-generated, and will be encrypted with a master 
 key. In cases where the data being processed was generated by other 
 applications, we need to support externally managed or imported table keys. 
 Also, data generated by Hive may be consumed by other applications in the 
 system, so we need a tool or command for exporting the table key to a Java 
 keystore for external use.
  
 

[jira] [Commented] (HIVE-5220) Add option for removing intermediate directory for partition, which is empty

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793900#comment-13793900
 ] 

Hudson commented on HIVE-5220:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2399 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2399/])
HIVE-5220 : Use factory methods to instantiate HiveDecimal instead of 
constructors (Xuefu Zhang via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531781)
* 
/hive/trunk/common/src/java/org/apache/hadoop/hive/common/type/HiveDecimal.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/ColumnStatisticsImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPMinus.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPMod.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPMultiply.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPPlus.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFPosMod.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFPower.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFRound.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFAverage.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFSum.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcSerDeStats.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/udf/TestGenericUDFAbs.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/RegexSerDe.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/io/HiveDecimalWritable.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyHiveDecimal.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/JavaHiveDecimalObjectInspector.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorConverter.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/WritableHiveDecimalObjectInspector.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/binarysortable/TestBinarySortableSerDe.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/io/TestTimestampWritable.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/objectinspector/TestObjectInspectorConverters.java


 Add option for removing intermediate directory for partition, which is empty
 

 Key: HIVE-5220
 URL: https://issues.apache.org/jira/browse/HIVE-5220
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-5220.D12729.1.patch


 For a deeply nested partitioned table, intermediate directories are not 
 removed even when no partitions remain in them after partitions are dropped.
 {noformat}
 /deep_part/c=09/d=01
 /deep_part/c=09/d=01/e=01
 /deep_part/c=09/d=01/e=02
 /deep_part/c=09/d=02
 /deep_part/c=09/d=02/e=01
 /deep_part/c=09/d=02/e=02
 {noformat}
 After removing partition (c='09'), the directory tree remains like this:
 {noformat}
 /deep_part/c=09/d=01
 /deep_part/c=09/d=02
 {noformat}
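The cleanup this option asks for can be sketched as a walk up the tree that deletes now-empty ancestors after a partition is dropped; the class name and stopping rule below are illustrative assumptions, not the actual patch:

```java
import java.io.File;

// Illustrative sketch: after dropping a partition directory, remove its
// ancestor directories as long as they are empty, stopping at the table root
// so the table's own directory is never deleted.
public class EmptyDirCleaner {

    // Delete 'dir' and its empty parents, up to (but not including) 'root'.
    public static void pruneEmptyParents(File dir, File root) {
        File cur = dir;
        while (cur != null && !cur.equals(root)) {
            String[] children = cur.list();
            if (children == null || children.length > 0) {
                break;  // not a directory, or still has content: stop here
            }
            File parent = cur.getParentFile();
            cur.delete();   // remove the empty intermediate directory
            cur = parent;
        }
    }
}
```

Applied to the example above, pruning from /deep_part/c=09/d=01 would remove d=01 and then the now-empty c=09, leaving /deep_part intact.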



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5520) Use factory methods to instantiate HiveDecimal instead of constructors

2013-10-13 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793909#comment-13793909
 ] 

Xuefu Zhang commented on HIVE-5520:
---

Thank you for reviewing and committing the patch, Ashutosh.

 Use factory methods to instantiate HiveDecimal instead of constructors
 --

 Key: HIVE-5520
 URL: https://issues.apache.org/jira/browse/HIVE-5520
 Project: Hive
  Issue Type: Improvement
  Components: Types
Affects Versions: 0.11.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-5520.1.patch, HIVE-5520.patch


 Currently the HiveDecimal class provides a number of constructors that 
 unfortunately also throw a runtime exception. For example,
 {code}
 public HiveDecimal(BigInteger unscaled, int scale) {
   bd = this.normalize(new BigDecimal(unscaled, scale), MAX_PRECISION, false);
   if (bd == null) {
     throw new NumberFormatException("Assignment would result in truncation");
   }
 }
 {code}
 As a result, it is hard for the caller to detect error occurrences, and the 
 error handling is also complicated. In many cases, the error handling is 
 omitted or missed. For instance,
 {code}
 HiveDecimalWritable result = new HiveDecimalWritable(HiveDecimal.ZERO);
 try {
   result.set(aggregation.sum.divide(new HiveDecimal(aggregation.count)));
 } catch (NumberFormatException e) {
   result = null;
 }
 {code} 
 Throwing a runtime exception while expecting the caller to catch it is an 
 anti-pattern. In place of constructors, a factory class or factory methods 
 seem more appropriate. With such a change, the APIs are cleaner and the 
 error handling is simplified.
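The proposed factory-method shape might look like the following simplified sketch; the class name, MAX_PRECISION value, and the null-on-truncation convention are illustrative assumptions, not the actual HiveDecimal API:

```java
import java.math.BigDecimal;

// Illustrative sketch of the factory-method pattern: the static create()
// returns null on truncation instead of the constructor throwing a
// NumberFormatException, so callers use a null check rather than try/catch.
public class Decimal {
    private static final int MAX_PRECISION = 38;  // assumed limit
    private final BigDecimal bd;

    private Decimal(BigDecimal bd) {  // constructor is now private
        this.bd = bd;
    }

    // Factory method: signal failure with null instead of an exception.
    public static Decimal create(BigDecimal value) {
        if (value.precision() > MAX_PRECISION) {
            return null;  // value would be truncated; caller checks for null
        }
        return new Decimal(value);
    }

    public BigDecimal bigDecimalValue() {
        return bd;
    }
}
```

A caller then writes `Decimal d = Decimal.create(x); if (d == null) { ... }`, which makes the error path explicit at every call site instead of hiding it behind a catch block.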



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5533) Re-connect Tez session after AM timeout

2013-10-13 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HIVE-5533:


 Summary: Re-connect Tez session after AM timeout
 Key: HIVE-5533
 URL: https://issues.apache.org/jira/browse/HIVE-5533
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: tez-branch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5533) Re-connect Tez session after AM timeout

2013-10-13 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-5533:
-

Attachment: HIVE-5533.1.patch

 Re-connect Tez session after AM timeout
 ---

 Key: HIVE-5533
 URL: https://issues.apache.org/jira/browse/HIVE-5533
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: tez-branch

 Attachments: HIVE-5533.1.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HIVE-5533) Re-connect Tez session after AM timeout

2013-10-13 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner resolved HIVE-5533.
--

Resolution: Fixed

Committed to branch.

 Re-connect Tez session after AM timeout
 ---

 Key: HIVE-5533
 URL: https://issues.apache.org/jira/browse/HIVE-5533
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: tez-branch

 Attachments: HIVE-5533.1.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5220) Add option for removing intermediate directory for partition, which is empty

2013-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793933#comment-13793933
 ] 

Hudson commented on HIVE-5220:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #139 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/139/])
HIVE-5220 : Use factory methods to instantiate HiveDecimal instead of 
constructors (Xuefu Zhang via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1531781)
* 
/hive/trunk/common/src/java/org/apache/hadoop/hive/common/type/HiveDecimal.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/ColumnStatisticsImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPMinus.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPMod.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPMultiply.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPPlus.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFPosMod.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFPower.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFRound.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFAverage.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFSum.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcSerDeStats.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/udf/TestGenericUDFAbs.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/RegexSerDe.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/io/HiveDecimalWritable.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyHiveDecimal.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/JavaHiveDecimalObjectInspector.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorConverter.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/WritableHiveDecimalObjectInspector.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/binarysortable/TestBinarySortableSerDe.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/io/TestTimestampWritable.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/objectinspector/TestObjectInspectorConverters.java


 Add option for removing intermediate directory for partition, which is empty
 

 Key: HIVE-5220
 URL: https://issues.apache.org/jira/browse/HIVE-5220
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-5220.D12729.1.patch


 For a deeply nested partitioned table, intermediate directories are not 
 removed even when no partitions remain in them after partitions are dropped.
 {noformat}
 /deep_part/c=09/d=01
 /deep_part/c=09/d=01/e=01
 /deep_part/c=09/d=01/e=02
 /deep_part/c=09/d=02
 /deep_part/c=09/d=02/e=01
 /deep_part/c=09/d=02/e=02
 {noformat}
 After removing partition (c='09'), the directory tree remains like this:
 {noformat}
 /deep_part/c=09/d=01
 /deep_part/c=09/d=02
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)