Hive-trunk-h0.21 - Build # 1384 - Failure

2012-04-20 Thread Apache Jenkins Server
Changes for Build #1384
[hashutosh] HIVE-2958 [jira] GROUP BY causing ClassCastException 
[LazyDioInteger cannot be
cast LazyInteger]
(Navis Ryu via Ashutosh Chauhan)

Summary:
DPAL- GROUP BY causing ClassCastException [LazyDioInteger cannot be cast
LazyInteger]

This relates to https://issues.apache.org/jira/browse/HIVE-1634.

The following work fine:

CREATE EXTERNAL TABLE tim_hbase_occurrence (
  id int,
  scientific_name string,
  data_resource_id int
) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH
SERDEPROPERTIES (
  "hbase.columns.mapping" = ":key#b,v:scientific_name#s,v:data_resource_id#b"
) TBLPROPERTIES(
  "hbase.table.name" = "mini_occurrences",
  "hbase.table.default.storage.type" = "binary"
);
SELECT * FROM tim_hbase_occurrence LIMIT 3;
SELECT * FROM tim_hbase_occurrence WHERE data_resource_id=1081 LIMIT 3;

However, the following fails:

SELECT data_resource_id, count(*) FROM tim_hbase_occurrence GROUP BY
data_resource_id;

The error given:

0 TS
2012-04-17 16:58:45,693 INFO org.apache.hadoop.hive.ql.exec.MapOperator:
Initialization Done 7 MAP
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator:
Processing alias tim_hbase_occurrence for file
hdfs://c1n2.gbif.org/user/hive/warehouse/tim_hbase_occurrence
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7
forwarding 1 rows
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 0
forwarding 1 rows
2012-04-17 16:58:45,716 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 1
forwarding 1 rows
2012-04-17 16:58:45,723 FATAL ExecMapper:
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while
processing row {id:1444,scientific_name:null,data_resource_id:1081}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:391)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
java.lang.ClassCastException:
org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to
org.apache.hadoop.hive.serde2.lazy.LazyInteger
at
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:737)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:529)
... 9 more
Caused by: java.lang.ClassCastException:
org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to
org.apache.hadoop.hive.serde2.lazy.LazyInteger
at
org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyIntObjectInspector.copyObject(LazyIntObjectInspector.java:43)
at
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:239)
at
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150)
at
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142)
at
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.copyKey(KeyWrapperFactory.java:119)
at
org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:750)
at
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:722)
... 18 more
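
For readers tracing the two lazy integer hierarchies, here is a minimal, self-contained Java sketch of the failure mode (the classes below are simplified stand-ins, not the real serde2 types): copyObject() downcasts to the text-lazy wrapper, so handing it the binary-backed "dio" variant produced by the HBase binary storage path throws the same kind of ClassCastException.

// Simplified stand-ins; names and relationships are illustrative assumptions.
class LazyInt { int value; }      // analogue of lazy.LazyInteger (text encoding)
class LazyDioInt { int value; }   // analogue of lazydio.LazyDioInteger (binary encoding)

public class CastSketch {
  // Analogue of LazyIntObjectInspector.copyObject(): assumes the text-lazy type.
  static Object copyObject(Object o) {
    LazyInt li = (LazyInt) o;     // throws ClassCastException for a LazyDioInt
    LazyInt copy = new LazyInt();
    copy.value = li.value;
    return copy;
  }

  public static void main(String[] args) {
    LazyDioInt binaryBacked = new LazyDioInt();
    binaryBacked.value = 1081;    // the data_resource_id from the failing row
    copyObject(binaryBacked);     // fails the same way the GROUP BY key copy does
  }
}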

Test Plan: EMPTY

Reviewers: JIRA, ashutoshc

Reviewed By: ashutoshc

Differential Revision: https://reviews.facebook.net/D2871




1 tests failed.
REGRESSION:  
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1

Error Message:
Unexpected exception See build/ql/tmp/hive.log, or try "ant test ...
-Dtest.silent=false" to get more logs.

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception
See 

[jira] [Commented] (HIVE-1634) Allow access to Primitive types stored in binary format in HBase

2012-04-20 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258036#comment-13258036
 ] 

Hudson commented on HIVE-1634:
--

Integrated in Hive-trunk-h0.21 #1384 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1384/])
HIVE-2958 [jira] GROUP BY causing ClassCastException [LazyDioInteger cannot 
be
cast LazyInteger]
(Navis Ryu via Ashutosh Chauhan)

Summary:
DPAL- GROUP BY causing ClassCastException [LazyDioInteger cannot be cast
LazyInteger]

This relates to https://issues.apache.org/jira/browse/HIVE-1634.

The following work fine:

CREATE EXTERNAL TABLE tim_hbase_occurrence (
  id int,
  scientific_name string,
  data_resource_id int
) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH
SERDEPROPERTIES (
  "hbase.columns.mapping" = ":key#b,v:scientific_name#s,v:data_resource_id#b"
) TBLPROPERTIES(
  "hbase.table.name" = "mini_occurrences",
  "hbase.table.default.storage.type" = "binary"
);
SELECT * FROM tim_hbase_occurrence LIMIT 3;
SELECT * FROM tim_hbase_occurrence WHERE data_resource_id=1081 LIMIT 3;

However, the following fails:

SELECT data_resource_id, count(*) FROM tim_hbase_occurrence GROUP BY
data_resource_id;

The error given:

0 TS
2012-04-17 16:58:45,693 INFO org.apache.hadoop.hive.ql.exec.MapOperator:
Initialization Done 7 MAP
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator:
Processing alias tim_hbase_occurrence for file
hdfs://c1n2.gbif.org/user/hive/warehouse/tim_hbase_occurrence
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7
forwarding 1 rows
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 0
forwarding 1 rows
2012-04-17 16:58:45,716 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 1
forwarding 1 rows
2012-04-17 16:58:45,723 FATAL ExecMapper:
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while
processing row {id:1444,scientific_name:null,data_resource_id:1081}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:391)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
java.lang.ClassCastException:
org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to
org.apache.hadoop.hive.serde2.lazy.LazyInteger
at
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:737)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:529)
... 9 more
Caused by: java.lang.ClassCastException:
org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to
org.apache.hadoop.hive.serde2.lazy.LazyInteger
at
org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyIntObjectInspector.copyObject(LazyIntObjectInspector.java:43)
at
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:239)
at
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150)
at
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142)
at
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.copyKey(KeyWrapperFactory.java:119)
at
org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:750)
at
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:722)
... 18 more

Test Plan: EMPTY

Reviewers: JIRA, ashutoshc

Reviewed By: ashutoshc

Differential Revision: https://reviews.facebook.net/D2871 (Revision 1328157)

 Result = FAILURE

[jira] [Commented] (HIVE-2958) GROUP BY causing ClassCastException [LazyDioInteger cannot be cast LazyInteger]

2012-04-20 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258037#comment-13258037
 ] 

Hudson commented on HIVE-2958:
--

Integrated in Hive-trunk-h0.21 #1384 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1384/])
HIVE-2958 [jira] GROUP BY causing ClassCastException [LazyDioInteger cannot 
be
cast LazyInteger]
(Navis Ryu via Ashutosh Chauhan)

Summary:
DPAL- GROUP BY causing ClassCastException [LazyDioInteger cannot be cast
LazyInteger]

This relates to https://issues.apache.org/jira/browse/HIVE-1634.

The following work fine:

CREATE EXTERNAL TABLE tim_hbase_occurrence (
  id int,
  scientific_name string,
  data_resource_id int
) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH
SERDEPROPERTIES (
  "hbase.columns.mapping" = ":key#b,v:scientific_name#s,v:data_resource_id#b"
) TBLPROPERTIES(
  "hbase.table.name" = "mini_occurrences",
  "hbase.table.default.storage.type" = "binary"
);
SELECT * FROM tim_hbase_occurrence LIMIT 3;
SELECT * FROM tim_hbase_occurrence WHERE data_resource_id=1081 LIMIT 3;

However, the following fails:

SELECT data_resource_id, count(*) FROM tim_hbase_occurrence GROUP BY
data_resource_id;

The error given:

0 TS
2012-04-17 16:58:45,693 INFO org.apache.hadoop.hive.ql.exec.MapOperator:
Initialization Done 7 MAP
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator:
Processing alias tim_hbase_occurrence for file
hdfs://c1n2.gbif.org/user/hive/warehouse/tim_hbase_occurrence
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7
forwarding 1 rows
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 0
forwarding 1 rows
2012-04-17 16:58:45,716 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 1
forwarding 1 rows
2012-04-17 16:58:45,723 FATAL ExecMapper:
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while
processing row {id:1444,scientific_name:null,data_resource_id:1081}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:391)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
java.lang.ClassCastException:
org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to
org.apache.hadoop.hive.serde2.lazy.LazyInteger
at
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:737)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:529)
... 9 more
Caused by: java.lang.ClassCastException:
org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to
org.apache.hadoop.hive.serde2.lazy.LazyInteger
at
org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyIntObjectInspector.copyObject(LazyIntObjectInspector.java:43)
at
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:239)
at
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150)
at
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142)
at
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.copyKey(KeyWrapperFactory.java:119)
at
org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:750)
at
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:722)
... 18 more

Test Plan: EMPTY

Reviewers: JIRA, ashutoshc

Reviewed By: ashutoshc

Differential Revision: https://reviews.facebook.net/D2871 (Revision 1328157)

 Result = FAILURE

POST limit

2012-04-20 Thread John Sichi
Ashutosh, are you submitting an exceptionally large patch of some kind?

http://stackoverflow.com/questions/6279897/php-post-content-length-of-11933650-bytes-exceeds-the-limit-of-8388608-bytes

We could try bumping up that limit on the server side, but first it
would be good to find out whether that is really the problem (and if
so what is contributing to such a big size).

JVS
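
For reference, 8388608 bytes is exactly 8 MiB, which is PHP's default post_max_size of 8M, so the 9079953-byte request in the trace below exceeds that default by about 675 KiB. A throwaway arithmetic check:

public class PostLimit {
    public static void main(String[] args) {
        long limit = 8L * 1024 * 1024;       // PHP's default post_max_size of 8M
        long payload = 9079953L;             // Content-Length reported by the Conduit error
        System.out.println(limit);           // 8388608
        System.out.println(payload - limit); // 691345 bytes over the limit
    }
}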

On Thu, Apr 19, 2012 at 7:35 PM, Ashutosh Chauhan hashut...@apache.org wrote:
 Hit a new problem with arc today:

 Fatal error: Uncaught exception 'Exception' with message 'Host returned
 HTTP/200, but invalid JSON data in response to a Conduit method call:
 <br />
 <b>Warning</b>:  Unknown: POST Content-Length of 9079953 bytes exceeds the
 limit of 8388608 bytes in <b>Unknown</b> on line <b>0</b><br />
 for(;;);{"result":null,"error_code":"ERR-INVALID-SESSION","error_info":"Session
 key is not present."}' in
 /Users/ashutosh/work/hive/libphutil/src/conduit/client/ConduitFuture.php:48
 Stack trace:
 #0
 /Users/ashutosh/work/hive/libphutil/src/future/proxy/FutureProxy.php(62):
 ConduitFuture->didReceiveResult(Array)
 #1
 /Users/ashutosh/work/hive/libphutil/src/future/proxy/FutureProxy.php(39):
 FutureProxy->getResult()
 #2
 /Users/ashutosh/work/hive/libphutil/src/conduit/client/ConduitClient.php(52):
 FutureProxy->resolve()
 #3
 /Users/ashutosh/work/hive/arcanist/src/workflow/diff/ArcanistDiffWorkflow.php(341):
 ConduitClient->callMethodSynchronous('differential.cr...', Array)
 #4 /Users/ashutosh/work/hive/arcanist/scripts/arcanist.php(266):
 ArcanistDiffWo in
 /Users/ashutosh/work/hive/libphutil/src/conduit/client/ConduitFuture.php on
 line 48


 Any ideas how to solve this?

 Thanks,
 Ashutosh


hive pull request: Dsp 563 row missing from hive

2012-04-20 Thread Git at Apache
GitHub user xedin opened the pull request at
https://github.com/apache/hive/pull/4


Dsp 563 row missing from hive




You can merge this pull request into a Git repository by running
$ git pull https://github.com/riptano/hive DSP-563-row-missing-from-hive

Alternatively you can review and apply these changes as the patch at
https://github.com/apache/hive/pull/4.patch



hive pull request: Dsp 563 row missing from hive

2012-04-20 Thread Git at Apache
Github user xedin closed the pull request at
https://github.com/apache/hive/pull/4



[jira] [Updated] (HIVE-2721) ability to select a view qualified by the database / schema name

2012-04-20 Thread Martin Traverso (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin Traverso updated HIVE-2721:
--

Assignee: Martin Traverso

 ability to select a view qualified by the database / schema name
 

 Key: HIVE-2721
 URL: https://issues.apache.org/jira/browse/HIVE-2721
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema, Query Processor
Affects Versions: 0.7.0, 0.7.1, 0.8.0
Reporter: Robert Morton
Assignee: Martin Traverso
Priority: Blocker

 HIVE-1517 added support for selecting tables from different databases (aka 
 schemas) by qualifying the tables with the database name. The feature work 
 did not however extend this support to views. Note that this point came up in 
 the earlier JIRA, but was not addressed. See the following two comments:
 https://issues.apache.org/jira/browse/HIVE-1517?focusedCommentId=12996641&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996641
 https://issues.apache.org/jira/browse/HIVE-1517?focusedCommentId=12996679&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996679

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2721) ability to select a view qualified by the database / schema name

2012-04-20 Thread Robert Morton (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258311#comment-13258311
 ] 

Robert Morton commented on HIVE-2721:
-

I believe index DDL operations like CREATE INDEX have a similar limitation.

 ability to select a view qualified by the database / schema name
 

 Key: HIVE-2721
 URL: https://issues.apache.org/jira/browse/HIVE-2721
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema, Query Processor
Affects Versions: 0.7.0, 0.7.1, 0.8.0
Reporter: Robert Morton
Assignee: Martin Traverso
Priority: Blocker

 HIVE-1517 added support for selecting tables from different databases (aka 
 schemas) by qualifying the tables with the database name. The feature work 
 did not however extend this support to views. Note that this point came up in 
 the earlier JIRA, but was not addressed. See the following two comments:
 https://issues.apache.org/jira/browse/HIVE-1517?focusedCommentId=12996641&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996641
 https://issues.apache.org/jira/browse/HIVE-1517?focusedCommentId=12996679&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996679

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: POST limit

2012-04-20 Thread Ashutosh Chauhan
Hey John,

Yeah, this is an exceptionally large patch.
https://issues.apache.org/jira/browse/HIVE-2965

Thanks,
Ashutosh

On Thu, Apr 19, 2012 at 23:19, John Sichi jsi...@gmail.com wrote:

 Ashutosh, are you submitting an exceptionally large patch of some kind?


 http://stackoverflow.com/questions/6279897/php-post-content-length-of-11933650-bytes-exceeds-the-limit-of-8388608-bytes

 We could try bumping up that limit on the server side, but first it
 would be good to find out whether that is really the problem (and if
 so what is contributing to such a big size).

 JVS

 On Thu, Apr 19, 2012 at 7:35 PM, Ashutosh Chauhan hashut...@apache.org
 wrote:
  Hit a new problem with arc today:
 
  Fatal error: Uncaught exception 'Exception' with message 'Host returned
  HTTP/200, but invalid JSON data in response to a Conduit method call:
  <br />
  <b>Warning</b>:  Unknown: POST Content-Length of 9079953 bytes exceeds
 the
  limit of 8388608 bytes in <b>Unknown</b> on line <b>0</b><br />
 
 for(;;);{"result":null,"error_code":"ERR-INVALID-SESSION","error_info":"Session
  key is not present."}' in
 
 /Users/ashutosh/work/hive/libphutil/src/conduit/client/ConduitFuture.php:48
  Stack trace:
  #0
  /Users/ashutosh/work/hive/libphutil/src/future/proxy/FutureProxy.php(62):
  ConduitFuture->didReceiveResult(Array)
  #1
  /Users/ashutosh/work/hive/libphutil/src/future/proxy/FutureProxy.php(39):
  FutureProxy->getResult()
  #2
 
 /Users/ashutosh/work/hive/libphutil/src/conduit/client/ConduitClient.php(52):
  FutureProxy->resolve()
  #3
 
 /Users/ashutosh/work/hive/arcanist/src/workflow/diff/ArcanistDiffWorkflow.php(341):
  ConduitClient->callMethodSynchronous('differential.cr...', Array)
  #4 /Users/ashutosh/work/hive/arcanist/scripts/arcanist.php(266):
  ArcanistDiffWo in
  /Users/ashutosh/work/hive/libphutil/src/conduit/client/ConduitFuture.php
 on
  line 48
 
 
  Any ideas how to solve this?
 
  Thanks,
  Ashutosh



[jira] [Commented] (HIVE-2965) Revert HIVE-2612

2012-04-20 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258382#comment-13258382
 ] 

Ashutosh Chauhan commented on HIVE-2965:


All tests passed. 
{code}
BUILD SUCCESSFUL
Total time: 354 minutes 58 seconds
{code}

 Revert HIVE-2612
 

 Key: HIVE-2965
 URL: https://issues.apache.org/jira/browse/HIVE-2965
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
Priority: Blocker
 Fix For: 0.9.0

 Attachments: hive-2765.patch


 In 4/19 contrib meeting it was decided to revert HIVE-2612.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-1603) support CSV text file format

2012-04-20 Thread M Shaw (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258428#comment-13258428
 ] 

M Shaw commented on HIVE-1603:
--

Storing output in CSV format is a very desirable feature for end users.

 support CSV text file format
 

 Key: HIVE-1603
 URL: https://issues.apache.org/jira/browse/HIVE-1603
 Project: Hive
  Issue Type: New Feature
Affects Versions: 0.7.0
Reporter: Ning Zhang

 Comma Separated Values (CSV) text format are commonly used in exchanging 
 relational data between heterogeneous systems. Currently Hive uses TextFile 
 format when displaying query results. This could cause confusions when column 
 values contain new lines or tabs. A CSVTextFile format could get around this 
 problem. This will require a new CSVTextInputFormat, CSVTextOutputFormat, and 
 CSVSerDe. 
 A proposed use case is like:
 {code}
 -- exporting a table to CSV files in a directory
 hive> set hive.io.output.fileformat=CSVTextFile;
 hive> insert overwrite local directory '/tmp/CSVrepos/' select * from S where 
 ... ;
 -- query result in CSV
 hive -e 'set hive.io.output.fileformat=CSVTextFile; select * from T;' | 
 sql_loader_to_other_systems
 -- query CSV files directory from Hive
 hive> create table T (...) stored as CSVTextFile;
 hive> load data local inpath '/my/CSVfiles' into table T;
 hive> select * from T where ...;
 {code}
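
 As a side note on why plain TextFile output gets confusing here, the sketch below (plain Java, not taken from any Hive SerDe code) shows the RFC 4180-style quoting a CSVSerDe would need so that embedded newlines, tabs, and commas survive a round trip:
 {code}
 public class CsvQuote {
   // Quote a field RFC 4180-style: wrap it in double quotes when it contains a
   // comma, quote, CR or LF, and double any embedded quotes.
   static String quote(String field) {
     if (field.contains(",") || field.contains("\"")
         || field.contains("\n") || field.contains("\r")) {
       return "\"" + field.replace("\"", "\"\"") + "\"";
     }
     return field;
   }

   public static void main(String[] args) {
     // An embedded newline would split a TextFile row in two, but it stays a
     // single CSV record once quoted.
     System.out.println(quote("line one\nline two") + "," + quote("plain"));
   }
 }
 {code}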

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2965) Revert HIVE-2612

2012-04-20 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2965:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and 0.9

 Revert HIVE-2612
 

 Key: HIVE-2965
 URL: https://issues.apache.org/jira/browse/HIVE-2965
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
Priority: Blocker
 Fix For: 0.9.0

 Attachments: hive-2765.patch


 In 4/19 contrib meeting it was decided to revert HIVE-2612.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2933) analyze command throw NPE when table doesn't exists

2012-04-20 Thread ransom.hezhiqiang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258450#comment-13258450
 ] 

ransom.hezhiqiang commented on HIVE-2933:
-

please review it.

 analyze command throw NPE when table doesn't exists
 ---

 Key: HIVE-2933
 URL: https://issues.apache.org/jira/browse/HIVE-2933
 Project: Hive
  Issue Type: Bug
  Components: Statistics
Affects Versions: 0.8.1
Reporter: alex gemini
Priority: Minor
 Attachments: HIVE-2933-0.8.1-2.patch


 analyze command throw NPE when table doesn't exists

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs

2012-04-20 Thread Carl Steinbach (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258454#comment-13258454
 ] 

Carl Steinbach commented on HIVE-2646:
--

@Thomas: I'm going to try to fix the pdk problem later today. In the meantime I 
was wondering if you could look at the eclipse templates and make sure that 
they're up to date? Thanks.

 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
 

 Key: HIVE-2646
 URL: https://issues.apache.org/jira/browse/HIVE-2646
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.8.0
Reporter: Andrew Bayer
Assignee: Andrew Bayer
Priority: Critical
 Fix For: 0.9.0

 Attachments: HIVE-2646-fixtests.patch, HIVE-2646.D2133.1.patch, 
 HIVE-2646.D2133.10.patch, HIVE-2646.D2133.11.patch, HIVE-2646.D2133.12.patch, 
 HIVE-2646.D2133.13.patch, HIVE-2646.D2133.14.patch, HIVE-2646.D2133.15.patch, 
 HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, 
 HIVE-2646.D2133.5.patch, HIVE-2646.D2133.6.patch, HIVE-2646.D2133.7.patch, 
 HIVE-2646.D2133.8.patch, HIVE-2646.D2133.9.patch, HIVE-2646.D2883.1.patch, 
 HIVE-2646.diff.txt


 The current Hive Ivy dependency logic for its Hadoop dependencies is 
 problematic - depending on the tarball and extracting the jars from there, 
 rather than depending on the jars directly. It'd be great if this was fixed 
 to actually have the jar dependencies defined directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs

2012-04-20 Thread Carl Steinbach (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258452#comment-13258452
 ] 

Carl Steinbach commented on HIVE-2646:
--

@Thomas: I tried running the tests on Jenkins and noticed that the build is 
failing despite all tests passing. It looks like this is because compilation of 
the pdk subproject is failing:

{noformat}
BUILD FAILED
/var/lib/hudson/production/workspace/carl-HIVE-patch-test1/build.xml:316: The 
following error occurred while executing this line:
/var/lib/hudson/production/workspace/carl-HIVE-patch-test1/build.xml:321: The 
following error occurred while executing this line:
/var/lib/hudson/production/workspace/carl-HIVE-patch-test1/pdk/build.xml:52: 
The following error occurred while executing this line:
/var/lib/hudson/production/workspace/carl-HIVE-patch-test1/build/dist/scripts/pdk/build-plugin.xml:58:
 
/var/lib/hudson/production/workspace/carl-HIVE-patch-test1/build/pdk/test-plugin/${build.ivy.lib.dir}/default
 does not exist.
at 
org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:474)
at org.apache.tools.ant.types.FileSet.iterator(FileSet.java:69)
at 
org.apache.tools.ant.types.resources.Union.getCollection(Union.java:123)
at 
org.apache.tools.ant.types.resources.Union.getCollection(Union.java:107)
at 
org.apache.tools.ant.types.resources.BaseResourceCollectionContainer.cacheCollection(BaseResourceCollectionContainer.java:265)
at 
org.apache.tools.ant.types.resources.BaseResourceCollectionContainer.iterator(BaseResourceCollectionContainer.java:142)
at org.apache.tools.ant.types.Path.iterator(Path.java:704)
at org.apache.tools.ant.types.Path.iterator(Path.java:698)
at 
org.apache.tools.ant.types.resources.Union.getCollection(Union.java:123)
at org.apache.tools.ant.types.resources.Union.list(Union.java:86)
at org.apache.tools.ant.types.Path.list(Path.java:372)
at org.apache.tools.ant.types.Path.addExisting(Path.java:331)
at org.apache.tools.ant.types.Path.addExisting(Path.java:319)
at org.apache.tools.ant.types.Path.concatSpecialPath(Path.java:566)
at org.apache.tools.ant.types.Path.concatSystemClasspath(Path.java:526)
at 
org.apache.tools.ant.taskdefs.compilers.DefaultCompilerAdapter.getCompileClasspath(DefaultCompilerAdapter.java:155)
at 
org.apache.tools.ant.taskdefs.compilers.DefaultCompilerAdapter.setupJavacCommandlineSwitches(DefaultCompilerAdapter.java:183)
at 
org.apache.tools.ant.taskdefs.compilers.DefaultCompilerAdapter.setupModernJavacCommandlineSwitches(DefaultCompilerAdapter.java:321)
at 
org.apache.tools.ant.taskdefs.compilers.DefaultCompilerAdapter.setupModernJavacCommand(DefaultCompilerAdapter.java:368)
at 
org.apache.tools.ant.taskdefs.compilers.Javac13.execute(Javac13.java:48)
at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1097)
at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:906)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397)
at 
org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1249)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397)
at 
org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1249)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
at 

[jira] [Commented] (HIVE-2721) ability to select a view qualified by the database / schema name

2012-04-20 Thread Martin Traverso (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258462#comment-13258462
 ] 

Martin Traverso commented on HIVE-2721:
---

Diff: https://reviews.facebook.net/D2901

 ability to select a view qualified by the database / schema name
 

 Key: HIVE-2721
 URL: https://issues.apache.org/jira/browse/HIVE-2721
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema, Query Processor
Affects Versions: 0.7.0, 0.7.1, 0.8.0
Reporter: Robert Morton
Assignee: Martin Traverso
Priority: Blocker

 HIVE-1517 added support for selecting tables from different databases (aka 
 schemas) by qualifying the tables with the database name. The feature work 
 did not however extend this support to views. Note that this point came up in 
 the earlier JIRA, but was not addressed. See the following two comments:
 https://issues.apache.org/jira/browse/HIVE-1517?focusedCommentId=12996641&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996641
 https://issues.apache.org/jira/browse/HIVE-1517?focusedCommentId=12996679&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996679

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2721) ability to select a view qualified by the database / schema name

2012-04-20 Thread Martin Traverso (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin Traverso updated HIVE-2721:
--

Attachment: HIVE-2721.patch

 ability to select a view qualified by the database / schema name
 

 Key: HIVE-2721
 URL: https://issues.apache.org/jira/browse/HIVE-2721
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema, Query Processor
Affects Versions: 0.7.0, 0.7.1, 0.8.0
Reporter: Robert Morton
Assignee: Martin Traverso
Priority: Blocker
 Attachments: HIVE-2721.patch


 HIVE-1517 added support for selecting tables from different databases (aka 
 schemas) by qualifying the tables with the database name. The feature work 
 did not however extend this support to views. Note that this point came up in 
 the earlier JIRA, but was not addressed. See the following two comments:
 https://issues.apache.org/jira/browse/HIVE-1517?focusedCommentId=12996641&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996641
 https://issues.apache.org/jira/browse/HIVE-1517?focusedCommentId=12996679&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996679

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HIVE-2339) Preserve RS key columns in columnExprMap after CP optimization, which might be useful to other optimizers

2012-04-20 Thread Ashutosh Chauhan (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-2339.


   Resolution: Duplicate
Fix Version/s: (was: 0.9.0)
   0.8.0

Fixed starting 0.8 by HIVE-1989

 Preserve RS key columns in columnExprMap after CP optimization, which might 
 be useful to other optimizers
 -

 Key: HIVE-2339
 URL: https://issues.apache.org/jira/browse/HIVE-2339
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Fix For: 0.8.0

 Attachments: HIVE-2339.1.patch


 In ColumnPrunerProcFactory#pruneReduceSinkOperator, only VALUE parts are 
 retained from columnExprMap. Doesn't anyone want KEY parts to retained, 
 either? In my case, it was very useful for backtracking column names and 
 removing RS in *-RS-*-RS-GBY case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2264) Hive server is SHUTTING DOWN when invalid queries beeing executed.

2012-04-20 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2264:
---

Fix Version/s: (was: 0.9.0)

Unlinking from 0.9

 Hive server is SHUTTING DOWN when invalid queries beeing executed.
 --

 Key: HIVE-2264
 URL: https://issues.apache.org/jira/browse/HIVE-2264
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0
 Environment: SuSE-Linux-11
Reporter: rohithsharma
Assignee: Navis
Priority: Critical
 Attachments: HIVE-2264.1.patch.txt


 When an invalid query is being executed, the Hive server shuts down.
 {noformat}
 CREATE TABLE SAMPLETABLE(IP STRING , showtime BIGINT ) partitioned by (ds 
 string,ipz int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\040'
 ALTER TABLE SAMPLETABLE add Partition(ds='sf') location 
 '/user/hive/warehouse' Partition(ipz=100) location '/user/hive/warehouse'
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2332) If all of the parameters of distinct functions are exists in group by columns, query fails in runtime

2012-04-20 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2332:
---

Affects Version/s: 0.9.0
Fix Version/s: (was: 0.9.0)

Unlinking from 0.9

 If all of the parameters of distinct functions are exists in group by 
 columns, query fails in runtime
 -

 Key: HIVE-2332
 URL: https://issues.apache.org/jira/browse/HIVE-2332
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0
Reporter: Navis
Assignee: Navis
Priority: Critical
 Attachments: HIVE-2332.1.patch.txt, HIVE-2332.2.patch.txt, 
 HIVE-2332.D663.1.patch


 select sum(key_int1), sum(distinct key_int1) from t1 group by key_int1;
 fails with message..
 {code}
 FAILED: Execution Error, return code 2 from 
 org.apache.hadoop.hive.ql.exec.MapRedTask
 {code}
 hadoop says..
 {code}
 Caused by: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
   at java.util.ArrayList.get(ArrayList.java:322)
   at 
 org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.init(StandardStructObjectInspector.java:95)
   at 
 org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.<init>(StandardStructObjectInspector.java:86)
   at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory.getStandardStructObjectInspector(ObjectInspectorFactory.java:252)
   at 
 org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.initEvaluatorsAndReturnStruct(ReduceSinkOperator.java:188)
   at 
 org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:197)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:85)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:532)
 {code}
 I think the problem is that the number of key expressions is smaller than the 
 number of key columns; it should be equal or greater. 
 Would it be solved by adding some key expressions? I'll try.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2498) Group by operator doesnt estimate size of Timestamp & Binary data correctly

2012-04-20 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2498:
---

Affects Version/s: 0.9.0
Fix Version/s: (was: 0.9.0)

Unlinking from 0.9

 Group by operator doesnt estimate size of Timestamp & Binary data correctly
 ---

 Key: HIVE-2498
 URL: https://issues.apache.org/jira/browse/HIVE-2498
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.8.0, 0.8.1, 0.9.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-2498.D1185.1.patch, hive-2498.patch, 
 hive-2498_1.patch


 It currently defaults to default case and returns constant value, whereas we 
 can do better by getting actual size at runtime.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2966) Revert HIVE-2795

2012-04-20 Thread Thejas M Nair (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-2966:


Attachment: HIVE-2966.2.patch

Including HIVE-2961 changes. Phabricator is also updated.

 Revert HIVE-2795
 

 Key: HIVE-2966
 URL: https://issues.apache.org/jira/browse/HIVE-2966
 Project: Hive
  Issue Type: Task
  Components: Metastore
Affects Versions: 0.9.0
Reporter: Ashutosh Chauhan
Priority: Blocker
 Fix For: 0.9.0

 Attachments: HIVE-2966.1.patch, HIVE-2966.2.patch


 In 4/18/12 contrib meeting, it was decided to revert HIVE-2795

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2544) Nullpointer on registering udfs.

2012-04-20 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2544:
---

Fix Version/s: (was: 0.9.0)

Unlinking from 0.9

 Nullpointer on registering udfs.
 

 Key: HIVE-2544
 URL: https://issues.apache.org/jira/browse/HIVE-2544
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0
Reporter: Bennie Schut
Assignee: Bennie Schut
 Attachments: HIVE-2544.1.patch.txt


 Currently the Function registry can throw NullPointers when multiple threads 
 are trying to register the same function. The normal put() will replace the 
 existing registered function object even if it's exactly the same function.
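
 The race described above is the classic unconditional-put hazard. A minimal sketch (generic Java, not the actual FunctionRegistry code or the attached patch) of the difference between put() and an atomic putIfAbsent():
 {code}
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;

 public class RegistrySketch {
   // Hypothetical stand-in for a registered function entry.
   static final class FunctionInfo {
     final String name;
     FunctionInfo(String name) { this.name = name; }
   }

   static final ConcurrentMap<String, FunctionInfo> registry =
       new ConcurrentHashMap<String, FunctionInfo>();

   // Unconditional put: a second thread replaces an entry another thread may be
   // using at that moment, which is the kind of race the report describes.
   static void registerUnsafe(String name) {
     registry.put(name, new FunctionInfo(name));
   }

   // putIfAbsent keeps the first registration and is atomic, so registering the
   // same function from several threads is harmless.
   static FunctionInfo registerIdempotent(String name) {
     FunctionInfo candidate = new FunctionInfo(name);
     FunctionInfo existing = registry.putIfAbsent(name, candidate);
     return existing != null ? existing : candidate;
   }

   public static void main(String[] args) {
     registerIdempotent("my_udf");
     registerIdempotent("my_udf");
     System.out.println(registry.size()); // 1
   }
 }
 {code}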

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2586) Float comparison doesn't work

2012-04-20 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2586:
---

Affects Version/s: 0.9.0
Fix Version/s: (was: 0.9.0)

Unlinking from 0.9

 Float comparison doesn't work
 -

 Key: HIVE-2586
 URL: https://issues.apache.org/jira/browse/HIVE-2586
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.9.0
Reporter: Robert Surówka
Assignee: Navis
Priority: Minor
 Attachments: HIVE-2586.1.patch.txt


 Create a table with a float column, insert a value such as 1410.1 into it, and then do 
 select * from that table where that column = 1410.1. Nothing will be found.
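
 The underlying issue is ordinary float-versus-double literal semantics: the stored float, widened back to double for the comparison, is not equal to the double literal 1410.1. A standalone Java illustration; the last line mirrors the commonly suggested Hive-side workaround of writing ... where col = cast(1410.1 as float), offered here as an untested suggestion:
 {code}
 public class FloatCompare {
   public static void main(String[] args) {
     float stored = 1410.1f;    // the value as it sits in a FLOAT column
     double literal = 1410.1;   // how the bare query literal is interpreted
     System.out.println(stored == literal);         // false: widening keeps the float's rounding error
     System.out.println((double) stored);           // 1410.0999755859375
     System.out.println(stored == (float) literal); // true: comparing at float precision
   }
 }
 {code}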

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HIVE-2657) builtins JAR is not being published to Maven repo & hive-cli POM does not depend on it either

2012-04-20 Thread Ashutosh Chauhan (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-2657:
--

Assignee: Carl Steinbach

 builtins JAR is not being published to Maven repo & hive-cli POM does not 
 depend on it either
 -

 Key: HIVE-2657
 URL: https://issues.apache.org/jira/browse/HIVE-2657
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.9.0
Reporter: Alejandro Abdelnur
Assignee: Carl Steinbach
 Fix For: 0.9.0

 Attachments: HIVE-2657.D897.1.patch, HIVE-2657.D897.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2679) ctas should check the table do not exists just right before move the data to the table's directory

2012-04-20 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2679:
---

  Component/s: Query Processor
   Metastore
Affects Version/s: 0.9.0
Fix Version/s: (was: 0.9.0)

Unlinking from 0.9

 ctas should check the table do not exists just right before move the data to 
 the table's directory 
 ---

 Key: HIVE-2679
 URL: https://issues.apache.org/jira/browse/HIVE-2679
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Query Processor
Affects Versions: 0.9.0
Reporter: binlijin
 Attachments: hive-2679.patch


 Someone first runs a CTAS query whose MR job takes a long time; another user 
 runs a second CTAS query whose MR job is short and finishes before the first 
 one. When the two CTAS queries use the same table name, the first one will 
 still move its data to the table's directory successfully, but its create-table 
 task will fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HIVE-2679) ctas should check the table do not exists just right before move the data to the table's directory

2012-04-20 Thread Ashutosh Chauhan (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-2679:
--

Assignee: binlijin

 ctas should check the table do not exists just right before move the data to 
 the table's directory 
 ---

 Key: HIVE-2679
 URL: https://issues.apache.org/jira/browse/HIVE-2679
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Query Processor
Affects Versions: 0.9.0
Reporter: binlijin
Assignee: binlijin
 Attachments: hive-2679.patch


 Someone first runs a CTAS query whose MR job takes a long time; another user 
 runs a second CTAS query whose MR job is short and finishes before the first 
 one. When the two CTAS queries use the same table name, the first one will 
 still move its data to the table's directory successfully, but its create-table 
 task will fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2715) Upgrade Thrift dependency to 0.9.0

2012-04-20 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2715:
---

Fix Version/s: (was: 0.9.0)

Unlinking from 0.9

 Upgrade Thrift dependency to 0.9.0
 --

 Key: HIVE-2715
 URL: https://issues.apache.org/jira/browse/HIVE-2715
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.8.0
Reporter: Mithun Radhakrishnan

 I work on HCatalog (0.2). Recently, we ran into HCat_server running out of 
 memory every few days, and it boiled down to a bug in thrift, (THRIFT-1468, 
 recently fixed).
 HCat-0.2-branch depends on Hive-0.8, which in turn depends on thrift-0.5.0. 
 (The bug also exists on 0.7.0.)
 May I please enquire if Hive can't depend on a more current version of 
 thrift? (Does it break the metastore?) I'm afraid I'm not privy to the 
 reasoning behind Hive's dependency on a slightly dated thrift-lib. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2723) should throw "Ambiguous column reference key" Exception in particular join condition

2012-04-20 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2723:
---

Affects Version/s: 0.9.0
Fix Version/s: (was: 0.9.0)

Unlinking from 0.9

 should throw "Ambiguous column reference key" Exception in particular join 
 condition
 --

 Key: HIVE-2723
 URL: https://issues.apache.org/jira/browse/HIVE-2723
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0
 Environment: Linux zongren-VirtualBox 3.0.0-14-generic #23-Ubuntu SMP 
 Mon Nov 21 20:34:47 UTC 2011 i686 i686 i386 GNU/Linux
 java version 1.6.0_25
 hadoop-0.20.2-cdh3u0
 hive-0.7.0-cdh3u0
Reporter: caofangkun
Assignee: Navis
Priority: Minor
  Labels: exception-handling, query, queryparser
 Attachments: HIVE-2723.D1275.1.patch, HIVE-2723.D1275.2.patch


 This bug can be reproduced as follows:
 create table test(key string, value string);
 create table test1(key string, value string);
 1: Correct!
 select t.key 
 from 
   (select a.key, b.key from (select * from src ) a right outer join (select * 
 from src1) b on (a.key = b.key)) t;
 FAILED: Error in semantic analysis: Ambiguous column reference key
 2: Incorrect!! It should throw the same Exception as above!
 select t.key --Is this a.key or b.key? It's ambiguous!
 from 
   (select a.*, b.* from (select * from src ) a right outer join (select * 
 from src1) b on (a.value = b.value)) t;
 Total MapReduce jobs = 1
 Launching Job 1 out of 1
 Number of reduce tasks not specified. Defaulting to jobconf value of: 1
 In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=number
 In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=number
 In order to set a constant number of reducers:
   set mapred.reduce.tasks=number
 Starting Job = job_201201170959_0004, Tracking URL = 
 http://zongren-VirtualBox:50030/jobdetails.jsp?jobid=job_201201170959_0004
 Kill Command = /home/zongren/workspace/hadoop-adh/bin/hadoop job  
 -Dmapred.job.tracker=zongren-VirtualBox:9001 -kill job_201201170959_0004
 Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 
 1
 2012-01-17 11:02:47,507 Stage-1 map = 0%,  reduce = 0%
 2012-01-17 11:02:55,002 Stage-1 map = 100%,  reduce = 0%
 2012-01-17 11:03:04,240 Stage-1 map = 100%,  reduce = 33%
 2012-01-17 11:03:05,258 Stage-1 map = 100%,  reduce = 100%
 Ended Job = job_201201170959_0004
 MapReduce Jobs Launched: 
 Job 0: Map: 2  Reduce: 1   HDFS Read: 669 HDFS Write: 216 SUCESS
 Total MapReduce CPU Time Spent: 0 msec
 OK

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2736) Hive UDFs cannot emit binary constants

2012-04-20 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2736:
---

Fix Version/s: (was: 0.9.0)

Unlinking from 0.9

 Hive UDFs cannot emit binary constants
 --

 Key: HIVE-2736
 URL: https://issues.apache.org/jira/browse/HIVE-2736
 Project: Hive
  Issue Type: Bug
  Components: Query Processor, Serializers/Deserializers, UDF
Affects Versions: 0.9.0
Reporter: Philip Tromans
Assignee: Philip Tromans
Priority: Minor
  Labels: newbie
 Attachments: HIVE-2736.1.patch.txt, HIVE-2736.2.patch.txt

   Original Estimate: 4h
  Remaining Estimate: 4h

 I recently wrote a UDF which emits BINARY values (as implemented in 
 [HIVE-2380|https://issues.apache.org/jira/browse/HIVE-2380]). When testing 
 this, I encountered the following exception (because I was evaluating 
 f(g(constant string))) and g() was emitting a BytesWritable type.
 FAILED: Hive Internal Error: java.lang.RuntimeException(Internal error: 
 Cannot find ConstantObjectInspector for BINARY)
 java.lang.RuntimeException: Internal error: Cannot find 
 ConstantObjectInspector for BINARY
   at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory.getPrimitiveWritableConstantObjectInspector(PrimitiveObjectInspectorFactory.java:196)
   at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getConstantObjectInspector(ObjectInspectorUtils.java:899)
   at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDF.initializeAndFoldConstants(GenericUDF.java:128)
   at 
 org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:214)
   at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:684)
   at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:805)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:88)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:125)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:102)
   at 
 org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:161)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7708)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2301)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2103)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:6126)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:6097)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:6723)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7484)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:889)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
 It looks like a pretty simple fix - add a case for BINARY in 
 PrimitiveObjectInspectorFactory.getPrimitiveWritableConstantObjectInspector() 
 and implement a WritableConstantByteArrayObjectInspector class (almost 
 identical to the others). I'm happy to do this, although this is my first 
 foray into the world of contributing to FOSS so I might end up asking a few 
 stupid questions.
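
 A minimal sketch of the shape of that fix, assuming Hive's existing 
 WritableBinaryObjectInspector base class and ConstantObjectInspector 
 interface; the class below is illustrative rather than the committed patch, 
 and it is placed in the serde2 primitive package only so the base constructor 
 is reachable, like the other constant inspectors.
 {code}
 package org.apache.hadoop.hive.serde2.objectinspector.primitive;

 import org.apache.hadoop.hive.serde2.objectinspector.ConstantObjectInspector;
 import org.apache.hadoop.io.BytesWritable;

 // Illustrative only: mirrors the other WritableConstant*ObjectInspector classes
 // by wrapping a fixed BytesWritable and exposing it as the constant value.
 public class WritableConstantByteArrayObjectInspector
     extends WritableBinaryObjectInspector
     implements ConstantObjectInspector {

   private final BytesWritable value;

   public WritableConstantByteArrayObjectInspector(BytesWritable value) {
     this.value = value;
   }

   @Override
   public Object getWritableConstantValue() {
     return value;
   }
 }
 {code}
 The matching factory change would then be one more case in 
 getPrimitiveWritableConstantObjectInspector() that returns this inspector when 
 the primitive category is BINARY.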

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



[jira] [Resolved] (HIVE-2741) Single binary built against 0.20 and 0.23, does not work against 0.23 clusters.

2012-04-20 Thread Ashutosh Chauhan (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-2741.


Resolution: Not A Problem

 Single binary built against 0.20 and 0.23, does not work against 0.23 
 clusters.
 ---

 Key: HIVE-2741
 URL: https://issues.apache.org/jira/browse/HIVE-2741
 Project: Hive
  Issue Type: Bug
  Components: Shims
Affects Versions: 0.8.1
Reporter: Amareshwari Sriramadasu
 Fix For: 0.9.0


 After HIVE-2629, if a single binary is built for both 0.20 and 0.23, it results 
 in the following exception on 0.23 clusters:
 java.lang.IncompatibleClassChangeError: Found interface 
 org.apache.hadoop.mapred.Counters$Counter, but class was
 expected
 at 
 org.apache.hadoop.hive.ql.exec.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:341)
 at 
 org.apache.hadoop.hive.ql.exec.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:685)
 at 
 org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:458)
 at 
 org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:136)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133)
 at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
 FAILED: Execution Error, return code -101 from 
 org.apache.hadoop.hive.ql.exec.MapRedTask
 If we have to make a single binary work against both 0.20 and 0.23, we need to 
 move all such incompatibilities to Shims.
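
 A schematic sketch of the shims approach being referred to; the interface and 
 loader names below are hypothetical illustrations, not Hive's actual 
 HadoopShims API.
 {code}
 // Hypothetical illustration of the shim pattern: shared Hive code calls this
 // interface, and an implementation compiled against 0.20 or against 0.23 is
 // selected at runtime, so the IncompatibleClassChangeError around
 // Counters.Counter never reaches version-independent code.
 public interface JobProgressShim {
   /** Reads a named counter without exposing Hadoop's Counters type to callers. */
   long getCounter(Object runningJob, String group, String counter);
 }

 class DemoShimLoader {
   // A real shim loader inspects the Hadoop version string and reflectively loads
   // an implementation built against that Hadoop release.
   static JobProgressShim load(String hadoopVersion) {
     if (hadoopVersion.startsWith("0.23")) {
       throw new UnsupportedOperationException("would load the 0.23 shim here");
     }
     throw new UnsupportedOperationException("would load the 0.20 shim here");
   }
 }
 {code}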

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2721) ability to select a view qualified by the database / schema name

2012-04-20 Thread Kevin Wilfong (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-2721:


Status: Patch Available  (was: Open)

 ability to select a view qualified by the database / schema name
 

 Key: HIVE-2721
 URL: https://issues.apache.org/jira/browse/HIVE-2721
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema, Query Processor
Affects Versions: 0.8.0, 0.7.1, 0.7.0
Reporter: Robert Morton
Assignee: Martin Traverso
Priority: Blocker
 Attachments: HIVE-2721.patch


 HIVE-1517 added support for selecting tables from different databases (aka 
 schemas) by qualifying the tables with the database name. The feature work 
 did not however extend this support to views. Note that this point came up in 
 the earlier JIRA, but was not addressed. See the following two comments:
 https://issues.apache.org/jira/browse/HIVE-1517?focusedCommentId=12996641&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996641
 https://issues.apache.org/jira/browse/HIVE-1517?focusedCommentId=12996679&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996679

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2820) Invalid tag is used for MapJoinProcessor

2012-04-20 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2820:
---

Fix Version/s: (was: 0.9.0)

Unlinking from 0.9

 Invalid tag is used for MapJoinProcessor
 

 Key: HIVE-2820
 URL: https://issues.apache.org/jira/browse/HIVE-2820
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0
 Environment: ubuntu
Reporter: Navis
Assignee: Navis
 Attachments: HIVE-2820.D1935.1.patch, HIVE-2820.D1935.2.patch, 
 HIVE-2820.D1935.3.patch


 Testing HIVE-2810, I've found that tag and alias are used in a very confusing 
 manner. For example, the query below fails:
 {code}
 hive> set hive.auto.convert.join=true;
 
 hive> select /*+ STREAMTABLE(a) */ * from myinput1 a join myinput1 b on 
 a.key=b.key join myinput1 c on a.key=c.key;
 Total MapReduce jobs = 4
 Ended Job = 1667415037, job is filtered out (removed at runtime).
 Ended Job = 1739566906, job is filtered out (removed at runtime).
 Ended Job = 1113337780, job is filtered out (removed at runtime).
 12/02/24 10:27:14 WARN conf.HiveConf: DEPRECATED: Ignoring hive-default.xml 
 found on the CLASSPATH at /home/navis/hive/conf/hive-default.xml
 Execution log at: 
 /tmp/navis/navis_20120224102727_cafe0d8d-9b21-441d-bd4e-b83303b31cdc.log
 2012-02-24 10:27:14   Starting to launch local task to process map join;  
 maximum memory = 932118528
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.ql.exec.HashTableSinkOperator.processOp(HashTableSinkOperator.java:312)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at 
 org.apache.hadoop.hive.ql.exec.MapredLocalTask.startForward(MapredLocalTask.java:325)
   at 
 org.apache.hadoop.hive.ql.exec.MapredLocalTask.executeFromChildJVM(MapredLocalTask.java:272)
   at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:685)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
 Execution failed with exit status: 2
 Obtaining error information
 {code}
 Failed task has a plan which doesn't make sense.
 {noformat}
   Stage: Stage-8
 Map Reduce Local Work
   Alias - Map Local Tables:
 b 
   Fetch Operator
 limit: -1
 c 
   Fetch Operator
 limit: -1
   Alias - Map Local Operator Tree:
 b 
   TableScan
 alias: b
 HashTable Sink Operator
   condition expressions:
 0 {key} {value}
 1 {key} {value}
 2 {key} {value}
   handleSkewJoin: false
   keys:
 0 [Column[key]]
 1 [Column[key]]
 2 [Column[key]]
   Position of Big Table: 0
 c 
   TableScan
 alias: c
 Map Join Operator
   condition map:
Inner Join 0 to 1
Inner Join 0 to 2
   condition expressions:
 0 {key} {value}
 1 {key} {value}
 2 {key} {value}
   handleSkewJoin: false
   keys:
 0 [Column[key]]
 1 [Column[key]]
 2 [Column[key]]
   outputColumnNames: _col0, _col1, _col4, _col5, _col8, _col9
   Position of Big Table: 0
   Select Operator
 expressions:
   expr: _col0
   type: int
   expr: _col1
   type: int
   expr: _col4
   type: int
   expr: _col5
   type: int
   expr: _col8
   type: int
   expr: _col9
   type: int
 outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
 File Output Operator
   compressed: false
   GlobalTableId: 0
   table:
   input format: 

[jira] [Updated] (HIVE-2839) Filters on outer join with mapjoin hint is not applied correctly

2012-04-20 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2839:
---

Affects Version/s: 0.9.0
Fix Version/s: (was: 0.9.0)

Unlinking from 0.9

 Filters on outer join with mapjoin hint is not applied correctly
 

 Key: HIVE-2839
 URL: https://issues.apache.org/jira/browse/HIVE-2839
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-2839.D2079.1.patch, HIVE-2839.D2079.2.patch


 Testing HIVE-2820, I've found that some queries with the mapjoin hint cause exceptions.
 {code}
 SELECT /*+ MAPJOIN(a) */ * FROM src a RIGHT OUTER JOIN src b on a.key=b.key 
 AND true limit 10;
 FAILED: Hive Internal Error: 
 java.lang.ClassCastException(org.apache.hadoop.hive.ql.plan.ExprNodeConstantDesc
  cannot be cast to org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc)
 java.lang.ClassCastException: 
 org.apache.hadoop.hive.ql.plan.ExprNodeConstantDesc cannot be cast to 
 org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.convertMapJoin(MapJoinProcessor.java:363)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.generateMapJoinOperator(MapJoinProcessor.java:483)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.transform(MapJoinProcessor.java:689)
   at 
 org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:87)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7519)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:250)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:431)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:891)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
 {code}
 and 
 {code}
 SELECT /*+ MAPJOIN(a) */ * FROM src a RIGHT OUTER JOIN src b on a.key=b.key 
 AND b.key * 10 > '1000' limit 10;
 java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException
   at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:161)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:391)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:416)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
   at org.apache.hadoop.mapred.Child.main(Child.java:264)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException
   at 
 org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:198)
   at 
 org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:212)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1321)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1325)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1325)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:495)
   at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
   ... 8 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2859) STRING data corruption in internationalized data -- based on LANG env variable

2012-04-20 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2859:
---

Affects Version/s: 0.9.0
   0.8.0
   0.8.1
Fix Version/s: (was: 0.9.0)
   (was: 0.7.1)

Unlinking from 0.9

 STRING data corruption in internationalized data -- based on LANG env variable
 --

 Key: HIVE-2859
 URL: https://issues.apache.org/jira/browse/HIVE-2859
 Project: Hive
  Issue Type: Bug
  Components: Configuration, Import/Export, Serializers/Deserializers, 
 Types
Affects Versions: 0.7.1, 0.8.0, 0.8.1, 0.9.0
 Environment: Windows / RHEL5 with LANG = en_US.CP1252
Reporter: John Gordon
   Original Estimate: 6h
  Remaining Estimate: 6h

 This is a bug in Hive that is exacerbated by replatforming it to Windows 
 without CYGWIN.  Basically, it assumes that the default file.encoding is 
 UTF8.  There are something like 6-7 getBytes() calls and write() calls that 
 don't specify the encoding.  The rest specify UTF-8 explicitly, which blocks 
 auto-detection of UTF-16 data in files with a BOM present.  The mix of 
 explicit encodings and default encoding assumptions means that Hive must be 
 run in a JVM whose default encoding is UTF-8 and only UTF-8.
  
 When the JVM starts up, it derives the default encoding from the C runtime 
 setlocale() call.  On Linux/Unix, this would use the LANG env variable (which 
 is almost always locale.UTF8 for machines handling internationalized data, 
 but not guaranteed to be so).  On Windows, this is derived from the user's 
 language settings, and cannot return a UTF-8 encoding, right now.  So there 
 isn't an environment setting for Windows that would reliably provide the JVM 
 with a set of inputs to cause it to set the default encoding to UTF-8 on 
 startup without additional options.
 However, there are 2 feasible options: 
 1.) the JVM has a startup option -Dfile.encoding=UTF-8 which should 
 explicitly override the default encoding detection behavior  in the JVM to 
 make it always UTF-8 regardless of the environmental configuration.  This 
 would make all deployments on all OS/environment configs behave consistently. 
  I don't know where Hive sets the JVM options we use when it starts the 
 service.
 2.) We could add UTF8 explicitly to all the remaining getBytes() calls that 
 need it, and make all the string I/O explicitly UTF-8 encoded.  This is 
 probably being changed right now as part of Hive-1505, so we would duplicate 
 effort and maybe make that change harder.  Seems easier to trick the JVM into 
 behaving like it is on a well-configured machine WRT default encoding instead 
 of setting explicit encodings everywhere.
  
 So:
 - Pretty much any globalized strings other than Western European ones are 
 going to be corrupted in the current Hive service on Windows with this bug 
 present, because there really isn't a way to have the JVM read the environment 
 and determine by default that UTF8 should be the default encoding.
 - Anyone can repro this on Linux fairly easily -- Add export 
 LANG=en_US.CP1252 to /etc/profile to modify the global LANG default encoding 
 to CP1252 explicitly, then restart the service and do a query over 
 internationalized UTF-8 data.
 - We shouldn't rely on JVM default codepage selection if we want to 
 support UTF-8 consistently and reliably as the default encoding.
 -   The estimate can range wildly, but adding an explicit default 
 encoding on startup should only take a little while if you know where to do 
 it, theoretically.
 -   I don't know where to update the start arguments of the JVM when the 
 service is started, just getting into the code for the first time with this 
 bug investigation.
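
 A small, self-contained demonstration of the failure mode described above (not 
 Hive code): bytes produced with the platform default charset differ from 
 explicit UTF-8 bytes whenever LANG or the user locale selects something like 
 CP1252. Running it with -Dfile.encoding=UTF-8 (option 1) makes the two agree; 
 option 2 corresponds to the explicit-charset getBytes() call.
 {code}
 import java.nio.charset.StandardCharsets;

 public class DefaultEncodingDemo {
   public static void main(String[] args) {
     String s = "Grüße";                                     // sample non-ASCII data
     byte[] defaultBytes = s.getBytes();                     // uses file.encoding (locale-derived)
     byte[] utf8Bytes = s.getBytes(StandardCharsets.UTF_8);  // always UTF-8
     System.out.println("file.encoding = " + System.getProperty("file.encoding"));
     System.out.println("default charset bytes: " + defaultBytes.length);
     System.out.println("explicit UTF-8 bytes : " + utf8Bytes.length);
   }
 }
 {code}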

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2957) Hive JDBC doesn't support TIMESTAMP column

2012-04-20 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2957:
---

Fix Version/s: (was: 0.9.0)

Unlinking from 0.9

 Hive JDBC doesn't support TIMESTAMP column
 --

 Key: HIVE-2957
 URL: https://issues.apache.org/jira/browse/HIVE-2957
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.8.1, 0.9.0
Reporter: Bharath Ganesh
Assignee: Bharath Ganesh
Priority: Minor
 Attachments: HIVE-2957.patch


 Steps to replicate:
 1. Create a table with at least one column of type TIMESTAMP
 2. Do a DatabaseMetaData.getColumns() such that this TIMESTAMP column is 
 part of the resultset.
 3. When you iterate over the TIMESTAMP column, it fails, throwing the 
 exception below:
 Exception in thread "main" java.sql.SQLException: Unrecognized column type: 
 timestamp
   at org.apache.hadoop.hive.jdbc.Utils.hiveTypeToSqlType(Utils.java:56)
   at org.apache.hadoop.hive.jdbc.JdbcColumn.getSqlType(JdbcColumn.java:62)
   at 
 org.apache.hadoop.hive.jdbc.HiveDatabaseMetaData$2.next(HiveDatabaseMetaData.java:244)
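
 A hedged sketch of those three steps as a runnable JDBC snippet; the connection 
 URL, credentials, and table name are assumptions for illustration, while the 
 driver class matches the org.apache.hadoop.hive.jdbc package in the stack trace.
 {code}
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.Statement;

 public class TimestampMetadataRepro {
   public static void main(String[] args) throws Exception {
     Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
     Connection conn =
         DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
     Statement stmt = conn.createStatement();
     stmt.execute("CREATE TABLE ts_repro (id INT, created TIMESTAMP)");    // step 1
     ResultSet cols =
         conn.getMetaData().getColumns(null, "default", "ts_repro", null); // step 2
     while (cols.next()) {             // step 3: throws on the TIMESTAMP column
       System.out.println(cols.getString("COLUMN_NAME") + " -> "
           + cols.getInt("DATA_TYPE"));
     }
     conn.close();
   }
 }
 {code}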

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Hive-trunk-h0.21 - Build # 1385 - Fixed

2012-04-20 Thread Apache Jenkins Server
Changes for Build #1384
[hashutosh] HIVE-2958 [jira] GROUP BY causing ClassCastException 
[LazyDioInteger cannot be
cast LazyInteger]
(Navis Ryu via Ashutosh Chauhan)

Summary:
DPAL- GROUP BY causing ClassCastException [LazyDioInteger cannot be cast
LazyInteger]

This relates to https://issues.apache.org/jira/browse/HIVE-1634.

The following work fine:

CREATE EXTERNAL TABLE tim_hbase_occurrence (
  id int,
  scientific_name string,
  data_resource_id int
) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH
SERDEPROPERTIES (
  hbase.columns.mapping = :key#b,v:scientific_name#s,v:data_resource_id#b
) TBLPROPERTIES(
  hbase.table.name = mini_occurrences,
  hbase.table.default.storage.type = binary
);
SELECT * FROM tim_hbase_occurrence LIMIT 3;
SELECT * FROM tim_hbase_occurrence WHERE data_resource_id=1081 LIMIT 3;

However, the following fails:

SELECT data_resource_id, count(*) FROM tim_hbase_occurrence GROUP BY
data_resource_id;

The error given:

0 TS
2012-04-17 16:58:45,693 INFO org.apache.hadoop.hive.ql.exec.MapOperator:
Initialization Done 7 MAP
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator:
Processing alias tim_hbase_occurrence for file
hdfs://c1n2.gbif.org/user/hive/warehouse/tim_hbase_occurrence
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7
forwarding 1 rows
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 0
forwarding 1 rows
2012-04-17 16:58:45,716 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 1
forwarding 1 rows
2012-04-17 16:58:45,723 FATAL ExecMapper:
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while
processing row {id:1444,scientific_name:null,data_resource_id:1081}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:391)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
java.lang.ClassCastException:
org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to
org.apache.hadoop.hive.serde2.lazy.LazyInteger
at
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:737)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:529)
... 9 more
Caused by: java.lang.ClassCastException:
org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to
org.apache.hadoop.hive.serde2.lazy.LazyInteger
at
org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyIntObjectInspector.copyObject(LazyIntObjectInspector.java:43)
at
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:239)
at
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150)
at
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142)
at
org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.copyKey(KeyWrapperFactory.java:119)
at
org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:750)
at
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:722)
... 18 more

Test Plan: EMPTY

Reviewers: JIRA, ashutoshc

Reviewed By: ashutoshc

Differential Revision: https://reviews.facebook.net/D2871


Changes for Build #1385



All tests passed

The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1385)

Status: Fixed

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/1385/ to 
view the results.

[jira] [Updated] (HIVE-1634) Allow access to Primitive types stored in binary format in HBase

2012-04-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-1634:
---

Issue Type: New Feature  (was: Improvement)

 Allow access to Primitive types stored in binary format in HBase
 

 Key: HIVE-1634
 URL: https://issues.apache.org/jira/browse/HIVE-1634
 Project: Hive
  Issue Type: New Feature
  Components: HBase Handler
Affects Versions: 0.7.0, 0.8.0, 0.9.0
Reporter: Basab Maulik
Assignee: Ashutosh Chauhan
 Fix For: 0.9.0

 Attachments: HIVE-1634.0.patch, HIVE-1634.1.patch, 
 HIVE-1634.D1581.1.patch, HIVE-1634.D1581.2.patch, HIVE-1634.D1581.3.patch, 
 HIVE-1634.branch08.patch, TestHiveHBaseExternalTable.java, hive-1634_3.patch


 This addresses HIVE-1245 in part, for atomic or primitive types.
 The serde property hbase.columns.storage.types = -,b,b,b,b,b,b,b,b is a 
 specification of the storage option for the corresponding column in the serde 
 property hbase.columns.mapping. Allowed values are '-' for table default, 
 's' for standard string storage, and 'b' for binary storage as would be 
 obtained from o.a.h.hbase.utils.Bytes. Map types for HBase column families 
 use a colon separated pair such as 's:b' for the key and value part 
 specifiers respectively. See the test cases and queries for HBase handler for 
 additional examples.
 There is also a table property hbase.table.default.storage.type = string 
 to specify a table level default storage type. The other valid specification 
 is binary. The table level default is overridden by a column level 
 specification.
 This control is available for the boolean, tinyint, smallint, int, bigint, 
 float, and double primitive types. The attached patch also relaxes the 
 mapping of map types to HBase column families to allow any primitive type to 
 be the map key.
 Attached is a program for creating a table and populating it in HBase. The 
 external table in Hive can access the data as shown in the example below.
 hive> create external table TestHiveHBaseExternalTable
  (key string, c_bool boolean, c_byte tinyint, c_short smallint,
   c_int int, c_long bigint, c_string string, c_float float, c_double 
 double)
   stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
   with serdeproperties ("hbase.columns.mapping" = 
 ":key,cf:boolean,cf:byte,cf:short,cf:int,cf:long,cf:string,cf:float,cf:double")
   tblproperties ("hbase.table.name" = "TestHiveHBaseExternalTable");
 OK
 Time taken: 0.691 seconds
 hive> select * from TestHiveHBaseExternalTable;
 OK
 key-1  NULL  NULL  NULL  NULL  NULL  Test-String  NULL  NULL
 Time taken: 0.346 seconds
 hive> drop table TestHiveHBaseExternalTable;
 OK
 Time taken: 0.139 seconds
 hive> create external table TestHiveHBaseExternalTable
  (key string, c_bool boolean, c_byte tinyint, c_short smallint,
   c_int int, c_long bigint, c_string string, c_float float, c_double 
 double)
   stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
   with serdeproperties (
   "hbase.columns.mapping" = 
 ":key,cf:boolean,cf:byte,cf:short,cf:int,cf:long,cf:string,cf:float,cf:double",
   "hbase.columns.storage.types" = "-,b,b,b,b,b,b,b,b" )
   tblproperties (
   "hbase.table.name" = "TestHiveHBaseExternalTable",
   "hbase.table.default.storage.type" = "string");
 OK
 Time taken: 0.139 seconds
 hive> select * from TestHiveHBaseExternalTable;
 OK
 key-1  true  -128  -32768  -2147483648  -9223372036854775808
 Test-String  -2.1793132E-11  2.01345E291
 Time taken: 0.151 seconds
 hive> drop table TestHiveHBaseExternalTable;
 OK
 Time taken: 0.154 seconds
 hive> create external table TestHiveHBaseExternalTable
  (key string, c_bool boolean, c_byte tinyint, c_short smallint,
   c_int int, c_long bigint, c_string string, c_float float, c_double 
 double)
   stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
   with serdeproperties (
   "hbase.columns.mapping" = 
 ":key,cf:boolean,cf:byte,cf:short,cf:int,cf:long,cf:string,cf:float,cf:double",
   "hbase.columns.storage.types" = "-,b,b,b,b,b,-,b,b" )
   tblproperties ("hbase.table.name" = "TestHiveHBaseExternalTable");
 OK
 Time taken: 0.347 seconds
 hive> select * from TestHiveHBaseExternalTable;
 OK
 key-1  true  -128  -32768  -2147483648  -9223372036854775808
 Test-String  -2.1793132E-11  2.01345E291
 Time taken: 0.245 seconds
 hive> 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2524) enhance PDK with validation and utility functions

2012-04-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2524:
---

Fix Version/s: (was: 0.9.0)

Unlinking from 0.9

 enhance PDK with validation and utility functions
 -

 Key: HIVE-2524
 URL: https://issues.apache.org/jira/browse/HIVE-2524
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: John Sichi
Assignee: Marek Sapota

 Writing a robust UDF implementation requires a lot of boilerplate code such 
 as argument type checking.  We should add utility libraries to make UDF's 
 easier to write.
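
 To make the boilerplate concrete, here is a hedged sketch of a trivial 
 GenericUDF (a hypothetical upper_demo function, not part of Hive): most of 
 initialize() is argument-count and argument-type checking of exactly the kind 
 a PDK utility library could absorb.
 {code}
 import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
 import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
 import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
 import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
 import org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector;
 import org.apache.hadoop.io.Text;

 public class GenericUDFUpperDemo extends GenericUDF {
   private StringObjectInspector input;

   @Override
   public ObjectInspector initialize(ObjectInspector[] args) throws UDFArgumentException {
     // Boilerplate: arity check.
     if (args.length != 1) {
       throw new UDFArgumentLengthException("upper_demo takes exactly one argument");
     }
     // Boilerplate: type check.
     if (!(args[0] instanceof StringObjectInspector)) {
       throw new UDFArgumentException("upper_demo expects a string argument");
     }
     input = (StringObjectInspector) args[0];
     return PrimitiveObjectInspectorFactory.writableStringObjectInspector;
   }

   @Override
   public Object evaluate(DeferredObject[] args) throws HiveException {
     Object o = args[0].get();
     if (o == null) {
       return null;                    // boilerplate: null handling
     }
     return new Text(input.getPrimitiveJavaObject(o).toUpperCase());
   }

   @Override
   public String getDisplayString(String[] children) {
     return "upper_demo(" + children[0] + ")";
   }
 }
 {code}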

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2963) metastore delegation token is not getting used by hive commandline

2012-04-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2963:
---

Fix Version/s: (was: 0.9.0)
Affects Version/s: 0.9.0
   Status: Open  (was: Patch Available)

This patch fails compilation of the test target. You can reproduce it with {{ant 
clean package test}}.

 metastore delegation token is not getting used by hive commandline
 --

 Key: HIVE-2963
 URL: https://issues.apache.org/jira/browse/HIVE-2963
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.8.1, 0.9.0
Reporter: Thejas M Nair
 Fix For: 0.10

 Attachments: HIVE-2963.1.patch


 When metastore delegation tokens are used to run hive (or hcat) commands, the 
 delegation token does not end up getting used.
 This is because the new Hive object is not created with the value of 
 hive.metastore.token.signature in its conf. This config parameter is missing 
 from the list of HiveConf variables whose change results in metastore 
 recreation.
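
 A schematic sketch (not Hive's actual code) of the caching pattern described 
 above: the metastore client is only rebuilt when a whitelisted config key 
 changes, so a key missing from that whitelist, such as 
 hive.metastore.token.signature, never triggers a rebuild and its value never 
 reaches the client. All names below are illustrative.
 {code}
 import java.util.Arrays;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;

 public class MetastoreClientCacheSketch {
   // Whitelist of keys whose change forces a new metastore connection.
   private static final List<String> META_VARS = Arrays.asList(
       "hive.metastore.uris",
       "hive.metastore.sasl.enabled"
       // "hive.metastore.token.signature"  <-- the missing entry this bug is about
   );

   private Map<String, String> lastSeen = new HashMap<String, String>();
   private Object cachedClient;          // stands in for the real client type

   public Object get(Map<String, String> conf) {
     for (String key : META_VARS) {
       if (!same(lastSeen.get(key), conf.get(key))) {
         cachedClient = null;            // force reconnection with the new settings
         break;
       }
     }
     if (cachedClient == null) {
       cachedClient = new Object();      // real code would open a metastore connection
       lastSeen = new HashMap<String, String>(conf);
     }
     return cachedClient;
   }

   private static boolean same(String a, String b) {
     return a == null ? b == null : a.equals(b);
   }
 }
 {code}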

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2966) Revert HIVE-2795

2012-04-20 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258595#comment-13258595
 ] 

Ashutosh Chauhan commented on HIVE-2966:


+1, looks good. Will commit if tests pass

 Revert HIVE-2795
 

 Key: HIVE-2966
 URL: https://issues.apache.org/jira/browse/HIVE-2966
 Project: Hive
  Issue Type: Task
  Components: Metastore
Affects Versions: 0.9.0
Reporter: Ashutosh Chauhan
Priority: Blocker
 Fix For: 0.9.0

 Attachments: HIVE-2966.1.patch, HIVE-2966.2.patch


 In the 4/18/12 contrib meeting, it was decided to revert HIVE-2795.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2473) Hive throws an NPE when $HADOOP_HOME points to a tarball install directory that contains a build/ subdirectory.

2012-04-20 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258620#comment-13258620
 ] 

Carl Steinbach commented on HIVE-2473:
--

It looks like the underlying bug has been resolved in DN 3.0.0: 
http://www.datanucleus.org/servlet/jira/browse/NUCCORE-689


 Hive throws an NPE when $HADOOP_HOME points to a tarball install directory 
 that contains a build/ subdirectory.
 ---

 Key: HIVE-2473
 URL: https://issues.apache.org/jira/browse/HIVE-2473
 Project: Hive
  Issue Type: Bug
 Environment: hadoop-0.20.204.0
Reporter: Carl Steinbach
Assignee: Carl Steinbach



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs

2012-04-20 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2646:
--

Attachment: HIVE-2646.D2883.2.patch

thw updated the revision HIVE-2646 [jira] Hive Ivy dependencies on Hadoop 
should depend on jars directly, not tarballs.
Reviewers: JIRA

  Fix for pdk test target.

REVISION DETAIL
  https://reviews.facebook.net/D2883

AFFECTED FILES
  shims/ivy.xml
  shims/build.xml
  builtins/ivy.xml
  builtins/build.xml
  build.properties
  hbase-handler/ivy.xml
  hbase-handler/build.xml
  build.xml
  testutils/hadoop
  jdbc/ivy.xml
  jdbc/build.xml
  metastore/ivy.xml
  ivy/common-configurations.xml
  ivy/ivysettings.xml
  ivy/libraries.properties
  build-common.xml
  hwi/ivy.xml
  hwi/build.xml
  common/ivy.xml
  service/ivy.xml
  service/build.xml
  contrib/ivy.xml
  contrib/build.xml
  serde/ivy.xml
  cli/ivy.xml
  ql/ivy.xml
  ql/build.xml
  pdk/ivy.xml
  pdk/scripts/build-plugin.xml
  pdk/build.xml


 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
 

 Key: HIVE-2646
 URL: https://issues.apache.org/jira/browse/HIVE-2646
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.8.0
Reporter: Andrew Bayer
Assignee: Andrew Bayer
Priority: Critical
 Fix For: 0.9.0

 Attachments: HIVE-2646-fixtests.patch, HIVE-2646.D2133.1.patch, 
 HIVE-2646.D2133.10.patch, HIVE-2646.D2133.11.patch, HIVE-2646.D2133.12.patch, 
 HIVE-2646.D2133.13.patch, HIVE-2646.D2133.14.patch, HIVE-2646.D2133.15.patch, 
 HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, 
 HIVE-2646.D2133.5.patch, HIVE-2646.D2133.6.patch, HIVE-2646.D2133.7.patch, 
 HIVE-2646.D2133.8.patch, HIVE-2646.D2133.9.patch, HIVE-2646.D2883.1.patch, 
 HIVE-2646.D2883.2.patch, HIVE-2646.diff.txt


 The current Hive Ivy dependency logic for its Hadoop dependencies is 
 problematic - depending on the tarball and extracting the jars from there, 
 rather than depending on the jars directly. It'd be great if this was fixed 
 to actually have the jar dependencies defined directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs

2012-04-20 Thread Thomas Weise (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Weise updated HIVE-2646:
---

Attachment: HIVE-2646-fixtests.patch

Patch with fix for pdk test.

 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
 

 Key: HIVE-2646
 URL: https://issues.apache.org/jira/browse/HIVE-2646
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.8.0
Reporter: Andrew Bayer
Assignee: Andrew Bayer
Priority: Critical
 Fix For: 0.9.0

 Attachments: HIVE-2646-fixtests.patch, HIVE-2646.D2133.1.patch, 
 HIVE-2646.D2133.10.patch, HIVE-2646.D2133.11.patch, HIVE-2646.D2133.12.patch, 
 HIVE-2646.D2133.13.patch, HIVE-2646.D2133.14.patch, HIVE-2646.D2133.15.patch, 
 HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, 
 HIVE-2646.D2133.5.patch, HIVE-2646.D2133.6.patch, HIVE-2646.D2133.7.patch, 
 HIVE-2646.D2133.8.patch, HIVE-2646.D2133.9.patch, HIVE-2646.D2883.1.patch, 
 HIVE-2646.D2883.2.patch, HIVE-2646.diff.txt


 The current Hive Ivy dependency logic for its Hadoop dependencies is 
 problematic - depending on the tarball and extracting the jars from there, 
 rather than depending on the jars directly. It'd be great if this was fixed 
 to actually have the jar dependencies defined directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs

2012-04-20 Thread Thomas Weise (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Weise updated HIVE-2646:
---

Attachment: (was: HIVE-2646-fixtests.patch)

 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
 

 Key: HIVE-2646
 URL: https://issues.apache.org/jira/browse/HIVE-2646
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.8.0
Reporter: Andrew Bayer
Assignee: Andrew Bayer
Priority: Critical
 Fix For: 0.9.0

 Attachments: HIVE-2646-fixtests.patch, HIVE-2646.D2133.1.patch, 
 HIVE-2646.D2133.10.patch, HIVE-2646.D2133.11.patch, HIVE-2646.D2133.12.patch, 
 HIVE-2646.D2133.13.patch, HIVE-2646.D2133.14.patch, HIVE-2646.D2133.15.patch, 
 HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, 
 HIVE-2646.D2133.5.patch, HIVE-2646.D2133.6.patch, HIVE-2646.D2133.7.patch, 
 HIVE-2646.D2133.8.patch, HIVE-2646.D2133.9.patch, HIVE-2646.D2883.1.patch, 
 HIVE-2646.D2883.2.patch, HIVE-2646.diff.txt


 The current Hive Ivy dependency logic for its Hadoop dependencies is 
 problematic - depending on the tarball and extracting the jars from there, 
 rather than depending on the jars directly. It'd be great if this was fixed 
 to actually have the jar dependencies defined directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Hive Contrib Meeting Notes 4/18

2012-04-20 Thread Carl Steinbach
https://cwiki.apache.org/confluence/display/Hive/ContributorMinutes20120418


Notes from the Hive Contributors Meetup at Cloudera, 4/18/12

Attendees: http://www.meetup.com/Hive-Contributors-Group/events/59148562/

Ashutosh gave a status update on the Hive 0.9.0 release work. RC0 was put
up for a vote last week, but it turned out there were several problems.
Ashutosh is in the process of fixing those issues, and is also trying to
get several other patches resolved and backported before cutting RC1.

Carl asked for more details about the impact of
HIVE-2795 (https://issues.apache.org/jira/browse/HIVE-2795) on
the upgrade process for 0.9.0. Kevin responded that they have decided to
implement regions in a layer above Hive, and do not plan to use the
features that were added in
HIVE-2612 (https://issues.apache.org/jira/browse/HIVE-2612).
Since these two features are the only things requiring a metastore upgrade
for 0.9.0, it was proposed that we back them out. There were no objections.

Carl said that he is organizing the Hive BoF session at this year's Hadoop
Summit. The meeting will take place on June 12th from 2-5pm. An official
announcement will go up on the Hive Meetup group shortly. The current plan
is to structure the event like last year: 4-6 fifteen minute long talks,
followed by smaller breakout sessions. Please contact Carl if you're
interested in giving a talk.

The discussion next turned to problems with Arc and Phabricator. Carl
expressed concern that bugs have crept in over the past couple of months,
and that it's no longer clear who is responsible for making sure Hive works
with Arc/Phabricator. John pointed out that the issues which were raised on
the dev mailing list last week have already been resolved. There was
general consensus that when it works, Arc/Phabricator is an improvement on
ReviewBoard. John proposed that we continue using Arc/Phabricator, and
raise any problems with it on the dev mailing list. There were no
objections.

Harish gave a short presentation
(https://github.com/hbutani/SQLWindowing/wiki/MoveToHive) on
the SQL Windowing library (https://github.com/hbutani/SQLWindowing) he
wrote for Hive and how it might be integrated into Hive. Everyone agreed
that adding this functionality to Hive makes sense. Several people
suggested adding the toolkit to the contrib module as-is and using it to
generate interest with users, but concerns were raised that this might be
painful to support/deprecate in the future. The discussion ended with
general agreement that we should start work now to incrementally push this
capability into Hive's query compiler.

Carl explained the motivations and design decisions behind the HiveServer2
API proposal. The main motivations are supporting concurrency and providing
a better foundation on which to build ODBC and JDBC drivers. Work on this
project has started and is being tracked in HIVE-2935
(https://issues.apache.org/jira/browse/HIVE-2935).

Namit offered to host the next contrib meeting at Facebook.


[jira] [Commented] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs

2012-04-20 Thread Thomas Weise (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258706#comment-13258706
 ] 

Thomas Weise commented on HIVE-2646:


To note, all of that went under the radar of the test report?


 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
 

 Key: HIVE-2646
 URL: https://issues.apache.org/jira/browse/HIVE-2646
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.8.0
Reporter: Andrew Bayer
Assignee: Andrew Bayer
Priority: Critical
 Fix For: 0.9.0

 Attachments: HIVE-2646-fixtests.patch, HIVE-2646.D2133.1.patch, 
 HIVE-2646.D2133.10.patch, HIVE-2646.D2133.11.patch, HIVE-2646.D2133.12.patch, 
 HIVE-2646.D2133.13.patch, HIVE-2646.D2133.14.patch, HIVE-2646.D2133.15.patch, 
 HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, 
 HIVE-2646.D2133.5.patch, HIVE-2646.D2133.6.patch, HIVE-2646.D2133.7.patch, 
 HIVE-2646.D2133.8.patch, HIVE-2646.D2133.9.patch, HIVE-2646.D2883.1.patch, 
 HIVE-2646.D2883.2.patch, HIVE-2646.diff.txt


 The current Hive Ivy dependency logic for its Hadoop dependencies is 
 problematic - depending on the tarball and extracting the jars from there, 
 rather than depending on the jars directly. It'd be great if this was fixed 
 to actually have the jar dependencies defined directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs

2012-04-20 Thread Thomas Weise (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258705#comment-13258705
 ] 

Thomas Weise commented on HIVE-2646:


Here is the test execution failure (for the second execution of 
org.apache.hive.pdk.PluginTest)

{code}
Failed to load Hive builtin functions
java.lang.RuntimeException: Failed to load Hive builtin functions
at 
org.apache.hadoop.hive.ql.session.SessionState.init(SessionState.java:205)
at 
org.apache.hadoop.hive.cli.CliSessionState.init(CliSessionState.java:81)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:579)
at org.apache.hive.pdk.PluginTest.runHive(PluginTest.java:70)
at 
org.apache.hive.pdk.PluginTest$PluginGlobalSetup.setUp(PluginTest.java:177)
at junit.extensions.TestSetup$1.protect(TestSetup.java:22)
at junit.extensions.TestSetup.run(TestSetup.java:27)
Caused by: java.lang.ClassNotFoundException: 
org.apache.hive.builtins.BuiltinUtils
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
{code}

 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
 

 Key: HIVE-2646
 URL: https://issues.apache.org/jira/browse/HIVE-2646
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.8.0
Reporter: Andrew Bayer
Assignee: Andrew Bayer
Priority: Critical
 Fix For: 0.9.0

 Attachments: HIVE-2646-fixtests.patch, HIVE-2646.D2133.1.patch, 
 HIVE-2646.D2133.10.patch, HIVE-2646.D2133.11.patch, HIVE-2646.D2133.12.patch, 
 HIVE-2646.D2133.13.patch, HIVE-2646.D2133.14.patch, HIVE-2646.D2133.15.patch, 
 HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, 
 HIVE-2646.D2133.5.patch, HIVE-2646.D2133.6.patch, HIVE-2646.D2133.7.patch, 
 HIVE-2646.D2133.8.patch, HIVE-2646.D2133.9.patch, HIVE-2646.D2883.1.patch, 
 HIVE-2646.D2883.2.patch, HIVE-2646.diff.txt


 The current Hive Ivy dependency logic for its Hadoop dependencies is 
 problematic - depending on the tarball and extracting the jars from there, 
 rather than depending on the jars directly. It'd be great if this was fixed 
 to actually have the jar dependencies defined directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2969) Log Time To Submit metric with PerfLogger

2012-04-20 Thread Kevin Wilfong (JIRA)
Kevin Wilfong created HIVE-2969:
---

 Summary: Log Time To Submit metric with PerfLogger
 Key: HIVE-2969
 URL: https://issues.apache.org/jira/browse/HIVE-2969
 Project: Hive
  Issue Type: Wish
  Components: Logging
Affects Versions: 0.10
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
Priority: Minor


Logging the time from when Driver.run starts to when we begin submitting jobs 
to map reduce would be helpful in determining how much of the lag in starting a 
query is due to Hive vs. Hadoop.
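
A hedged illustration of the metric itself, not of Hive's PerfLogger API (which 
is not shown in this thread): the span to be logged runs from the start of 
Driver.run() to the moment jobs are handed to MapReduce.
{code}
public class TimeToSubmitDemo {
  public static void main(String[] args) throws InterruptedException {
    long driverRunStart = System.currentTimeMillis(); // PerfLogger would mark the begin here

    Thread.sleep(120);  // stands in for parsing, semantic analysis, and optimization

    long submitStart = System.currentTimeMillis();    // ...and the end just before job submission
    System.out.println("TimeToSubmit=" + (submitStart - driverRunStart)
        + " ms of Hive-side latency before Hadoop sees the query");
  }
}
{code}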

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2969) Log Time To Submit metric with PerfLogger

2012-04-20 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2969:
--

Attachment: HIVE-2969.D2919.1.patch

kevinwilfong requested code review of HIVE-2969 [jira] Log Time To Submit 
metric with PerfLogger.
Reviewers: JIRA

  https://issues.apache.org/jira/browse/HIVE-2969

  Added Time To Submit to the PerfLogger, beginning just after the PerfLogger 
is first initialized, and ending just before the loop which starts submitting 
jobs to map reduce.

  Logging the time from when Driver.run starts to when we begin submitting jobs 
to map reduce would be helpful in determining how much of the lag in starting a 
query is due to Hive vs. Hadoop.

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D2919

AFFECTED FILES
  ql/src/java/org/apache/hadoop/hive/ql/log/PerfLogger.java
  ql/src/java/org/apache/hadoop/hive/ql/Driver.java

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/6651/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


 Log Time To Submit metric with PerfLogger
 -

 Key: HIVE-2969
 URL: https://issues.apache.org/jira/browse/HIVE-2969
 Project: Hive
  Issue Type: Wish
  Components: Logging
Affects Versions: 0.10
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
Priority: Minor
 Attachments: HIVE-2969.D2919.1.patch


 Logging the time from when Driver.run starts to when we begin submitting jobs 
 to map reduce would be helpful in determining how much of the lag in starting 
 a query is due to Hive vs. Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2969) Log Time To Submit metric with PerfLogger

2012-04-20 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-2969:


Status: Patch Available  (was: Open)

 Log Time To Submit metric with PerfLogger
 -

 Key: HIVE-2969
 URL: https://issues.apache.org/jira/browse/HIVE-2969
 Project: Hive
  Issue Type: Wish
  Components: Logging
Affects Versions: 0.10
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
Priority: Minor
 Attachments: HIVE-2969.D2919.1.patch


 Logging the time from when Driver.run starts to when we begin submitting jobs 
 to map reduce would be helpful in determining how much of the lag in starting 
 a query is due to Hive vs. Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2288) Adding the oracle nvl function to the UDF

2012-04-20 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated HIVE-2288:
--

Attachment: hive-2288.2.patch.txt

 Adding the oracle nvl function to the UDF
 -

 Key: HIVE-2288
 URL: https://issues.apache.org/jira/browse/HIVE-2288
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Affects Versions: 0.8.1
Reporter: Guy Doulberg
Priority: Minor
  Labels: hive
 Attachments: 
 0002-HIVE-2288-Adding-the-oracle-nvl-function-to-the-UDF.patch, 
 hive-2288.2.patch.txt


 It would be nice if we could use the nvl function, described at oracle:
 http://www.techonthenet.com/oracle/functions/nvl.php

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2965) Revert HIVE-2612

2012-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258734#comment-13258734
 ] 

Hudson commented on HIVE-2965:
--

Integrated in Hive-trunk-h0.21 #1386 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1386/])
HIVE-2965 : Revert HIVE-2612 (hashutosh) (Revision 1328469)

 Result = SUCCESS
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1328469
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/contrib/src/test/results/clientnegative/serde_regex.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/fileformat_base64.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_regex.q.out
* /hive/trunk/metastore/if/hive_metastore.thrift
* /hive/trunk/metastore/scripts/upgrade/derby/009-HIVE-2612.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/hive-schema-0.9.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/upgrade-0.8.0-to-0.9.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/009-HIVE-2612.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/hive-schema-0.9.0.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/upgrade-0.8.0-to-0.9.0.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/009-HIVE-2612.postgres.sql
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
* /hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Index.java
* /hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
* /hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/RegionStorageDescriptor.java
* /hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
* /hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
* /hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
* /hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
* /hive/trunk/metastore/src/gen/thrift/gen-php/ThriftHiveMetastore.php
* /hive/trunk/metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
* /hive/trunk/metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_types.php
* /hive/trunk/metastore/src/gen/thrift/gen-php/hive_metastore_constants.php
* /hive/trunk/metastore/src/gen/thrift/gen-php/hive_metastore_types.php
* /hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py
* /hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py
* /hive/trunk/metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
* /hive/trunk/metastore/src/model/org/apache/hadoop/hive/metastore/model/MRegionStorageDescriptor.java
* /hive/trunk/metastore/src/model/org/apache/hadoop/hive/metastore/model/MStorageDescriptor.java
* /hive/trunk/metastore/src/model/package.jdo
* /hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/CreateTableDesc.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java
* /hive/trunk/ql/src/test/results/clientpositive/create_union_table.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ctas.q.out
* /hive/trunk/ql/src/test/results/clientpositive/fileformat_sequencefile.q.out
* /hive/trunk/ql/src/test/results/clientpositive/fileformat_text.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input15.q.out
* /hive/trunk/ql/src/test/results/clientpositive/inputddl1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/inputddl2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/inputddl3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/merge3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/rcfile_createas1.q.out
* 

[jira] [Commented] (HIVE-2612) support hive table/partitions exists in more than one region

2012-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258735#comment-13258735
 ] 

Hudson commented on HIVE-2612:
--

Integrated in Hive-trunk-h0.21 #1386 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1386/])
HIVE-2965 : Revert HIVE-2612 (hashutosh) (Revision 1328469)

 Result = SUCCESS
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1328469
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/contrib/src/test/results/clientnegative/serde_regex.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/fileformat_base64.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_regex.q.out
* /hive/trunk/metastore/if/hive_metastore.thrift
* /hive/trunk/metastore/scripts/upgrade/derby/009-HIVE-2612.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/hive-schema-0.9.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/upgrade-0.8.0-to-0.9.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/009-HIVE-2612.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/hive-schema-0.9.0.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/upgrade-0.8.0-to-0.9.0.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/009-HIVE-2612.postgres.sql
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
* /hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Index.java
* /hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
* /hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/RegionStorageDescriptor.java
* /hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
* /hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
* /hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
* /hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
* /hive/trunk/metastore/src/gen/thrift/gen-php/ThriftHiveMetastore.php
* /hive/trunk/metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
* /hive/trunk/metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_types.php
* /hive/trunk/metastore/src/gen/thrift/gen-php/hive_metastore_constants.php
* /hive/trunk/metastore/src/gen/thrift/gen-php/hive_metastore_types.php
* /hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py
* /hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py
* /hive/trunk/metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
* /hive/trunk/metastore/src/model/org/apache/hadoop/hive/metastore/model/MRegionStorageDescriptor.java
* /hive/trunk/metastore/src/model/org/apache/hadoop/hive/metastore/model/MStorageDescriptor.java
* /hive/trunk/metastore/src/model/package.jdo
* /hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/CreateTableDesc.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java
* /hive/trunk/ql/src/test/results/clientpositive/create_union_table.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ctas.q.out
* /hive/trunk/ql/src/test/results/clientpositive/fileformat_sequencefile.q.out
* /hive/trunk/ql/src/test/results/clientpositive/fileformat_text.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input15.q.out
* /hive/trunk/ql/src/test/results/clientpositive/inputddl1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/inputddl2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/inputddl3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/merge3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/rcfile_createas1.q.out
* 

[jira] [Created] (HIVE-2970) several jars in hive tar generated are not required

2012-04-20 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-2970:
---

 Summary: several jars in hive tar generated are not required
 Key: HIVE-2970
 URL: https://issues.apache.org/jira/browse/HIVE-2970
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.9.0, 0.10
Reporter: Thejas M Nair
 Fix For: 0.9.0, 0.10


The Hive 0.9 build currently produces a tar which contains 31 additional jars 
compared with the 0.8 release, and most of them are not needed.





[jira] [Updated] (HIVE-2970) several jars in hive tar generated are not required

2012-04-20 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-2970:


Attachment: HIVE-2970.1.patch

HIVE-2970.1.patch - With this patch the only additional jars in 0.9 are the 
Jackson jars.
The Jackson jars are under the Apache 2 license, so no changes are needed in the 
LICENSE file.

 several jars in hive tar generated are not required
 ---

 Key: HIVE-2970
 URL: https://issues.apache.org/jira/browse/HIVE-2970
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.9.0, 0.10
Reporter: Thejas M Nair
 Fix For: 0.9.0, 0.10

 Attachments: HIVE-2970.1.patch


 The Hive 0.9 build currently produces a tar which contains 31 additional 
 jars compared with the 0.8 release, and most of them are not needed.





[jira] [Updated] (HIVE-2970) several jars in hive tar generated are not required

2012-04-20 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-2970:


Status: Patch Available  (was: Open)

 several jars in hive tar generated are not required
 ---

 Key: HIVE-2970
 URL: https://issues.apache.org/jira/browse/HIVE-2970
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.9.0, 0.10
Reporter: Thejas M Nair
 Fix For: 0.9.0, 0.10

 Attachments: HIVE-2970.1.patch


 The Hive 0.9 build currently produces a tar which contains 31 additional 
 jars compared with the 0.8 release, and most of them are not needed.





[jira] [Updated] (HIVE-2970) several jars in hive tar generated are not required

2012-04-20 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-2970:


Status: Open  (was: Patch Available)

 several jars in hive tar generated are not required
 ---

 Key: HIVE-2970
 URL: https://issues.apache.org/jira/browse/HIVE-2970
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.9.0, 0.10
Reporter: Thejas M Nair
 Fix For: 0.9.0, 0.10

 Attachments: HIVE-2970.1.patch, HIVE-2970.2.patch


 The Hive 0.9 build currently produces a tar which contains 31 additional 
 jars compared with the 0.8 release, and most of them are not needed.





[jira] [Updated] (HIVE-2970) several jars in hive tar generated are not required

2012-04-20 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-2970:


Attachment: HIVE-2970.2.patch

HIVE-2970.2.patch - the previous patch excluded the same jar twice; this one fixes that.

 several jars in hive tar generated are not required
 ---

 Key: HIVE-2970
 URL: https://issues.apache.org/jira/browse/HIVE-2970
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.9.0, 0.10
Reporter: Thejas M Nair
 Fix For: 0.9.0, 0.10

 Attachments: HIVE-2970.1.patch, HIVE-2970.2.patch


 The Hive 0.9 build currently produces a tar which contains 31 additional 
 jars compared with the 0.8 release, and most of them are not needed.





[jira] [Updated] (HIVE-2970) several jars in hive tar generated are not required

2012-04-20 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-2970:


Status: Patch Available  (was: Open)

 several jars in hive tar generated are not required
 ---

 Key: HIVE-2970
 URL: https://issues.apache.org/jira/browse/HIVE-2970
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.9.0, 0.10
Reporter: Thejas M Nair
 Fix For: 0.9.0, 0.10

 Attachments: HIVE-2970.1.patch, HIVE-2970.2.patch


 The Hive 0.9 build currently produces a tar which contains 31 additional 
 jars compared with the 0.8 release, and most of them are not needed.





[jira] [Updated] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs

2012-04-20 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2646:
--

Attachment: HIVE-2646.D2883.3.patch

thw updated the revision "HIVE-2646 [jira] Hive Ivy dependencies on Hadoop 
should depend on jars directly, not tarballs".
Reviewers: JIRA

  Fix pdk/PluginTest

REVISION DETAIL
  https://reviews.facebook.net/D2883

AFFECTED FILES
  shims/ivy.xml
  shims/build.xml
  builtins/ivy.xml
  builtins/build.xml
  build.properties
  hbase-handler/ivy.xml
  hbase-handler/build.xml
  build.xml
  testutils/hadoop
  jdbc/ivy.xml
  jdbc/build.xml
  metastore/ivy.xml
  ivy/common-configurations.xml
  ivy/ivysettings.xml
  ivy/libraries.properties
  build-common.xml
  hwi/ivy.xml
  hwi/build.xml
  common/ivy.xml
  service/ivy.xml
  service/build.xml
  contrib/ivy.xml
  contrib/build.xml
  serde/ivy.xml
  cli/ivy.xml
  ql/ivy.xml
  ql/build.xml
  pdk/ivy.xml
  pdk/scripts/build-plugin.xml
  pdk/build.xml


 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
 

 Key: HIVE-2646
 URL: https://issues.apache.org/jira/browse/HIVE-2646
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.8.0
Reporter: Andrew Bayer
Assignee: Andrew Bayer
Priority: Critical
 Fix For: 0.9.0

 Attachments: HIVE-2646-fixtests.patch, HIVE-2646.D2133.1.patch, 
 HIVE-2646.D2133.10.patch, HIVE-2646.D2133.11.patch, HIVE-2646.D2133.12.patch, 
 HIVE-2646.D2133.13.patch, HIVE-2646.D2133.14.patch, HIVE-2646.D2133.15.patch, 
 HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, 
 HIVE-2646.D2133.5.patch, HIVE-2646.D2133.6.patch, HIVE-2646.D2133.7.patch, 
 HIVE-2646.D2133.8.patch, HIVE-2646.D2133.9.patch, HIVE-2646.D2883.1.patch, 
 HIVE-2646.D2883.2.patch, HIVE-2646.D2883.3.patch, HIVE-2646.diff.txt


 The current Hive Ivy dependency logic for its Hadoop dependencies is 
 problematic: it depends on the tarball and extracts the jars from there, 
 rather than depending on the jars directly. It would be great if this were 
 fixed so that the jar dependencies are defined directly.





[jira] [Updated] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs

2012-04-20 Thread Thomas Weise (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Weise updated HIVE-2646:
---

Attachment: (was: HIVE-2646-fixtests.patch)

 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
 

 Key: HIVE-2646
 URL: https://issues.apache.org/jira/browse/HIVE-2646
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.8.0
Reporter: Andrew Bayer
Assignee: Andrew Bayer
Priority: Critical
 Fix For: 0.9.0

 Attachments: HIVE-2646.D2133.1.patch, HIVE-2646.D2133.10.patch, 
 HIVE-2646.D2133.11.patch, HIVE-2646.D2133.12.patch, HIVE-2646.D2133.13.patch, 
 HIVE-2646.D2133.14.patch, HIVE-2646.D2133.15.patch, HIVE-2646.D2133.2.patch, 
 HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, HIVE-2646.D2133.5.patch, 
 HIVE-2646.D2133.6.patch, HIVE-2646.D2133.7.patch, HIVE-2646.D2133.8.patch, 
 HIVE-2646.D2133.9.patch, HIVE-2646.D2883.1.patch, HIVE-2646.D2883.2.patch, 
 HIVE-2646.D2883.3.patch, HIVE-2646.diff.txt


 The current Hive Ivy dependency logic for its Hadoop dependencies is 
 problematic: it depends on the tarball and extracts the jars from there, 
 rather than depending on the jars directly. It would be great if this were 
 fixed so that the jar dependencies are defined directly.





[jira] [Updated] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs

2012-04-20 Thread Thomas Weise (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Weise updated HIVE-2646:
---

Attachment: HIVE-2646-fixtests.patch

 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
 

 Key: HIVE-2646
 URL: https://issues.apache.org/jira/browse/HIVE-2646
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.8.0
Reporter: Andrew Bayer
Assignee: Andrew Bayer
Priority: Critical
 Fix For: 0.9.0

 Attachments: HIVE-2646-fixtests.patch, HIVE-2646.D2133.1.patch, 
 HIVE-2646.D2133.10.patch, HIVE-2646.D2133.11.patch, HIVE-2646.D2133.12.patch, 
 HIVE-2646.D2133.13.patch, HIVE-2646.D2133.14.patch, HIVE-2646.D2133.15.patch, 
 HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, 
 HIVE-2646.D2133.5.patch, HIVE-2646.D2133.6.patch, HIVE-2646.D2133.7.patch, 
 HIVE-2646.D2133.8.patch, HIVE-2646.D2133.9.patch, HIVE-2646.D2883.1.patch, 
 HIVE-2646.D2883.2.patch, HIVE-2646.D2883.3.patch, HIVE-2646.diff.txt


 The current Hive Ivy dependency logic for its Hadoop dependencies is 
 problematic: it depends on the tarball and extracts the jars from there, 
 rather than depending on the jars directly. It would be great if this were 
 fixed so that the jar dependencies are defined directly.





[jira] [Assigned] (HIVE-2966) Revert HIVE-2795

2012-04-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-2966:
--

Assignee: Thejas M Nair

 Revert HIVE-2795
 

 Key: HIVE-2966
 URL: https://issues.apache.org/jira/browse/HIVE-2966
 Project: Hive
  Issue Type: Task
  Components: Metastore
Affects Versions: 0.9.0
Reporter: Ashutosh Chauhan
Assignee: Thejas M Nair
Priority: Blocker
 Fix For: 0.9.0

 Attachments: HIVE-2966.1.patch, HIVE-2966.2.patch


 In the 4/18/12 contrib meeting, it was decided to revert HIVE-2795.





[jira] [Resolved] (HIVE-2961) Remove need for storage descriptors for view partitions

2012-04-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-2961.


   Resolution: Fixed
Fix Version/s: 0.9.0

Fixed via HIVE-2966

 Remove need for storage descriptors for view partitions
 ---

 Key: HIVE-2961
 URL: https://issues.apache.org/jira/browse/HIVE-2961
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.9.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.9.0

 Attachments: HIVE-2961.D2877.1.patch


 Storage descriptors were introduced for view partitions as part of HIVE-2795, 
 both to give view partitions the concept of a region and to fix an NPE that 
 resulted from calling DESCRIBE FORMATTED on them.
 Since regions are no longer necessary for view partitions, and the NPE can be 
 fixed by not displaying storage information for view partitions (or by 
 displaying the view's storage information if preferred, although, since a view 
 partition is purely metadata, this does not seem necessary), these storage 
 descriptors are no longer needed.
 This also means the Python script that retroactively adds storage descriptors 
 to existing view partitions can be removed.





[jira] [Resolved] (HIVE-2966) Revert HIVE-2795

2012-04-20 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-2966.


Resolution: Fixed

Committed to trunk & 0.9. Thanks, Thejas!

 Revert HIVE-2795
 

 Key: HIVE-2966
 URL: https://issues.apache.org/jira/browse/HIVE-2966
 Project: Hive
  Issue Type: Task
  Components: Metastore
Affects Versions: 0.9.0
Reporter: Ashutosh Chauhan
Assignee: Thejas M Nair
Priority: Blocker
 Fix For: 0.9.0

 Attachments: HIVE-2966.1.patch, HIVE-2966.2.patch


 In the 4/18/12 contrib meeting, it was decided to revert HIVE-2795.





[jira] [Updated] (HIVE-2963) metastore delegation token is not getting used by hive commandline

2012-04-20 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-2963:


Attachment: HIVE-2963.2.patch

HIVE-2963.2.patch - fixes a test compilation problem

 metastore delegation token is not getting used by hive commandline
 --

 Key: HIVE-2963
 URL: https://issues.apache.org/jira/browse/HIVE-2963
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.8.1, 0.9.0
Reporter: Thejas M Nair
 Fix For: 0.10

 Attachments: HIVE-2963.1.patch, HIVE-2963.2.patch


 When metastore delegation tokens are used to run hive (or hcat) commands, the 
 delegation token does not end up getting used.
 This is because the new Hive object is not created with the value of 
 hive.metastore.token.signature in its conf. This config parameter is missing 
 from the list of HiveConf variables whose change results in metastore 
 recreation.
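
 As an illustration of the mechanism only (plain Java with hypothetical names, not the actual Hive classes): the client rebuilds its metastore connection only when one of a fixed list of metastore-related configuration keys changes between the cached conf and the new conf, so leaving hive.metastore.token.signature out of that list means a freshly obtained delegation token is silently ignored.

 import java.util.Arrays;
 import java.util.List;
 import java.util.Map;

 // Hypothetical sketch of the recreation check described above; the real logic
 // lives in Hive's own classes and differs in detail.
 public final class MetastoreConfCheck {

   // Keys whose change should force a new metastore connection. Omitting
   // hive.metastore.token.signature from this list is the reported bug: a newly
   // set delegation token then never takes effect.
   private static final List<String> METASTORE_VARS = Arrays.asList(
       "hive.metastore.uris",
       "hive.metastore.warehouse.dir",
       "hive.metastore.token.signature");

   static boolean needsMetastoreRefresh(Map<String, String> cachedConf,
                                        Map<String, String> newConf) {
     for (String key : METASTORE_VARS) {
       String oldVal = cachedConf.get(key);
       String newVal = newConf.get(key);
       boolean changed = (oldVal == null) ? (newVal != null) : !oldVal.equals(newVal);
       if (changed) {
         return true;  // a metastore-relevant setting changed; reconnect
       }
     }
     return false;     // nothing relevant changed; keep the cached connection
   }
 }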





[jira] [Updated] (HIVE-2970) several jars in hive tar generated are not required

2012-04-20 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-2970:


Attachment: HIVE-2970.3.patch

HIVE-2970.3.patch - extra jars were still being included with the previous 
patch; this one fixes that

 several jars in hive tar generated are not required
 ---

 Key: HIVE-2970
 URL: https://issues.apache.org/jira/browse/HIVE-2970
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.9.0, 0.10
Reporter: Thejas M Nair
 Fix For: 0.9.0, 0.10

 Attachments: HIVE-2970.1.patch, HIVE-2970.2.patch, HIVE-2970.3.patch


 The Hive 0.9 build currently produces a tar which contains 31 additional 
 jars compared with the 0.8 release, and most of them are not needed.
