[jira] [Commented] (HIVE-2961) Remove need for storage descriptors for view partitions

2012-04-18 Thread Carl Steinbach (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257294#comment-13257294
 ] 

Carl Steinbach commented on HIVE-2961:
--

But if we end up taking that route, I think we should leave the 
upgrade-0.8.0-to-0.9.0.xxx.sql scripts for the sake of consistency.

> Remove need for storage descriptors for view partitions
> ---
>
> Key: HIVE-2961
> URL: https://issues.apache.org/jira/browse/HIVE-2961
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.9.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Attachments: HIVE-2961.D2877.1.patch
>
>
> Storage descriptors were introduced for view partitions as part of HIVE-2795. 
>  This was to allow view partitions to have the concept of a region as well as 
> to fix a NPE that resulted from calling describe formatted on them.
> Since regions are no longer necessary for view partitions and the NPE can be 
> fixed by not displaying storage information for view partitions (or 
> displaying the view's storage information if this is preferred, although, 
> since a view partition is purely metadata, this does not seem necessary), 
> these are no longer needed.
> This also means the Python script added which retroactively adds storage 
> descriptors to existing view partitions can be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2961) Remove need for storage descriptors for view partitions

2012-04-18 Thread Carl Steinbach (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257292#comment-13257292
 ] 

Carl Steinbach commented on HIVE-2961:
--

Based on the discussion at today's contrib meeting, it sounds like we can drop 
this patch and instead back out HIVE-2795 and HIVE-2612. Does that sound good?

> Remove need for storage descriptors for view partitions
> ---
>
> Key: HIVE-2961
> URL: https://issues.apache.org/jira/browse/HIVE-2961
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.9.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Attachments: HIVE-2961.D2877.1.patch
>
>
> Storage descriptors were introduced for view partitions as part of HIVE-2795. 
>  This was to allow view partitions to have the concept of a region as well as 
> to fix a NPE that resulted from calling describe formatted on them.
> Since regions are no longer necessary for view partitions and the NPE can be 
> fixed by not displaying storage information for view partitions (or 
> displaying the view's storage information if this is preferred, although, 
> since a view partition is purely metadata, this does not seem necessary), 
> these are no longer needed.
> This also means the Python script added which retroactively adds storage 
> descriptors to existing view partitions can be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2954) The statement fails when a column part of an ORDER BY is not specified in the SELECT.

2012-04-18 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2954:
--

Attachment: HIVE-2954.D2889.1.patch

navis requested code review of "HIVE-2954 [jira] The statement fails when a 
column part of an ORDER BY is not specified in the SELECT.".
Reviewers: JIRA

  DPAL-1110 The statement fails when a column part of an ORDER BY is not 
specified in the SELECT

  Given the following table:

  CREATE TABLE `DBCSTB32` (`aaa` DOUBLE,`bbb` STRING,`ccc` STRING,`ddd` DOUBLE) 
ROW FORMAT
  DELIMITED FIELDS TERMINATED BY '\001' STORED AS TEXTFILE;

  The following statement fails:

   select TXT_1.`aaa`, TXT_1.`bbb`
 from `DBCSTB32` TXT_1
order by TXT_1.`bbb` asc, TXT_1.`aaa` asc, TXT_1.`ccc` asc

  ERROR: java.sql.SQLException: Query returned non-zero code: 10, cause: 
FAILED: Error in
 semantic analysis: Line 1:104 Invalid column reference '`ccc`'

  Adding `ccc` to the selected list of columns fixes the problem.

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D2889

AFFECTED FILES
  ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnInfo.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
  ql/src/test/queries/clientpositive/orderby_not_selected.q
  ql/src/test/results/clientpositive/orderby_not_selected.q.out

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/6597/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


> The statement fails when a column part of an ORDER BY is not specified in the 
> SELECT.
> -
>
> Key: HIVE-2954
> URL: https://issues.apache.org/jira/browse/HIVE-2954
> Project: Hive
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 0.8.1
>Reporter: Mauro Cazzari
>Assignee: Navis
> Attachments: HIVE-2954.D2889.1.patch
>
>
> Given the following table:
> CREATE TABLE `DBCSTB32` (`aaa` DOUBLE,`bbb` STRING,`ccc` STRING,`ddd` DOUBLE) 
> ROW FORMAT
> DELIMITED FIELDS TERMINATED BY '\001' STORED AS TEXTFILE;
> The following statement fails:
>  select TXT_1.`aaa`, TXT_1.`bbb` 
>from `DBCSTB32` TXT_1 
>   order by TXT_1.`bbb` asc, TXT_1.`aaa` asc, TXT_1.`ccc` asc
> ERROR: java.sql.SQLException: Query returned non-zero code: 10, cause: 
> FAILED: Error in
>semantic analysis: Line 1:104 Invalid column reference '`ccc`'
> Adding `ccc` to the selected list of columns fixes the problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Metadata didn't roll back after a MySQL access failure

2012-04-18 Thread Zhang Kai
Hi all,

  I ran into a problem when using Hive 0.7.1.
  After a CREATE TABLE failed with a JDO error, I can still find the table by
using 'show tables;'.

  I grepped the code and found that there is a transaction mechanism when Hive
interacts with the metastore.
  However, it seems that the transaction doesn't work.

  I have tested this several times.
  I manually throw an exception before ObjectStore commits the 'CREATE
TABLE' transaction.
  But I can still find a new record in TBLS in MySQL.
  So I guess the metadata wasn't rolled back correctly.
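
  Roughly, the rollback pattern I would expect around the metastore write is the
following (a simplified sketch of the JDO idiom, not the actual ObjectStore code):

  import javax.jdo.PersistenceManager;
  import javax.jdo.Transaction;

  // Simplified sketch only: a failure before commit should leave no TBLS row.
  class CreateTableSketch {
    void createTable(PersistenceManager pm, Object mTable) {
      Transaction tx = pm.currentTransaction();
      boolean committed = false;
      try {
        tx.begin();
        pm.makePersistent(mTable);  // this is what eventually shows up in TBLS
        tx.commit();                // the row should only become durable here
        committed = true;
      } finally {
        if (!committed && tx.isActive()) {
          tx.rollback();            // undo the pending insert on any failure
        }
      }
    }
  }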

  I also noticed this could happen on Hive 0.8 and 0.9.
  However, I haven't tested those versions.

  I think this is a fatal error that can make the Hive metastore unavailable.
  Can anyone help me solve this problem?

  Thanks.

Kai Zhang


Re: Hive 0.9 now broken on HBase 0.90 ?

2012-04-18 Thread Carl Steinbach
Hi Tim,

It looks like the only HBase specific change made in HIVE-2748 (besides
bumping the HBase version number) was a modification to the HBase test
setup code, which I think means that 0.9.0 is actually still compatible
with HBase 0.90.4. If it turns out that I'm wrong about this you might want
to consider grabbing a copy of the 0.9.0 source tarball and backing the
patch out manually. It's a small patch so this should be pretty
straightforward.

Hope this helps.

Carl

On Wed, Apr 18, 2012 at 1:32 PM, Tim Robertson wrote:

> Thanks for clarifying Ashutosh.
>
> Looks like we'll be forking Hive for a while while we stick with CDH3.  I
> might see if the Cloudera guys are interested in assisting in maintaining a
> CDH3 HBase compatible Hive 0.9 version - there are too many nice things in
> 0.9 for us not to use it, but we're kind of committed to CDH3.
>
> Cheers,
> Tim
>
>
>
>
>
>
> On Wed, Apr 18, 2012 at 10:25 PM, Ashutosh Chauhan wrote:
>
> > Hi Tim,
> >
> > Sorry that it broke your setup. Decision to move to hbase-0.92 was made
> in
> > https://issues.apache.org/jira/browse/HIVE-2748
> >
> > Thanks,
> > Ashutosh
> >
> > On Wed, Apr 18, 2012 at 11:42, Tim Robertson wrote:
> >
> > > Hi all,
> > >
> > > This is my first post to hive-dev so please go easy on me...
> > >
> > > I built Hive from trunk (0.90) a couple of weeks ago and have been
> using
> > it
> > > against HBase, and today patched it with the offering of HIVE-2958 and
> it
> > > all worked fine.
> > >
> > > I just tried an Oozie workflow, built using Maven and the Apache
> snapshot
> > > repository to get the 0.90 snapshot.  It fails with the following:
> > >
> > > java.lang.NoSuchMethodError:
> > >
> > >
> >
> org.apache.hadoop.hbase.mapred.TableMapReduceUtil.initCredentials(Lorg/apache/hadoop/mapred/JobConf;)V
> > >at
> > >
> >
> org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:419)
> > >at
> > >
> >
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:292)
> > >
> > >
> > > I believe the source of the issue could be this commit which happened
> > after
> > > I built from trunk a couple weeks ago:
> > >
> > >
> > >
> >
> http://mail-archives.apache.org/mod_mbox/hive-commits/201204.mbox/%3c20120409202655.bdb5d2388...@eris.apache.org%3E
> > >
> > > Is there a decision to make hive 0.9  require HBase 0.92.0+ ?  It would
> > be
> > > awesome if it still worked on 0.90.4 since CDH3 uses that.
> > >
> > > Hope this makes sense,
> > > Tim
> > > (suffering classpath hell)
> > >
> >
>


[jira] [Updated] (HIVE-2902) undefined property exists in eclipse-templates/.classpath

2012-04-18 Thread tamtam180 (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tamtam180 updated HIVE-2902:


Status: Patch Available  (was: Open)

Please review once more.

> undefined property exists in eclipse-templates/.classpath
> -
>
> Key: HIVE-2902
> URL: https://issues.apache.org/jira/browse/HIVE-2902
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: tamtam180
>Assignee: tamtam180
>Priority: Minor
> Attachments: HIVE-2902.1.patch.txt, HIVE-2902.2.patch.txt
>
>
> @hbase-test.version@ was removed from ivy/libraries.properties in HIVE-2748,
> but the property still exists in eclipse-templates/.classpath.
> {code}
> <classpathentry kind="lib"
>  path="build/ivy/lib/default/hbase-@hbase-test.version@-tests.jar"/>
> {code}
> It should be changed to @hbase.version@

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2902) undefined property exists in eclipse-templates/.classpath

2012-04-18 Thread tamtam180 (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tamtam180 updated HIVE-2902:


Attachment: HIVE-2902.2.patch.txt

I added jackson-core and jackson-mapper to the classpath.

> undefined property exists in eclipse-templates/.classpath
> -
>
> Key: HIVE-2902
> URL: https://issues.apache.org/jira/browse/HIVE-2902
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: tamtam180
>Assignee: tamtam180
>Priority: Minor
> Attachments: HIVE-2902.1.patch.txt, HIVE-2902.2.patch.txt
>
>
> @hbase-test.version@ was removed from ivy/libraries.properties in HIVE-2748,
> but the property still exists in eclipse-templates/.classpath.
> {code}
> <classpathentry kind="lib"
>  path="build/ivy/lib/default/hbase-@hbase-test.version@-tests.jar"/>
> {code}
> It should be changed to @hbase.version@

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Hive-trunk-h0.21 - Build # 1382 - Fixed

2012-04-18 Thread Apache Jenkins Server
Changes for Build #1381

Changes for Build #1382
[hashutosh] HIVE-2959 [jira] TestRemoteHiveMetaStoreIpAddress always uses the 
same port
(Kevin Wilfong via Ashutosh Chauhan)

Summary:
https://issues.apache.org/jira/browse/HIVE-2959

TestRemoteHiveMetaStoreIpAddress now uses the standard way of finding a free
port using Java's ServerSocket class.

TestRemoteHiveMetaStoreIpAddress always uses the same port, meaning that if
another process happens to be using that port, the tests cannot succeed.

There seems to be a standard way of finding a free port using Java's
ServerSocket class; this should be used instead.
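
For reference, the usual ServerSocket idiom looks roughly like this (a minimal
sketch, not the code from the actual patch):

  import java.io.IOException;
  import java.net.ServerSocket;

  public final class FreePort {
    // Bind to port 0 so the OS assigns a free ephemeral port, then release it.
    // Best-effort only: another process could still grab the port before reuse.
    public static int find() throws IOException {
      ServerSocket socket = new ServerSocket(0);
      try {
        return socket.getLocalPort();
      } finally {
        socket.close();
      }
    }
  }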

Test Plan: Ran TestRemoteHiveMetaStoreIpAddress and
TestRemoteUGIHiveMetaStoreIpAddress, the two tests which would be affected by
this change.  I verified they passed and did not use port 39083.

Reviewers: JIRA, njain, ashutoshc

Reviewed By: ashutoshc

Differential Revision: https://reviews.facebook.net/D2841




All tests passed

The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1382)

Status: Fixed

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/1382/ to 
view the results.

[jira] [Commented] (HIVE-2959) TestRemoteHiveMetaStoreIpAddress always uses the same port

2012-04-18 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257148#comment-13257148
 ] 

Hudson commented on HIVE-2959:
--

Integrated in Hive-trunk-h0.21 #1382 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1382/])
HIVE-2959 [jira] TestRemoteHiveMetaStoreIpAddress always uses the same port
(Kevin Wilfong via Ashutosh Chauhan)

Summary:
https://issues.apache.org/jira/browse/HIVE-2959

TestRemoteHiveMetaStoreIpAddress now uses the standard way of finding a free
port using Java's ServerSocket class.

TestRemoteHiveMetaStoreIpAddress always uses the same port, meaning that if
another process happens to be using that port, the tests cannot succeed.

There seems to be a standard way of finding a free port using Java's
ServerSocket class; this should be used instead.

Test Plan: Ran TestRemoteHiveMetaStoreIpAddress and
TestRemoteUGIHiveMetaStoreIpAddress, the two tests which would be affected by
this change.  I verified they passed and did not use port 39083.

Reviewers: JIRA, njain, ashutoshc

Reviewed By: ashutoshc

Differential Revision: https://reviews.facebook.net/D2841 (Revision 1327591)

 Result = SUCCESS
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327591
Files : 
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestRemoteHiveMetaStoreIpAddress.java


> TestRemoteHiveMetaStoreIpAddress always uses the same port
> --
>
> Key: HIVE-2959
> URL: https://issues.apache.org/jira/browse/HIVE-2959
> Project: Hive
>  Issue Type: Test
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Fix For: 0.10
>
> Attachments: HIVE-2959.D2841.1.patch
>
>
> TestRemoteHiveMetaStoreIpAddress always uses the same port, meaning that if 
> another process happens to be using that port, the tests cannot succeed.
> There seems to be a standard way of finding a free port using Java's 
> ServerSocket class; this should be used instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2963) metastore delegation token is not getting used by hive commandline

2012-04-18 Thread Thejas M Nair (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-2963:


Status: Patch Available  (was: Open)

> metastore delegation token is not getting used by hive commandline
> --
>
> Key: HIVE-2963
> URL: https://issues.apache.org/jira/browse/HIVE-2963
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.8.1
>Reporter: Thejas M Nair
> Fix For: 0.9.0, 0.10
>
> Attachments: HIVE-2963.1.patch
>
>
> When metastore delegation tokens are used to run hive (or hcat) commands, the 
> delegation token does not end up getting used.
> This is because the new Hive object is not created with the value of 
> hive.metastore.token.signature in its conf. This config parameter is missing 
> from the list of HiveConf variables whose change results in metastore 
> recreation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2963) metastore delegation token is not getting used by hive commandline

2012-04-18 Thread Thejas M Nair (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-2963:


Attachment: HIVE-2963.1.patch

HIVE-2963.1.patch - This is a trivial patch - it adds 
hive.metastore.token.signature to HiveConf.metaVars. I can't think of any 
non-trivial way to add a unit test for this.
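
For context, the mechanism involved is that Hive recreates its metastore client
whenever one of a fixed list of metastore-related configuration variables changes;
a variable missing from that list is effectively ignored. A rough, illustrative
sketch of that check (names below are placeholders, not the actual HiveConf/Hive
source):

{code}
import java.util.Map;

// Illustrative sketch only -- not the real HiveConf/Hive code.
class MetaVarsSketch {
  static final String[] META_VARS = {
      "hive.metastore.uris",
      "hive.metastore.warehouse.dir",
      "hive.metastore.token.signature"  // the entry added by HIVE-2963.1.patch
  };

  // Returns true when any tracked variable differs between the cached conf and
  // the current conf, i.e. the metastore client needs to be recreated.
  static boolean needsMetastoreRecreation(Map<String, String> cached,
                                          Map<String, String> current) {
    for (String var : META_VARS) {
      String oldVal = cached.get(var);
      String newVal = current.get(var);
      if (oldVal == null ? newVal != null : !oldVal.equals(newVal)) {
        return true;
      }
    }
    return false;
  }
}
{code}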


> metastore delegation token is not getting used by hive commandline
> --
>
> Key: HIVE-2963
> URL: https://issues.apache.org/jira/browse/HIVE-2963
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.8.1
>Reporter: Thejas M Nair
> Fix For: 0.9.0, 0.10
>
> Attachments: HIVE-2963.1.patch
>
>
> When metastore delegation tokens are used to run hive (or hcat) commands, the 
> delegation token does not end up getting used.
> This is because the new Hive object is not created with the value of 
> hive.metastore.token.signature in its conf. This config parameter is missing 
> from the list of HiveConf variables whose change results in metastore 
> recreation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2963) metastore delegation token is not getting used by hive commandline

2012-04-18 Thread Thejas M Nair (Created) (JIRA)
metastore delegation token is not getting used by hive commandline
--

 Key: HIVE-2963
 URL: https://issues.apache.org/jira/browse/HIVE-2963
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.8.1
Reporter: Thejas M Nair
 Fix For: 0.9.0, 0.10


When metastore delegation tokens are used to run hive (or hcat) commands, the 
delegation token does not end up getting used.
This is because the new Hive object is not created with the value of 
hive.metastore.token.signature in its conf. This config parameter is missing from 
the list of HiveConf variables whose change results in metastore recreation.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2723) should throw "Ambiguous column reference key" Exception in particular join condition

2012-04-18 Thread Navis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-2723:


Affects Version/s: (was: 0.8.0)
   Status: Patch Available  (was: Open)

Passed all tests

> should throw  "Ambiguous column reference key"  Exception in particular join 
> condition
> --
>
> Key: HIVE-2723
> URL: https://issues.apache.org/jira/browse/HIVE-2723
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
> Environment: Linux zongren-VirtualBox 3.0.0-14-generic #23-Ubuntu SMP 
> Mon Nov 21 20:34:47 UTC 2011 i686 i686 i386 GNU/Linux
> java version "1.6.0_25"
> hadoop-0.20.2-cdh3u0
> hive-0.7.0-cdh3u0
>Reporter: caofangkun
>Assignee: Navis
>Priority: Minor
>  Labels: exception-handling, query, queryparser
> Fix For: 0.9.0
>
> Attachments: HIVE-2723.D1275.1.patch, HIVE-2723.D1275.2.patch
>
>
> This Bug can be Repeated as following :
> create table test(key string, value string);
> create table test1(key string, value string);
> 1: Correct!
> select t.key 
> from 
>   (select a.key, b.key from (select * from src ) a right outer join (select * 
> from src1) b on (a.key = b.key)) t;
> FAILED: Error in semantic analysis: Ambiguous column reference key
> 2: Incorrect!! Should throw an Exception as above too!
> select t.key --Is this a.key or b.key ? It's ambiguous!
> from 
>   (select a.*, b.* from (select * from src ) a right outer join (select * 
> from src1) b on (a.value = b.value)) t;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks not specified. Defaulting to jobconf value of: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = job_201201170959_0004, Tracking URL = 
> http://zongren-VirtualBox:50030/jobdetails.jsp?jobid=job_201201170959_0004
> Kill Command = /home/zongren/workspace/hadoop-adh/bin/hadoop job  
> -Dmapred.job.tracker=zongren-VirtualBox:9001 -kill job_201201170959_0004
> Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 
> 1
> 2012-01-17 11:02:47,507 Stage-1 map = 0%,  reduce = 0%
> 2012-01-17 11:02:55,002 Stage-1 map = 100%,  reduce = 0%
> 2012-01-17 11:03:04,240 Stage-1 map = 100%,  reduce = 33%
> 2012-01-17 11:03:05,258 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201201170959_0004
> MapReduce Jobs Launched: 
> Job 0: Map: 2  Reduce: 1   HDFS Read: 669 HDFS Write: 216 SUCESS
> Total MapReduce CPU Time Spent: 0 msec
> OK

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2958) GROUP BY causing ClassCastException [LazyDioInteger cannot be cast LazyInteger]

2012-04-18 Thread Navis (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-2958:


Status: Patch Available  (was: Open)

Passed all tests

> GROUP BY causing ClassCastException [LazyDioInteger cannot be cast 
> LazyInteger]
> ---
>
> Key: HIVE-2958
> URL: https://issues.apache.org/jira/browse/HIVE-2958
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.9.0
> Environment: HBase 0.90.4, Hive 0.90 snapshot (trunk) built today
>Reporter: Tim Robertson
>Assignee: Navis
>Priority: Blocker
> Attachments: HIVE-2958.D2871.1.patch
>
>
> This relates to https://issues.apache.org/jira/browse/HIVE-1634.
> The following work fine:
> {code}
> CREATE EXTERNAL TABLE tim_hbase_occurrence ( 
>   id int,
>   scientific_name string,
>   data_resource_id int
> ) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH 
> SERDEPROPERTIES (
>   "hbase.columns.mapping" = ":key#b,v:scientific_name#s,v:data_resource_id#b"
> ) TBLPROPERTIES(
>   "hbase.table.name" = "mini_occurrences", 
>   "hbase.table.default.storage.type" = "binary"
> );
> SELECT * FROM tim_hbase_occurrence LIMIT 3;
> SELECT * FROM tim_hbase_occurrence WHERE data_resource_id=1081 LIMIT 3;
> {code}
> However, the following fails:
> {code}
> SELECT data_resource_id, count(*) FROM tim_hbase_occurrence GROUP BY 
> data_resource_id;
> {code}
> The error given:
> {code}
> 0 TS
> 2012-04-17 16:58:45,693 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
> Initialization Done 7 MAP
> 2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
> Processing alias tim_hbase_occurrence for file 
> hdfs://c1n2.gbif.org/user/hive/warehouse/tim_hbase_occurrence
> 2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7 
> forwarding 1 rows
> 2012-04-17 16:58:45,714 INFO 
> org.apache.hadoop.hive.ql.exec.TableScanOperator: 0 forwarding 1 rows
> 2012-04-17 16:58:45,716 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 1 
> forwarding 1 rows
> 2012-04-17 16:58:45,723 FATAL ExecMapper: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row {"id":1444,"scientific_name":null,"data_resource_id":1081}
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
>   at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:391)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
>   at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
>   at org.apache.hadoop.mapred.Child.main(Child.java:264)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to 
> org.apache.hadoop.hive.serde2.lazy.LazyInteger
>   at 
> org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:737)
>   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
>   at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
>   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
>   at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
>   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:529)
>   ... 9 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to 
> org.apache.hadoop.hive.serde2.lazy.LazyInteger
>   at 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyIntObjectInspector.copyObject(LazyIntObjectInspector.java:43)
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:239)
>   at 
> org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150)
>   at 
> org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142)
>   at 
>

[jira] [Commented] (HIVE-2961) Remove need for storage descriptors for view partitions

2012-04-18 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257014#comment-13257014
 ] 

Phabricator commented on HIVE-2961:
---

ashutoshc has accepted the revision "HIVE-2961 [jira] Remove need for storage 
descriptors for view partitions".

  +1 will commit if tests pass.

REVISION DETAIL
  https://reviews.facebook.net/D2877

BRANCH
  svn


> Remove need for storage descriptors for view partitions
> ---
>
> Key: HIVE-2961
> URL: https://issues.apache.org/jira/browse/HIVE-2961
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.9.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Attachments: HIVE-2961.D2877.1.patch
>
>
> Storage descriptors were introduced for view partitions as part of HIVE-2795. 
>  This was to allow view partitions to have the concept of a region as well as 
> to fix a NPE that resulted from calling describe formatted on them.
> Since regions are no longer necessary for view partitions and the NPE can be 
> fixed by not displaying storage information for view partitions (or 
> displaying the view's storage information if this is preferred, although, 
> since a view partition is purely metadata, this does not seem necessary), 
> these are no longer needed.
> This also means the Python script added which retroactively adds storage 
> descriptors to existing view partitions can be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2962) Remove unnecessary JAR dependencies

2012-04-18 Thread Carl Steinbach (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-2962:
-

Attachment: report.tar.gz

Attaching Ivy dependency reports generated on the 0.9.0 branch.

> Remove unnecessary JAR dependencies
> ---
>
> Key: HIVE-2962
> URL: https://issues.apache.org/jira/browse/HIVE-2962
> Project: Hive
>  Issue Type: Task
>  Components: Build Infrastructure
>Reporter: Carl Steinbach
> Fix For: 0.9.0
>
> Attachments: report.tar.gz
>
>
> The tarballs currently include a bunch of JARs which aren't real Hive 
> dependencies. I think in most cases this is caused by unnecessary transitive 
> dependencies that are getting pulled down by Ivy.
> Also, once the contents of the lib directory are sanitized we need to 
> reconcile it with the list of dependencies in the LICENSE file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2962) Remove unnecessary JAR dependencies

2012-04-18 Thread Carl Steinbach (Created) (JIRA)
Remove unnecessary JAR dependencies
---

 Key: HIVE-2962
 URL: https://issues.apache.org/jira/browse/HIVE-2962
 Project: Hive
  Issue Type: Task
  Components: Build Infrastructure
Reporter: Carl Steinbach
 Fix For: 0.9.0


The tarballs currently include a bunch of JARs which aren't real Hive 
dependencies. I think in most cases this is caused by unnecessary transitive 
dependencies that are getting pulled down by Ivy.

Also, once the contents of the lib directory are sanitized we need to reconcile 
it with the list of dependencies in the LICENSE file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: [VOTE] Apache Hive 0.9.0 Release Candidate 0

2012-04-18 Thread Kevin Wilfong
The only drawbacks I can think of are

1) As I mentioned, view partitions will not have an associated region, but
I don't think this matters to anyone else.

2) There will be an inconsistency in DESCRIBE FORMATTED, in that for view
partitions it will not list storage data.  This seems inherent in the fact
that view partitions are purely metadata, and it is an improvement on the
null pointer exception that occurred in previous releases.

I created a JIRA to do this and uploaded a patch here
https://issues.apache.org/jira/browse/HIVE-2961

Kevin

On 4/18/12 2:08 PM, "Carl Steinbach" wrote:

>Hi Kevin
>
>
>> I can add a patch removing the need for this script and the script
>>itself
>> to be included in the release.
>>
>
>This sounds like the best option to me. Are there any drawbacks to this
>approach?
>
>Thanks.
>
>Carl



Re: [VOTE] Apache Hive 0.9.0 Release Candidate 0

2012-04-18 Thread Carl Steinbach
Hi Kevin


> I can add a patch removing the need for this script and the script itself
> to be included in the release.
>

This sounds like the best option to me. Are there any drawbacks to this
approach?

Thanks.

Carl


Hive-trunk-h0.21 - Build # 1381 - Failure

2012-04-18 Thread Apache Jenkins Server
Changes for Build #1381



1 tests failed.
REGRESSION:  
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1

Error Message:
Unexpected exception See build/ql/tmp/hive.log, or try "ant test ... 
-Dtest.silent=false" to get more logs.

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception
See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get 
more logs.
at junit.framework.Assert.fail(Assert.java:47)
at 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:10474)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:154)
at junit.framework.TestCase.runBare(TestCase.java:127)
at junit.framework.TestResult$1.protect(TestResult.java:106)
at junit.framework.TestResult.runProtected(TestResult.java:124)
at junit.framework.TestResult.run(TestResult.java:109)
at junit.framework.TestCase.run(TestCase.java:118)
at junit.framework.TestSuite.runTest(TestSuite.java:208)
at junit.framework.TestSuite.run(TestSuite.java:203)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)




The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1381)

Status: Failure

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/1381/ to 
view the results.

[jira] [Updated] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs

2012-04-18 Thread Thomas Weise (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Weise updated HIVE-2646:
---

Attachment: HIVE-2646-fixtests.patch

Patch to address unit test failures. This should apply cleanly to trunk (arc 
patch doesn't).

> Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
> 
>
> Key: HIVE-2646
> URL: https://issues.apache.org/jira/browse/HIVE-2646
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Affects Versions: 0.8.0
>Reporter: Andrew Bayer
>Assignee: Andrew Bayer
>Priority: Critical
> Fix For: 0.9.0
>
> Attachments: HIVE-2646-fixtests.patch, HIVE-2646.D2133.1.patch, 
> HIVE-2646.D2133.10.patch, HIVE-2646.D2133.11.patch, HIVE-2646.D2133.12.patch, 
> HIVE-2646.D2133.13.patch, HIVE-2646.D2133.14.patch, HIVE-2646.D2133.15.patch, 
> HIVE-2646.D2133.2.patch, HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, 
> HIVE-2646.D2133.5.patch, HIVE-2646.D2133.6.patch, HIVE-2646.D2133.7.patch, 
> HIVE-2646.D2133.8.patch, HIVE-2646.D2133.9.patch, HIVE-2646.D2883.1.patch, 
> HIVE-2646.diff.txt
>
>
> The current Hive Ivy dependency logic for its Hadoop dependencies is 
> problematic - depending on the tarball and extracting the jars from there, 
> rather than depending on the jars directly. It'd be great if this was fixed 
> to actually have the jar dependencies defined directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Hive 0.9 now broken on HBase 0.90 ?

2012-04-18 Thread Tim Robertson
Thanks for clarifying Ashutosh.

Looks like we'll be forking Hive for a while while we stick with CDH3.  I
might see if the Cloudera guys are interested in assisting in maintaining a
CDH3 HBase compatible Hive 0.9 version - there are too many nice things in
0.9 for us not to use it, but we're kind of committed to CDH3.

Cheers,
Tim






On Wed, Apr 18, 2012 at 10:25 PM, Ashutosh Chauhan wrote:

> Hi Tim,
>
> Sorry that it broke your setup. Decision to move to hbase-0.92 was made in
> https://issues.apache.org/jira/browse/HIVE-2748
>
> Thanks,
> Ashutosh
>
> On Wed, Apr 18, 2012 at 11:42, Tim Robertson wrote:
>
> > Hi all,
> >
> > This is my first post to hive-dev so please go easy on me...
> >
> > I built Hive from trunk (0.90) a couple of weeks ago and have been using
> it
> > against HBase, and today patched it with the offering of HIVE-2958 and it
> > all worked fine.
> >
> > I just tried an Oozie workflow, built using Maven and the Apache snapshot
> > repository to get the 0.90 snapshot.  It fails with the following:
> >
> > java.lang.NoSuchMethodError:
> >
> >
> org.apache.hadoop.hbase.mapred.TableMapReduceUtil.initCredentials(Lorg/apache/hadoop/mapred/JobConf;)V
> >at
> >
> org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:419)
> >at
> >
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:292)
> >
> >
> > I believe the source of the issue could be this commit which happened
> after
> > I built from trunk a couple weeks ago:
> >
> >
> >
> http://mail-archives.apache.org/mod_mbox/hive-commits/201204.mbox/%3c20120409202655.bdb5d2388...@eris.apache.org%3E
> >
> > Is there a decision to make hive 0.9  require HBase 0.92.0+ ?  It would
> be
> > awesome if it still worked on 0.90.4 since CDH3 uses that.
> >
> > Hope this makes sense,
> > Tim
> > (suffering classpath hell)
> >
>


Re: Hive 0.9 now broken on HBase 0.90 ?

2012-04-18 Thread Ashutosh Chauhan
Hi Tim,

Sorry that it broke your setup. Decision to move to hbase-0.92 was made in
https://issues.apache.org/jira/browse/HIVE-2748

Thanks,
Ashutosh

On Wed, Apr 18, 2012 at 11:42, Tim Robertson wrote:

> Hi all,
>
> This is my first post to hive-dev so please go easy on me...
>
> I built Hive from trunk (0.90) a couple of weeks ago and have been using it
> against HBase, and today patched it with the offering of HIVE-2958 and it
> all worked fine.
>
> I just tried an Oozie workflow, built using Maven and the Apache snapshot
> repository to get the 0.90 snapshot.  It fails with the following:
>
> java.lang.NoSuchMethodError:
>
> org.apache.hadoop.hbase.mapred.TableMapReduceUtil.initCredentials(Lorg/apache/hadoop/mapred/JobConf;)V
>at
> org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:419)
>at
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:292)
>
>
> I believe the source of the issue could be this commit which happened after
> I built from trunk a couple weeks ago:
>
>
> http://mail-archives.apache.org/mod_mbox/hive-commits/201204.mbox/%3c20120409202655.bdb5d2388...@eris.apache.org%3E
>
> Is there a decision to make hive 0.9  require HBase 0.92.0+ ?  It would be
> awesome if it still worked on 0.90.4 since CDH3 uses that.
>
> Hope this makes sense,
> Tim
> (suffering classpath hell)
>


[jira] [Updated] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs

2012-04-18 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2646:
--

Attachment: HIVE-2646.D2883.1.patch

thw requested code review of "HIVE-2646 [jira] Hive Ivy dependencies on Hadoop 
should depend on jars directly, not tarballs".
Reviewers: JIRA

  https://issues.apache.org/jira/browse/HIVE-2646

  Update to fix test failures.

  The current Hive Ivy dependency logic for its Hadoop dependencies is 
problematic - depending on the tarball and extracting the jars from there, 
rather than depending on the jars directly. It'd be great if this was fixed to 
actually have the jar dependencies defined directly.

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D2883

AFFECTED FILES
  shims/ivy.xml
  shims/build.xml
  builtins/ivy.xml
  builtins/build.xml
  build.properties
  hbase-handler/ivy.xml
  hbase-handler/build.xml
  build.xml
  testutils/hadoop
  jdbc/ivy.xml
  jdbc/build.xml
  metastore/ivy.xml
  ivy/common-configurations.xml
  ivy/ivysettings.xml
  ivy/libraries.properties
  build-common.xml
  hwi/ivy.xml
  hwi/build.xml
  common/ivy.xml
  service/ivy.xml
  service/build.xml
  contrib/ivy.xml
  contrib/build.xml
  serde/ivy.xml
  cli/ivy.xml
  ql/ivy.xml
  ql/build.xml
  pdk/ivy.xml
  pdk/scripts/build-plugin.xml

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/6567/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


> Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
> 
>
> Key: HIVE-2646
> URL: https://issues.apache.org/jira/browse/HIVE-2646
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Affects Versions: 0.8.0
>Reporter: Andrew Bayer
>Assignee: Andrew Bayer
>Priority: Critical
> Fix For: 0.9.0
>
> Attachments: HIVE-2646.D2133.1.patch, HIVE-2646.D2133.10.patch, 
> HIVE-2646.D2133.11.patch, HIVE-2646.D2133.12.patch, HIVE-2646.D2133.13.patch, 
> HIVE-2646.D2133.14.patch, HIVE-2646.D2133.15.patch, HIVE-2646.D2133.2.patch, 
> HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, HIVE-2646.D2133.5.patch, 
> HIVE-2646.D2133.6.patch, HIVE-2646.D2133.7.patch, HIVE-2646.D2133.8.patch, 
> HIVE-2646.D2133.9.patch, HIVE-2646.D2883.1.patch, HIVE-2646.diff.txt
>
>
> The current Hive Ivy dependency logic for its Hadoop dependencies is 
> problematic - depending on the tarball and extracting the jars from there, 
> rather than depending on the jars directly. It'd be great if this was fixed 
> to actually have the jar dependencies defined directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Hive-0.8.1-SNAPSHOT-h0.21 - Build # 257 - Failure

2012-04-18 Thread Apache Jenkins Server
Changes for Build #257



1 tests failed.
REGRESSION:  
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1

Error Message:
Unexpected exception See build/ql/tmp/hive.log, or try "ant test ... 
-Dtest.silent=false" to get more logs.

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception
See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get 
more logs.
at junit.framework.Assert.fail(Assert.java:50)
at 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:9440)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785)




The Apache Jenkins build system has built Hive-0.8.1-SNAPSHOT-h0.21 (build #257)

Status: Failure

Check console output at 
https://builds.apache.org/job/Hive-0.8.1-SNAPSHOT-h0.21/257/ to view the 
results.

[jira] [Commented] (HIVE-2961) Remove need for storage descriptors for view partitions

2012-04-18 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13256822#comment-13256822
 ] 

Ashutosh Chauhan commented on HIVE-2961:


I agree with Kevin. Since views are purely metadata, it doesn't make much sense 
to have a storage descriptor associated with them.

> Remove need for storage descriptors for view partitions
> ---
>
> Key: HIVE-2961
> URL: https://issues.apache.org/jira/browse/HIVE-2961
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.9.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Attachments: HIVE-2961.D2877.1.patch
>
>
> Storage descriptors were introduced for view partitions as part of HIVE-2795. 
>  This was to allow view partitions to have the concept of a region as well as 
> to fix a NPE that resulted from calling describe formatted on them.
> Since regions are no longer necessary for view partitions and the NPE can be 
> fixed by not displaying storage information for view partitions (or 
> displaying the view's storage information if this is preferred, although, 
> since a view partition is purely metadata, this does not seem necessary), 
> these are no longer needed.
> This also means the Python script added which retroactively adds storage 
> descriptors to existing view partitions can be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: [VOTE] Apache Hive 0.9.0 Release Candidate 0

2012-04-18 Thread Ashutosh Chauhan
Hey Lars,

Thanks for taking a look. HIVE-1634 introduced a new storage type for hbase
tables, namely binary, and the bug manifests itself only for the binary storage
type. This doesn't count as a regression, since the binary storage functionality
itself was added through HIVE-1634. Because it is not a regression of existing
functionality, it won't count as a blocker for the 0.9 release.

Nonetheless, other folks have found other problems in RC0, so I have to
respin. Thus, I will consider the HIVE-2958 fix for RC1.

Thanks,
Ashutosh

On Tue, Apr 17, 2012 at 23:46, Lars Francke wrote:

> Hey,
>
> thanks for putting up the RC. We tried it yesterday and we stumbled
> across HIVE-2958 which seems like a bug that should be fixed before
> release because it was introduced with HIVE-1634 which is new to 0.9
> too and breaks GROUP BY queries on HBase which were working before.
>
> -1 (non-binding)
>
> Thanks,
> Lars
>


[jira] [Updated] (HIVE-2902) undefined property exists in eclipse-templates/.classpath

2012-04-18 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2902:
---

Assignee: tamtam180
  Status: Open  (was: Patch Available)

You also need to add the jackson-core and jackson-mapper jars to the classpath.

> undefined property exists in eclipse-templates/.classpath
> -
>
> Key: HIVE-2902
> URL: https://issues.apache.org/jira/browse/HIVE-2902
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: tamtam180
>Assignee: tamtam180
>Priority: Minor
> Attachments: HIVE-2902.1.patch.txt
>
>
> @hbase-test.version@ was removed from ivy/libraries.properties in HIVE-2748,
> but the property still exists in eclipse-templates/.classpath.
> {code}
> <classpathentry kind="lib"
>  path="build/ivy/lib/default/hbase-@hbase-test.version@-tests.jar"/>
> {code}
> It should be changed to @hbase.version@

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2959) TestRemoteHiveMetaStoreIpAddress always uses the same port

2012-04-18 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13256734#comment-13256734
 ] 

Phabricator commented on HIVE-2959:
---

kevinwilfong has committed the revision "HIVE-2959 [jira] 
TestRemoteHiveMetaStoreIpAddress always uses the same port".

  Change committed by hashutosh.

REVISION DETAIL
  https://reviews.facebook.net/D2841

COMMIT
  https://reviews.facebook.net/rHIVE1327591


> TestRemoteHiveMetaStoreIpAddress always uses the same port
> --
>
> Key: HIVE-2959
> URL: https://issues.apache.org/jira/browse/HIVE-2959
> Project: Hive
>  Issue Type: Test
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Fix For: 0.10
>
> Attachments: HIVE-2959.D2841.1.patch
>
>
> TestRemoteHiveMetaStoreIpAddress always uses the same port, meaning that if 
> another process happens to be using that port, the tests cannot succeed.
> There seems to be a standard way of finding a free port using Java's 
> ServerSocket class; this should be used instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2959) TestRemoteHiveMetaStoreIpAddress always uses the same port

2012-04-18 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2959:
---

   Resolution: Fixed
Fix Version/s: 0.10
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Kevin!

> TestRemoteHiveMetaStoreIpAddress always uses the same port
> --
>
> Key: HIVE-2959
> URL: https://issues.apache.org/jira/browse/HIVE-2959
> Project: Hive
>  Issue Type: Test
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Fix For: 0.10
>
> Attachments: HIVE-2959.D2841.1.patch
>
>
> TestRemoteHiveMetaStoreIpAddress always uses the same port, meaning that if 
> another process happens to be using that port, the tests cannot succeed.
> There seems to be a standard way of finding a free port using Java's 
> ServerSocket class; this should be used instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2961) Remove need for storage descriptors for view partitions

2012-04-18 Thread Kevin Wilfong (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-2961:


Status: Patch Available  (was: Open)

> Remove need for storage descriptors for view partitions
> ---
>
> Key: HIVE-2961
> URL: https://issues.apache.org/jira/browse/HIVE-2961
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.9.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Attachments: HIVE-2961.D2877.1.patch
>
>
> Storage descriptors were introduced for view partitions as part of HIVE-2795. 
>  This was to allow view partitions to have the concept of a region as well as 
> to fix a NPE that resulted from calling describe formatted on them.
> Since regions are no longer necessary for view partitions and the NPE can be 
> fixed by not displaying storage information for view partitions (or 
> displaying the view's storage information if this is preferred, although, 
> since a view partition is purely metadata, this does not seem necessary), 
> these are no longer needed.
> This also means the Python script added which retroactively adds storage 
> descriptors to existing view partitions can be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2961) Remove need for storage descriptors for view partitions

2012-04-18 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2961:
--

Attachment: HIVE-2961.D2877.1.patch

kevinwilfong requested code review of "HIVE-2961 [jira] Remove need for storage 
descriptors for view partitions".
Reviewers: JIRA

  https://issues.apache.org/jira/browse/HIVE-2961

  Removed the need for storage descriptors for view partitions and the script 
to add them.

  Storage descriptors were introduced for view partitions as part of HIVE-2795. 
 This was to allow view partitions to have the concept of a region as well as 
to fix a NPE that resulted from calling describe formatted on them.

  Since regions are no longer necessary for view partitions and the NPE can be 
fixed by not displaying storage information for view partitions (or displaying 
the view's storage information if this is preferred, although, since a view 
partition is purely metadata, this does not seem necessary), these are no 
longer needed.

  This also means the Python script added which retroactively adds storage 
descriptors to existing view partitions can be removed.

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D2877

AFFECTED FILES
  metastore/scripts/upgrade/001-HIVE-2795.update_view_partitions.py
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java
  
ql/src/test/results/clientpositive/describe_formatted_view_partitioned_json.q.out
  ql/src/test/results/clientpositive/describe_formatted_view_partitioned.q.out
  ql/src/test/queries/clientpositive/describe_formatted_view_partitioned_json.q
  
ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/MetaDataFormatUtils.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/6525/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


> Remove need for storage descriptors for view partitions
> ---
>
> Key: HIVE-2961
> URL: https://issues.apache.org/jira/browse/HIVE-2961
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.9.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Attachments: HIVE-2961.D2877.1.patch
>
>
> Storage descriptors were introduced for view partitions as part of HIVE-2795. 
>  This was to allow view partitions to have the concept of a region as well as 
> to fix a NPE that resulted from calling describe formatted on them.
> Since regions are no longer necessary for view partitions and the NPE can be 
> fixed by not displaying storage information for view partitions (or 
> displaying the view's storage information if this is preferred, although, 
> since a view partition is purely metadata, this does not seem necessary), 
> these are no longer needed.
> This also means the Python script added which retroactively adds storage 
> descriptors to existing view partitions can be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs

2012-04-18 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13256722#comment-13256722
 ] 

Phabricator commented on HIVE-2646:
---

thw has commented on the revision "HIVE-2646 [jira] Hive Ivy dependencies on 
Hadoop should depend on jars directly, not tarballs".

  Carl, thanks for the clarification. I have the 3 tests working now after 
including contrib into the test classpath and excluding it from the MR 
classpath.

  Looks like there is one more test failing (but not showing up as failure in 
the test report?):

  test:
  [junit] Running org.apache.hive.pdk.PluginTest
  [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 9.877 sec

  That's because it does not get the ${test.hadoop.bin.path} setting. I'm 
trying to fix that and will put up a new patch once that is done.


REVISION DETAIL
  https://reviews.facebook.net/D2133


> Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
> 
>
> Key: HIVE-2646
> URL: https://issues.apache.org/jira/browse/HIVE-2646
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Affects Versions: 0.8.0
>Reporter: Andrew Bayer
>Assignee: Andrew Bayer
>Priority: Critical
> Fix For: 0.9.0
>
> Attachments: HIVE-2646.D2133.1.patch, HIVE-2646.D2133.10.patch, 
> HIVE-2646.D2133.11.patch, HIVE-2646.D2133.12.patch, HIVE-2646.D2133.13.patch, 
> HIVE-2646.D2133.14.patch, HIVE-2646.D2133.15.patch, HIVE-2646.D2133.2.patch, 
> HIVE-2646.D2133.3.patch, HIVE-2646.D2133.4.patch, HIVE-2646.D2133.5.patch, 
> HIVE-2646.D2133.6.patch, HIVE-2646.D2133.7.patch, HIVE-2646.D2133.8.patch, 
> HIVE-2646.D2133.9.patch, HIVE-2646.diff.txt
>
>
> The current Hive Ivy dependency logic for its Hadoop dependencies is 
> problematic - depending on the tarball and extracting the jars from there, 
> rather than depending on the jars directly. It'd be great if this was fixed 
> to actually have the jar dependencies defined directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2957) Hive JDBC doesn't support TIMESTAMP column

2012-04-18 Thread Ashutosh Chauhan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2957:
---

Status: Open  (was: Patch Available)

Patch needs a test case.

> Hive JDBC doesn't support TIMESTAMP column
> --
>
> Key: HIVE-2957
> URL: https://issues.apache.org/jira/browse/HIVE-2957
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.8.1, 0.9.0
>Reporter: Bharath Ganesh
>Assignee: Bharath Ganesh
>Priority: Minor
> Fix For: 0.9.0
>
> Attachments: HIVE-2957.patch
>
>
> Steps to replicate:
> 1. Create a table with at least one column of type TIMESTAMP
> 2. Do a DatabaseMetaData.getColumns () such that this TIMESTAMP column is 
> part of the resultset.
> 3. When you iterate over the TIMESTAMP column it would fail, throwing the 
> below exception:
> Exception in thread "main" java.sql.SQLException: Unrecognized column type: 
> timestamp
>   at org.apache.hadoop.hive.jdbc.Utils.hiveTypeToSqlType(Utils.java:56)
>   at org.apache.hadoop.hive.jdbc.JdbcColumn.getSqlType(JdbcColumn.java:62)
>   at 
> org.apache.hadoop.hive.jdbc.HiveDatabaseMetaData$2.next(HiveDatabaseMetaData.java:244)
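
For reference, a minimal sketch of the replication steps above, written against 
the pre-HiveServer2 JDBC driver. The connection URL, table name, and column 
names are illustrative assumptions, not details taken from the report:

{code}
// Minimal sketch of the replication steps above. The driver class, URL,
// table and column names are assumptions for illustration only.
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TimestampMetadataRepro {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
    Connection conn =
        DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");

    // 1. Create a table with at least one TIMESTAMP column.
    Statement stmt = conn.createStatement();
    stmt.execute("CREATE TABLE IF NOT EXISTS events (id INT, ts TIMESTAMP)");

    // 2. Ask for column metadata that includes the TIMESTAMP column.
    DatabaseMetaData meta = conn.getMetaData();
    ResultSet cols = meta.getColumns(null, "default", "events", null);

    // 3. Iterating over the result set hits the TIMESTAMP column and throws
    //    java.sql.SQLException: Unrecognized column type: timestamp
    while (cols.next()) {
      System.out.println(cols.getString("COLUMN_NAME")
          + " -> " + cols.getInt("DATA_TYPE"));
    }
    conn.close();
  }
}
{code}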

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2961) Remove need for storage descriptors for view partitions

2012-04-18 Thread Kevin Wilfong (Created) (JIRA)
Remove need for storage descriptors for view partitions
---

 Key: HIVE-2961
 URL: https://issues.apache.org/jira/browse/HIVE-2961
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.9.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong


Storage descriptors were introduced for view partitions as part of HIVE-2795.  
This was to allow view partitions to have the concept of a region as well as to 
fix a NPE that resulted from calling describe formatted on them.

Since regions are no longer necessary for view partitions and the NPE can be 
fixed by not displaying storage information for view partitions (or displaying 
the view's storage information if this is preferred, although, since a view 
partition is purely metadata, this does not seem necessary), these are no 
longer needed.

This also means the Python script added which retroactively adds storage 
descriptors to existing view partitions can be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: [VOTE] Apache Hive 0.9.0 Release Candidate 0

2012-04-18 Thread Kevin Wilfong
Regarding HIVE-2795, that Python script was intended to solve two problems.

1) The null pointer exception that resulted from calling describe
formatted on a view partition

2) View partitions did not have storage descriptors, so they did not have
a concept of region like other objects.

(2) no longer seems to be an issue. (1) could also be solved by not
displaying storage information for view partitions when describe formatted
is called on them, which seems reasonable since they are purely metadata.
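
To make that scenario concrete, below is a minimal sketch of the kind of
sequence that hits problem 1, run through the pre-HiveServer2 JDBC driver.
The driver class, URL, and all table, view, and partition names are
illustrative assumptions, not taken from this thread:

{code}
// Illustrative sketch of the "describe formatted on a view partition" case.
// Driver class, URL, and all object names are assumptions for illustration.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ViewPartitionDescribe {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
    Connection conn =
        DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
    Statement stmt = conn.createStatement();

    // A partitioned base table and a view partitioned on the same column.
    stmt.execute("CREATE TABLE IF NOT EXISTS src_part (key INT, value STRING) "
        + "PARTITIONED BY (ds STRING)");
    stmt.execute("CREATE VIEW IF NOT EXISTS v PARTITIONED ON (ds) "
        + "AS SELECT key, value, ds FROM src_part");
    stmt.execute("ALTER VIEW v ADD PARTITION (ds='2012-04-18')");

    // Before HIVE-2795 this is where the NPE surfaced; the proposal above is
    // to simply omit storage information for view partitions instead.
    ResultSet rs =
        stmt.executeQuery("DESCRIBE FORMATTED v PARTITION (ds='2012-04-18')");
    while (rs.next()) {
      System.out.println(rs.getString(1));
    }
    conn.close();
  }
}
{code}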

I can add a patch, to be included in the release, that removes the need for
this script and removes the script itself.

Kevin Wilfong

On 4/16/12 10:53 AM, "Carl Steinbach" wrote:

>-1
>
>* RELEASE_NOTES.txt needs to be updated.
>
>* The Postgres metastore upgrade scripts should be excluded from the
>tarball since they are not current. There is already a build property that
>handles this ("include.postgres"), which should default to false.
>
>* HIVE-2795 added a python metastore upgrade script located in
>scripts/metastore/upgrade. This script is not mentioned in any of the
>README.txt files, and it's also not clear which version of Python is
>required to run it. Furthermore, the docs in the Python script are not very
>clear, the script probably won't work with Derby, and it references a
>dependency in "trunk/build/dist/lib/py" which is not present in the tarball.
>
>* The lib/ directory contains some new JARs which are not covered in the
>LICENSE file (it's also possible that these are not really required), as
>well as a couple cases of multiple versions of the same JAR. Here are the
>ones that jump out at me, though there are probably some I missed:
>** JavaEWAH-0.3.2
>** antlr-2.7.7 and antlr-3.0.1?
>** commons-codec-1.3 and commons-codec-1.4?
>** commons-logging-1.0.4 and commons-logging-1.1.1?
>** hamcrest-core-1.1
>** hsqldb-1.8.0.10
>** kfs-0.3
>** oro-2.0.8 (This uses v1.1 of the ASF license)
>
>Thanks.
>
>Carl
>
>On Fri, Apr 13, 2012 at 3:28 PM, Ashutosh Chauhan wrote:
>
>> Couple more points:
>>
>> Maven artifacts are available at
>> https://repository.apache.org/content/repositories/orgapachehive-043/
>>for
>> folks to try out.
>>
>> Vote runs for 3 business days so will expire on Wednesday, 4/18.
>>
>> Thanks,
>> Ashutosh
>>
>> On Fri, Apr 13, 2012 at 11:50, Ashutosh Chauhan wrote:
>>
>> > Hey all,
>> >
>> > Apache Hive 0.9.0-rc0 is out and available at
>> > http://people.apache.org/~hashutosh/hive-0.9.0-rc0/
>> >
>> > Release notes are available at:
>> >
>> 
>>https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310843
>>&version=12317742
>> >
>> > Please give it a try, let us know.
>> >
>> > Hive PMC members: Please test and vote.
>> >
>> > Thanks,
>> > Ashutosh
>> >
>>



Re: [VOTE] Apache Hive 0.9.0 Release Candidate 0

2012-04-18 Thread Owen O'Malley
I'm -1 (nonbinding) on rc0 so that we can incorporate HIVE-2930, which
fixes the Apache headers on the source files. (Thanks for committing
that!)

-- Owen


[jira] [Updated] (HIVE-2902) undefined property exists in eclipse-templates/.classpath

2012-04-18 Thread tamtam180 (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tamtam180 updated HIVE-2902:


Status: Patch Available  (was: Open)

Could someone review this patch?

> undefined property exists in eclipse-templates/.classpath
> -
>
> Key: HIVE-2902
> URL: https://issues.apache.org/jira/browse/HIVE-2902
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: tamtam180
>Priority: Minor
> Attachments: HIVE-2902.1.patch.txt
>
>
> @hbase-test.version@ was removed from ivy/libraries.properties in HIVE-2748,
> but the property still exists in eclipse-templates/.classpath.
> {code}
> <classpathentry kind="lib"
>  path="build/ivy/lib/default/hbase-@hbase-test.version@-tests.jar"/>
> {code}
> It should be changed to @hbase.version@

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2957) Hive JDBC doesn't support TIMESTAMP column

2012-04-18 Thread Bharath Ganesh (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharath Ganesh updated HIVE-2957:
-

Fix Version/s: (was: 0.10)
   (was: 0.8.0)

> Hive JDBC doesn't support TIMESTAMP column
> --
>
> Key: HIVE-2957
> URL: https://issues.apache.org/jira/browse/HIVE-2957
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.8.1, 0.9.0
>Reporter: Bharath Ganesh
>Assignee: Bharath Ganesh
>Priority: Minor
> Fix For: 0.9.0
>
> Attachments: HIVE-2957.patch
>
>
> Steps to replicate:
> 1. Create a table with at least one column of type TIMESTAMP
> 2. Do a DatabaseMetaData.getColumns () such that this TIMESTAMP column is 
> part of the resultset.
> 3. When you iterate over the TIMESTAMP column it would fail, throwing the 
> below exception:
> Exception in thread "main" java.sql.SQLException: Unrecognized column type: 
> timestamp
>   at org.apache.hadoop.hive.jdbc.Utils.hiveTypeToSqlType(Utils.java:56)
>   at org.apache.hadoop.hive.jdbc.JdbcColumn.getSqlType(JdbcColumn.java:62)
>   at 
> org.apache.hadoop.hive.jdbc.HiveDatabaseMetaData$2.next(HiveDatabaseMetaData.java:244)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2957) Hive JDBC doesn't support TIMESTAMP column

2012-04-18 Thread Bharath Ganesh (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharath Ganesh updated HIVE-2957:
-

Attachment: HIVE-2957.patch

Added support for the TIMESTAMP column type in the JDBC driver. I am not 100% 
sure of the precision and scale, so please verify. I ran a sanity test and was 
able to retrieve a column with up to nanosecond precision.
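
For context, a rough sketch of the kind of type-mapping change such a fix might 
involve in Utils.hiveTypeToSqlType. This is illustrative only and is not the 
attached HIVE-2957.patch; the non-timestamp cases are assumptions:

{code}
// Illustrative sketch only, not the attached HIVE-2957.patch.
// Assumes a helper that maps Hive type names to java.sql.Types constants,
// as suggested by the Utils.hiveTypeToSqlType frame in the stack trace.
import java.sql.SQLException;
import java.sql.Types;

public class HiveTypeMappingSketch {
  public static int hiveTypeToSqlType(String type) throws SQLException {
    String t = type.toLowerCase();
    if ("string".equals(t)) {
      return Types.VARCHAR;
    } else if ("int".equals(t)) {
      return Types.INTEGER;
    } else if ("double".equals(t)) {
      return Types.DOUBLE;
    } else if ("timestamp".equals(t)) {
      // The missing case that produced "Unrecognized column type: timestamp".
      return Types.TIMESTAMP;
    }
    // ... other Hive types elided for brevity ...
    throw new SQLException("Unrecognized column type: " + type);
  }
}
{code}

Precision and scale for a TIMESTAMP column are reported separately by the 
driver (presumably in JdbcColumn, the other frame in the stack trace), which is 
likely the part that still needs verification.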

> Hive JDBC doesn't support TIMESTAMP column
> --
>
> Key: HIVE-2957
> URL: https://issues.apache.org/jira/browse/HIVE-2957
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.8.1, 0.9.0
>Reporter: Bharath Ganesh
>Assignee: Bharath Ganesh
>Priority: Minor
> Fix For: 0.9.0
>
> Attachments: HIVE-2957.patch
>
>
> Steps to replicate:
> 1. Create a table with at least one column of type TIMESTAMP
> 2. Do a DatabaseMetaData.getColumns () such that this TIMESTAMP column is 
> part of the resultset.
> 3. When you iterate over the TIMESTAMP column it would fail, throwing the 
> below exception:
> Exception in thread "main" java.sql.SQLException: Unrecognized column type: 
> timestamp
>   at org.apache.hadoop.hive.jdbc.Utils.hiveTypeToSqlType(Utils.java:56)
>   at org.apache.hadoop.hive.jdbc.JdbcColumn.getSqlType(JdbcColumn.java:62)
>   at 
> org.apache.hadoop.hive.jdbc.HiveDatabaseMetaData$2.next(HiveDatabaseMetaData.java:244)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2957) Hive JDBC doesn't support TIMESTAMP column

2012-04-18 Thread Bharath Ganesh (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharath Ganesh updated HIVE-2957:
-

Fix Version/s: 0.8.0
   0.10
   0.9.0
   Status: Patch Available  (was: In Progress)

> Hive JDBC doesn't support TIMESTAMP column
> --
>
> Key: HIVE-2957
> URL: https://issues.apache.org/jira/browse/HIVE-2957
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.8.1, 0.9.0
>Reporter: Bharath Ganesh
>Assignee: Bharath Ganesh
>Priority: Minor
> Fix For: 0.9.0, 0.10, 0.8.0
>
>
> Steps to replicate:
> 1. Create a table with at least one column of type TIMESTAMP
> 2. Do a DatabaseMetaData.getColumns () such that this TIMESTAMP column is 
> part of the resultset.
> 3. When you iterate over the TIMESTAMP column it would fail, throwing the 
> below exception:
> Exception in thread "main" java.sql.SQLException: Unrecognized column type: 
> timestamp
>   at org.apache.hadoop.hive.jdbc.Utils.hiveTypeToSqlType(Utils.java:56)
>   at org.apache.hadoop.hive.jdbc.JdbcColumn.getSqlType(JdbcColumn.java:62)
>   at 
> org.apache.hadoop.hive.jdbc.HiveDatabaseMetaData$2.next(HiveDatabaseMetaData.java:244)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2957) Hive JDBC doesn't support TIMESTAMP column

2012-04-18 Thread Bharath Ganesh (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharath Ganesh updated HIVE-2957:
-

Summary: Hive JDBC doesn't support TIMESTAMP column  (was: JDBC 
getColumns() fails on a TIMESTAMP column)

> Hive JDBC doesn't support TIMESTAMP column
> --
>
> Key: HIVE-2957
> URL: https://issues.apache.org/jira/browse/HIVE-2957
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.8.1, 0.9.0
>Reporter: Bharath Ganesh
>Assignee: Bharath Ganesh
>Priority: Minor
>
> Steps to replicate:
> 1. Create a table with at least one column of type TIMESTAMP
> 2. Do a DatabaseMetaData.getColumns () such that this TIMESTAMP column is 
> part of the resultset.
> 3. When you iterate over the TIMESTAMP column it would fail, throwing the 
> below exception:
> Exception in thread "main" java.sql.SQLException: Unrecognized column type: 
> timestamp
>   at org.apache.hadoop.hive.jdbc.Utils.hiveTypeToSqlType(Utils.java:56)
>   at org.apache.hadoop.hive.jdbc.JdbcColumn.getSqlType(JdbcColumn.java:62)
>   at 
> org.apache.hadoop.hive.jdbc.HiveDatabaseMetaData$2.next(HiveDatabaseMetaData.java:244)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira