[jira] [Commented] (HIVE-4173) Hive Ignoring where clause for multitable insert

2013-03-19 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607321#comment-13607321
 ] 

Navis commented on HIVE-4173:
-

HIVE-3699 was about PPD for the multi-insert case and seemingly fixes the 
problem in the description. 
Could you try it with the version in trunk?

> Hive Ignoring where clause for multitable insert
> -
>
> Key: HIVE-4173
> URL: https://issues.apache.org/jira/browse/HIVE-4173
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.8.1, 0.9.0
> Environment: Red Hat Enterprise Linux Server release 6.3 (Santiago),
>Reporter: hussain
>Priority: Critical
>
> Hive ignores the filter conditions given in the multi-insert select 
> statements when a filter is also applied to the source query.
> To highlight this issue, please see the example below: the where clause 
> (status!='C') on the employee12 source table causes the per-insert filters 
> (batch_id='12' and batch_id!='12') to stop working, dumping all the data 
> coming from the source into both tables.
> I have checked the Hive execution plan and did not find filter predicates 
> for the per-insert where clauses.
> from 
> (from employee12
> select * 
> where status!='C') t
> insert into table employee1
> select 
> status,
> field1,
> 'T' as field2,
> 'P' as field3,
> 'C' as field4
> where batch_id='12'
> insert into table employee2
> select
> status,
> field1,
> 'D' as field2, 
> 'P' as field3,
> 'C' as field4
> where batch_id!='12';
> It works fine with a single insert; Hive generates the plan properly.
> I am able to reproduce this issue with Hive 0.8.1 and 0.9.0.
>  
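
To make the reported semantics concrete, here is a minimal Python sketch (not 
Hive code; the sample rows are invented for illustration) of what the 
multi-insert should do: apply the source filter first, then each insert's own 
predicate on top of it.

```python
# Hypothetical in-memory model of the multi-insert in the report above.
# Each row is a dict; employee12 stands in for the source table.
employee12 = [
    {"status": "A", "field1": 1, "batch_id": "12"},
    {"status": "C", "field1": 2, "batch_id": "12"},  # removed by source filter
    {"status": "B", "field1": 3, "batch_id": "34"},
]

# Source query: from employee12 select * where status != 'C'
t = [row for row in employee12 if row["status"] != "C"]

# Per-insert predicates must apply on top of the source filter.
employee1 = [row for row in t if row["batch_id"] == "12"]  # where batch_id='12'
employee2 = [row for row in t if row["batch_id"] != "12"]  # where batch_id!='12'
```

The reported bug is equivalent to dropping the two per-insert predicates, so 
both target tables would receive every row of t.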

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3820) Consider creating a literal like "D" or "BD" for representing Decimal type constants

2013-03-19 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607319#comment-13607319
 ] 

Navis commented on HIVE-3820:
-

I've abandoned HIVE-2586, but I still believe literals for float/double types 
would be useful. Wouldn't they?

> Consider creating a literal like "D" or "BD" for representing Decimal type 
> constants
> 
>
> Key: HIVE-3820
> URL: https://issues.apache.org/jira/browse/HIVE-3820
> Project: Hive
>  Issue Type: Bug
>Reporter: Mark Grover
>Assignee: Gunther Hagleitner
> Attachments: HIVE-3820.1.patch, HIVE-3820.2.patch, 
> HIVE-3820.D8823.1.patch
>
>
> When HIVE-2693 gets committed, users are going to see this behavior:
> {code}
> hive> select cast(3.14 as decimal) from decimal_3 limit 1;
> 3.140124344978758017532527446746826171875
> {code}
> That is intuitively incorrect, but it happens because 3.14 (a double) is 
> converted to BigDecimal, which causes a precision mismatch.
> We should consider creating a new literal for expressing constants of Decimal 
> type as Gunther suggested in HIVE-2693.
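
The precision mismatch described above is easy to reproduce outside Hive; 
Python's decimal module behaves like Java's BigDecimal here (a sketch for 
illustration, not Hive code):

```python
from decimal import Decimal

# Constructing a decimal from the binary double 3.14 keeps the double's
# full expansion, which is not exactly 3.14:
from_double = Decimal(3.14)

# Constructing from the literal text keeps the intended value, which is
# what a decimal literal suffix (e.g. 3.14BD) would let the parser do:
from_literal = Decimal("3.14")
```

The first form prints the long 3.14000000000000012434... expansion shown in 
the issue; the second prints 3.14.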

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4204) Support ellipsis for selecting multiple columns

2013-03-19 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-4204:
--

Attachment: HIVE-4204.D9549.1.patch

navis requested code review of "HIVE-4204 [jira] Support ellipsis for selecting 
multiple columns".

Reviewers: JIRA

HIVE-4204 Support ellipsis for selecting multiple columns

Some UDFs need to take all columns starting from the second or third one, and 
in this case the star argument (HIVE-3490) cannot be used. It is unpleasant to 
specify all of them, especially when the table has many columns. For example,

select some_udtf(a2,a3,a4,a5,a6) as (a2,a3,a4,a5,a6) from table;

can be simplified to

select some_udtf(a2...) as (a2,a3,a4,a5,a6) from table;

If HIVE-2608 were committed, it could be simplified further:

select some_udtf(a2...) from table;

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D9549

AFFECTED FILES
  ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnInfo.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/FromClauseParser.g
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
  ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckCtx.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
  ql/src/test/queries/clientnegative/allcolref_from_tablealias.q
  ql/src/test/queries/clientpositive/allcolref_from.q
  ql/src/test/results/clientnegative/allcolref_from_tablealias.q.out
  ql/src/test/results/clientpositive/allcolref_from.q.out

MANAGE HERALD RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/22845/

To: JIRA, navis


> Support ellipsis for selecting multiple columns
> ---
>
> Key: HIVE-4204
> URL: https://issues.apache.org/jira/browse/HIVE-4204
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Attachments: HIVE-4204.D9549.1.patch
>
>
> Some UDFs need to take all columns starting from the second or third one, and 
> in this case the star argument (HIVE-3490) cannot be used. It is unpleasant 
> to specify all of them, especially when the table has many columns. For 
> example,
> {noformat}
> select some_udtf(a2,a3,a4,a5,a6) as (a2,a3,a4,a5,a6) from table;
> {noformat}
> can be simplified to 
> {noformat}
> select some_udtf(a2...) as (a2,a3,a4,a5,a6) from table;
> {noformat}
> If HIVE-2608 were committed, it could be simplified further:
> {noformat}
> select some_udtf(a2...) from table;
> {noformat}
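
The proposed a2... syntax can be thought of as expanding to the table's column 
list from a given starting column onward. A rough Python sketch of that 
expansion (the function name and schema are illustrative, not the patch's 
actual implementation):

```python
def expand_ellipsis(schema, start_col):
    # 'start_col...' expands to start_col and every column after it,
    # in schema order.
    i = schema.index(start_col)
    return schema[i:]

# Hypothetical table schema matching the example in the description.
schema = ["a1", "a2", "a3", "a4", "a5", "a6"]
```

With this, some_udtf(a2...) would be rewritten to some_udtf(a2,a3,a4,a5,a6).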

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4204) Support ellipsis for selecting multiple columns

2013-03-19 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-4204:


Status: Patch Available  (was: Open)

> Support ellipsis for selecting multiple columns
> ---
>
> Key: HIVE-4204
> URL: https://issues.apache.org/jira/browse/HIVE-4204
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Attachments: HIVE-4204.D9549.1.patch
>
>
> Some UDFs need to take all columns starting from the second or third one, and 
> in this case the star argument (HIVE-3490) cannot be used. It is unpleasant 
> to specify all of them, especially when the table has many columns. For 
> example,
> {noformat}
> select some_udtf(a2,a3,a4,a5,a6) as (a2,a3,a4,a5,a6) from table;
> {noformat}
> can be simplified to 
> {noformat}
> select some_udtf(a2...) as (a2,a3,a4,a5,a6) from table;
> {noformat}
> If HIVE-2608 were committed, it could be simplified further:
> {noformat}
> select some_udtf(a2...) from table;
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4204) Support ellipsis for selecting multiple columns

2013-03-19 Thread Navis (JIRA)
Navis created HIVE-4204:
---

 Summary: Support ellipsis for selecting multiple columns
 Key: HIVE-4204
 URL: https://issues.apache.org/jira/browse/HIVE-4204
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Trivial


Some UDFs need to take all columns starting from the second or third one, and 
in this case the star argument (HIVE-3490) cannot be used. It is unpleasant to 
specify all of them, especially when the table has many columns. For example,
{noformat}
select some_udtf(a2,a3,a4,a5,a6) as (a2,a3,a4,a5,a6) from table;
{noformat}

can be simplified to 
{noformat}
select some_udtf(a2...) as (a2,a3,a4,a5,a6) from table;
{noformat}

If HIVE-2608 were committed, it could be simplified further:
{noformat}
select some_udtf(a2...) from table;
{noformat}



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4146) bug with hive.auto.convert.join.noconditionaltask with outer joins

2013-03-19 Thread Gang Tim Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607307#comment-13607307
 ] 

Gang Tim Liu commented on HIVE-4146:


A very small comment in D9327.

> bug with hive.auto.convert.join.noconditionaltask with outer joins
> --
>
> Key: HIVE-4146
> URL: https://issues.apache.org/jira/browse/HIVE-4146
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Namit Jain
> Attachments: hive.4146.1.patch, hive.4146.2.patch, hive.4146.3.patch, 
> hive.4146.4.patch, hive.4146.5.patch, hive.4146.6.patch
>
>
> Consider the following scenario:
> create table s1 as select * from src where key = 0;
> set hive.auto.convert.join.noconditionaltask=false;   
> 
> SELECT * FROM s1 src1 LEFT OUTER JOIN s1 src2 ON (src1.key = src2.key AND 
> src2.key > 10);
> gives correct results
> 0  val_0  NULL  NULL
> 0  val_0  NULL  NULL
> 0  val_0  NULL  NULL
> whereas it gives no results with hive.auto.convert.join.noconditionaltask set
> to true
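
The expected output follows from standard LEFT OUTER JOIN semantics: a 
predicate in the ON clause only filters which right-side rows match; it never 
removes left-side rows. A small Python sketch of those semantics (illustrative, 
not Hive's join implementation):

```python
def left_outer_join(left, right, on):
    # LEFT OUTER JOIN: every left row appears at least once; when no right
    # row satisfies the ON condition, the right side is NULL (None).
    out = []
    for l in left:
        matches = [r for r in right if on(l, r)]
        if matches:
            out.extend((l, r) for r in matches)
        else:
            out.append((l, None))
    return out

# s1 holds rows with key = 0, as in the scenario above.
s1 = [{"key": 0, "value": "val_0"}] * 3
rows = left_outer_join(s1, s1,
                       lambda l, r: l["key"] == r["key"] and r["key"] > 10)
```

Since src2.key > 10 never holds for key = 0, each of the three left rows pairs 
with NULL, matching the three "0 val_0 NULL NULL" rows above; returning no rows 
at all is the bug.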

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hive-trunk-h0.21 - Build # 2022 - Still Failing

2013-03-19 Thread Apache Jenkins Server
Changes for Build #2012
[hashutosh] HIVE-3862 : testHBaseNegativeCliDriver_cascade_dbdrop fails on 
hadoop-1 (Gunther Hagleitner via Ashutosh Chauhan)


Changes for Build #2013
[kevinwilfong] HIVE-4125. Expose metastore JMX metrics. (Samuel Yuan via 
kevinwilfong)

[hashutosh] HIVE-2935 : Implement HiveServer2 Core code changes  (4th patch of 
4) (Carl Steinbach and others via Ashutosh Chauhan)

[kevinwilfong] HIVE-4096. problem in hive.map.groupby.sorted with distincts. 
(njain via kevinwilfong)

[hashutosh] HIVE-2935 : Implement HiveServer2 Beeline .q.out files (3rd patch 
of 4) (Carl Steinbach and others via Ashutosh Chauhan)

[hashutosh] HIVE-2935 : Implement HiveServer2 Beeline code changes (2nd patch 
of 4) (Carl Steinbach and others via Ashutosh Chauhan)

[hashutosh] HIVE-2935 : Implement HiveServer2 (1st patch of 4) (Carl Steinbach 
and others via Ashutosh Chauhan)

[hashutosh] HIVE-3717 : Hive wont compile with -Dhadoop.mr.rev=20S (Gunther 
Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4148 : Cleanup aisle ivy (Gunther Hagleitner via Ashutosh 
Chauhan)


Changes for Build #2014

Changes for Build #2015

Changes for Build #2016
[ecapriolo] Hive-4141 InspectorFactories use static HashMaps which fail in 
concurrent modification (Brock Noland via egc)

Submitted by: Brock Noland  
Reviewed by: Edward Capriolo
Approved by: Edward Capriolo

[kevinwilfong] HIVE-4176. disable TestBeeLineDriver in ptest util. 
(kevinwilfong reviewed by njain, ashutoshc)

[hashutosh] HIVE-4169 : union_remove_*.q fail on hadoop 2 (Gunther Hagleitner 
via Ashutosh Chauhan)


Changes for Build #2017
[cws] HIVE-4145. Create hcatalog stub directory and add it to the build (Carl 
Steinbach via cws)

[kevinwilfong] HIVE-4162. disable TestBeeLineDriver. (Thejas M Nair via 
kevinwilfong)


Changes for Build #2018

Changes for Build #2019

Changes for Build #2020

Changes for Build #2021

Changes for Build #2022



1 tests failed.
REGRESSION:  
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_stats_aggregator_error_1

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at 
net.sf.antcontrib.logic.ForTask.doSequentialIteration(ForTask.java:259)
at net.sf.antcontrib.logic.ForTask.doToken(ForTask.java:268)
at net.sf.antcontrib.logic.ForTask.doTheTasks(ForTask.java:299)
at net.sf.antcontrib.logic.ForTask.execute(ForTask.java:244)




The Apache Jenkins build system has built Hive-trunk-h0.21 (build #2022)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/2022/ to 
view the results.

[jira] [Commented] (HIVE-4041) Support multiple partitionings in a single Query

2013-03-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607247#comment-13607247
 ] 

Ashutosh Chauhan commented on HIVE-4041:


All the positive tests passed.

> Support multiple partitionings in a single Query
> 
>
> Key: HIVE-4041
> URL: https://issues.apache.org/jira/browse/HIVE-4041
> Project: Hive
>  Issue Type: Bug
>  Components: PTF-Windowing
>Reporter: Harish Butani
>Assignee: Harish Butani
> Attachments: HIVE-4041.D9381.1.patch, HIVE-4041.D9381.2.patch, 
> WindowingComponentization.pdf
>
>
> Currently we disallow queries if the partition specifications of all Wdw fns 
> are not the same. We can relax this by generating multiple PTFOps based on 
> the unique partitionings in a Query. For partitionings that only differ in 
> sort, we can introduce a sort step in between PTFOps, which can happen in the 
> same Reduce task.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4041) Support multiple partitionings in a single Query

2013-03-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607237#comment-13607237
 ] 

Ashutosh Chauhan commented on HIVE-4041:


The following negative tests failed:
* ptf_negative_IncompatibleDistributeClause.q
* ptf_negative_IncompatibleOrderInWindowDefs.q
* ptf_negative_IncompatiblePartitionInWindowDefs.q
* ptf_negative_IncompatibleSortClause.q
* ptf_negative_InvalidValueBoundary.q

> Support multiple partitionings in a single Query
> 
>
> Key: HIVE-4041
> URL: https://issues.apache.org/jira/browse/HIVE-4041
> Project: Hive
>  Issue Type: Bug
>  Components: PTF-Windowing
>Reporter: Harish Butani
>Assignee: Harish Butani
> Attachments: HIVE-4041.D9381.1.patch, HIVE-4041.D9381.2.patch, 
> WindowingComponentization.pdf
>
>
> Currently we disallow queries if the partition specifications of all Wdw fns 
> are not the same. We can relax this by generating multiple PTFOps based on 
> the unique partitionings in a Query. For partitionings that only differ in 
> sort, we can introduce a sort step in between PTFOps, which can happen in the 
> same Reduce task.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HIVE-4202) reuse Partition objects in PTFOperator processing

2013-03-19 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-4202.


Resolution: Fixed

Committed to branch. Thanks, Harish!

> reuse Partition objects in PTFOperator processing
> -
>
> Key: HIVE-4202
> URL: https://issues.apache.org/jira/browse/HIVE-4202
> Project: Hive
>  Issue Type: Bug
>  Components: PTF-Windowing
>Reporter: Harish Butani
>Assignee: Harish Butani
> Attachments: HIVE-4202.D9525.1.patch
>
>
> to improve memory utilization and reduce the number of files and directories 
> created.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3914) use Chinese in hive column comment and table comment

2013-03-19 Thread huang wei (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607203#comment-13607203
 ] 

huang wei commented on HIVE-3914:
-

In Hive 0.9.0, the file DDLTask.java is not like your attachments. Using 
"outStream.writeUTF" instead of "outStream.writeBytes" can't solve it. How did 
you solve it?

> use Chinese in hive column comment and table comment
> 
>
> Key: HIVE-3914
> URL: https://issues.apache.org/jira/browse/HIVE-3914
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.9.0, 0.10.0
>Reporter: caofangkun
>Priority: Minor
> Attachments: HIVE-3914-1.patch
>
>
> Chinese is used in Hive column comments and table comments, and the metadata 
> in MySQL is correct: the charset of the 'COMMENT' column in the 'columns_v2' 
> table and of the 'PARAM_VALUE' column in the 'table_params' table is 'utf8'.
> When I run 'select * from columns_v2' with the mysql client, the Chinese 
> comments display normally. But when I execute 'describe table' with the Hive 
> CLI, the Chinese words are garbled.
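
The garbling pattern described above is classic charset mojibake: the comments 
are stored as UTF-8 in MySQL, but somewhere on the describe path the bytes are 
written or read with a byte-oriented encoding. A Python illustration of the 
effect (the latin-1 step is an assumption about the failure mode, not a trace 
of Hive's actual code path):

```python
comment = "列注释"  # a Chinese column comment

# Correct round trip: encode and decode as UTF-8 (what writeUTF-style
# handling preserves).
ok = comment.encode("utf-8").decode("utf-8")

# Mojibake: the same UTF-8 bytes interpreted as a single-byte charset
# (the kind of damage byte-oriented writes like writeBytes can cause
# downstream).
garbled = comment.encode("utf-8").decode("latin-1")
```

The garbled text is recoverable only as long as no byte was actually dropped; 
Java's DataOutputStream.writeBytes discards the high byte of each char, which 
destroys the data rather than merely mislabeling it.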

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4202) reuse Partition objects in PTFOperator processing

2013-03-19 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607190#comment-13607190
 ] 

Phabricator commented on HIVE-4202:
---

ashutoshc has accepted the revision "HIVE-4202 [jira] reuse Partition objects 
in PTFOperator processing".

  Cool trick. The query select s, avg(i) over (partition by d, b) from over100k; 
has ~96K unique values for (b,d). It used to take ~35 minutes and now finishes 
in less than 3 minutes with this patch. Running tests now; will commit if tests 
pass.

REVISION DETAIL
  https://reviews.facebook.net/D9525

BRANCH
  partition-reuse

ARCANIST PROJECT
  hive

To: JIRA, ashutoshc, hbutani


> reuse Partition objects in PTFOperator processing
> -
>
> Key: HIVE-4202
> URL: https://issues.apache.org/jira/browse/HIVE-4202
> Project: Hive
>  Issue Type: Bug
>  Components: PTF-Windowing
>Reporter: Harish Butani
>Assignee: Harish Butani
> Attachments: HIVE-4202.D9525.1.patch
>
>
> to improve memory utilization and reduce the number of files and directories 
> created.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4139) MiniDFS shim does not work for hadoop 2

2013-03-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607171#comment-13607171
 ] 

Ashutosh Chauhan commented on HIVE-4139:


ok. running tests

> MiniDFS shim does not work for hadoop 2
> ---
>
> Key: HIVE-4139
> URL: https://issues.apache.org/jira/browse/HIVE-4139
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-4139.1.patch, HIVE-4139.2.patch, HIVE-4139.3.patch, 
> HIVE-4139.4.patch
>
>
> There's an incompatibility between Hadoop 1 and 2 with respect to the 
> MiniDfsCluster class. That causes the Hadoop 2 line MiniMR tests to fail with 
> a "MethodNotFound" exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3938) Hive MetaStore should send a single AddPartitionEvent for atomically added partition-set.

2013-03-19 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607162#comment-13607162
 ] 

Sushanth Sowmyan commented on HIVE-3938:


Looks good to me. +1 (non-binding)

One thing I will note, however, for the sake of completeness, is that this is a 
behaviour change: taking the common table out of the partitions means that 
add_partitions will only work if all the partitions in parts belong to the same 
table. The earlier implementation allowed a mixed group of partitions. I think 
this is better, though; atomically adding arbitrary groups of unrelated 
partitions is a recipe for other usage problems.
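
The behaviour change discussed here can be sketched as follows: instead of 
firing one event per partition, add_partitions validates that every partition 
belongs to the same table and fires a single event for the whole atomic batch 
(a Python sketch with invented names, not the actual metastore code):

```python
def add_partitions(parts, fire_event):
    # All partitions must belong to the same (db, table); otherwise reject,
    # since an atomic batch spanning tables invites usage problems.
    tables = {(p["db"], p["table"]) for p in parts}
    if len(tables) != 1:
        raise ValueError("add_partitions requires a single table per call")
    # One AddPartitionEvent-like notification for the whole atomic set.
    fire_event({"table": tables.pop(), "partitions": parts})

events = []
add_partitions(
    [{"db": "d", "table": "t", "spec": "ds=1"},
     {"db": "d", "table": "t", "spec": "ds=2"}],
    events.append,
)
```

Listeners then consume the atomically added set together, instead of seeing 
one event per partition.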

> Hive MetaStore should send a single AddPartitionEvent for atomically added 
> partition-set.
> -
>
> Key: HIVE-3938
> URL: https://issues.apache.org/jira/browse/HIVE-3938
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.10.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-3938.trunk.patch
>
>
> HiveMetaStore::add_partitions() currently adds all partitions specified in 
> one call using a single meta-store transaction. This acts correctly. However, 
> there's one AddPartitionEvent created per partition specified.
> Ideally, the set of partitions added atomically can be communicated using a 
> single AddPartitionEvent, such that they are consumed together.
> I'll post a patch that does this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4015) Add ORC file to the grammar as a file format

2013-03-19 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607081#comment-13607081
 ] 

Kevin Wilfong commented on HIVE-4015:
-

Yes +1

> Add ORC file to the grammar as a file format
> 
>
> Key: HIVE-4015
> URL: https://issues.apache.org/jira/browse/HIVE-4015
> Project: Hive
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>Assignee: Gunther Hagleitner
> Attachments: HIVE-4015.1.patch, HIVE-4015.2.patch, HIVE-4015.3.patch, 
> HIVE-4015.4.patch, HIVE-4015.5.patch
>
>
> It would be much more convenient for users if we enable them to use ORC as a 
> file format in the HQL grammar. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4015) Add ORC file to the grammar as a file format

2013-03-19 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607062#comment-13607062
 ] 

Gunther Hagleitner commented on HIVE-4015:
--

[~kevinwilfong] Does Owen's comment and the new test address your concerns?

> Add ORC file to the grammar as a file format
> 
>
> Key: HIVE-4015
> URL: https://issues.apache.org/jira/browse/HIVE-4015
> Project: Hive
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>Assignee: Gunther Hagleitner
> Attachments: HIVE-4015.1.patch, HIVE-4015.2.patch, HIVE-4015.3.patch, 
> HIVE-4015.4.patch, HIVE-4015.5.patch
>
>
> It would be much more convenient for users if we enable them to use ORC as a 
> file format in the HQL grammar. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3820) Consider creating a literal like "D" or "BD" for representing Decimal type constants

2013-03-19 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607061#comment-13607061
 ] 

Gunther Hagleitner commented on HIVE-3820:
--

[~ashutoshc] bd is probably a little less confusing. I've updated the patch 
accordingly.

> Consider creating a literal like "D" or "BD" for representing Decimal type 
> constants
> 
>
> Key: HIVE-3820
> URL: https://issues.apache.org/jira/browse/HIVE-3820
> Project: Hive
>  Issue Type: Bug
>Reporter: Mark Grover
>Assignee: Gunther Hagleitner
> Attachments: HIVE-3820.1.patch, HIVE-3820.2.patch, 
> HIVE-3820.D8823.1.patch
>
>
> When HIVE-2693 gets committed, users are going to see this behavior:
> {code}
> hive> select cast(3.14 as decimal) from decimal_3 limit 1;
> 3.140124344978758017532527446746826171875
> {code}
> That is intuitively incorrect, but it happens because 3.14 (a double) is 
> converted to BigDecimal, which causes a precision mismatch.
> We should consider creating a new literal for expressing constants of Decimal 
> type as Gunther suggested in HIVE-2693.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3820) Consider creating a literal like "D" or "BD" for representing Decimal type constants

2013-03-19 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-3820:
-

Attachment: HIVE-3820.2.patch

.2 has BD as the ending for decimal literals.

> Consider creating a literal like "D" or "BD" for representing Decimal type 
> constants
> 
>
> Key: HIVE-3820
> URL: https://issues.apache.org/jira/browse/HIVE-3820
> Project: Hive
>  Issue Type: Bug
>Reporter: Mark Grover
>Assignee: Gunther Hagleitner
> Attachments: HIVE-3820.1.patch, HIVE-3820.2.patch, 
> HIVE-3820.D8823.1.patch
>
>
> When HIVE-2693 gets committed, users are going to see this behavior:
> {code}
> hive> select cast(3.14 as decimal) from decimal_3 limit 1;
> 3.140124344978758017532527446746826171875
> {code}
> That is intuitively incorrect, but it happens because 3.14 (a double) is 
> converted to BigDecimal, which causes a precision mismatch.
> We should consider creating a new literal for expressing constants of Decimal 
> type as Gunther suggested in HIVE-2693.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4154) NPE reading column of empty string from ORC file

2013-03-19 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607041#comment-13607041
 ] 

Namit Jain commented on HIVE-4154:
--

+1

> NPE reading column of empty string from ORC file
> 
>
> Key: HIVE-4154
> URL: https://issues.apache.org/jira/browse/HIVE-4154
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 0.11.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Attachments: HIVE-4154.1.patch.txt, HIVE-4154.2.patch.txt
>
>
> If a String column contains only empty strings, a null pointer exception is 
> thrown from the RecordReaderImpl for ORC.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4154) NPE reading column of empty string from ORC file

2013-03-19 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-4154:
-

   Resolution: Fixed
Fix Version/s: 0.11.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed. Thanks Kevin

> NPE reading column of empty string from ORC file
> 
>
> Key: HIVE-4154
> URL: https://issues.apache.org/jira/browse/HIVE-4154
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 0.11.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Fix For: 0.11.0
>
> Attachments: HIVE-4154.1.patch.txt, HIVE-4154.2.patch.txt
>
>
> If a String column contains only empty strings, a null pointer exception is 
> thrown from the RecordReaderImpl for ORC.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4146) bug with hive.auto.convert.join.noconditionaltask with outer joins

2013-03-19 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-4146:
-

Status: Patch Available  (was: Open)

> bug with hive.auto.convert.join.noconditionaltask with outer joins
> --
>
> Key: HIVE-4146
> URL: https://issues.apache.org/jira/browse/HIVE-4146
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Namit Jain
> Attachments: hive.4146.1.patch, hive.4146.2.patch, hive.4146.3.patch, 
> hive.4146.4.patch, hive.4146.5.patch, hive.4146.6.patch
>
>
> Consider the following scenario:
> create table s1 as select * from src where key = 0;
> set hive.auto.convert.join.noconditionaltask=false;   
> 
> SELECT * FROM s1 src1 LEFT OUTER JOIN s1 src2 ON (src1.key = src2.key AND 
> src2.key > 10);
> gives correct results
> 0  val_0  NULL  NULL
> 0  val_0  NULL  NULL
> 0  val_0  NULL  NULL
> whereas it gives no results with hive.auto.convert.join.noconditionaltask set
> to true

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4139) MiniDFS shim does not work for hadoop 2

2013-03-19 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607038#comment-13607038
 ] 

Gunther Hagleitner commented on HIVE-4139:
--

I think this is ready to go in and can go first.

> MiniDFS shim does not work for hadoop 2
> ---
>
> Key: HIVE-4139
> URL: https://issues.apache.org/jira/browse/HIVE-4139
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-4139.1.patch, HIVE-4139.2.patch, HIVE-4139.3.patch, 
> HIVE-4139.4.patch
>
>
> There's an incompatibility between Hadoop 1 and 2 with respect to the 
> MiniDfsCluster class. That causes the Hadoop 2 line MiniMR tests to fail with 
> a "MethodNotFound" exception.

--


[jira] [Commented] (HIVE-4146) bug with hive.auto.convert.join.noconditionaltask with outer joins

2013-03-19 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607037#comment-13607037
 ] 

Namit Jain commented on HIVE-4146:
--

All tests passed.

> bug with hive.auto.convert.join.noconditionaltask with outer joins
> --
>
> Key: HIVE-4146
> URL: https://issues.apache.org/jira/browse/HIVE-4146
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Namit Jain
> Attachments: hive.4146.1.patch, hive.4146.2.patch, hive.4146.3.patch, 
> hive.4146.4.patch, hive.4146.5.patch, hive.4146.6.patch
>
>
> Consider the following scenario:
> create table s1 as select * from src where key = 0;
> set hive.auto.convert.join.noconditionaltask=false;   
> 
> SELECT * FROM s1 src1 LEFT OUTER JOIN s1 src2 ON (src1.key = src2.key AND 
> src2.key > 10);
> gives correct results
> 0  val_0  NULL  NULL
> 0  val_0  NULL  NULL
> 0  val_0  NULL  NULL
> whereas it gives no results with hive.auto.convert.join.noconditionaltask set
> to true

--


[jira] [Updated] (HIVE-4139) MiniDFS shim does not work for hadoop 2

2013-03-19 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-4139:
-

Attachment: HIVE-4139.4.patch

Rebased. No changes from .3 other than resolved conflict in build-common.xml.

> MiniDFS shim does not work for hadoop 2
> ---
>
> Key: HIVE-4139
> URL: https://issues.apache.org/jira/browse/HIVE-4139
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-4139.1.patch, HIVE-4139.2.patch, HIVE-4139.3.patch, 
> HIVE-4139.4.patch
>
>
> There's an incompatibility between hadoop 1 & 2 with respect to the 
> MiniDfsCluster class. That causes the hadoop 2 line Minimr tests to fail with 
> a "MethodNotFound" exception.

--


[jira] [Commented] (HIVE-4196) Support for Streaming Partitions in Hive

2013-03-19 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606960#comment-13606960
 ] 

Brock Noland commented on HIVE-4196:


Hi Roshan,

Looks like a good proposal and a great place for Flume to integrate with Hive!  
In the proposal, why do the clients use webhdfs to write a chunk of 
data? Couldn't the client use any HDFS API?

Brock

> Support for Streaming Partitions in Hive
> 
>
> Key: HIVE-4196
> URL: https://issues.apache.org/jira/browse/HIVE-4196
> Project: Hive
>  Issue Type: New Feature
>  Components: Database/Schema, HCatalog
>Affects Versions: 0.10.1
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>
> Motivation: Allow Hive users to immediately query data streaming in through 
> clients such as Flume.
> Currently Hive partitions must be created after all the data for the 
> partition is available. Thereafter, data in the partitions is considered 
> immutable. 
> This proposal introduces the notion of a streaming partition into which new 
> files can be committed periodically and made available for queries before the 
> partition is closed and converted into a standard partition.
> The admin enables a streaming partition on a table using DDL, providing the 
> following pieces of information:
> - Name of the partition in the table on which streaming is enabled
> - Frequency at which the streaming partition should be closed and converted 
> into a standard partition.
> Tables with streaming partition enabled will be partitioned by one and only 
> one column. It is assumed that this column will contain a timestamp.
> Closing the current streaming partition converts it into a standard 
> partition. Based on the specified frequency, the current streaming partition  
> is closed and a new one created for future writes. This is referred to as 
> 'rolling the partition'.
> A streaming partition's life cycle is as follows:
>  - A new streaming partition is instantiated for writes
>  - Streaming clients request (via webhcat) for a HDFS file name into which 
> they can write a chunk of records for a specific table.
>  - Streaming clients write a chunk (via webhdfs) to that file and commit it 
> (via webhcat). Committing merely indicates that the chunk has been written 
> completely and is ready for serving queries.
>  - When the partition is rolled, all committed chunks are swept into a single 
> directory and a standard partition pointing to that directory is created. The 
> streaming partition is closed and a new streaming partition is created. Rolling 
> the partition is atomic. Streaming clients are agnostic of partition rolling. 
>  
>  - Hive queries will be able to query the partition that is currently open 
> for streaming. Only committed chunks will be visible. Read consistency will 
> be ensured so that repeated reads of the same partition will be idempotent 
> for the lifespan of the query.
> Partition rolling requires an active agent/thread running to check when it is 
> time to roll and trigger the roll. This could be achieved by using either 
> an external agent such as Oozie (preferably) or an internal agent.
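The life cycle above can be sketched as a small simulation (hypothetical class and method names, not the actual HCatalog API): chunks become query-visible only on commit, and rolling sweeps the committed chunks into a standard partition while a fresh streaming partition opens for writes.

```python
# Minimal sketch of the streaming-partition life cycle described above.
# All names here are hypothetical; the real proposal works through
# webhcat/webhdfs rather than an in-process object.
class StreamingPartition:
    def __init__(self, name):
        self.name = name
        self.open_chunks = set()   # handed out, not yet committed
        self.committed = []        # committed, visible to queries

    def request_chunk(self, chunk_id):
        self.open_chunks.add(chunk_id)

    def commit(self, chunk_id):
        # Commit only marks the chunk as completely written and queryable.
        self.open_chunks.discard(chunk_id)
        self.committed.append(chunk_id)

    def visible_chunks(self):
        return list(self.committed)  # queries never see uncommitted chunks

def roll(current, next_name):
    """Atomically convert the open partition into a standard partition
    and open a new streaming partition for future writes."""
    standard = (current.name, current.visible_chunks())
    return standard, StreamingPartition(next_name)

p = StreamingPartition("ts=2013-03-19-00")
p.request_chunk("c1"); p.commit("c1")
p.request_chunk("c2")  # written but never committed
standard, p2 = roll(p, "ts=2013-03-19-01")
# standard holds only the committed chunk "c1"; "c2" stays invisible.
```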

--


[jira] [Commented] (HIVE-4203) column AND 1 expression causes internal error

2013-03-19 Thread Eric Hanson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606884#comment-13606884
 ] 

Eric Hanson commented on HIVE-4203:
---

"select a OR 1 from t" produces a similar error

> column AND 1 expression causes internal error
> -
>
> Key: HIVE-4203
> URL: https://issues.apache.org/jira/browse/HIVE-4203
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.9.0
> Environment: Windows 8, hive-monarch development environment, 
> HDInsight
>Reporter: Eric Hanson
>Priority: Minor
>
> create table t(a int);
> select a AND 1 from t;
> expected result: query runs or produces an error message for type 
> compatibility issues.
> actual result:
> FAILED: Hive Internal Error: 
> java.lang.ClassCastException(org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector 
> cannot be cast to 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector)
> java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector 
> cannot be cast to 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector
> at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd.initialize(GenericUDFOPAnd.java:44)
> at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDF.initializeAndFoldConstants(GenericUDF.java:98)
> at 
> org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:214)
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:767)
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:888)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:88)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:125)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:102)
> at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:165)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7755)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2310)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2112)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:6165)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:6136)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:6762)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7531)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:244)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:433)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:338)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:913)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:699)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:563)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
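The trace suggests the operand types are never validated before GenericUDFOPAnd downcasts the argument ObjectInspectors, so an int operand surfaces as an internal cast failure rather than a semantic error. A rough Python sketch of the distinction (hypothetical names, not Hive code):

```python
# Hypothetical inspector types standing in for Hive's ObjectInspectors.
class IntInspector: pass
class BooleanInspector: pass

def init_and_checked(left, right):
    # Validating operand types up front turns the internal cast failure
    # into the user-facing type-compatibility error the reporter expected.
    for arg in (left, right):
        if not isinstance(arg, BooleanInspector):
            raise TypeError("AND requires boolean operands, got %s"
                            % type(arg).__name__)
    return True  # initialization would proceed from here

try:
    init_and_checked(IntInspector(), BooleanInspector())
    msg = None
except TypeError as e:
    msg = str(e)
# msg == "AND requires boolean operands, got IntInspector"
```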

--


[jira] [Created] (HIVE-4203) column AND 1 expression causes internal error

2013-03-19 Thread Eric Hanson (JIRA)
Eric Hanson created HIVE-4203:
-

 Summary: column AND 1 expression causes internal error
 Key: HIVE-4203
 URL: https://issues.apache.org/jira/browse/HIVE-4203
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0
 Environment: Windows 8, hive-monarch development environment, HDInsight
Reporter: Eric Hanson
Priority: Minor


create table t(a int);
select a AND 1 from t;

expected result: query runs or produces an error message for type compatibility 
issues.

actual result:


FAILED: Hive Internal Error: 
java.lang.ClassCastException(org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector 
cannot be cast to 
org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector)
java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector 
cannot be cast to 
org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd.initialize(GenericUDFOPAnd.java:44)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDF.initializeAndFoldConstants(GenericUDF.java:98)
at 
org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:214)
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:767)
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:888)
at 
org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:88)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:125)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:102)
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:165)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7755)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2310)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2112)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:6165)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:6136)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:6762)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7531)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:244)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:433)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:338)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:913)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:699)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:563)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

--


[jira] [Commented] (HIVE-2742) InvalidOperationException "alter table is not possible" when using LOAD DATA INPATH OVERWRITE with database and partition

2013-03-19 Thread Tim Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606859#comment-13606859
 ] 

Tim Goodman commented on HIVE-2742:
---

I had a similar issue using

INSERT OVERWRITE TABLE [mySchema].[myTable] PARTITION([myPartition]) SELECT 
...

It works the first time, and then fails the second time, presumably because in 
the latter case it has to alter the existing partition.

But as a workaround, do 

USE [mySchema]; 

before the overwrite, and then it works.

It is similar to the fact that you can't do 

ALTER TABLE [mySchema].[myTable] ...

but instead must do

USE [mySchema]; ALTER TABLE [myTable] ...

> InvalidOperationException "alter table is not possible" when using LOAD DATA 
> INPATH OVERWRITE with database and partition
> -
>
> Key: HIVE-2742
> URL: https://issues.apache.org/jira/browse/HIVE-2742
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema, Metastore, Query Processor
>Affects Versions: 0.7.1
> Environment: reproduced on cdh3u2 (haven't tried other versions)
>Reporter: Maxime Brugidou
>
> Here is a repeatable procedure:
> {code}
> $ echo "test" | hadoop fs -put - test.txt
> $ echo "test2" | hadoop fs -put - test2.txt
> {code}
> Then in hive:
> {code}
> > create database catalog;
> > use catalog;
> > create table test_load (t string) partitioned by (p string);
> > use default;
> {code}
> Then the problem arises:
> {code}
> > load data inpath 'test.txt' overwrite into table catalog.test_load 
> > partition (p='test');
> Loading data to table catalog.test_load partition (p=test)
> OK
> Time taken: 0.175 seconds
> > load data inpath 'test2.txt' overwrite into table catalog.test_load 
> > partition (p='test');
> Loading data to table catalog.test_load partition (p=test)
> Moved to trash: 
> hdfs://mycluster/user/hive/warehouse/catalog.db/test_load/p=test
> Failed with exception InvalidOperationException(message:alter is not possible)
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask
> {code}

--


[jira] [Resolved] (HIVE-4186) NPE in ReduceSinkDeDuplication

2013-03-19 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-4186.


   Resolution: Fixed
Fix Version/s: 0.11.0

Committed to trunk. Thanks, Harish!

> NPE in ReduceSinkDeDuplication
> --
>
> Key: HIVE-4186
> URL: https://issues.apache.org/jira/browse/HIVE-4186
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 0.11.0
>
> Attachments: HIVE-4186.1.patch.txt, HIVE-4186.2.patch.txt, 
> HIVE-4186.3.patch.txt
>
>
> When you have a sequence of ReduceSinks on constants you get this error:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.optimizer.ReduceSinkDeDuplication$ReduceSinkDeduplicateProcFactory$ReducerReducerProc.getPartitionAndKeyColumnMapping(ReduceSinkDeDuplication.java:416)
> {noformat}
> The example that generates this is:
> {noformat}
> select p_name from (select p_name from part distribute by 1 sort by 1) p 
> distribute by 1 sort by 1
> {noformat}
> Sorry for the contrived example, but this actually happens when we stack 
> windowing clauses (see the PTF-Windowing branch).

--


[jira] [Commented] (HIVE-4187) QL build-grammar target fails after HIVE-4148

2013-03-19 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606591#comment-13606591
 ] 

Carl Steinbach commented on HIVE-4187:
--

+1. Will commit if tests pass.

> QL build-grammar target fails after HIVE-4148
> -
>
> Key: HIVE-4187
> URL: https://issues.apache.org/jira/browse/HIVE-4187
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: Carl Steinbach
>Assignee: Gunther Hagleitner
>Priority: Critical
> Attachments: HIVE-4187.1.patch, HIVE-4187.2.patch
>
>


--


[jira] [Updated] (HIVE-4202) reuse Partition objects in PTFOperator processing

2013-03-19 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-4202:
--

Attachment: HIVE-4202.D9525.1.patch

hbutani requested code review of "HIVE-4202 [jira] reuse Partition objects in 
PTFOperator processing".

Reviewers: JIRA

Reuse PTFPartition, BytebasedList, and the underlying byte array during 
execution to reduce the memory footprint and filesystem overhead, improving 
memory utilization and reducing the number of files and directories created.

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D9525

AFFECTED FILES
  data/files/flights_tiny.txt
  data/files/part.rc
  data/files/part.seq
  ql/src/java/org/apache/hadoop/hive/ql/exec/PTFOperator.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/PTFPartition.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/PTFPersistence.java
  ql/src/java/org/apache/hadoop/hive/ql/udf/ptf/TableFunctionEvaluator.java
  ql/src/java/org/apache/hadoop/hive/ql/udf/ptf/WindowingTableFunction.java

MANAGE HERALD RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/22791/

To: JIRA, hbutani


> reuse Partition objects in PTFOperator processing
> -
>
> Key: HIVE-4202
> URL: https://issues.apache.org/jira/browse/HIVE-4202
> Project: Hive
>  Issue Type: Bug
>  Components: PTF-Windowing
>Reporter: Harish Butani
>Assignee: Harish Butani
> Attachments: HIVE-4202.D9525.1.patch
>
>
> to improve memory utilization and reduce the number of files and directories 
> created.

--


[jira] [Created] (HIVE-4202) reuse Partition objects in PTFOperator processing

2013-03-19 Thread Harish Butani (JIRA)
Harish Butani created HIVE-4202:
---

 Summary: reuse Partition objects in PTFOperator processing
 Key: HIVE-4202
 URL: https://issues.apache.org/jira/browse/HIVE-4202
 Project: Hive
  Issue Type: Bug
  Components: PTF-Windowing
Reporter: Harish Butani
Assignee: Harish Butani


to improve memory utilization and reduce the number of files and directories 
created.

--


[jira] [Commented] (HIVE-3938) Hive MetaStore should send a single AddPartitionEvent for atomically added partition-set.

2013-03-19 Thread Mithun Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606577#comment-13606577
 ] 

Mithun Radhakrishnan commented on HIVE-3938:


Thanks for stepping up, Dilip. Here's the review link: 
https://reviews.facebook.net/D9519

> Hive MetaStore should send a single AddPartitionEvent for atomically added 
> partition-set.
> -
>
> Key: HIVE-3938
> URL: https://issues.apache.org/jira/browse/HIVE-3938
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.10.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-3938.trunk.patch
>
>
> HiveMetaStore::add_partitions() currently adds all partitions specified in 
> one call using a single meta-store transaction. This acts correctly. However, 
> there's one AddPartitionEvent created per partition specified.
> Ideally, the set of partitions added atomically can be communicated using a 
> single AddPartitionEvent, such that they are consumed together.
> I'll post a patch that does this.

--


[jira] [Commented] (HIVE-4041) Support multiple partitionings in a single Query

2013-03-19 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606457#comment-13606457
 ] 

Harish Butani commented on HIVE-4041:
-

Thanks Ashutosh. I have
- removed change to ReduceSinkDeDuplication
- added comments to WindowingComponentizer.

> Support multiple partitionings in a single Query
> 
>
> Key: HIVE-4041
> URL: https://issues.apache.org/jira/browse/HIVE-4041
> Project: Hive
>  Issue Type: Bug
>  Components: PTF-Windowing
>Reporter: Harish Butani
>Assignee: Harish Butani
> Attachments: HIVE-4041.D9381.1.patch, HIVE-4041.D9381.2.patch, 
> WindowingComponentization.pdf
>
>
> Currently we disallow queries if the partition specifications of all window functions 
> are not the same. We can relax this by generating multiple PTFOps based on 
> the unique partitionings in a Query. For partitionings that only differ in 
> sort, we can introduce a sort step in between PTFOps, which can happen in the 
> same Reduce task.

--


[jira] [Updated] (HIVE-4041) Support multiple partitionings in a single Query

2013-03-19 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-4041:
--

Attachment: HIVE-4041.D9381.2.patch

hbutani updated the revision "HIVE-4041 [jira] Support multiple partitionings 
in a single Query".

- Merge branch 'ptf' into HIVE-4041
- updates for 4041

Reviewers: ashutoshc, JIRA

REVISION DETAIL
  https://reviews.facebook.net/D9381

CHANGE SINCE LAST DIFF
  https://reviews.facebook.net/D9381?vs=29775&id=30057#toc

BRANCH
  HIVE-4041

ARCANIST PROJECT
  hive

AFFECTED FILES
  data/files/flights_tiny.txt
  data/files/part.rc
  data/files/part.seq
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPrunerProcFactory.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/PTFInvocationSpec.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/PTFTranslator.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/WindowingComponentizer.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/WindowingSpec.java
  ql/src/test/queries/clientpositive/windowing_multipartitioning.q
  ql/src/test/results/clientpositive/windowing_multipartitioning.q.out

To: JIRA, ashutoshc, hbutani


> Support multiple partitionings in a single Query
> 
>
> Key: HIVE-4041
> URL: https://issues.apache.org/jira/browse/HIVE-4041
> Project: Hive
>  Issue Type: Bug
>  Components: PTF-Windowing
>Reporter: Harish Butani
>Assignee: Harish Butani
> Attachments: HIVE-4041.D9381.1.patch, HIVE-4041.D9381.2.patch, 
> WindowingComponentization.pdf
>
>
> Currently we disallow queries if the partition specifications of all window functions 
> are not the same. We can relax this by generating multiple PTFOps based on 
> the unique partitionings in a Query. For partitionings that only differ in 
> sort, we can introduce a sort step in between PTFOps, which can happen in the 
> same Reduce task.

--


[jira] [Updated] (HIVE-4168) remove package-info.java from svn

2013-03-19 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-4168:


Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

Yes, you are right. It was a problem with my local git clone repo. Sorry about 
the false alarm. Resolving as 'Not A Problem'.


> remove package-info.java from svn
> -
>
> Key: HIVE-4168
> URL: https://issues.apache.org/jira/browse/HIVE-4168
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.11.0
>
> Attachments: HIVE-4168.1.patch
>
>
> common/src/gen/org/apache/hive/common/package-info.java is autogenerated 
> during compile (by saveVersion.sh). 
> Looks like this was unintentionally checked-in. As the file includes 
> timestamps and checksums, after a compile it shows up as a source code change.
> We should delete this file from the svn repo.

--


Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21 #325

2013-03-19 Thread Apache Jenkins Server
See 

--
[...truncated 5828 lines...]
[ivy:resolve]   [SUCCESSFUL ] 
commons-beanutils#commons-beanutils;1.7.0!commons-beanutils.jar (14ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/mortbay/jetty/servlet-api/2.5-20081211/servlet-api-2.5-20081211.jar
 ...
[ivy:resolve] .. (130kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.mortbay.jetty#servlet-api;2.5-20081211!servlet-api.jar (198ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/mortbay/jetty/servlet-api-2.5/6.1.14/servlet-api-2.5-6.1.14.jar
 ...
[ivy:resolve]  (129kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.mortbay.jetty#servlet-api-2.5;6.1.14!servlet-api-2.5.jar (17ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/ant/ant/1.6.5/ant-1.6.5.jar ...
[ivy:resolve] 
..
 (1009kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] ant#ant;1.6.5!ant.jar (37ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/codehaus/jackson/jackson-core-asl/1.0.1/jackson-core-asl-1.0.1.jar
 ...
[ivy:resolve] .. (132kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.codehaus.jackson#jackson-core-asl;1.0.1!jackson-core-asl.jar (382ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/ftpserver/ftplet-api/1.0.0/ftplet-api-1.0.0.jar
 ...
[ivy:resolve] ... (22kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.ftpserver#ftplet-api;1.0.0!ftplet-api.jar(bundle) (12ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/mina/mina-core/2.0.0-M5/mina-core-2.0.0-M5.jar
 ...
[ivy:resolve] 

 (622kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.mina#mina-core;2.0.0-M5!mina-core.jar(bundle) (24ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/ftpserver/ftpserver-core/1.0.0/ftpserver-core-1.0.0.jar
 ...
[ivy:resolve] .. (264kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.ftpserver#ftpserver-core;1.0.0!ftpserver-core.jar(bundle) (32ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/ftpserver/ftpserver-deprecated/1.0.0-M2/ftpserver-deprecated-1.0.0-M2.jar
 ...
[ivy:resolve] .. (31kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.ftpserver#ftpserver-deprecated;1.0.0-M2!ftpserver-deprecated.jar 
(10ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/slf4j/slf4j-api/1.5.2/slf4j-api-1.5.2.jar ...
[ivy:resolve] .. (16kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] org.slf4j#slf4j-api;1.5.2!slf4j-api.jar (13ms)
[ivy:resolve] 
[ivy:resolve] :: problems summary ::
[ivy:resolve]  WARNINGS
[ivy:resolve]   impossible to put metadata file in cache: 
http://repo1.maven.org/maven2/org/codehaus/jackson/jackson-mapper-asl/1.0.1/jackson-mapper-asl-1.0.1.pom
 (1.0.1). java.io.FileNotFoundException: 
/home/jenkins/.ivy2/cache/org.codehaus.jackson/jackson-mapper-asl/ivy-1.0.1.xml.original
 (No such file or directory)
[ivy:resolve] 
[ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

ivy-retrieve-hadoop-shim:
 [echo] Project: shims
[javac] Compiling 13 source files to 

[javac] Note: 

 uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: 

 uses unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
 [echo] Building shims 0.23

build_shims:
 [echo] Project: shims
 [echo] Compiling 

 against hadoop 0.23.3 
(

ivy-init-settings:
 [echo] Project: shims

ivy-resolve-hadoop-shim:
 [echo] Project: shims
[ivy:resolve] :: loading settings :: file = 

[

Build failed in Jenkins: Hive-0.10.0-SNAPSHOT-h0.20.1 #98

2013-03-19 Thread Apache Jenkins Server
See 

--
[...truncated 6367 lines...]
[javac] Note: 

 uses unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
 [echo] Building shims 0.23

build-shims:
 [echo] Project: shims
 [echo] Compiling 

 against hadoop 2.0.0-alpha 
(

ivy-init-settings:
 [echo] Project: shims

ivy-resolve-hadoop-shim:
 [echo] Project: shims
[ivy:resolve] :: loading settings :: file = 

[ivy:resolve] downloading 
http://repo1.maven.org/maven2/com/google/guava/guava/11.0.2/guava-11.0.2.jar ...
[ivy:resolve] 

 (1609kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] com.google.guava#guava;11.0.2!guava.jar (59ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/2.0.0-alpha/hadoop-common-2.0.0-alpha-tests.jar
 ...
[ivy:resolve] 
...
 (1073kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-common;2.0.0-alpha!hadoop-common.jar(tests) (95ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/2.0.0-alpha/hadoop-common-2.0.0-alpha.jar
 ...
[ivy:resolve] 
.
 (2051kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-common;2.0.0-alpha!hadoop-common.jar (118ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.0.0-alpha/hadoop-mapreduce-client-core-2.0.0-alpha.jar
 ...
[ivy:resolve] 
.
 (1314kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-mapreduce-client-core;2.0.0-alpha!hadoop-mapreduce-client-core.jar
 (101ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-archives/2.0.0-alpha/hadoop-archives-2.0.0-alpha.jar
 ...
[ivy:resolve] ... (20kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-archives;2.0.0-alpha!hadoop-archives.jar (205ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-hdfs/2.0.0-alpha/hadoop-hdfs-2.0.0-alpha.jar
 ...
[ivy:resolve] 
.
 (3790kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-hdfs;2.0.0-alpha!hadoop-hdfs.jar (117ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-hdfs/2.0.0-alpha/hadoop-hdfs-2.0.0-alpha-tests.jar
 ...
[ivy:resolve] 
...
 (1365kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-hdfs;2.0.0-alpha!hadoop-hdfs.jar(tests) (171ms)
[ivy:resolve] downloading 
http:

[jira] [Commented] (HIVE-4201) Consolidate submodule dependencies using ivy inheritance

2013-03-19 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606310#comment-13606310
 ] 

Gunther Hagleitner commented on HIVE-4201:
--

Yes. Can you close this please?

> Consolidate submodule dependencies using ivy inheritance
> 
>
> Key: HIVE-4201
> URL: https://issues.apache.org/jira/browse/HIVE-4201
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
>
> As discussed in 4187:
> For easier maintenance of ivy dependencies across submodules: Create parent 
> ivy file with consolidated dependencies and include into submodules via 
> inheritance. This way we're not relying on transitive dependencies, and we also 
> have the dependencies in a single place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4201) Consolidate submodule dependencies using ivy inheritance

2013-03-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606302#comment-13606302
 ] 

Ashutosh Chauhan commented on HIVE-4201:


Is this in any way different from HIVE-4200? Would you like to close this as 
dupe?

> Consolidate submodule dependencies using ivy inheritance
> 
>
> Key: HIVE-4201
> URL: https://issues.apache.org/jira/browse/HIVE-4201
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
>
> As discussed in 4187:
> For easier maintenance of ivy dependencies across submodules: Create parent 
> ivy file with consolidated dependencies and include into submodules via 
> inheritance. This way we're not relying on transitive dependencies, and we also 
> have the dependencies in a single place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3820) Consider creating a literal like "D" or "BD" for representing Decimal type constants

2013-03-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606301#comment-13606301
 ] 

Ashutosh Chauhan commented on HIVE-3820:


Since we also have a Double type, I wonder whether folks will confuse D with 
Double. It feels like BD would avoid that potential confusion. What do you think?

> Consider creating a literal like "D" or "BD" for representing Decimal type 
> constants
> 
>
> Key: HIVE-3820
> URL: https://issues.apache.org/jira/browse/HIVE-3820
> Project: Hive
>  Issue Type: Bug
>Reporter: Mark Grover
>Assignee: Gunther Hagleitner
> Attachments: HIVE-3820.1.patch, HIVE-3820.D8823.1.patch
>
>
> When the HIVE-2693 gets committed, users are going to see this behavior:
> {code}
> hive> select cast(3.14 as decimal) from decimal_3 limit 1;
> 3.140000000000000124344978758017532527446746826171875
> {code}
> That's intuitively incorrect, but it happens because 3.14 (a double) is 
> converted to BigDecimal, which causes a precision mismatch.
> We should consider creating a new literal for expressing constants of Decimal 
> type as Gunther suggested in HIVE-2693.
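The precision mismatch described above is easy to reproduce outside Hive. As a hedged illustration (this mirrors, but is not, Hive's actual code path), Python's `decimal` module shows the same effect when a value that was first parsed as a binary double is converted to an exact decimal:

```python
from decimal import Decimal

# 3.14 is parsed as a binary double first; converting that double to an
# exact decimal exposes the floating-point approximation, just as the
# Hive cast in the description does.
from_float = Decimal(3.14)     # carries the double's rounding error
from_string = Decimal("3.14")  # exact: built from the decimal text

print(from_float)   # a long digit string beginning 3.1400000000000001...
print(from_string)  # 3.14
```

A dedicated decimal literal would let the parser take the string route instead of going through a double.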

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4168) remove package-info.java from svn

2013-03-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606293#comment-13606293
 ] 

Ashutosh Chauhan commented on HIVE-4168:


Actually, there is no such file checked into the repo.
{noformat}
$ svn up
At revision 1458256.
$ svn st
$ ls common/src/
java  scripts  test
{noformat}

That file probably got added to your git or svn index by accident. Shall we 
close this as "Not a problem"?

> remove package-info.java from svn
> -
>
> Key: HIVE-4168
> URL: https://issues.apache.org/jira/browse/HIVE-4168
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.11.0
>
> Attachments: HIVE-4168.1.patch
>
>
> common/src/gen/org/apache/hive/common/package-info.java is autogenerated 
> during compile (by saveVersion.sh). 
> Looks like this was unintentionally checked-in. As the file includes 
> timestamps and checksums, after a compile it shows up as a source code change.
> We should delete this file from svn repo.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4168) remove package-info.java from svn

2013-03-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606291#comment-13606291
 ] 

Ashutosh Chauhan commented on HIVE-4168:


+1

> remove package-info.java from svn
> -
>
> Key: HIVE-4168
> URL: https://issues.apache.org/jira/browse/HIVE-4168
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.11.0
>
> Attachments: HIVE-4168.1.patch
>
>
> common/src/gen/org/apache/hive/common/package-info.java is autogenerated 
> during compile (by saveVersion.sh). 
> Looks like this was unintentionally checked-in. As the file includes 
> timestamps and checksums, after a compile it shows up as a source code change.
> We should delete this file from svn repo.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4139) MiniDFS shim does not work for hadoop 2

2013-03-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606288#comment-13606288
 ] 

Ashutosh Chauhan commented on HIVE-4139:


The patch doesn't apply cleanly anymore. This will also overlap with HIVE-4200, 
I believe. Would you like to get HIVE-4200 in first and then update this patch?

> MiniDFS shim does not work for hadoop 2
> ---
>
> Key: HIVE-4139
> URL: https://issues.apache.org/jira/browse/HIVE-4139
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-4139.1.patch, HIVE-4139.2.patch, HIVE-4139.3.patch
>
>
> There's an incompatibility between hadoop 1 & 2 with respect to the 
> MiniDfsCluster class. That causes the hadoop 2 line minimr tests to fail with 
> a "MethodNotFound" exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4139) MiniDFS shim does not work for hadoop 2

2013-03-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606284#comment-13606284
 ] 

Ashutosh Chauhan commented on HIVE-4139:


+1

> MiniDFS shim does not work for hadoop 2
> ---
>
> Key: HIVE-4139
> URL: https://issues.apache.org/jira/browse/HIVE-4139
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-4139.1.patch, HIVE-4139.2.patch, HIVE-4139.3.patch
>
>
> There's an incompatibility between hadoop 1 & 2 with respect to the 
> MiniDfsCluster class. That causes the hadoop 2 line minimr tests to fail with 
> a "MethodNotFound" exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4179) NonBlockingOpDeDup does not merge SEL operators correctly

2013-03-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606279#comment-13606279
 ] 

Ashutosh Chauhan commented on HIVE-4179:


[~navis] Would you like to review this?

> NonBlockingOpDeDup does not merge SEL operators correctly
> -
>
> Key: HIVE-4179
> URL: https://issues.apache.org/jira/browse/HIVE-4179
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Attachments: HIVE-4179.1.patch, HIVE-4179.2.patch
>
>
> The input column list for SEL operators isn't merged properly in the 
> optimization. The best way to see this is to run union_remove_22.q with 
> -Dhadoop.mr.rev=23. The plan shows lost UDFs and a broken lineage for one 
> column.
> Note: union_remove tests do not run on hadoop 1 or 0.20.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4201) Consolidate submodule dependencies using ivy inheritance

2013-03-19 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HIVE-4201:


 Summary: Consolidate submodule dependencies using ivy inheritance
 Key: HIVE-4201
 URL: https://issues.apache.org/jira/browse/HIVE-4201
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner


As discussed in 4187:

For easier maintenance of ivy dependencies across submodules: Create parent ivy 
file with consolidated dependencies and include into submodules via 
inheritance. This way we're not relying on transitive dependencies, and we also 
have the dependencies in a single place.
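Ivy supports this directly through module inheritance. As a sketch of the intended layout (the file paths and module names below are hypothetical, not taken from any patch), a submodule's ivy.xml can pull in the parent's dependencies via the `<extends>` element:

```xml
<!-- Hypothetical parent file, e.g. ivy/ivy-common.xml, holding shared deps -->
<ivy-module version="2.0">
  <info organisation="org.apache.hive" module="hive-parent"/>
  <dependencies>
    <dependency org="commons-lang" name="commons-lang" rev="2.4"/>
  </dependencies>
</ivy-module>

<!-- Hypothetical submodule ivy.xml inheriting those dependencies -->
<ivy-module version="2.0">
  <info organisation="org.apache.hive" module="hive-shims">
    <extends organisation="org.apache.hive" module="hive-parent"
             revision="latest.integration"
             location="../ivy/ivy-common.xml"/>
  </info>
</ivy-module>
```

With this shape, each submodule only declares what is unique to it, and shared version bumps happen in one file.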

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4200) Consolidate submodule dependencies using ivy inheritance

2013-03-19 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HIVE-4200:


 Summary: Consolidate submodule dependencies using ivy inheritance
 Key: HIVE-4200
 URL: https://issues.apache.org/jira/browse/HIVE-4200
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner


As discussed in 4187:

For easier maintenance of ivy dependencies across submodules: Create parent ivy 
file with consolidated dependencies and include into submodules via 
inheritance. This way we're not relying on transitive dependencies, and we also 
have the dependencies in a single place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4187) QL build-grammar target fails after HIVE-4148

2013-03-19 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-4187:
-

Attachment: HIVE-4187.2.patch

As requested: now with variables for the ST4 and stringtemplate versions.

> QL build-grammar target fails after HIVE-4148
> -
>
> Key: HIVE-4187
> URL: https://issues.apache.org/jira/browse/HIVE-4187
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: Carl Steinbach
>Assignee: Gunther Hagleitner
>Priority: Critical
> Attachments: HIVE-4187.1.patch, HIVE-4187.2.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4173) Hive Ingnoring where clause for multitable insert

2013-03-19 Thread hussain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606195#comment-13606195
 ] 

hussain commented on HIVE-4173:
---

HIVE-3699 is about grouping rows in the select for insert statements. 
The issue reported above is due to a filter condition in the source query. Does 
it have the same root cause?


> Hive Ingnoring where clause for multitable insert
> -
>
> Key: HIVE-4173
> URL: https://issues.apache.org/jira/browse/HIVE-4173
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.8.1, 0.9.0
> Environment: Red Hat Enterprise Linux Server release 6.3 (Santiago),
>Reporter: hussain
>Priority: Critical
>
> Hive ignores the filter conditions given in the multi-insert select statements 
> when a filter is also given on the source query.
> To highlight this issue, see the example below: the where clause 
> (status!='C') on the employee12 table causes the issue, due to which the 
> insert filters (batch_id='12' and batch_id!='12') do not work, and all the 
> data coming from the source is dumped into both tables.
> I have checked the Hive execution plan and did not find filter predicates 
> for filtering records per insert statement.
> from 
> (from employee12
> select * 
> where status!='C') t
> insert into table employee1
> select 
> status,
> field1,
> 'T' as field2,
> 'P' as field3,
> 'C' as field4
> where batch_id='12'
> insert into table employee2
> select
> status,
> field1,
> 'D' as field2, 
> 'P' as field3,
> 'C' as field4
> where batch_id!='12';
> It works fine with a single insert; Hive generates the plan properly.
> I am able to reproduce this issue with Hive versions 0.8.1 and 0.9.0.
>  
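The expected multi-insert behavior can be modeled outside Hive. This is a minimal Python sketch (with made-up rows, not the reporter's data) of what the plan should do: apply the source filter once, then apply each insert branch's predicate independently to the filtered rows:

```python
# Expected multi-insert semantics: the source WHERE runs once, then each
# insert branch applies its own WHERE over the already-filtered rows.
source = [
    {"status": "A", "batch_id": "12"},
    {"status": "C", "batch_id": "12"},  # dropped by the source filter
    {"status": "A", "batch_id": "13"},
]

t = [r for r in source if r["status"] != "C"]        # source query filter
employee1 = [r for r in t if r["batch_id"] == "12"]  # first insert branch
employee2 = [r for r in t if r["batch_id"] != "12"]  # second insert branch

print(len(employee1), len(employee2))  # 1 1 -- not all rows in both tables
```

The reported bug corresponds to both branch predicates being dropped, so every row of `t` lands in both tables.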

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4146) bug with hive.auto.convert.join.noconditionaltask with outer joins

2013-03-19 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-4146:
-

Attachment: hive.4146.6.patch

> bug with hive.auto.convert.join.noconditionaltask with outer joins
> --
>
> Key: HIVE-4146
> URL: https://issues.apache.org/jira/browse/HIVE-4146
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Namit Jain
> Attachments: hive.4146.1.patch, hive.4146.2.patch, hive.4146.3.patch, 
> hive.4146.4.patch, hive.4146.5.patch, hive.4146.6.patch
>
>
> Consider the following scenario:
> create table s1 as select * from src where key = 0;
> set hive.auto.convert.join.noconditionaltask=false;   
> 
> SELECT * FROM s1 src1 LEFT OUTER JOIN s1 src2 ON (src1.key = src2.key AND 
> src2.key > 10);
> gives correct results
> 0 val_0   NULL   NULL
> 0 val_0   NULL   NULL
> 0 val_0   NULL   NULL
> whereas it gives no results with hive.auto.convert.join.noconditionaltask set
> to true
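The LEFT OUTER JOIN semantics at issue can be sketched in plain Python (a simplified in-memory model, not Hive's map-join code): the ON predicate only decides which right rows match, so left rows with key = 0 must still appear, NULL-padded, as in the correct output above.

```python
def left_outer_join(left, right, on):
    """Minimal LEFT OUTER JOIN: left rows with no match are NULL-padded."""
    result = []
    for l in left:
        matches = [r for r in right if on(l, r)]
        if matches:
            result.extend((l, r) for r in matches)
        else:
            result.append((l, None))  # the NULL-padded rows in the output
    return result

s1 = [(0, "val_0")] * 3  # rows where key = 0
rows = left_outer_join(s1, s1, lambda a, b: a[0] == b[0] and b[0] > 10)
for l, r in rows:
    print(l, r)  # each left row survives: (0, 'val_0') None
```

A join conversion that returns no rows for this query has effectively treated the ON predicate as an inner-join filter, which is the bug.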

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3958) support partial scan for analyze command - RCFile

2013-03-19 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13606176#comment-13606176
 ] 

Namit Jain commented on HIVE-3958:
--

comments

> support partial scan for analyze command - RCFile
> -
>
> Key: HIVE-3958
> URL: https://issues.apache.org/jira/browse/HIVE-3958
> Project: Hive
>  Issue Type: Improvement
>Reporter: Gang Tim Liu
>Assignee: Gang Tim Liu
> Attachments: HIVE-3958.patch.1, HIVE-3958.patch.2
>
>
> The analyze command allows us to collect statistics on existing 
> tables/partitions. It works great but can be slow, since it scans all files.
> There are two ways to speed it up:
> 1. Collect stats without a file scan. This may not collect all stats, but it 
> is good and fast enough for the use case. HIVE-3917 addresses it.
> 2. Collect stats via a partial file scan. This doesn't scan the full content 
> of the files, but only part of them, to get file metadata. Some examples are 
> https://cwiki.apache.org/Hive/rcfilecat.html for RCFile, ORC (HIVE-3874), 
> and HBase's HFile.
> This jira targets #2, more specifically the RCFile format.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3958) support partial scan for analyze command - RCFile

2013-03-19 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3958:
-

Status: Open  (was: Patch Available)

> support partial scan for analyze command - RCFile
> -
>
> Key: HIVE-3958
> URL: https://issues.apache.org/jira/browse/HIVE-3958
> Project: Hive
>  Issue Type: Improvement
>Reporter: Gang Tim Liu
>Assignee: Gang Tim Liu
> Attachments: HIVE-3958.patch.1, HIVE-3958.patch.2
>
>
> The analyze command allows us to collect statistics on existing 
> tables/partitions. It works great but can be slow, since it scans all files.
> There are two ways to speed it up:
> 1. Collect stats without a file scan. This may not collect all stats, but it 
> is good and fast enough for the use case. HIVE-3917 addresses it.
> 2. Collect stats via a partial file scan. This doesn't scan the full content 
> of the files, but only part of them, to get file metadata. Some examples are 
> https://cwiki.apache.org/Hive/rcfilecat.html for RCFile, ORC (HIVE-3874), 
> and HBase's HFile.
> This jira targets #2, more specifically the RCFile format.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira