[jira] [Commented] (HIVE-3806) Ptest failing due to Argument list too long errors

2012-12-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13533723#comment-13533723
 ] 

Hudson commented on HIVE-3806:
--

Integrated in Hive-trunk-h0.21 #1859 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1859/])
HIVE-3806 Ptest failing due to Argument list too long errors
(Bhushan Mandhani via namit) (Revision 1422621)

 Result = SUCCESS
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1422621
Files : 
* /hive/trunk/testutils/ptest/hivetest.py


 Ptest failing due to Argument list too long errors
 

 Key: HIVE-3806
 URL: https://issues.apache.org/jira/browse/HIVE-3806
 Project: Hive
  Issue Type: Bug
  Components: Testing Infrastructure
Reporter: Bhushan Mandhani
Assignee: Bhushan Mandhani
Priority: Minor
 Attachments: HIVE-3806.1.patch.txt


 ptest creates a really huge shell command to delete from each test host those 
 .q files that it should not be running. For TestCliDriver, the command has 
 become long enough that it is over the threshold allowed by the shell. We 
 should rewrite it so that the same semantics are captured in a shorter command.
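
A minimal Python sketch of one way to stay under the shell's argument-length
limit: delete files in bounded-size batches rather than in one giant command.
This is illustrative only, not the actual hivetest.py change from the patch;
the file names and the 100 kB budget are hypothetical.

```python
# Illustrative sketch (not the actual hivetest.py fix): split a huge file
# list into batches whose combined length stays under a chosen budget,
# so each generated shell command avoids "Argument list too long".

def chunked(items, max_chars=100_000):
    """Yield sublists whose space-joined length stays under max_chars."""
    batch, size = [], 0
    for item in items:
        if batch and size + len(item) + 1 > max_chars:
            yield batch
            batch, size = [], 0
        batch.append(item)
        size += len(item) + 1  # +1 for the separating space
    if batch:
        yield batch

# Hypothetical .q file names standing in for the real test list.
files = ["q%05d.q" % i for i in range(20000)]
commands = ["rm -f " + " ".join(batch) for batch in chunked(files)]
assert all(len(cmd) < 100_100 for cmd in commands)
```

Each batch could then be run as its own subprocess, or the list could be
piped to `xargs rm -f`, which performs the same batching at the shell level.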

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hive-trunk-h0.21 - Build # 1859 - Fixed

2012-12-17 Thread Apache Jenkins Server
Changes for Build #1854

Changes for Build #1855

Changes for Build #1856
[kevinwilfong] HIVE-3766. Enable adding hooks to hive meta store init. (Jean Xu 
via kevinwilfong)


Changes for Build #1857

Changes for Build #1858

Changes for Build #1859
[namit] HIVE-3806 Ptest failing due to Argument list too long errors
(Bhushan Mandhani via namit)




All tests passed

The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1859)

Status: Fixed

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/1859/ to 
view the results.

[jira] [Updated] (HIVE-3633) sort-merge join does not work with sub-queries

2012-12-17 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3633:
-

Attachment: hive.3633.10.patch

 sort-merge join does not work with sub-queries
 --

 Key: HIVE-3633
 URL: https://issues.apache.org/jira/browse/HIVE-3633
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3633.10.patch, hive.3633.1.patch, 
 hive.3633.2.patch, hive.3633.3.patch, hive.3633.4.patch, hive.3633.5.patch, 
 hive.3633.6.patch, hive.3633.7.patch, hive.3633.8.patch, hive.3633.9.patch


 Consider the following query:
 create table smb_bucket_1(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 create table smb_bucket_2(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 -- load the above tables
 set hive.optimize.bucketmapjoin = true;
 set hive.optimize.bucketmapjoin.sortedmerge = true;
 set hive.input.format = 
 org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
 explain
 select count(*) from
 (
 select /*+mapjoin(a)*/ a.key as key1, b.key as key2, a.value as value1, 
 b.value as value2
 from smb_bucket_1 a join smb_bucket_2 b on a.key = b.key)
 subq;
 The above query does not use a sort-merge join. Supporting this would be 
 very useful, since we automatically convert queries to use sorting and 
 bucketing properties for joins.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3633) sort-merge join does not work with sub-queries

2012-12-17 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3633:
-

Status: Patch Available  (was: Open)

refreshed, and also attached the new file

 sort-merge join does not work with sub-queries
 --

 Key: HIVE-3633
 URL: https://issues.apache.org/jira/browse/HIVE-3633
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3633.10.patch, hive.3633.1.patch, 
 hive.3633.2.patch, hive.3633.3.patch, hive.3633.4.patch, hive.3633.5.patch, 
 hive.3633.6.patch, hive.3633.7.patch, hive.3633.8.patch, hive.3633.9.patch


 Consider the following query:
 create table smb_bucket_1(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 create table smb_bucket_2(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 -- load the above tables
 set hive.optimize.bucketmapjoin = true;
 set hive.optimize.bucketmapjoin.sortedmerge = true;
 set hive.input.format = 
 org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
 explain
 select count(*) from
 (
 select /*+mapjoin(a)*/ a.key as key1, b.key as key2, a.value as value1, 
 b.value as value2
 from smb_bucket_1 a join smb_bucket_2 b on a.key = b.key)
 subq;
 The above query does not use a sort-merge join. Supporting this would be 
 very useful, since we automatically convert queries to use sorting and 
 bucketing properties for joins.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HIVE-3646) Add 'IGNORE PROTECTION' predicate for dropping partitions

2012-12-17 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain resolved HIVE-3646.
--

   Resolution: Fixed
Fix Version/s: 0.11
 Hadoop Flags: Reviewed

Committed. Thanks Andrew

 Add 'IGNORE PROTECTION' predicate for dropping partitions
 -

 Key: HIVE-3646
 URL: https://issues.apache.org/jira/browse/HIVE-3646
 Project: Hive
  Issue Type: New Feature
  Components: CLI
Reporter: Andrew Chalfant
Assignee: Andrew Chalfant
Priority: Minor
 Fix For: 0.11

 Attachments: HIVE-3646.1.patch.txt, HIVE-3646.2.patch.txt, 
 HIVE-3646.3.patch.txt

   Original Estimate: 1m
  Remaining Estimate: 1m

 There are cases where it is desirable to move partitions between clusters. 
 Having to undo protection and then re-protect tables in order to delete 
 partitions from a source is a multi-step process that can leave us in a 
 failed open state where partition and table metadata is dirty. By 
 implementing 'rm -rf'-like functionality, we can perform these operations 
 atomically.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3787) Regression introduced from HIVE-3401

2012-12-17 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13533856#comment-13533856
 ] 

Namit Jain commented on HIVE-3787:
--

+1

 Regression introduced from HIVE-3401
 

 Key: HIVE-3787
 URL: https://issues.apache.org/jira/browse/HIVE-3787
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-3787.D7275.1.patch


 Since HIVE-3562, split_sample_out_of_range.q and split_sample_wrong_format.q 
 have not been showing valid 'line:loc' information in error messages.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3300) LOAD DATA INPATH fails if a hdfs file with same name is added to table

2012-12-17 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3300:
-

Status: Open  (was: Patch Available)

comments on phabricator

 LOAD DATA INPATH fails if a hdfs file with same name is added to table
 --

 Key: HIVE-3300
 URL: https://issues.apache.org/jira/browse/HIVE-3300
 Project: Hive
  Issue Type: Bug
  Components: Import/Export
Affects Versions: 0.10.0
 Environment: ubuntu linux, hadoop 1.0.3, hive 0.9
Reporter: Bejoy KS
Assignee: Navis
 Attachments: HIVE-3300.1.patch.txt, HIVE-3300.D4383.3.patch


 If we load data from the local fs into Hive tables using 'LOAD DATA LOCAL 
 INPATH' and a file with the same name already exists in the table's 
 location, the new file is suffixed with *_copy_1.
 But if we do 'LOAD DATA INPATH' for a file in hdfs, no rename happens; only 
 a move task is triggered. Since a file with the same name exists in the same 
 hdfs location, the hadoop fs move operation throws an error.
 hive> LOAD DATA INPATH '/userdata/bejoy/site.txt' INTO TABLE test.site;
 Loading data to table test.site
 Failed with exception null
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.MoveTask
 hive> 
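
A hedged Python sketch of the `_copy_N` collision-avoidance naming that the
description says the LOCAL load path already applies. The helper name and its
signature are hypothetical, for illustration only; this is not Hive's actual
implementation.

```python
# Illustrative sketch of "_copy_N" suffixing: given a target file name and
# the set of names already present at the table location, pick the first
# non-colliding name. Hive's real code works on HDFS paths, not sets.

def next_free_name(name, existing):
    """Return name unchanged, or name_copy_N for the first free N."""
    if name not in existing:
        return name
    n = 1
    while f"{name}_copy_{n}" in existing:
        n += 1
    return f"{name}_copy_{n}"

assert next_free_name("site.txt", {"site.txt"}) == "site.txt_copy_1"
```

Applying the same renaming on the non-LOCAL path would let the move task
succeed instead of failing on the name clash, which is what this issue asks for.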

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3300) LOAD DATA INPATH fails if a hdfs file with same name is added to table

2012-12-17 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13533867#comment-13533867
 ] 

Phabricator commented on HIVE-3300:
---

njain has commented on the revision HIVE-3300 [jira] LOAD DATA INPATH fails if 
a hdfs file with same name is added to table.

INLINE COMMENTS
  ql/src/test/queries/clientpositive/load_fs2.q:1 Add some comments in the test.
  What are you trying to test?

REVISION DETAIL
  https://reviews.facebook.net/D4383

To: JIRA, navis
Cc: njain


 LOAD DATA INPATH fails if a hdfs file with same name is added to table
 --

 Key: HIVE-3300
 URL: https://issues.apache.org/jira/browse/HIVE-3300
 Project: Hive
  Issue Type: Bug
  Components: Import/Export
Affects Versions: 0.10.0
 Environment: ubuntu linux, hadoop 1.0.3, hive 0.9
Reporter: Bejoy KS
Assignee: Navis
 Attachments: HIVE-3300.1.patch.txt, HIVE-3300.D4383.3.patch


 If we load data from the local fs into Hive tables using 'LOAD DATA LOCAL 
 INPATH' and a file with the same name already exists in the table's 
 location, the new file is suffixed with *_copy_1.
 But if we do 'LOAD DATA INPATH' for a file in hdfs, no rename happens; only 
 a move task is triggered. Since a file with the same name exists in the same 
 hdfs location, the hadoop fs move operation throws an error.
 hive> LOAD DATA INPATH '/userdata/bejoy/site.txt' INTO TABLE test.site;
 Loading data to table test.site
 Failed with exception null
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.MoveTask
 hive> 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3810) HiveHistory.log need to replace '\r' with space before writing Entry.value to historyfile

2012-12-17 Thread qiangwang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qiangwang updated HIVE-3810:


Description: 
HiveHistory.log will replace '\n' with space before writing Entry.value to 
history file:

val = val.replace('\n', ' ');

but HiveHistory.parseHiveHistory uses BufferedReader.readLine, which treats 
'\n', '\r', and '\r\n' as line delimiters when parsing the history file.

If val contains '\r', there is a high probability that HiveHistory.parseLine 
will fail, in which case RecordTypes.valueOf(recType) usually throws 
'java.lang.IllegalArgumentException'.

HiveHistory.log needs to replace '\r' with a space as well:

val = val.replace('\n', ' ');

changed to

val = val.replaceAll("\r|\n", " ");

or

val = val.replace('\r', ' ').replace('\n', ' ');

  was:
HiveHistory.log will replace '\n' with space before writing Entry.value to 
history file:

val = val.replace('\n', ' ');

but HiveHistory.parseHiveHistory use BufferedReader.readLine which takes '\n', 
'\r', '\r\n'  as line delimiter to parse history file

if val contains '\r', there is a high possibility that HiveHistory.parseLine 
will fail, in which case usually RecordTypes.valueOf(recType) will throw 
exception 'java.lang.IllegalArgumentException'

HiveHistory.log need to replace '\r' with space as well:

- val = val.replace('\n', ' ');
+ val = val.replaceAll("\r|\n", " ");
or
- val = val.replace('\n', ' ');
+ val = val.replace('\r', ' ').replace('\n', ' ');



 HiveHistory.log need to replace '\r' with space before writing Entry.value to 
 historyfile
 -

 Key: HIVE-3810
 URL: https://issues.apache.org/jira/browse/HIVE-3810
 Project: Hive
  Issue Type: Bug
  Components: Logging
Reporter: qiangwang
Priority: Minor

 HiveHistory.log will replace '\n' with space before writing Entry.value to 
 history file:
 val = val.replace('\n', ' ');
 but HiveHistory.parseHiveHistory use BufferedReader.readLine which takes 
 '\n', '\r', '\r\n'  as line delimiter to parse history file
 if val contains '\r', there is a high possibility that HiveHistory.parseLine 
 will fail, in which case usually RecordTypes.valueOf(recType) will throw 
 exception 'java.lang.IllegalArgumentException'
 HiveHistory.log needs to replace '\r' with a space as well:
 val = val.replace('\n', ' ');
 changed to
 val = val.replaceAll("\r|\n", " ");
 or
 val = val.replace('\r', ' ').replace('\n', ' ');
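
The failure mode above can be demonstrated with a short sketch, transliterated
into Python for illustration (the report's code is Java; `splitlines` mirrors
BufferedReader.readLine's treatment of '\r', '\n', and '\r\n' as terminators).
The sample record string is hypothetical.

```python
# A '\r' embedded in a logged value silently starts a new "line" when the
# history file is read back line-by-line, so the parser sees a fragment
# that no longer starts with a valid record type.
import re

record = 'QueryStart QUERY_STRING="select\rcount(*)"'
assert len(record.splitlines()) == 2   # the '\r' splits one record in two

# The proposed fix from the report, transliterated from Java:
# val = val.replaceAll("\r|\n", " ");
sanitized = re.sub(r"[\r\n]", " ", record)
assert len(sanitized.splitlines()) == 1  # one record again
```

Either the regex form or the chained `replace('\r', ' ').replace('\n', ' ')`
gives the same result; the chained form avoids regex compilation per call.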

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-3812) TestCase TestJdbcDriver fails with IBM Java 6

2012-12-17 Thread Renata Ghisloti Duarte de Souza (JIRA)
Renata Ghisloti Duarte de Souza created HIVE-3812:
-

 Summary: TestCase TestJdbcDriver fails with IBM Java 6
 Key: HIVE-3812
 URL: https://issues.apache.org/jira/browse/HIVE-3812
 Project: Hive
  Issue Type: Bug
  Components: JDBC, Tests
Affects Versions: 0.9.0, 0.8.1, 0.8.0, 0.10.0
 Environment: Apache Ant 1.7.1
IBM JDK 6
Reporter: Renata Ghisloti Duarte de Souza
Priority: Minor
 Fix For: 0.10.0, 0.8.1


When running testcase TestJdbcDriver with IBM Java 6, it fails with the 
following error:

failure message=expected:<[[{}, 1], [{[c=d, a=b]}, 2]]> but 
was:<[[{}, 1], [{[a=b, c=d]}, 2]]> 
type=junit.framework.ComparisonFailure junit.framework.ComparisonFailure: 
expected:<[[{}, 1], [{[c=d, a=b]}, 2]]> but was:<[[{}, 1], [{[a=b, 
c=d]}, 2]]>
at junit.framework.Assert.assertEquals(Assert.java:85)
at junit.framework.Assert.assertEquals(Assert.java:91)
at 
org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes(TestJdbcDriver.java:380)
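
The expected and actual strings differ only in map entry order ({c=d, a=b}
versus {a=b, c=d}): HashMap iteration order is unspecified, so it can vary
between JVM implementations such as IBM's and Oracle's. A hedged sketch of an
order-insensitive comparison, in Python for illustration; the parser helper is
hypothetical, not the actual patch.

```python
# Illustrative sketch: compare '{k=v, k=v}'-style map renderings as sets
# of entries, so the assertion no longer depends on iteration order.

def entries(map_text):
    """Parse a '{k=v, k=v}' rendering into a set of (key, value) pairs."""
    inner = map_text.strip().strip("{}")
    return {tuple(pair.split("=", 1)) for pair in inner.split(", ") if pair}

assert entries("{c=d, a=b}") == entries("{a=b, c=d}")
assert entries("{c=d, a=b}") != entries("{a=b, c=x}")
```

The equivalent Java-side fix would compare the ResultSet's map values as maps
(or sorted renderings) rather than comparing their toString() output directly.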

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3812) TestCase TestJdbcDriver fails with IBM Java 6

2012-12-17 Thread Renata Ghisloti Duarte de Souza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renata Ghisloti Duarte de Souza updated HIVE-3812:
--

Description: 
When running testcase TestJdbcDriver with IBM Java 6, it fails with the 
following error:

failure message=expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], 
[{[a=b, c=d]}, 2]]; 
type=junit.framework.ComparisonFailurejunit.framework.ComparisonFailure: 
expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], [{[a=b, c=d]}, 2]];
at junit.framework.Assert.assertEquals(Assert.java:85)
at junit.framework.Assert.assertEquals(Assert.java:91)
at 
org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes(TestJdbcDriver.java:380)

  was:
When running testcase TestJdbcDriver with IBM Java 6, it fails with the 
following error:

failure message=expected:<[[{}, 1], [{[c=d, a=b]}, 2]]> but 
was:<[[{}, 1], [{[a=b, c=d]}, 2]]> 
type=junit.framework.ComparisonFailure junit.framework.ComparisonFailure: 
expected:<[[{}, 1], [{[c=d, a=b]}, 2]]> but was:<[[{}, 1], [{[a=b, 
c=d]}, 2]]>
at junit.framework.Assert.assertEquals(Assert.java:85)
at junit.framework.Assert.assertEquals(Assert.java:91)
at 
org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes(TestJdbcDriver.java:380)


 TestCase TestJdbcDriver fails with IBM Java 6
 -

 Key: HIVE-3812
 URL: https://issues.apache.org/jira/browse/HIVE-3812
 Project: Hive
  Issue Type: Bug
  Components: JDBC, Tests
Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0
 Environment: Apache Ant 1.7.1
 IBM JDK 6
Reporter: Renata Ghisloti Duarte de Souza
Priority: Minor
 Fix For: 0.8.1, 0.10.0


 When running testcase TestJdbcDriver with IBM Java 6, it fails with the 
 following error:
 failure message=expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], 
 [{[a=b, c=d]}, 2]]; 
 type=junit.framework.ComparisonFailurejunit.framework.ComparisonFailure: 
 expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], [{[a=b, c=d]}, 
 2]];
   at junit.framework.Assert.assertEquals(Assert.java:85)
   at junit.framework.Assert.assertEquals(Assert.java:91)
   at 
 org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes(TestJdbcDriver.java:380)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3812) TestCase TestJdbcDriver fails with IBM Java 6

2012-12-17 Thread Renata Ghisloti Duarte de Souza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renata Ghisloti Duarte de Souza updated HIVE-3812:
--

Description: 
When running testcase TestJdbcDriver with IBM Java 6, it fails with the 
following error:

failure message=expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], 
[{[a=b, c=d]}, 2]]; 
type=junit.framework.ComparisonFailurejunit.framework.ComparisonFailure: 
expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], [{[a=b, c=d]}, 2]];
at junit.framework.Assert.assertEquals(Assert.java:85)
at junit.framework.Assert.assertEquals(Assert.java:91)
at 
org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes(TestJdbcDriver.java:380)

  was:
When running testcase TestJdbcDriver with IBM Java 6, it fails with the 
following error:

failure message=expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], 
[{[a=b, c=d]}, 2]]; 
type=junit.framework.ComparisonFailurejunit.framework.ComparisonFailure: 
expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], [{[a=b, c=d]}, 2]];
at junit.framework.Assert.assertEquals(Assert.java:85)
at junit.framework.Assert.assertEquals(Assert.java:91)
at 
org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes(TestJdbcDriver.java:380)


 TestCase TestJdbcDriver fails with IBM Java 6
 -

 Key: HIVE-3812
 URL: https://issues.apache.org/jira/browse/HIVE-3812
 Project: Hive
  Issue Type: Bug
  Components: JDBC, Tests
Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0
 Environment: Apache Ant 1.7.1
 IBM JDK 6
Reporter: Renata Ghisloti Duarte de Souza
Priority: Minor
 Fix For: 0.8.1, 0.10.0


 When running testcase TestJdbcDriver with IBM Java 6, it fails with the 
 following error:
 failure message=expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], 
 [{[a=b, c=d]}, 2]]; 
 type=junit.framework.ComparisonFailurejunit.framework.ComparisonFailure: 
 expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], [{[a=b, c=d]}, 2]];
   at junit.framework.Assert.assertEquals(Assert.java:85)
   at junit.framework.Assert.assertEquals(Assert.java:91)
   at 
 org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes(TestJdbcDriver.java:380)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3812) TestCase TestJdbcDriver fails with IBM Java 6

2012-12-17 Thread Renata Ghisloti Duarte de Souza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renata Ghisloti Duarte de Souza updated HIVE-3812:
--

Description: 
When running testcase TestJdbcDriver with IBM Java 6, it fails with the 
following error:

failure message=expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], 
[{[a=b, c=d]}, 2]]; 
type=junit.framework.ComparisonFailurejunit.framework.ComparisonFailure: 
expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], [{[a=b, c=d]}, 2]];
at junit.framework.Assert.assertEquals(Assert.java:85)
at junit.framework.Assert.assertEquals(Assert.java:91)
at 
org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes(TestJdbcDriver.java:380)

  was:
When running testcase TestJdbcDriver with IBM Java 6, it fails with the 
following error:

failure message=expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], 
[{[a=b, c=d]}, 2]]; 
type=junit.framework.ComparisonFailurejunit.framework.ComparisonFailure: 
expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], [{[a=b, c=d]}, 2]];
at junit.framework.Assert.assertEquals(Assert.java:85)
at junit.framework.Assert.assertEquals(Assert.java:91)
at 
org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes(TestJdbcDriver.java:380)


 TestCase TestJdbcDriver fails with IBM Java 6
 -

 Key: HIVE-3812
 URL: https://issues.apache.org/jira/browse/HIVE-3812
 Project: Hive
  Issue Type: Bug
  Components: JDBC, Tests
Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0
 Environment: Apache Ant 1.7.1
 IBM JDK 6
Reporter: Renata Ghisloti Duarte de Souza
Priority: Minor
 Fix For: 0.8.1, 0.10.0

 Attachments: HIVE-3812.1_trunk.patch.txt


 When running testcase TestJdbcDriver with IBM Java 6, it fails with the 
 following error:
 failure message=expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], 
 [{[a=b, c=d]}, 2]]; 
 type=junit.framework.ComparisonFailurejunit.framework.ComparisonFailure: 
 expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], [{[a=b, c=d]}, 2]];
   at junit.framework.Assert.assertEquals(Assert.java:85)
   at junit.framework.Assert.assertEquals(Assert.java:91)
   at 
 org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes(TestJdbcDriver.java:380)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3812) TestCase TestJdbcDriver fails with IBM Java 6

2012-12-17 Thread Renata Ghisloti Duarte de Souza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renata Ghisloti Duarte de Souza updated HIVE-3812:
--

Description: 
When running testcase TestJdbcDriver with IBM Java 6, it fails with the 
following error:

failure message=expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], 
[{[a=b, c=d]}, 2]]; 
type=junit.framework.ComparisonFailurejunit.framework.ComparisonFailure: 
expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], [{[a=b, c=d]}, 2]];
at junit.framework.Assert.assertEquals(Assert.java:85)
at junit.framework.Assert.assertEquals(Assert.java:91)
at 
org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes(TestJdbcDriver.java:380)

  was:
When running testcase TestJdbcDriver with IBM Java 6, it fails with the 
following error:

failure message=expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], 
[{[a=b, c=d]}, 2]]; 
type=junit.framework.ComparisonFailurejunit.framework.ComparisonFailure: 
expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], [{[a=b, c=d]}, 2]];
at junit.framework.Assert.assertEquals(Assert.java:85)
at junit.framework.Assert.assertEquals(Assert.java:91)
at 
org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes(TestJdbcDriver.java:380)


 TestCase TestJdbcDriver fails with IBM Java 6
 -

 Key: HIVE-3812
 URL: https://issues.apache.org/jira/browse/HIVE-3812
 Project: Hive
  Issue Type: Bug
  Components: JDBC, Tests
Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0
 Environment: Apache Ant 1.7.1
 IBM JDK 6
Reporter: Renata Ghisloti Duarte de Souza
Priority: Minor
 Fix For: 0.8.1, 0.10.0

 Attachments: HIVE-3812.1_trunk.patch.txt


 When running testcase TestJdbcDriver with IBM Java 6, it fails with the 
 following error:
 failure message=expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], 
 [{[a=b, c=d]}, 2]]; 
 type=junit.framework.ComparisonFailurejunit.framework.ComparisonFailure: 
 expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], [{[a=b, c=d]}, 2]];
   at junit.framework.Assert.assertEquals(Assert.java:85)
   at junit.framework.Assert.assertEquals(Assert.java:91)
   at 
 org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes(TestJdbcDriver.java:380)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3812) TestCase TestJdbcDriver fails with IBM Java 6

2012-12-17 Thread Renata Ghisloti Duarte de Souza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renata Ghisloti Duarte de Souza updated HIVE-3812:
--

Attachment: HIVE-3812.1_0.8.1.patch.txt

Patch to fix the bug for Hive 0.8.1.

 TestCase TestJdbcDriver fails with IBM Java 6
 -

 Key: HIVE-3812
 URL: https://issues.apache.org/jira/browse/HIVE-3812
 Project: Hive
  Issue Type: Bug
  Components: JDBC, Tests
Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0
 Environment: Apache Ant 1.7.1
 IBM JDK 6
Reporter: Renata Ghisloti Duarte de Souza
Priority: Minor
 Fix For: 0.8.1, 0.10.0

 Attachments: HIVE-3812.1_0.8.1.patch.txt, HIVE-3812.1_trunk.patch.txt


 When running testcase TestJdbcDriver with IBM Java 6, it fails with the 
 following error:
 failure message=expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], 
 [{[a=b, c=d]}, 2]]; 
 type=junit.framework.ComparisonFailurejunit.framework.ComparisonFailure: 
 expected:[[{}, 1], [{[c=d, a=b]}, 2]] but was:[[{}, 1], [{[a=b, c=d]}, 2]];
   at junit.framework.Assert.assertEquals(Assert.java:85)
   at junit.framework.Assert.assertEquals(Assert.java:91)
   at 
 org.apache.hadoop.hive.jdbc.TestJdbcDriver.testDataTypes(TestJdbcDriver.java:380)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false #232

2012-12-17 Thread Apache Jenkins Server
See 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/232/

--
[...truncated 9904 lines...]

compile-test:
 [echo] Project: serde
[javac] Compiling 26 source files to 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/232/artifact/hive/build/serde/test/classes
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

create-dirs:
 [echo] Project: service
 [copy] Warning: 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/service/src/test/resources
 does not exist.

init:
 [echo] Project: service

ivy-init-settings:
 [echo] Project: service

ivy-resolve:
 [echo] Project: service
[ivy:resolve] :: loading settings :: file = 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml
[ivy:report] Processing 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/232/artifact/hive/build/ivy/resolution-cache/org.apache.hive-hive-service-default.xml
 to 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/232/artifact/hive/build/ivy/report/org.apache.hive-hive-service-default.html

ivy-retrieve:
 [echo] Project: service

compile:
 [echo] Project: service

ivy-resolve-test:
 [echo] Project: service

ivy-retrieve-test:
 [echo] Project: service

compile-test:
 [echo] Project: service
[javac] Compiling 2 source files to 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/232/artifact/hive/build/service/test/classes

test:
 [echo] Project: hive

test-shims:
 [echo] Project: hive

test-conditions:
 [echo] Project: shims

gen-test:
 [echo] Project: shims

create-dirs:
 [echo] Project: shims
 [copy] Warning: 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/shims/src/test/resources
 does not exist.

init:
 [echo] Project: shims

ivy-init-settings:
 [echo] Project: shims

ivy-resolve:
 [echo] Project: shims
[ivy:resolve] :: loading settings :: file = 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml
[ivy:report] Processing 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/232/artifact/hive/build/ivy/resolution-cache/org.apache.hive-hive-shims-default.xml
 to 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/232/artifact/hive/build/ivy/report/org.apache.hive-hive-shims-default.html

ivy-retrieve:
 [echo] Project: shims

compile:
 [echo] Project: shims
 [echo] Building shims 0.20

build_shims:
 [echo] Project: shims
 [echo] Compiling 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/shims/src/common/java;/home/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/shims/src/0.20/java
 against hadoop 0.20.2 
(https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/232/artifact/hive/build/hadoopcore/hadoop-0.20.2)

ivy-init-settings:
 [echo] Project: shims

ivy-resolve-hadoop-shim:
 [echo] Project: shims
[ivy:resolve] :: loading settings :: file = 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml

ivy-retrieve-hadoop-shim:
 [echo] Project: shims
 [echo] Building shims 0.20S

build_shims:
 [echo] Project: shims
 [echo] Compiling 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/shims/src/common/java;/home/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/shims/src/common-secure/java;/home/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/shims/src/0.20S/java
 against hadoop 1.0.0 
(https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/232/artifact/hive/build/hadoopcore/hadoop-1.0.0)

ivy-init-settings:
 [echo] Project: shims

ivy-resolve-hadoop-shim:
 [echo] Project: shims
[ivy:resolve] :: loading settings :: file = 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/ivy/ivysettings.xml

ivy-retrieve-hadoop-shim:
 [echo] Project: shims
 [echo] Building shims 0.23

build_shims:
 [echo] Project: shims
 [echo] Compiling 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/ws/hive/shims/src/common/java;/home/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/shims/src/common-secure/java;/home/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/shims/src/0.23/java
 against hadoop 0.23.3 
(https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/232/artifact/hive/build/hadoopcore/hadoop-0.23.3)


Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21 #232

2012-12-17 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/232/

--
[...truncated 69516 lines...]
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: 
file:/tmp/jenkins/hive_2012-12-17_09-01-36_210_8104980688560746626/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21/hive/build/service/tmp/hive_job_log_jenkins_201212170901_231790720.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Copying file: 
file:/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21/hive/data/files/kv1.txt
[junit] PREHOOK: query: load data local inpath 
'/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21/hive/data/files/kv1.txt'
 into table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] Copying data from 
file:/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21/hive/data/files/kv1.txt
[junit] Loading data to table default.testhivedrivertable
[junit] POSTHOOK: query: load data local inpath 
'/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21/hive/data/files/kv1.txt'
 into table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: select * from testhivedrivertable limit 10
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: 
file:/tmp/jenkins/hive_2012-12-17_09-01-39_696_2417485215413736748/-mr-1
[junit] POSTHOOK: query: select * from testhivedrivertable limit 10
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: 
file:/tmp/jenkins/hive_2012-12-17_09-01-39_696_2417485215413736748/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21/hive/build/service/tmp/hive_job_log_jenkins_201212170901_792649336.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21/hive/build/service/tmp/hive_job_log_jenkins_201212170901_807524457.txt
[junit] Hive history 
file=/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21/hive/build/service/tmp/hive_job_log_jenkins_201212170901_831268912.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (key int, value 
string)

[jira] [Commented] (HIVE-3492) Provide ALTER for partition changing bucket number

2012-12-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534070#comment-13534070
 ] 

Hudson commented on HIVE-3492:
--

Integrated in Hive-trunk-h0.21 #1860 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1860/])
HIVE-3492 Provide ALTER for partition changing bucket number
(Navis via namit) (Revision 1422749)

 Result = SUCCESS
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1422749
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Partition.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/AlterTableDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/HiveOperation.java
* 
/hive/trunk/ql/src/test/queries/clientpositive/alter_numbuckets_partitioned_table.q
* 
/hive/trunk/ql/src/test/results/clientpositive/alter_numbuckets_partitioned_table.q.out


 Provide ALTER for partition changing bucket number 
 ---

 Key: HIVE-3492
 URL: https://issues.apache.org/jira/browse/HIVE-3492
 Project: Hive
  Issue Type: Improvement
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Fix For: 0.11

 Attachments: HIVE-3492.1.patch.txt, HIVE-3492.2.patch.txt, 
 HIVE-3492.D5589.2.patch, HIVE-3492.D5589.3.patch


 As a follow-up of HIVE-3283, the bucket number of a partition could be 
 set/changed individually by a query like 'ALTER TABLE srcpart 
 PARTITION(ds='1999') SET BUCKETNUM 5'.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3633) sort-merge join does not work with sub-queries

2012-12-17 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534142#comment-13534142
 ] 

Kevin Wilfong commented on HIVE-3633:
-

Thanks Namit, running tests.

 sort-merge join does not work with sub-queries
 --

 Key: HIVE-3633
 URL: https://issues.apache.org/jira/browse/HIVE-3633
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3633.10.patch, hive.3633.1.patch, 
 hive.3633.2.patch, hive.3633.3.patch, hive.3633.4.patch, hive.3633.5.patch, 
 hive.3633.6.patch, hive.3633.7.patch, hive.3633.8.patch, hive.3633.9.patch


 Consider the following query:
 create table smb_bucket_1(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 create table smb_bucket_2(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 -- load the above tables
 set hive.optimize.bucketmapjoin = true;
 set hive.optimize.bucketmapjoin.sortedmerge = true;
 set hive.input.format = 
 org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
 explain
 select count(*) from
 (
 select /*+mapjoin(a)*/ a.key as key1, b.key as key2, a.value as value1, 
 b.value as value2
 from smb_bucket_1 a join smb_bucket_2 b on a.key = b.key)
 subq;
 The above query does not use a sort-merge join. Supporting this would be very 
 useful, since we automatically convert queries to take advantage of sorting 
 and bucketing properties for joins.



[jira] [Commented] (HIVE-3794) Oracle upgrade script for Hive is broken

2012-12-17 Thread Deepesh Khandelwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534161#comment-13534161
 ] 

Deepesh Khandelwal commented on HIVE-3794:
--

As per your suggestion I tried to log into Phabricator to create an entry, but 
I cannot get past the login page using my GitHub account. I looked around for 
solutions and see that two other folks have raised the same problem, with no 
responses:
http://mail-archives.apache.org/mod_mbox/hive-dev/201211.mbox/%3cccd2fafb.40f1%25harish.but...@sap.com%3E
Any help in moving this forward is appreciated.

 Oracle upgrade script for Hive is broken
 

 Key: HIVE-3794
 URL: https://issues.apache.org/jira/browse/HIVE-3794
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
 Environment: Oracle 11g r2
Reporter: Deepesh Khandelwal
Priority: Critical
 Fix For: 0.10.0

 Attachments: HIVE-3794.patch


 As part of Hive configuration for Oracle I ran the schema creation script for 
 Oracle. Here is what I observed when ran the script:
 % sqlplus hive/hive@xe
 SQL*Plus: Release 11.2.0.2.0 Production on Mon Dec 10 18:47:11 2012
 Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 Connected to:
 Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
 SQL @scripts/metastore/upgrade/oracle/hive-schema-0.10.0.oracle.sql;
 .
 ALTER TABLE SKEWED_STRING_LIST_VALUES ADD CONSTRAINT 
 SKEWED_STRING_LIST_VALUES_FK1 FOREIGN KEY (STRING_LIST_ID) REFERENCES 
 SKEWED_STRING_LIST (STRING_LIST_ID) INITIALLY DEFERRED
   
  *
 ERROR at line 1:
 {color:red}ORA-00904: STRING_LIST_ID: invalid identifier{color}
 .
 ALTER TABLE SKEWED_STRING_LIST_VALUES ADD CONSTRAINT 
 SKEWED_STRING_LIST_VALUES_FK1 FOREIGN KEY (STRING_LIST_ID) REFERENCES 
 SKEWED_STRING_LIST (STRING_LIST_ID) INITIALLY DEFERRED
   
  *
 ERROR at line 1:
 {color:red}ORA-00904: STRING_LIST_ID: invalid identifier{color}
 Table created.
 Table altered.
 Table altered.
 CREATE TABLE SKEWED_COL_VALUE_LOCATION_MAPPING
  *
 ERROR at line 1:
 {color:red}ORA-00972: identifier is too long{color}
 Table created.
 Table created.
 ALTER TABLE SKEWED_COL_VALUE_LOCATION_MAPPING ADD CONSTRAINT 
 SKEWED_COL_VALUE_LOCATION_MAPPING_PK PRIMARY KEY (SD_ID,STRING_LIST_ID_KID)
 *
 ERROR at line 1:
 {color:red}ORA-00972: identifier is too long{color}
 ALTER TABLE SKEWED_COL_VALUE_LOCATION_MAPPING ADD CONSTRAINT 
 SKEWED_COL_VALUE_LOCATION_MAPPING_FK1 FOREIGN KEY (STRING_LIST_ID_KID) 
 REFERENCES SKEWED_STRING_LIST (STRING_LIST_ID) INITIALLY DEFERRED
 *
 ERROR at line 1:
 {color:red}ORA-00972: identifier is too long{color}
 ALTER TABLE SKEWED_COL_VALUE_LOCATION_MAPPING ADD CONSTRAINT 
 SKEWED_COL_VALUE_LOCATION_MAPPING_FK2 FOREIGN KEY (SD_ID) REFERENCES SDS 
 (SD_ID) INITIALLY DEFERRED
 *
 ERROR at line 1:
 {color:red}ORA-00972: identifier is too long{color}
 Table created.
 Table altered.
 ALTER TABLE SKEWED_VALUES ADD CONSTRAINT SKEWED_VALUES_FK1 FOREIGN KEY 
 (STRING_LIST_ID_EID) REFERENCES SKEWED_STRING_LIST (STRING_LIST_ID) INITIALLY 
 DEFERRED
   
  *
 ERROR at line 1:
 {color:red}ORA-00904: STRING_LIST_ID: invalid identifier{color}
 Basically there are two issues here with the Oracle SQL script:
 (1) Table SKEWED_STRING_LIST is created with the column SD_ID. Later the 
 script tries to reference the STRING_LIST_ID column in SKEWED_STRING_LIST, 
 which is obviously not there. Comparing the SQL with that for other flavors, 
 it seems the column should be STRING_LIST_ID.
 (2) Table name SKEWED_COL_VALUE_LOCATION_MAPPING is too long for Oracle, 
 which limits identifier names to 30 characters. Also impacted are identifiers 
 SKEWED_COL_VALUE_LOCATION_MAPPING_PK and 
 SKEWED_COL_VALUE_LOCATION_MAPPING_FK1.
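The 30-character limit in issue (2) is easy to check mechanically. A small
illustrative sketch (identifier names taken from the script output above;
the check itself is hypothetical, not part of the Hive scripts):

```python
# Oracle 11g limits identifier names to 30 characters; longer names fail
# with ORA-00972 "identifier is too long".
ORACLE_MAX_IDENTIFIER = 30

identifiers = [
    "SKEWED_STRING_LIST_VALUES_FK1",            # 29 chars: fine
    "SKEWED_COL_VALUE_LOCATION_MAPPING",        # 33 chars: too long
    "SKEWED_COL_VALUE_LOCATION_MAPPING_PK",     # 36 chars: too long
    "SKEWED_COL_VALUE_LOCATION_MAPPING_FK1",    # 37 chars: too long
]

too_long = [name for name in identifiers if len(name) > ORACLE_MAX_IDENTIFIER]

assert too_long == identifiers[1:]   # the three MAPPING identifiers fail
```

Running a check like this over the DDL before shipping the upgrade script
would have caught all three failing identifiers.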



[jira] [Updated] (HIVE-3795) NPE in SELECT when WHERE-clause is an and/or/not operation involving null

2012-12-17 Thread Xiao Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Jiang updated HIVE-3795:
-

Attachment: HIVE-3795.2.patch.txt

 NPE in SELECT when WHERE-clause is an and/or/not operation involving null
 -

 Key: HIVE-3795
 URL: https://issues.apache.org/jira/browse/HIVE-3795
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Xiao Jiang
Assignee: Xiao Jiang
Priority: Trivial
 Attachments: HIVE-3795.1.patch.txt, HIVE-3795.2.patch.txt


 Sometimes users forget to quote date constants in queries. For example, 
 SELECT * FROM some_table WHERE ds >= 2012-12-10 and ds <= 2012-12-12;. In 
 such cases, if the WHERE-clause contains an and/or/not operation, it would 
 throw an NPE. That's because PcrExprProcFactory in ql/optimizer forgot to 
 check for null. 
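The missing null check can be illustrated with a small, hypothetical sketch of
three-valued boolean folding (None standing in for a sub-expression the
partition-condition remover could not evaluate). This is not Hive's actual
PcrExprProcFactory code, just the shape of the check it was missing:

```python
def fold_and(lhs, rhs):
    """Partially evaluate `lhs AND rhs`, where None means 'unknown'.
    The None branch must be handled explicitly; treating an unknown
    result as if it were a concrete boolean is the kind of mistake
    that surfaces as an NPE in the Java optimizer."""
    if lhs is False or rhs is False:
        return False                 # false AND anything is false
    if lhs is None or rhs is None:
        return None                  # unknown unless the other side is false
    return lhs and rhs

assert fold_and(True, None) is None
assert fold_and(False, None) is False
assert fold_and(True, True) is True
```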



[jira] [Updated] (HIVE-3795) NPE in SELECT when WHERE-clause is an and/or/not operation involving null

2012-12-17 Thread Xiao Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Jiang updated HIVE-3795:
-

Status: Patch Available  (was: Open)

 NPE in SELECT when WHERE-clause is an and/or/not operation involving null
 -

 Key: HIVE-3795
 URL: https://issues.apache.org/jira/browse/HIVE-3795
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Xiao Jiang
Assignee: Xiao Jiang
Priority: Trivial
 Attachments: HIVE-3795.1.patch.txt, HIVE-3795.2.patch.txt


 Sometimes users forget to quote date constants in queries. For example, 
 SELECT * FROM some_table WHERE ds >= 2012-12-10 and ds <= 2012-12-12;. In 
 such cases, if the WHERE-clause contains an and/or/not operation, it would 
 throw an NPE. That's because PcrExprProcFactory in ql/optimizer forgot to 
 check for null. 



[jira] [Commented] (HIVE-3645) RCFileWriter does not implement the right function to support Federation

2012-12-17 Thread Mikhail Bautin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534210#comment-13534210
 ] 

Mikhail Bautin commented on HIVE-3645:
--

I am getting the following compilation errors with this patch on branch-0.9:

{code}
ivy-retrieve-hadoop-shim:
 [echo] Project: shims
[javac] Compiling 1 source file to /wd/hive/build/shims/classes
[javac] 
/wd/hive/shims/src/0.23/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java:118:
 error: method getDefaultBlockSize in class FileSystem cannot be applied to 
given types;
[javac] return fs.getDefaultBlockSize(path);
[javac]  ^
[javac]   required: no arguments
[javac]   found: Path
[javac]   reason: actual and formal argument lists differ in length
[javac] 
/wd/hive/shims/src/0.23/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java:123:
 error: method getDefaultReplication in class FileSystem cannot be applied to 
given types;
[javac] return fs.getDefaultReplication(path);
[javac]  ^
[javac]   required: no arguments
[javac]   found: Path
[javac]   reason: actual and formal argument lists differ in length
[javac] 2 errors

BUILD FAILED
/wd/hive/build.xml:319: The following error occurred while executing this line:
/wd/hive/build.xml:169: The following error occurred while executing this line:
/wd/hive/shims/build.xml:90: The following error occurred while executing this 
line:
/wd/hive/shims/build.xml:93: The following error occurred while executing this 
line:
/wd/hive/shims/build.xml:82: Compile failed; see the compiler error output for 
details.
{code}

I am building with the default Hadoop version.

 RCFileWriter does not implement the right function to support Federation
 

 Key: HIVE-3645
 URL: https://issues.apache.org/jira/browse/HIVE-3645
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.9.0, 0.10.0
 Environment: Hadoop 0.23.3 federation, Hive 0.9 and Pig 0.10
Reporter: Viraj Bhat
Assignee: Arup Malakar
 Fix For: 0.11

 Attachments: HIVE_3645_branch_0.patch, HIVE_3645_trunk_0.patch


 Create a table using Hive DDL
 {code}
 CREATE TABLE tmp_hcat_federated_numbers_part_1 (
   id   int,  
   intnum   int,
   floatnum float
 )partitioned by (
   part1 string,
   part2 string
 )
 STORED AS rcfile
 LOCATION 'viewfs:///database/tmp_hcat_federated_numbers_part_1';
 {code}
 Populate it using Pig:
 {code}
 A = load 'default.numbers_pig' using org.apache.hcatalog.pig.HCatLoader();
 B = filter A by id =  500;
 C = foreach B generate (int)id, (int)intnum, (float)floatnum;
 store C into
 'default.tmp_hcat_federated_numbers_part_1'
 using org.apache.hcatalog.pig.HCatStorer
('part1=pig, part2=hcat_pig_insert',
 'id: int,intnum: int,floatnum: float');
 {code}
 Generates the following error when running on a Federated Cluster:
 {quote}
 2012-10-29 20:40:25,011 [main] ERROR
 org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to recreate
 exception from backed error: AttemptID:attempt_1348522594824_0846_m_00_3
 Info:Error: org.apache.hadoop.fs.viewfs.NotInMountpointException:
 getDefaultReplication on empty path is invalid
 at
 org.apache.hadoop.fs.viewfs.ViewFileSystem.getDefaultReplication(ViewFileSystem.java:479)
 at org.apache.hadoop.hive.ql.io.RCFile$Writer.init(RCFile.java:723)
 at org.apache.hadoop.hive.ql.io.RCFile$Writer.init(RCFile.java:705)
 at
 org.apache.hadoop.hive.ql.io.RCFileOutputFormat.getRecordWriter(RCFileOutputFormat.java:86)
 at
 org.apache.hcatalog.mapreduce.FileOutputFormatContainer.getRecordWriter(FileOutputFormatContainer.java:100)
 at
 org.apache.hcatalog.mapreduce.HCatOutputFormat.getRecordWriter(HCatOutputFormat.java:228)
 at
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
 at
 org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.init(MapTask.java:587)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:706)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:157)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1212)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:152)
 {quote}


[jira] [Commented] (HIVE-2693) Add DECIMAL data type

2012-12-17 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534221#comment-13534221
 ] 

Gunther Hagleitner commented on HIVE-2693:
--

It should be possible to serialize a big decimal as 
sign + scale + length + int-digits. That way the natural ordering should be 
preserved at the binary level.

 Add DECIMAL data type
 -

 Key: HIVE-2693
 URL: https://issues.apache.org/jira/browse/HIVE-2693
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor, Types
Affects Versions: 0.10.0
Reporter: Carl Steinbach
Assignee: Prasad Mujumdar
 Attachments: 2693_7.patch, 2693_8.patch, 2693_fix_all_tests1.patch, 
 HIVE-2693-10.patch, HIVE-2693-11.patch, HIVE-2693-1.patch.txt, 
 HIVE-2693-all.patch, HIVE-2693-fix.patch, HIVE-2693.patch, 
 HIVE-2693-take3.patch, HIVE-2693-take4.patch


 Add support for the DECIMAL data type. HIVE-2272 (TIMESTAMP) provides a nice 
 template for how to do this.



[jira] [Commented] (HIVE-3645) RCFileWriter does not implement the right function to support Federation

2012-12-17 Thread Arup Malakar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534224#comment-13534224
 ] 

Arup Malakar commented on HIVE-3645:


From: 
https://issues.apache.org/jira/browse/HIVE-3754?focusedCommentId=13506596&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13506596
You can use either of
{code}
ant clean package -Dhadoop.version=0.23.3 -Dhadoop-0.23.version=0.23.3 
-Dhadoop.mr.rev=23
ant clean package -Dhadoop.version=2.0.0-alpha 
-Dhadoop-0.23.version=2.0.0-alpha -Dhadoop.mr.rev=23
{code}

See HIVE-3754 for more details.

I also see that the default hadoop 23 version is 0.23.3 for branch-0.9 as 
well, so this should have worked without the arguments:
{code}
hadoop-0.23.version=0.23.3
{code}

 RCFileWriter does not implement the right function to support Federation
 

 Key: HIVE-3645
 URL: https://issues.apache.org/jira/browse/HIVE-3645
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.9.0, 0.10.0
 Environment: Hadoop 0.23.3 federation, Hive 0.9 and Pig 0.10
Reporter: Viraj Bhat
Assignee: Arup Malakar
 Fix For: 0.11

 Attachments: HIVE_3645_branch_0.patch, HIVE_3645_trunk_0.patch


 Create a table using Hive DDL
 {code}
 CREATE TABLE tmp_hcat_federated_numbers_part_1 (
   id   int,  
   intnum   int,
   floatnum float
 )partitioned by (
   part1 string,
   part2 string
 )
 STORED AS rcfile
 LOCATION 'viewfs:///database/tmp_hcat_federated_numbers_part_1';
 {code}
 Populate it using Pig:
 {code}
 A = load 'default.numbers_pig' using org.apache.hcatalog.pig.HCatLoader();
 B = filter A by id =  500;
 C = foreach B generate (int)id, (int)intnum, (float)floatnum;
 store C into
 'default.tmp_hcat_federated_numbers_part_1'
 using org.apache.hcatalog.pig.HCatStorer
('part1=pig, part2=hcat_pig_insert',
 'id: int,intnum: int,floatnum: float');
 {code}
 Generates the following error when running on a Federated Cluster:
 {quote}
 2012-10-29 20:40:25,011 [main] ERROR
 org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to recreate
 exception from backed error: AttemptID:attempt_1348522594824_0846_m_00_3
 Info:Error: org.apache.hadoop.fs.viewfs.NotInMountpointException:
 getDefaultReplication on empty path is invalid
 at
 org.apache.hadoop.fs.viewfs.ViewFileSystem.getDefaultReplication(ViewFileSystem.java:479)
 at org.apache.hadoop.hive.ql.io.RCFile$Writer.init(RCFile.java:723)
 at org.apache.hadoop.hive.ql.io.RCFile$Writer.init(RCFile.java:705)
 at
 org.apache.hadoop.hive.ql.io.RCFileOutputFormat.getRecordWriter(RCFileOutputFormat.java:86)
 at
 org.apache.hcatalog.mapreduce.FileOutputFormatContainer.getRecordWriter(FileOutputFormatContainer.java:100)
 at
 org.apache.hcatalog.mapreduce.HCatOutputFormat.getRecordWriter(HCatOutputFormat.java:228)
 at
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
 at
 org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.init(MapTask.java:587)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:706)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:157)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1212)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:152)
 {quote}



[jira] [Commented] (HIVE-2693) Add DECIMAL data type

2012-12-17 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534241#comment-13534241
 ] 

Gunther Hagleitner commented on HIVE-2693:
--

Actually, that's not going to work because of the length. But for this case I 
don't think we need to encode the length: the Hive key will put the bytes in a 
BytesWritable and skip the length header on comparison.

Also, the sign has to be 1 for positive and 0 for negative (to preserve 
order), and the scale has to be negated for negative numbers.
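A hypothetical sketch of such an order-preserving encoding (not Hive's actual
implementation) that combines the points above: a sign byte so negatives sort
first, a biased exponent that is complemented for negatives, complemented
digits, and a terminator byte instead of a length header so plain byte-wise
comparison matches numeric order:

```python
from decimal import Decimal

def sort_key(d: Decimal) -> bytes:
    """Illustrative order-preserving byte encoding for decimals.
    Layout: sign byte | biased exponent | digits | terminator."""
    sign, digits, exp = d.as_tuple()
    if not any(digits):
        return bytes([1])                    # zero sorts between neg and pos
    while digits[-1] == 0:                   # normalize: drop trailing zeros
        digits, exp = digits[:-1], exp + 1
    e = exp + len(digits)                    # value = 0.d1d2... * 10**e
    biased = e + 128                         # assumes -128 <= e < 128
    if sign == 0:                            # non-negative
        return (bytes([2, biased])
                + bytes(0x30 + x for x in digits) + b"\x00")
    # negative: complement exponent and digits so bigger magnitude sorts
    # lower; terminator 0xff makes a digit string sort after its extensions
    return (bytes([0, 255 - biased])
            + bytes(0x30 + (9 - x) for x in digits) + b"\xff")

vals = [Decimal(s) for s in
        ["-10.5", "-2", "-1.25", "-1.2", "0", "0.5", "1.2", "1.25", "10"]]
assert sorted(vals, key=sort_key) == vals    # byte order == numeric order
```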


 Add DECIMAL data type
 -

 Key: HIVE-2693
 URL: https://issues.apache.org/jira/browse/HIVE-2693
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor, Types
Affects Versions: 0.10.0
Reporter: Carl Steinbach
Assignee: Prasad Mujumdar
 Attachments: 2693_7.patch, 2693_8.patch, 2693_fix_all_tests1.patch, 
 HIVE-2693-10.patch, HIVE-2693-11.patch, HIVE-2693-1.patch.txt, 
 HIVE-2693-all.patch, HIVE-2693-fix.patch, HIVE-2693.patch, 
 HIVE-2693-take3.patch, HIVE-2693-take4.patch


 Add support for the DECIMAL data type. HIVE-2272 (TIMESTAMP) provides a nice 
 template for how to do this.



[jira] [Commented] (HIVE-3795) NPE in SELECT when WHERE-clause is an and/or/not operation involving null

2012-12-17 Thread Gang Tim Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534252#comment-13534252
 ] 

Gang Tim Liu commented on HIVE-3795:


very minor comment.


 NPE in SELECT when WHERE-clause is an and/or/not operation involving null
 -

 Key: HIVE-3795
 URL: https://issues.apache.org/jira/browse/HIVE-3795
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Xiao Jiang
Assignee: Xiao Jiang
Priority: Trivial
 Attachments: HIVE-3795.1.patch.txt, HIVE-3795.2.patch.txt


 Sometimes users forget to quote date constants in queries. For example, 
 SELECT * FROM some_table WHERE ds >= 2012-12-10 and ds <= 2012-12-12;. In 
 such cases, if the WHERE-clause contains an and/or/not operation, it would 
 throw an NPE. That's because PcrExprProcFactory in ql/optimizer forgot to 
 check for null. 



[jira] [Updated] (HIVE-3795) NPE in SELECT when WHERE-clause is an and/or/not operation involving null

2012-12-17 Thread Gang Tim Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gang Tim Liu updated HIVE-3795:
---

Status: Open  (was: Patch Available)

 NPE in SELECT when WHERE-clause is an and/or/not operation involving null
 -

 Key: HIVE-3795
 URL: https://issues.apache.org/jira/browse/HIVE-3795
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Xiao Jiang
Assignee: Xiao Jiang
Priority: Trivial
 Attachments: HIVE-3795.1.patch.txt, HIVE-3795.2.patch.txt


 Sometimes users forget to quote date constants in queries. For example, 
 SELECT * FROM some_table WHERE ds >= 2012-12-10 and ds <= 2012-12-12;. In 
 such cases, if the WHERE-clause contains an and/or/not operation, it would 
 throw an NPE. That's because PcrExprProcFactory in ql/optimizer forgot to 
 check for null. 



[jira] [Commented] (HIVE-3796) Multi-insert involving bucketed/sorted table turns off merging on all outputs

2012-12-17 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534262#comment-13534262
 ] 

Kevin Wilfong commented on HIVE-3796:
-

Updated test cases. The problem was worse than I'd thought. Because it turns 
off merging using configs, and never turns it back on, merging remains off for 
future queries run in the same session. This means that the plans for some 
tests now include merges where they did not previously.

My original full test run passed because of the issues with ptesting.
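The session-state pitfall described above (flipping a config off for one query
and never restoring it) can be sketched as follows. hive.merge.mapfiles is a
real merge-related Hive setting, but the surrounding code is purely
illustrative, not Hive's planner:

```python
conf = {"hive.merge.mapfiles": True}   # session-level config, shared by queries

def plan_bucketed_insert(conf):
    """Plan one bucketed output with merging disabled, without leaking
    the setting into later queries in the same session (sketch only)."""
    saved = conf["hive.merge.mapfiles"]
    conf["hive.merge.mapfiles"] = False      # merging must be off for this output
    try:
        pass                                 # ... generate the plan here ...
    finally:
        conf["hive.merge.mapfiles"] = saved  # restore for subsequent queries

plan_bucketed_insert(conf)
assert conf["hive.merge.mapfiles"] is True   # setting did not leak
```

Without the save/restore in the finally block, every later query in the
session would silently skip its merge stage, which is exactly what the test
plans showed.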

 Multi-insert involving bucketed/sorted table turns off merging on all outputs
 -

 Key: HIVE-3796
 URL: https://issues.apache.org/jira/browse/HIVE-3796
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.11
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3796.1.patch.txt, HIVE-3796.2.patch.txt, 
 HIVE-3796.3.patch.txt


 When a multi-insert query has at least one output that is bucketed, merging 
 is turned off for all outputs, rather than just the bucketed ones.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3796) Multi-insert involving bucketed/sorted table turns off merging on all outputs

2012-12-17 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-3796:


Attachment: HIVE-3796.3.patch.txt

 Multi-insert involving bucketed/sorted table turns off merging on all outputs
 -

 Key: HIVE-3796
 URL: https://issues.apache.org/jira/browse/HIVE-3796
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.11
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3796.1.patch.txt, HIVE-3796.2.patch.txt, 
 HIVE-3796.3.patch.txt


 When a multi-insert query has at least one output that is bucketed, merging 
 is turned off for all outputs, rather than just the bucketed ones.



[jira] [Updated] (HIVE-3796) Multi-insert involving bucketed/sorted table turns off merging on all outputs

2012-12-17 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-3796:


Status: Patch Available  (was: Open)

 Multi-insert involving bucketed/sorted table turns off merging on all outputs
 -

 Key: HIVE-3796
 URL: https://issues.apache.org/jira/browse/HIVE-3796
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.11
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3796.1.patch.txt, HIVE-3796.2.patch.txt, 
 HIVE-3796.3.patch.txt


 When a multi-insert query has at least one output that is bucketed, merging 
 is turned off for all outputs, rather than just the bucketed ones.



[jira] [Commented] (HIVE-3552) HIVE-3552 performant manner for performing cubes/rollups/grouping sets for a high number of grouping set keys

2012-12-17 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534273#comment-13534273
 ] 

Kevin Wilfong commented on HIVE-3552:
-

Added a couple of comments on Phabricator.

 HIVE-3552 performant manner for performing cubes/rollups/grouping sets for a 
 high number of grouping set keys
 -

 Key: HIVE-3552
 URL: https://issues.apache.org/jira/browse/HIVE-3552
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3552.1.patch, hive.3552.2.patch, hive.3552.3.patch, 
 hive.3552.4.patch


 This is a follow-up for HIVE-3433.
 Had an offline discussion with Sambavi - she pointed out a scenario where the
 implementation in HIVE-3433 will not scale. Assume that the user is performing
 a cube on many columns, say '8' columns. So, each row would generate 256 rows
 for the hash table, which may kill the current group by implementation.
 A better implementation would be to add an additional MR job - in the first
 MR job, perform the group by assuming there was no cube. Add another MR job,
 where you would perform the cube. The assumption is that the group by would
 have decreased the output data significantly, and the rows would appear in the
 order of grouping keys, which has a higher probability of hitting the hash table.
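To make the blow-up concrete: a cube over n grouping columns expands every input row into one row per grouping set, i.e. 2^n rows, so 8 columns already mean 256 hash-table entries per input row. A quick illustration (Python sketch for counting purposes only, not Hive code):

```python
from itertools import combinations

def grouping_sets(cols):
    # A cube aggregates over every subset of the grouping columns,
    # so each input row contributes 2^len(cols) rows to the hash table.
    return [c for r in range(len(cols) + 1) for c in combinations(cols, r)]

print(len(grouping_sets(['c%d' % i for i in range(8)])))  # 256
```

This is why deferring the cube expansion to a second MR job, after the plain group by has shrunk the data, helps.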



[jira] [Updated] (HIVE-3552) HIVE-3552 performant manner for performing cubes/rollups/grouping sets for a high number of grouping set keys

2012-12-17 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-3552:


Status: Open  (was: Patch Available)

 HIVE-3552 performant manner for performing cubes/rollups/grouping sets for a 
 high number of grouping set keys
 -

 Key: HIVE-3552
 URL: https://issues.apache.org/jira/browse/HIVE-3552
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3552.1.patch, hive.3552.2.patch, hive.3552.3.patch, 
 hive.3552.4.patch


 This is a follow-up for HIVE-3433.
 Had an offline discussion with Sambavi - she pointed out a scenario where the
 implementation in HIVE-3433 will not scale. Assume that the user is performing
 a cube on many columns, say '8' columns. So, each row would generate 256 rows
 for the hash table, which may kill the current group by implementation.
 A better implementation would be to add an additional MR job - in the first
 MR job, perform the group by assuming there was no cube. Add another MR job,
 where you would perform the cube. The assumption is that the group by would
 have decreased the output data significantly, and the rows would appear in the
 order of grouping keys, which has a higher probability of hitting the hash table.



[jira] [Commented] (HIVE-3805) Resolve TODO in TUGIBasedProcessor

2012-12-17 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534278#comment-13534278
 ] 

Kevin Wilfong commented on HIVE-3805:
-

Thanks for pointing that out, Ashutosh.  We will likely be interested in 
something like that in the near future.

In the immediate future, we just have one use case for which this quick and 
dirty solution should be sufficient.

 Resolve TODO in TUGIBasedProcessor
 --

 Key: HIVE-3805
 URL: https://issues.apache.org/jira/browse/HIVE-3805
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.11
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3805.1.patch.txt


 There's a TODO in TUGIBasedProcessor
 // TODO get rid of following reflection after THRIFT-1465 is fixed.
 Now that we have upgraded to Thrift 0.9, THRIFT-1465 is available.
 This will also fix an issue where fb303 counters cannot be collected if the 
 TUGIBasedProcessor is used.



[jira] [Commented] (HIVE-2693) Add DECIMAL data type

2012-12-17 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534290#comment-13534290
 ] 

Gunther Hagleitner commented on HIVE-2693:
--

Wrong again, but getting closer. The lengths of strings/byte arrays are encoded 
with a trailing \0 in BinarySortableSerDe. So the encoding should be

<sign byte><scale int><string of digits>

sign byte: 1 for >= 0, 0 for < 0
scale: scale integer if sign byte is 1, -scale integer otherwise
string of digits: zero-terminated string of digits

I'm trying this out right now. 
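A minimal sketch of that layout (illustration only -- the helper name and the biased scale byte are mine, not BinarySortableSerDe's actual code):

```python
def encode_decimal(value_sign, scale, digits):
    # Layout: <sign byte><scale int><zero-terminated string of digits>
    sign_byte = bytes([1 if value_sign >= 0 else 0])     # 1 for >= 0, 0 for < 0
    stored_scale = scale if value_sign >= 0 else -scale  # negate scale for negatives
    # Illustration: bias the signed scale into 0..255 so a single byte compares
    # correctly; a real implementation needs a full order-preserving int encoding.
    return sign_byte + bytes([stored_scale + 128]) + digits.encode('ascii') + b'\x00'
```

For example, 132.67 (sign 1, scale 2, digits "13267") encodes as sign byte 0x01, scale byte 0x82, then the NUL-terminated digit string.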


 Add DECIMAL data type
 -

 Key: HIVE-2693
 URL: https://issues.apache.org/jira/browse/HIVE-2693
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor, Types
Affects Versions: 0.10.0
Reporter: Carl Steinbach
Assignee: Prasad Mujumdar
 Attachments: 2693_7.patch, 2693_8.patch, 2693_fix_all_tests1.patch, 
 HIVE-2693-10.patch, HIVE-2693-11.patch, HIVE-2693-1.patch.txt, 
 HIVE-2693-all.patch, HIVE-2693-fix.patch, HIVE-2693.patch, 
 HIVE-2693-take3.patch, HIVE-2693-take4.patch


 Add support for the DECIMAL data type. HIVE-2272 (TIMESTAMP) provides a nice 
 template for how to do this.



[jira] [Commented] (HIVE-2693) Add DECIMAL data type

2012-12-17 Thread Mark Grover (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534319#comment-13534319
 ] 

Mark Grover commented on HIVE-2693:
---

Thanks for your comments, [~hagleitn]. Yeah, the right thing moving forward 
would be to update BinarySortableSerDe to support BigDecimal. When I was 
thinking of the best way to serialize BigDecimal, the sign and scale parts were 
easy, but I wasn't able to come up with a space-efficient way to store an 
arbitrary number of digits so they are in-order byte sortable. Correct me if I 
am wrong, but it seems like you are suggesting 1 byte per digit, which would 
work (if the lengths are equal) but can be dangerous since we are exploding an 
arbitrarily long integer. Having said that, given Hive's historical philosophy 
of ignoring users with malicious intent, I am OK with moving forward with that 
approach.

On a related note, let's talk about the scale.
13267 has a scale of 0 and digits 13267
132.67 has a scale of 2 and digits 13267
132670 has a scale of -1 and digits 13267

So, it seems like a lower scale always means a bigger number for positive 
numbers, so shouldn't we do
{code}
scale: -scale for positive numbers (sign byte 1) and scale for negative 
numbers (sign byte 0)
{code}
instead of 
{code}
scale: scale integer if sign byte 1, -scale integer otherwise
{code}
which you suggested in your previous comment. I am basically asking to flip it 
around, since negative numbers have sign byte 0 and positive numbers have sign byte 1.
BTW, feel free to contact me offline if you want to bounce around some ideas!
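A quick check of that flip for the positive case (illustration only; the 128 bias stands in for a proper order-preserving encoding of the signed scale, and the helper name is mine):

```python
def positive_key(scale, digits):
    # Sign byte 1, then -scale (biased into one byte): a lower scale means a
    # bigger number, so negating the scale makes plain byte comparison sort
    # the bigger number later, as desired.
    return bytes([1, (-scale) + 128]) + digits.encode('ascii') + b'\x00'

# 132.67 (scale 2) < 13267 (scale 0) < 132670 (scale -1), all with digits "13267"
keys = [positive_key(2, '13267'), positive_key(0, '13267'), positive_key(-1, '13267')]
assert sorted(keys) == keys  # byte order agrees with numeric order
```

With the unflipped scale, the same three keys would sort in the reverse (wrong) order.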

 Add DECIMAL data type
 -

 Key: HIVE-2693
 URL: https://issues.apache.org/jira/browse/HIVE-2693
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor, Types
Affects Versions: 0.10.0
Reporter: Carl Steinbach
Assignee: Prasad Mujumdar
 Attachments: 2693_7.patch, 2693_8.patch, 2693_fix_all_tests1.patch, 
 HIVE-2693-10.patch, HIVE-2693-11.patch, HIVE-2693-1.patch.txt, 
 HIVE-2693-all.patch, HIVE-2693-fix.patch, HIVE-2693.patch, 
 HIVE-2693-take3.patch, HIVE-2693-take4.patch


 Add support for the DECIMAL data type. HIVE-2272 (TIMESTAMP) provides a nice 
 template for how to do this.



[jira] [Commented] (HIVE-3633) sort-merge join does not work with sub-queries

2012-12-17 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534334#comment-13534334
 ] 

Kevin Wilfong commented on HIVE-3633:
-

The output of the new testcase appears to be nondeterministic.  Can you update?

I pointed out one nondeterministic query in Phabricator.

 sort-merge join does not work with sub-queries
 --

 Key: HIVE-3633
 URL: https://issues.apache.org/jira/browse/HIVE-3633
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3633.10.patch, hive.3633.1.patch, 
 hive.3633.2.patch, hive.3633.3.patch, hive.3633.4.patch, hive.3633.5.patch, 
 hive.3633.6.patch, hive.3633.7.patch, hive.3633.8.patch, hive.3633.9.patch


 Consider the following query:
 create table smb_bucket_1(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 create table smb_bucket_2(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 -- load the above tables
 set hive.optimize.bucketmapjoin = true;
 set hive.optimize.bucketmapjoin.sortedmerge = true;
 set hive.input.format = 
 org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
 explain
 select count(*) from
 (
 select /*+mapjoin(a)*/ a.key as key1, b.key as key2, a.value as value1, 
 b.value as value2
 from smb_bucket_1 a join smb_bucket_2 b on a.key = b.key)
 subq;
 The above query does not use sort-merge join. This would be very useful as we 
 automatically convert the queries to use sorting and bucketing properties for 
 join.



[jira] [Updated] (HIVE-3633) sort-merge join does not work with sub-queries

2012-12-17 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-3633:


Status: Open  (was: Patch Available)

 sort-merge join does not work with sub-queries
 --

 Key: HIVE-3633
 URL: https://issues.apache.org/jira/browse/HIVE-3633
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3633.10.patch, hive.3633.1.patch, 
 hive.3633.2.patch, hive.3633.3.patch, hive.3633.4.patch, hive.3633.5.patch, 
 hive.3633.6.patch, hive.3633.7.patch, hive.3633.8.patch, hive.3633.9.patch


 Consider the following query:
 create table smb_bucket_1(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 create table smb_bucket_2(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 -- load the above tables
 set hive.optimize.bucketmapjoin = true;
 set hive.optimize.bucketmapjoin.sortedmerge = true;
 set hive.input.format = 
 org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
 explain
 select count(*) from
 (
 select /*+mapjoin(a)*/ a.key as key1, b.key as key2, a.value as value1, 
 b.value as value2
 from smb_bucket_1 a join smb_bucket_2 b on a.key = b.key)
 subq;
 The above query does not use sort-merge join. This would be very useful as we 
 automatically convert the queries to use sorting and bucketing properties for 
 join.



[jira] [Updated] (HIVE-3795) NPE in SELECT when WHERE-clause is an and/or/not operation involving null

2012-12-17 Thread Xiao Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Jiang updated HIVE-3795:
-

Status: Patch Available  (was: Open)

 NPE in SELECT when WHERE-clause is an and/or/not operation involving null
 -

 Key: HIVE-3795
 URL: https://issues.apache.org/jira/browse/HIVE-3795
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Xiao Jiang
Assignee: Xiao Jiang
Priority: Trivial
 Attachments: HIVE-3795.1.patch.txt, HIVE-3795.2.patch.txt, 
 HIVE-3795.3.patch.txt


 Sometimes users forget to quote date constants in queries. For example, 
 SELECT * FROM some_table WHERE ds >= 2012-12-10 and ds <= 2012-12-12; . In 
 such cases, if the WHERE-clause contains an and/or/not operation, it would 
 throw an NPE. That's because PcrExprProcFactory in ql/optimizer forgot to 
 check for null. 
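The surprise behind such queries is that an unquoted date constant is not a date at all -- it parses as integer arithmetic, so the predicate compares the ds column against a plain number (a sketch of the arithmetic, not Hive's parser):

```python
# An unquoted constant like 2012-12-10 is read as subtraction:
assert 2012 - 12 - 10 == 1990
assert 2012 - 12 - 12 == 1988
```

In Hive, the resulting string-vs-number comparison can fold to a null constant during optimization, which is the value PcrExprProcFactory failed to check.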



[jira] [Updated] (HIVE-3795) NPE in SELECT when WHERE-clause is an and/or/not operation involving null

2012-12-17 Thread Xiao Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Jiang updated HIVE-3795:
-

Attachment: HIVE-3795.3.patch.txt

 NPE in SELECT when WHERE-clause is an and/or/not operation involving null
 -

 Key: HIVE-3795
 URL: https://issues.apache.org/jira/browse/HIVE-3795
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Xiao Jiang
Assignee: Xiao Jiang
Priority: Trivial
 Attachments: HIVE-3795.1.patch.txt, HIVE-3795.2.patch.txt, 
 HIVE-3795.3.patch.txt


 Sometimes users forget to quote date constants in queries. For example, 
 SELECT * FROM some_table WHERE ds >= 2012-12-10 and ds <= 2012-12-12; . In 
 such cases, if the WHERE-clause contains an and/or/not operation, it would 
 throw an NPE. That's because PcrExprProcFactory in ql/optimizer forgot to 
 check for null. 



[jira] [Commented] (HIVE-3731) Ant target to create a Debian package

2012-12-17 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534361#comment-13534361
 ] 

Phabricator commented on HIVE-3731:
---

dhruba has resigned from the revision [jira] [HIVE-3731] Ant target to create 
a Debian package.

REVISION DETAIL
  https://reviews.facebook.net/D6879

To: njain, heyongqiang, raghotham, cwsteinbach, ashutoshc, JIRA, zshao, nzhang, 
jsichi, pauly, amareshwarisr, mbautin


 Ant target to create a Debian package
 -

 Key: HIVE-3731
 URL: https://issues.apache.org/jira/browse/HIVE-3731
 Project: Hive
  Issue Type: Improvement
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin
Priority: Minor
 Attachments: D6879.1.patch


 We need an Ant target to generate a Debian package with Hive binary 
 distribution.



[jira] [Commented] (HIVE-3694) Generate test jars and publish them to Maven

2012-12-17 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534360#comment-13534360
 ] 

Phabricator commented on HIVE-3694:
---

dhruba has resigned from the revision [jira] [HIVE-3694] Generate 
hive-exec-test jar and publish it to Maven locally.

REVISION DETAIL
  https://reviews.facebook.net/D6843

To: ashutoshc, JIRA, zshao, njain, raghotham, heyongqiang, nzhang, jsichi, 
pauly, amareshwarisr, cwsteinbach, mbautin


 Generate test jars and publish them to Maven
 

 Key: HIVE-3694
 URL: https://issues.apache.org/jira/browse/HIVE-3694
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure
Reporter: Mikhail Bautin
Priority: Minor
 Attachments: D6843.1.patch, D6843.2.patch, D6843.3.patch


 It should be possible to generate Hive test jars and publish them to Maven so 
 that other projects that rely on Hive or extend it could reuse its test 
 library.



[jira] [Commented] (HIVE-3772) Fix a concurrency bug in LazyBinaryUtils due to a static field (patch by Reynold Xin)

2012-12-17 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534359#comment-13534359
 ] 

Phabricator commented on HIVE-3772:
---

dhruba has resigned from the revision [jira] [HIVE-3772] Fix a concurrency bug 
in LazyBinaryUtils due to a static field (patch by Reynold Xin).

REVISION DETAIL
  https://reviews.facebook.net/D7155

To: ashutoshc, njain, raghotham, JIRA, zshao, heyongqiang, nzhang, jsichi, 
pauly, amareshwarisr, cwsteinbach, mbautin


 Fix a concurrency bug in LazyBinaryUtils due to a static field (patch by 
 Reynold Xin)
 -

 Key: HIVE-3772
 URL: https://issues.apache.org/jira/browse/HIVE-3772
 Project: Hive
  Issue Type: Bug
Reporter: Mikhail Bautin
 Attachments: D7155.1.patch, HIVE-3772-2012-12-04.patch


 Creating a JIRA for [~rxin]'s patch needed by the Shark project. 
 https://github.com/amplab/hive/commit/17e1c3dd2f6d8eca767115dc46d5a880aed8c765
 writeVLong should not use a static field due to concurrency concerns.
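The fix amounts to keeping all scratch state local to the call. Below is a hedged Python sketch of a Hadoop-WritableUtils-style writeVLong with a per-call buffer (the wire format mirrors WritableUtils; it is not necessarily LazyBinaryUtils byte-for-byte):

```python
def write_vlong(i: int) -> bytes:
    # Variable-length long, Hadoop WritableUtils style. All state is local,
    # so concurrent callers cannot clobber each other (the bug was a static
    # scratch byte array shared across threads).
    if -112 <= i <= 127:
        return bytes([i & 0xFF])          # small values fit in one byte
    length = -112
    if i < 0:
        i = ~i                            # store the complement of negatives
        length = -120
    tmp = i
    while tmp != 0:                       # count payload bytes into `length`
        tmp >>= 8
        length -= 1
    out = bytearray([length & 0xFF])      # first byte encodes sign and length
    n = -(length + 120) if length < -120 else -(length + 112)
    for idx in range(n, 0, -1):           # big-endian payload bytes
        out.append((i >> ((idx - 1) * 8)) & 0xFF)
    return bytes(out)
```

For example, 128 encodes as the length byte 0x8F followed by 0x80, with no shared buffer anywhere.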



[jira] [Updated] (HIVE-3527) Allow CREATE TABLE LIKE command to take TBLPROPERTIES

2012-12-17 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-3527:


Attachment: HIVE-3527.3.patch.txt

 Allow CREATE TABLE LIKE command to take TBLPROPERTIES
 -

 Key: HIVE-3527
 URL: https://issues.apache.org/jira/browse/HIVE-3527
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3527.1.patch.txt, hive.3527.2.patch, 
 HIVE-3527.3.patch.txt, HIVE-3527.D5883.1.patch


 CREATE TABLE ... LIKE ... commands currently don't take TBLPROPERTIES.  I 
 think it would be a useful feature.



[jira] [Commented] (HIVE-3527) Allow CREATE TABLE LIKE command to take TBLPROPERTIES

2012-12-17 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534390#comment-13534390
 ] 

Kevin Wilfong commented on HIVE-3527:
-

Updated revision here
https://reviews.facebook.net/D5847

and in
HIVE-3527.3.patch.txt
attached to the JIRA

 Allow CREATE TABLE LIKE command to take TBLPROPERTIES
 -

 Key: HIVE-3527
 URL: https://issues.apache.org/jira/browse/HIVE-3527
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3527.1.patch.txt, hive.3527.2.patch, 
 HIVE-3527.3.patch.txt, HIVE-3527.D5883.1.patch


 CREATE TABLE ... LIKE ... commands currently don't take TBLPROPERTIES.  I 
 think it would be a useful feature.



[jira] [Updated] (HIVE-3527) Allow CREATE TABLE LIKE command to take TBLPROPERTIES

2012-12-17 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-3527:


Status: Patch Available  (was: Open)

 Allow CREATE TABLE LIKE command to take TBLPROPERTIES
 -

 Key: HIVE-3527
 URL: https://issues.apache.org/jira/browse/HIVE-3527
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3527.1.patch.txt, hive.3527.2.patch, 
 HIVE-3527.3.patch.txt, HIVE-3527.D5883.1.patch


 CREATE TABLE ... LIKE ... commands currently don't take TBLPROPERTIES.  I 
 think it would be a useful feature.



[jira] [Commented] (HIVE-2682) Clean-up logs

2012-12-17 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534431#comment-13534431
 ] 

Phabricator commented on HIVE-2682:
---

rajat has closed the revision HIVE-2682 [jira] Clean-up logs.

  Committed

REVISION DETAIL
  https://reviews.facebook.net/D1035

To: JIRA, jsichi, jonchang, heyongqiang, njain, ashutoshc, rajat
Cc: raghotham, rajat, njain


 Clean-up logs
 -

 Key: HIVE-2682
 URL: https://issues.apache.org/jira/browse/HIVE-2682
 Project: Hive
  Issue Type: Wish
  Components: Logging
Affects Versions: 0.8.1, 0.9.0
Reporter: Rajat Goel
Assignee: Rajat Goel
Priority: Trivial
  Labels: logging
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2682.D1035.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2682.D1035.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2682.D1035.3.patch, hive-2682.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 Just wanted to clean up some logs being printed at the wrong log level -
 1. org.apache.hadoop.hive.ql.exec.CommonJoinOperator prints table 0 has 1000 
 rows for join key [...] as WARNING. Is it really that? 
 2. org.apache.hadoop.hive.ql.exec.GroupByOperator prints Hash Table 
 completed flushed and Begin Hash Table flush at close: size = 21 as 
 WARNING. It shouldn't be.
 3. org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher prints Warning. 
 Invalid statistic. which looks fishy.



Review Request: float and double calculation is inaccurate in Hive

2012-12-17 Thread Johnny Zhang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/8653/
---

Review request for hive.


Description
---

I found this while debugging the e2e test failures. Hive miscalculates 
float and double values. Take float calculation as an example:
hive> select f from all100k limit 1;
48308.98
hive> select f/10 from all100k limit 1;
4830.898046875 --added 046875 at the end
hive> select f*1.01 from all100k limit 1;
48792.0702734375 --should be 48792.0698
It might be essentially the same problem as 
http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm. 
But since the e2e tests compare the results with mysql, and mysql seems to get 
it right, it is worth fixing in Hive.
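The behavior is reproducible outside Hive: a FLOAT column carries only 24 bits of mantissa, so 48308.98 is really stored as 48308.98046875, and every later double-precision operation faithfully propagates that representation error (a sketch of the numerics, not the Hive code path):

```python
import struct

# Round 48308.98 to single precision, as storing it in a FLOAT column does
f = struct.unpack('<f', struct.pack('<f', 48308.98))[0]
assert f == 48308.98046875          # the stored value is already off

assert f / 10 == 4830.898046875     # the "added 046875" from the bug report
assert abs(f * 1.01 - 48792.0702734375) < 1e-6  # vs the expected 48792.0698
```

So the "wrong" digits come from widening the already-rounded float to double before arithmetic, not from the arithmetic itself.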


This addresses bug HIVE-3715.
https://issues.apache.org/jira/browse/HIVE-3715


Diffs
-

  
http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPDivide.java
 1423224 
  
http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPMultiply.java
 1423224 

Diff: https://reviews.apache.org/r/8653/diff/


Testing
---

I did a test to compare the result against mysql's default float precision 
setting; the results are identical.

query:         select f, f*1.01, f/10 from all100k limit 1;
mysql result:  48309     48792.0702734375  4830.898046875
hive result:   48308.98  48792.0702734375  4830.898046875


I applied this patch and ran the hive e2e tests, and they all pass (without 
this patch, there are 5 related failures)


Thanks,

Johnny Zhang



[jira] [Created] (HIVE-3813) Allow publishing artifacts to an arbitrary remote repository

2012-12-17 Thread Mikhail Bautin (JIRA)
Mikhail Bautin created HIVE-3813:


 Summary: Allow publishing artifacts to an arbitrary remote 
repository
 Key: HIVE-3813
 URL: https://issues.apache.org/jira/browse/HIVE-3813
 Project: Hive
  Issue Type: Improvement
Reporter: Mikhail Bautin


Allow publishing artifacts to an arbitrary remote repository by specifying 
-Dmvn.publish.repoUrl on the command line.



[jira] [Updated] (HIVE-3715) float and double calculation is inaccurate in Hive

2012-12-17 Thread Johnny Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Zhang updated HIVE-3715:
---

Attachment: HIVE-3715.patch.txt

the reviewboard link https://reviews.apache.org/r/8653/

 float and double calculation is inaccurate in Hive
 --

 Key: HIVE-3715
 URL: https://issues.apache.org/jira/browse/HIVE-3715
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
Reporter: Johnny Zhang
Assignee: Johnny Zhang
 Attachments: HIVE-3715.patch.txt


 I found this while debugging the e2e test failures. Hive miscalculates 
 float and double values. Take float calculation as an example:
 hive> select f from all100k limit 1;
 48308.98
 hive> select f/10 from all100k limit 1;
 4830.898046875   --added 046875 at the end
 hive> select f*1.01 from all100k limit 1;
 48792.0702734375  --should be 48792.0698
 It might be essentially the same problem as 
 http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm.
 But since the e2e tests compare the results with mysql, and mysql seems to 
 get it right, it is worth fixing in Hive.



[jira] [Updated] (HIVE-3715) float and double calculation is inaccurate in Hive

2012-12-17 Thread Johnny Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Zhang updated HIVE-3715:
---

Status: Patch Available  (was: Open)

 float and double calculation is inaccurate in Hive
 --

 Key: HIVE-3715
 URL: https://issues.apache.org/jira/browse/HIVE-3715
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
Reporter: Johnny Zhang
Assignee: Johnny Zhang
 Attachments: HIVE-3715.patch.txt


 I found this while debugging the e2e test failures. Hive miscalculates 
 float and double values. Take float calculation as an example:
 hive> select f from all100k limit 1;
 48308.98
 hive> select f/10 from all100k limit 1;
 4830.898046875   --added 046875 at the end
 hive> select f*1.01 from all100k limit 1;
 48792.0702734375  --should be 48792.0698
 It might be essentially the same problem as 
 http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm.
 But since the e2e tests compare the results with mysql, and mysql seems to 
 get it right, it is worth fixing in Hive.



[jira] [Updated] (HIVE-3813) Allow publishing artifacts to an arbitrary remote repository

2012-12-17 Thread Mikhail Bautin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Bautin updated HIVE-3813:
-

Description: Allow publishing artifacts to an arbitrary remote repository 
by specifying -Dmvn.publish.repoUrl on the command line (patch by Thomas 
Dudziak).  (was: Allow publishing artifacts to an arbitrary remote repository 
by specifying -Dmvn.publish.repoUrl on the command line.)

 Allow publishing artifacts to an arbitrary remote repository
 

 Key: HIVE-3813
 URL: https://issues.apache.org/jira/browse/HIVE-3813
 Project: Hive
  Issue Type: Improvement
Reporter: Mikhail Bautin

 Allow publishing artifacts to an arbitrary remote repository by specifying 
 -Dmvn.publish.repoUrl on the command line (patch by Thomas Dudziak).



[jira] [Updated] (HIVE-2812) Hive multi group by single reducer optimization fails when aggregation with no keys followed by query with no aggregations

2012-12-17 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2812:
--

Attachment: HIVE-2812.D1821.2.patch

kevinwilfong updated the revision HIVE-2812 [jira] Hive multi group by single 
reducer optimization fails when aggregation with no keys followed by query with 
no aggregations.
Reviewers: JIRA, njain

  Updated.

REVISION DETAIL
  https://reviews.facebook.net/D1821

AFFECTED FILES
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
  ql/src/test/queries/clientpositive/groupby_multi_single_reducer3.q
  ql/src/test/results/clientpositive/groupby_multi_single_reducer3.q.out

To: JIRA, njain, kevinwilfong


 Hive multi group by single reducer optimization fails when aggregation with 
 no keys followed by query with no aggregations
 --

 Key: HIVE-2812
 URL: https://issues.apache.org/jira/browse/HIVE-2812
 Project: Hive
  Issue Type: Bug
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2812.D1821.1.patch, 
 HIVE-2812.D1821.2.patch


 In multi insert queries where one subquery involves an aggregation with no 
 distinct or group by keys and is followed by a query without any 
 aggregations, like the following, Hive will attempt to add a group by 
 operator for the query without aggregations, causing semantic analysis to 
 fail.
 FROM src
 INSERT OVERWRITE TABLE table1 SELECT count(*)
 INSERT OVERWRITE TABLE table2 SELECT key;



[jira] [Updated] (HIVE-2812) Hive multi group by single reducer optimization fails when aggregation with no keys followed by query with no aggregations

2012-12-17 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-2812:


Attachment: HIVE-2812.2.patch.txt

 Hive multi group by single reducer optimization fails when aggregation with 
 no keys followed by query with no aggregations
 --

 Key: HIVE-2812
 URL: https://issues.apache.org/jira/browse/HIVE-2812
 Project: Hive
  Issue Type: Bug
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2812.D1821.1.patch, 
 HIVE-2812.2.patch.txt, HIVE-2812.D1821.2.patch


 In multi insert queries where one subquery involves an aggregation with no 
 distinct or group by keys and is followed by a query without any 
 aggregations, like the following, Hive will attempt to add a group by 
 operator for the query without aggregations, causing semantic analysis to 
 fail.
 FROM src
 INSERT OVERWRITE TABLE table1 SELECT count(*)
 INSERT OVERWRITE TABLE table2 SELECT key;



[jira] [Commented] (HIVE-2812) Hive multi group by single reducer optimization fails when aggregation with no keys followed by query with no aggregations

2012-12-17 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534452#comment-13534452
 ] 

Kevin Wilfong commented on HIVE-2812:
-

Refreshed.

 Hive multi group by single reducer optimization fails when aggregation with 
 no keys followed by query with no aggregations
 --

 Key: HIVE-2812
 URL: https://issues.apache.org/jira/browse/HIVE-2812
 Project: Hive
  Issue Type: Bug
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2812.D1821.1.patch, 
 HIVE-2812.2.patch.txt, HIVE-2812.D1821.2.patch


 In multi insert queries where one subquery involves an aggregation with no 
 distinct or group by keys and is followed by a query without any 
 aggregations, like the following, Hive will attempt to add a group by 
 operator for the query without aggregations, causing semantic analysis to 
 fail.
 FROM src
 INSERT OVERWRITE TABLE table1 SELECT count(*)
 INSERT OVERWRITE TABLE table2 SELECT key;



[jira] [Updated] (HIVE-2812) Hive multi group by single reducer optimization fails when aggregation with no keys followed by query with no aggregations

2012-12-17 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-2812:


Status: Patch Available  (was: Open)

 Hive multi group by single reducer optimization fails when aggregation with 
 no keys followed by query with no aggregations
 --

 Key: HIVE-2812
 URL: https://issues.apache.org/jira/browse/HIVE-2812
 Project: Hive
  Issue Type: Bug
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2812.D1821.1.patch, 
 HIVE-2812.2.patch.txt, HIVE-2812.D1821.2.patch


 In multi insert queries where one subquery involves an aggregation with no 
 distinct or group by keys and is followed by a query without any 
 aggregations, like the following, Hive will attempt to add a group by 
 operator for the query without aggregations, causing semantic analysis to 
 fail.
 FROM src
 INSERT OVERWRITE TABLE table1 SELECT count(*)
 INSERT OVERWRITE TABLE table2 SELECT key;



[jira] [Updated] (HIVE-3813) Allow publishing artifacts to an arbitrary remote repository

2012-12-17 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-3813:
--

Attachment: D7455.1.patch

mbautin requested code review of [jira] [HIVE-3813]
Allow publishing artifacts to an arbitrary remote repository.
Reviewers: ashutoshc, njain, cdrome, cwsteinbach, heyongqiang, nzhang, jsichi, 
pauly, amareshwarisr, JIRA

  Allow publishing artifacts to an arbitrary remote repository by specifying 
-Dmvn.publish.repoUrl on the command line (patch by Thomas Dudziak).

TEST PLAN
  ant -Dmvn.publish.repoUrl=... clean package maven-build maven-publish


REVISION DETAIL
  https://reviews.facebook.net/D7455

AFFECTED FILES
  build.xml


To: ashutoshc, njain, cdrome, cwsteinbach, heyongqiang, nzhang, jsichi, pauly, 
amareshwarisr, JIRA, mbautin


 Allow publishing artifacts to an arbitrary remote repository
 

 Key: HIVE-3813
 URL: https://issues.apache.org/jira/browse/HIVE-3813
 Project: Hive
  Issue Type: Improvement
Reporter: Mikhail Bautin
 Attachments: D7455.1.patch


 Allow publishing artifacts to an arbitrary remote repository by specifying 
 -Dmvn.publish.repoUrl on the command line (patch by Thomas Dudziak).



[jira] [Commented] (HIVE-3715) float and double calculation is inaccurate in Hive

2012-12-17 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534457#comment-13534457
 ] 

Kevin Wilfong commented on HIVE-3715:
-

Can you do a performance comparison between your new code and the old? I've 
heard that BigDecimal is very inefficient compared to float and double.
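A rough way to get a first answer to that question is a naive micro-benchmark. This is only a sketch: the DivideBench class name, the iteration count, and the divide-by-ten workload are illustrative assumptions, not Hive's actual UDF code path.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Naive micro-benchmark sketch comparing primitive double division with
// BigDecimal division. Loop count and workload are arbitrary illustrations.
public class DivideBench {
    // Returns {doubleNanos, bigDecimalNanos} for n divisions each.
    static long[] run(int n) {
        long t0 = System.nanoTime();
        double acc = 0.0;
        for (int i = 0; i < n; i++) {
            acc += i / 10.0;                       // primitive double division
        }
        long doubleNanos = System.nanoTime() - t0;

        t0 = System.nanoTime();
        BigDecimal accBd = BigDecimal.ZERO;
        for (int i = 0; i < n; i++) {
            // String constructor plus a fixed scale, mirroring the patch's style
            accBd = accBd.add(new BigDecimal(Integer.toString(i))
                    .divide(BigDecimal.TEN, 10, RoundingMode.HALF_UP));
        }
        long bigDecimalNanos = System.nanoTime() - t0;
        return new long[] {doubleNanos, bigDecimalNanos};
    }

    public static void main(String[] args) {
        long[] t = run(200_000);
        System.out.printf("double: %d us, BigDecimal: %d us%n",
                t[0] / 1000, t[1] / 1000);
    }
}
```

On typical JVMs the BigDecimal loop can be expected to be slower by an order of magnitude or more; a real comparison would need to run through Hive's UDF evaluation path.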

 float and double calculation is inaccurate in Hive
 --

 Key: HIVE-3715
 URL: https://issues.apache.org/jira/browse/HIVE-3715
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
Reporter: Johnny Zhang
Assignee: Johnny Zhang
 Attachments: HIVE-3715.patch.txt


 I found this while debugging the e2e test failures: Hive miscalculates 
 float and double values. Take float calculation as an example:
 hive> select f from all100k limit 1;
 48308.98
 hive> select f/10 from all100k limit 1;
 4830.898046875   --added 046875 at the end
 hive> select f*1.01 from all100k limit 1;
 48792.0702734375  --should be 48792.0698
 It might be essentially the same problem as 
 http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm.
 But since the e2e tests compare results with MySQL, and MySQL appears to get 
 it right, it is worth fixing in Hive.
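The extra digits in the report are consistent with plain IEEE 754 float-to-double widening rather than anything Hive-specific: near 48308.98 a float can only store multiples of 1/256, so the stored value is 48308.98046875, and widening it to double before dividing exposes the tail. A minimal sketch (the literal is just the f value from the example; the class name is illustrative):

```java
public class FloatWidening {
    public static void main(String[] args) {
        float f = 48308.98f;          // stored as the nearest float: 48308.98046875
        double widened = (double) f;  // widening conversion preserves that exact value
        System.out.println(widened);        // 48308.98046875
        System.out.println(widened / 10.0); // 4830.898046875, as in the bug report
    }
}
```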



Re: Review Request: float and double calculation is inaccurate in Hive

2012-12-17 Thread Johnny Zhang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/8653/
---

(Updated Dec. 18, 2012, 12:37 a.m.)


Review request for hive.


Description (updated)
---

I found this while debugging the e2e test failures: Hive miscalculates float 
and double values. Take float calculation as an example:
hive> select f from all100k limit 1;
48308.98
hive> select f/10 from all100k limit 1;
4830.898046875 --added 046875 at the end
hive> select f*1.01 from all100k limit 1;
48792.0702734375 --should be 48792.0698
It might be essentially the same problem as 
http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm. 
But since the e2e tests compare results with MySQL, and MySQL appears to get it 
right, it is worth fixing in Hive.


This addresses bug HIVE-3715.
https://issues.apache.org/jira/browse/HIVE-3715


Diffs
-

  
http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPDivide.java
 1423224 
  
http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPMultiply.java
 1423224 

Diff: https://reviews.apache.org/r/8653/diff/


Testing
---

I tested against MySQL's default float precision setting; the results are 
identical.

query:        select f, f*1.01, f/10 from all100k limit 1;
mysql result: 48309     48792.0702734375   4830.898046875
hive result:  48308.98  48792.0702734375   4830.898046875


I applied this patch and ran the Hive e2e tests, and they all pass (without 
this patch, there are 5 related failures)


Thanks,

Johnny Zhang



[jira] [Updated] (HIVE-3813) Allow publishing artifacts to an arbitrary remote repository

2012-12-17 Thread Mikhail Bautin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Bautin updated HIVE-3813:
-

Attachment: 0001-HIVE-3813-Allow-publishing-artifacts-to-an-arbitrary.patch

Attaching a manually generated patch.

 Allow publishing artifacts to an arbitrary remote repository
 

 Key: HIVE-3813
 URL: https://issues.apache.org/jira/browse/HIVE-3813
 Project: Hive
  Issue Type: Improvement
Reporter: Mikhail Bautin
 Attachments: 
 0001-HIVE-3813-Allow-publishing-artifacts-to-an-arbitrary.patch, D7455.1.patch


 Allow publishing artifacts to an arbitrary remote repository by specifying 
 -Dmvn.publish.repoUrl on the command line (patch by Thomas Dudziak).



Re: Review Request: float and double calculation is inaccurate in Hive

2012-12-17 Thread Mark Grover

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/8653/#review14625
---



http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPDivide.java
https://reviews.apache.org/r/8653/#comment31047

10 seems to be a rather arbitrary number for scale. Any particular reason 
you are using it? Maybe we should invoke the method where no scale needs to be 
specified.



http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPMultiply.java
https://reviews.apache.org/r/8653/#comment31048

You seem to be doing
DoubleWritable -> String -> BigDecimal

There probably is a way to do:
DoubleWritable -> Double -> BigDecimal

I am not sure if it's any more efficient than the present case. So, take this 
suggestion with a grain of salt :-)



- Mark Grover





Re: Review Request: float and double calculation is inaccurate in Hive

2012-12-17 Thread Johnny Zhang


 On Dec. 18, 2012, 12:38 a.m., Mark Grover wrote:
  http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPDivide.java,
   line 50
  https://reviews.apache.org/r/8653/diff/1/?file=240423#file240423line50
 
  10 seems to be a rather arbitrary number for scale. Any particular 
  reason you are using it? Maybe we should invoke the method where no scale 
  needs to be specified.

Hi Mark, thanks for reviewing it. The reason for using 10 is that it is the 
same as MySQL's default precision setting; I just want to make the calculation 
result identical to MySQL's.
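For concreteness, here is a sketch of what the fixed scale of 10 does versus the no-scale divide (the class name is illustrative and the operands are just values from the bug report). It also shows why the no-scale overload cannot simply be swapped in: it throws when the quotient does not terminate.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ScaleDemo {
    public static void main(String[] args) {
        BigDecimal f = new BigDecimal("48308.98");
        BigDecimal ten = BigDecimal.TEN;

        // Fixed scale of 10 (the patch's choice, matching MySQL's default):
        System.out.println(f.divide(ten, 10, RoundingMode.HALF_UP)); // 4830.8980000000

        // No explicit scale: works only when the quotient terminates...
        System.out.println(f.divide(ten)); // 4830.898

        // ...and throws ArithmeticException otherwise, e.g. 1/3:
        try {
            BigDecimal.ONE.divide(new BigDecimal(3));
        } catch (ArithmeticException e) {
            System.out.println("non-terminating: " + e.getMessage());
        }
    }
}
```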


 On Dec. 18, 2012, 12:38 a.m., Mark Grover wrote:
  http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPMultiply.java,
   line 112
  https://reviews.apache.org/r/8653/diff/1/?file=240424#file240424line112
 
  You seem to be doing
  DoubleWritable -> String -> BigDecimal
  
  There probably is a way to do:
  DoubleWritable -> Double -> BigDecimal
  
  I am not sure if it's any more efficient than the present case. So, take 
  this suggestion with a grain of salt :-)
 

The reason for using the constructor with a String parameter is that the 
constructor with a double parameter would reduce the precision before the 
calculation. There is a similar discussion regarding it here: 
http://www.coderanch.com/t/408226/java/java/Double-BigDecimal-Conversion-problems

You will see the difference between creating an instance using a double (whose 
precision has already been compromised by forcing it into the IEEE 754 format) 
and creating an instance using a String (which can be translated accurately). 
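The difference is easy to reproduce. A minimal sketch (the 0.1 literal is the textbook example, not a value from this patch; the class name is illustrative):

```java
import java.math.BigDecimal;

public class ConstructorDemo {
    public static void main(String[] args) {
        // double constructor: the binary approximation of 0.1 leaks through
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        // String constructor: exactly the decimal that was written
        System.out.println(new BigDecimal("0.1")); // 0.1

        // valueOf(double) goes through Double.toString, so it also yields 0.1
        System.out.println(BigDecimal.valueOf(0.1)); // 0.1
    }
}
```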


- Johnny


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/8653/#review14625
---






Re: Review Request: float and double calculation is inaccurate in Hive

2012-12-17 Thread Johnny Zhang


 On Dec. 18, 2012, 12:38 a.m., Mark Grover wrote:
  http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFOPDivide.java,
   line 50
  https://reviews.apache.org/r/8653/diff/1/?file=240423#file240423line50
 
  10 seems to be a rather arbitrary number for scale. Any particular 
  reason you are using it? Maybe we should invoke the method where no scale 
  needs to be specified.
 
 Johnny Zhang wrote:
 Hi, Mark, thanks for reviewing it. The reason using 10 is because it is 
 the same as mysql default precision setting. Just want to make the 
 calculation result identical to mysql's

I think I did try without specifying a scale, and the result was different from 
MySQL's. I agree hard-coding the scale is not a good way; I'm open to other 
suggestions.


- Johnny


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/8653/#review14625
---






[jira] [Commented] (HIVE-3646) Add 'IGNORE PROTECTION' predicate for dropping partitions

2012-12-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534496#comment-13534496
 ] 

Hudson commented on HIVE-3646:
--

Integrated in Hive-trunk-h0.21 #1861 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1861/])
HIVE-3646 Add 'IGNORE PROTECTION' predicate for dropping partitions
(Andrew Chalfant via namit) (Revision 1422844)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1422844
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/DropTableDesc.java
* 
/hive/trunk/ql/src/test/queries/clientpositive/drop_partitions_ignore_protection.q
* 
/hive/trunk/ql/src/test/results/clientpositive/drop_partitions_ignore_protection.q.out


 Add 'IGNORE PROTECTION' predicate for dropping partitions
 -

 Key: HIVE-3646
 URL: https://issues.apache.org/jira/browse/HIVE-3646
 Project: Hive
  Issue Type: New Feature
  Components: CLI
Reporter: Andrew Chalfant
Assignee: Andrew Chalfant
Priority: Minor
 Fix For: 0.11

 Attachments: HIVE-3646.1.patch.txt, HIVE-3646.2.patch.txt, 
 HIVE-3646.3.patch.txt

   Original Estimate: 1m
  Remaining Estimate: 1m

 There are cases where it is desirable to move partitions between clusters. 
 Having to undo protection and then re-protect tables in order to delete 
 partitions from a source is a multi-step process and can leave us in a failed 
 open state where partition and table metadata is dirty. By implementing 
 'rm -rf'-like functionality, we can perform these operations atomically.



Hive-trunk-h0.21 - Build # 1861 - Failure

2012-12-17 Thread Apache Jenkins Server
Changes for Build #1861
[namit] HIVE-3646 Add 'IGNORE PROTECTION' predicate for dropping partitions
(Andrew Chalfant via namit)




1 tests failed.
REGRESSION:  
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_stats_aggregator_error_1

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at 
net.sf.antcontrib.logic.ForTask.doSequentialIteration(ForTask.java:259)
at net.sf.antcontrib.logic.ForTask.doToken(ForTask.java:268)
at net.sf.antcontrib.logic.ForTask.doTheTasks(ForTask.java:324)
at net.sf.antcontrib.logic.ForTask.execute(ForTask.java:244)




The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1861)

Status: Failure

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/1861/ to 
view the results.

[jira] [Commented] (HIVE-3527) Allow CREATE TABLE LIKE command to take TBLPROPERTIES

2012-12-17 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534555#comment-13534555
 ] 

Kevin Wilfong commented on HIVE-3527:
-

Tests passed.

 Allow CREATE TABLE LIKE command to take TBLPROPERTIES
 -

 Key: HIVE-3527
 URL: https://issues.apache.org/jira/browse/HIVE-3527
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3527.1.patch.txt, hive.3527.2.patch, 
 HIVE-3527.3.patch.txt, HIVE-3527.D5883.1.patch


 CREATE TABLE ... LIKE ... commands currently don't take TBLPROPERTIES.  I 
 think it would be a useful feature.



[jira] [Created] (HIVE-3814) Cannot drop partitions on table when using Oracle metastore

2012-12-17 Thread Deepesh Khandelwal (JIRA)
Deepesh Khandelwal created HIVE-3814:


 Summary: Cannot drop partitions on table when using Oracle 
metastore
 Key: HIVE-3814
 URL: https://issues.apache.org/jira/browse/HIVE-3814
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
 Environment: Oracle 11g r2
Reporter: Deepesh Khandelwal
Priority: Critical


Create a table with a partition. Try to drop the partition or the table 
containing the partition. Following error is seen:
FAILED: Error in metadata: 
MetaException(message:javax.jdo.JDODataStoreException: Error executing JDOQL 
query SELECT 
'org.apache.hadoop.hive.metastore.model.MPartitionColumnStatistics' AS 
NUCLEUS_TYPE,THIS.AVG_COL_LEN,THIS.COLUMN_NAME,THIS.COLUMN_TYPE,THIS.DB_NAME,THIS.DOUBLE_HIGH_VALUE,THIS.DOUBLE_LOW_VALUE,THIS.LAST_ANALYZED,THIS.LONG_HIGH_VALUE,THIS.LONG_LOW_VALUE,THIS.MAX_COL_LEN,THIS.NUM_DISTINCTS,THIS.NUM_FALSES,THIS.NUM_NULLS,THIS.NUM_TRUES,THIS.PARTITION_NAME,THIS.TABLE_NAME,THIS.CS_ID
 FROM PART_COL_STATS THIS LEFT OUTER JOIN PARTITIONS 
THIS_PARTITION_PARTITION_NAME ON THIS.PART_ID = 
THIS_PARTITION_PARTITION_NAME.PART_ID WHERE 
THIS_PARTITION_PARTITION_NAME.PART_NAME = ? AND THIS.DB_NAME = ? AND 
THIS.TABLE_NAME = ? : ORA-00904: THIS.PARTITION_NAME: invalid identifier

The problem here is that the column PARTITION_NAME that the query is 
referring to in table PART_COL_STATS is non-existent. Looking at the hive 
schema scripts for mysql & derby, this should be PARTITION_NAME. Postgres 
also suffers from the same problem.



[jira] [Updated] (HIVE-3814) Cannot drop partitions on table when using Oracle metastore

2012-12-17 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-3814:
-

Attachment: HIVE-3814.patch

Attaching a patch that fixes the issue.

 Cannot drop partitions on table when using Oracle metastore
 ---

 Key: HIVE-3814
 URL: https://issues.apache.org/jira/browse/HIVE-3814
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
 Environment: Oracle 11g r2
Reporter: Deepesh Khandelwal
Priority: Critical
 Attachments: HIVE-3814.patch


 Create a table with a partition. Try to drop the partition or the table 
 containing the partition. Following error is seen:
 FAILED: Error in metadata: 
 MetaException(message:javax.jdo.JDODataStoreException: Error executing JDOQL 
 query SELECT 
 'org.apache.hadoop.hive.metastore.model.MPartitionColumnStatistics' AS 
 NUCLEUS_TYPE,THIS.AVG_COL_LEN,THIS.COLUMN_NAME,THIS.COLUMN_TYPE,THIS.DB_NAME,THIS.DOUBLE_HIGH_VALUE,THIS.DOUBLE_LOW_VALUE,THIS.LAST_ANALYZED,THIS.LONG_HIGH_VALUE,THIS.LONG_LOW_VALUE,THIS.MAX_COL_LEN,THIS.NUM_DISTINCTS,THIS.NUM_FALSES,THIS.NUM_NULLS,THIS.NUM_TRUES,THIS.PARTITION_NAME,THIS.TABLE_NAME,THIS.CS_ID
  FROM PART_COL_STATS THIS LEFT OUTER JOIN PARTITIONS 
 THIS_PARTITION_PARTITION_NAME ON THIS.PART_ID = 
 THIS_PARTITION_PARTITION_NAME.PART_ID WHERE 
 THIS_PARTITION_PARTITION_NAME.PART_NAME = ? AND THIS.DB_NAME = ? AND 
 THIS.TABLE_NAME = ? : ORA-00904: THIS.PARTITION_NAME: invalid 
 identifier
 The problem here is that the column PARTITION_NAME that the query is 
 referring to in table PART_COL_STATS is non-existent. Looking at the hive 
 schema scripts for mysql & derby, this should be PARTITION_NAME. Postgres 
 also suffers from the same problem.



[jira] [Updated] (HIVE-3814) Cannot drop partitions on table when using Oracle metastore

2012-12-17 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-3814:
-

Fix Version/s: 0.10.0
   Status: Patch Available  (was: Open)

 Cannot drop partitions on table when using Oracle metastore
 ---

 Key: HIVE-3814
 URL: https://issues.apache.org/jira/browse/HIVE-3814
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
 Environment: Oracle 11g r2
Reporter: Deepesh Khandelwal
Priority: Critical
 Fix For: 0.10.0

 Attachments: HIVE-3814.patch


 Create a table with a partition. Try to drop the partition or the table 
 containing the partition. Following error is seen:
 FAILED: Error in metadata: 
 MetaException(message:javax.jdo.JDODataStoreException: Error executing JDOQL 
 query SELECT 
 'org.apache.hadoop.hive.metastore.model.MPartitionColumnStatistics' AS 
 NUCLEUS_TYPE,THIS.AVG_COL_LEN,THIS.COLUMN_NAME,THIS.COLUMN_TYPE,THIS.DB_NAME,THIS.DOUBLE_HIGH_VALUE,THIS.DOUBLE_LOW_VALUE,THIS.LAST_ANALYZED,THIS.LONG_HIGH_VALUE,THIS.LONG_LOW_VALUE,THIS.MAX_COL_LEN,THIS.NUM_DISTINCTS,THIS.NUM_FALSES,THIS.NUM_NULLS,THIS.NUM_TRUES,THIS.PARTITION_NAME,THIS.TABLE_NAME,THIS.CS_ID
  FROM PART_COL_STATS THIS LEFT OUTER JOIN PARTITIONS 
 THIS_PARTITION_PARTITION_NAME ON THIS.PART_ID = 
 THIS_PARTITION_PARTITION_NAME.PART_ID WHERE 
 THIS_PARTITION_PARTITION_NAME.PART_NAME = ? AND THIS.DB_NAME = ? AND 
 THIS.TABLE_NAME = ? : ORA-00904: THIS.PARTITION_NAME: invalid 
 identifier
 The problem here is that the column PARTITION_NAME that the query is 
 referring to in table PART_COL_STATS is non-existent. Looking at the hive 
 schema scripts for mysql & derby, this should be PARTITION_NAME. Postgres 
 also suffers from the same problem.



[jira] [Updated] (HIVE-3794) Oracle upgrade script for Hive is broken

2012-12-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3794:
---

Assignee: Deepesh Khandelwal

 Oracle upgrade script for Hive is broken
 

 Key: HIVE-3794
 URL: https://issues.apache.org/jira/browse/HIVE-3794
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
 Environment: Oracle 11g r2
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
Priority: Critical
 Fix For: 0.10.0

 Attachments: HIVE-3794.patch


 As part of Hive configuration for Oracle I ran the schema creation script for 
 Oracle. Here is what I observed when I ran the script:
 % sqlplus hive/hive@xe
 SQL*Plus: Release 11.2.0.2.0 Production on Mon Dec 10 18:47:11 2012
 Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 Connected to:
 Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
 SQL> @scripts/metastore/upgrade/oracle/hive-schema-0.10.0.oracle.sql;
 .
 ALTER TABLE SKEWED_STRING_LIST_VALUES ADD CONSTRAINT 
 SKEWED_STRING_LIST_VALUES_FK1 FOREIGN KEY (STRING_LIST_ID) REFERENCES 
 SKEWED_STRING_LIST (STRING_LIST_ID) INITIALLY DEFERRED
   
  *
 ERROR at line 1:
 {color:red}ORA-00904: STRING_LIST_ID: invalid identifier{color}
 .
 ALTER TABLE SKEWED_STRING_LIST_VALUES ADD CONSTRAINT 
 SKEWED_STRING_LIST_VALUES_FK1 FOREIGN KEY (STRING_LIST_ID) REFERENCES 
 SKEWED_STRING_LIST (STRING_LIST_ID) INITIALLY DEFERRED
   
  *
 ERROR at line 1:
 {color:red}ORA-00904: STRING_LIST_ID: invalid identifier{color}
 Table created.
 Table altered.
 Table altered.
 CREATE TABLE SKEWED_COL_VALUE_LOCATION_MAPPING
  *
 ERROR at line 1:
 {color:red}ORA-00972: identifier is too long{color}
 Table created.
 Table created.
 ALTER TABLE SKEWED_COL_VALUE_LOCATION_MAPPING ADD CONSTRAINT 
 SKEWED_COL_VALUE_LOCATION_MAPPING_PK PRIMARY KEY (SD_ID,STRING_LIST_ID_KID)
 *
 ERROR at line 1:
 {color:red}ORA-00972: identifier is too long{color}
 ALTER TABLE SKEWED_COL_VALUE_LOCATION_MAPPING ADD CONSTRAINT 
 SKEWED_COL_VALUE_LOCATION_MAPPING_FK1 FOREIGN KEY (STRING_LIST_ID_KID) 
 REFERENCES SKEWED_STRING_LIST (STRING_LIST_ID) INITIALLY DEFERRED
 *
 ERROR at line 1:
 {color:red}ORA-00972: identifier is too long{color}
 ALTER TABLE SKEWED_COL_VALUE_LOCATION_MAPPING ADD CONSTRAINT 
 SKEWED_COL_VALUE_LOCATION_MAPPING_FK2 FOREIGN KEY (SD_ID) REFERENCES SDS 
 (SD_ID) INITIALLY DEFERRED
 *
 ERROR at line 1:
 {color:red}ORA-00972: identifier is too long{color}
 Table created.
 Table altered.
 ALTER TABLE SKEWED_VALUES ADD CONSTRAINT SKEWED_VALUES_FK1 FOREIGN KEY 
 (STRING_LIST_ID_EID) REFERENCES SKEWED_STRING_LIST (STRING_LIST_ID) INITIALLY 
 DEFERRED
   
  *
 ERROR at line 1:
 {color:red}ORA-00904: STRING_LIST_ID: invalid identifier{color}
 Basically there are two issues here with the Oracle sql script:
 (1) Table SKEWED_STRING_LIST is created with the column SD_ID. Later the 
 script tries to reference STRING_LIST_ID column in SKEWED_STRING_LIST 
 which is obviously not there. Comparing the sql with that for other flavors 
 it seems it should be STRING_LIST_ID.
 (2) Table name SKEWED_COL_VALUE_LOCATION_MAPPING is too long for Oracle 
 which limits identifier names to 30 characters. Also impacted are identifiers 
 SKEWED_COL_VALUE_LOCATION_MAPPING_PK and 
 SKEWED_COL_VALUE_LOCATION_MAPPING_FK1.
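
The 30-character constraint described above is easy to check mechanically. The sketch below is a hypothetical helper (not part of the Hive codebase) that flags Oracle identifiers over the limit; SKEWED_COL_VALUE_LOC_MAP is only an illustrative shortened name, not necessarily the one the patch uses.

```java
// Hypothetical helper: flag identifiers that exceed Oracle 11g's
// 30-character limit before they reach a schema script.
public class OracleIdentifierCheck {
    static final int ORACLE_MAX_IDENTIFIER = 30;

    static boolean isValid(String ident) {
        return ident.length() <= ORACLE_MAX_IDENTIFIER;
    }

    public static void main(String[] args) {
        // 33 characters: rejected, matching the ORA-00972 failures above.
        System.out.println(isValid("SKEWED_COL_VALUE_LOCATION_MAPPING")); // false
        // An illustrative shortened name that fits.
        System.out.println(isValid("SKEWED_COL_VALUE_LOC_MAP")); // true
    }
}
```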

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3814) Cannot drop partitions on table when using Oracle metastore

2012-12-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3814:
---

Assignee: Deepesh Khandelwal

 Cannot drop partitions on table when using Oracle metastore
 ---

 Key: HIVE-3814
 URL: https://issues.apache.org/jira/browse/HIVE-3814
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
 Environment: Oracle 11g r2
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
Priority: Critical
 Fix For: 0.10.0

 Attachments: HIVE-3814.patch


 Create a table with a partition. Try to drop the partition or the table 
 containing the partition. The following error is seen:
 FAILED: Error in metadata: 
 MetaException(message:javax.jdo.JDODataStoreException: Error executing JDOQL 
 query SELECT 
 'org.apache.hadoop.hive.metastore.model.MPartitionColumnStatistics' AS 
 NUCLEUS_TYPE,THIS.AVG_COL_LEN,THIS.COLUMN_NAME,THIS.COLUMN_TYPE,THIS.DB_NAME,THIS.DOUBLE_HIGH_VALUE,THIS.DOUBLE_LOW_VALUE,THIS.LAST_ANALYZED,THIS.LONG_HIGH_VALUE,THIS.LONG_LOW_VALUE,THIS.MAX_COL_LEN,THIS.NUM_DISTINCTS,THIS.NUM_FALSES,THIS.NUM_NULLS,THIS.NUM_TRUES,THIS.PARTITION_NAME,THIS.TABLE_NAME,THIS.CS_ID
  FROM PART_COL_STATS THIS LEFT OUTER JOIN PARTITIONS 
 THIS_PARTITION_PARTITION_NAME ON THIS.PART_ID = 
 THIS_PARTITION_PARTITION_NAME.PART_ID WHERE 
 THIS_PARTITION_PARTITION_NAME.PART_NAME = ? AND THIS.DB_NAME = ? AND 
 THIS.TABLE_NAME = ? : ORA-00904: THIS.PARTITION_NAME: invalid 
 identifier
 The problem here is that the column PARTITION_NAME that the query is 
 referring to in table PART_COL_STATS is non-existent. Looking at the hive 
 schema scripts for mysql & derby, this should be PARTITION_NAME. Postgres 
 also suffers from the same problem.



[jira] [Updated] (HIVE-3792) hive pom file has missing conf and scope mapping for compile configuration.

2012-12-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3792:
---

Summary: hive pom file has missing conf and scope mapping for compile 
configuration.   (was: hive jars are not part of the share lib tar ball in 
oozie)

 hive pom file has missing conf and scope mapping for compile configuration. 
 

 Key: HIVE-3792
 URL: https://issues.apache.org/jira/browse/HIVE-3792
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.10.0
Reporter: Ashish Singh
Assignee: Ashish Singh
 Fix For: 0.10.0

 Attachments: HIVE-3792.patch


 hive-0.10.0 pom file has missing conf and scope mapping for compile 
 configuration. 



[jira] [Commented] (HIVE-3794) Oracle upgrade script for Hive is broken

2012-12-17 Thread Deepesh Khandelwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534594#comment-13534594
 ] 

Deepesh Khandelwal commented on HIVE-3794:
--

Added an Apache Review Board entry at https://reviews.apache.org/r/8664/.

 Oracle upgrade script for Hive is broken
 

 Key: HIVE-3794
 URL: https://issues.apache.org/jira/browse/HIVE-3794
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
 Environment: Oracle 11g r2
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
Priority: Critical
 Fix For: 0.10.0

 Attachments: HIVE-3794.patch





[jira] [Commented] (HIVE-3792) hive pom file has missing conf and scope mapping for compile configuration.

2012-12-17 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534593#comment-13534593
 ] 

Ashutosh Chauhan commented on HIVE-3792:


+1

 hive pom file has missing conf and scope mapping for compile configuration. 
 

 Key: HIVE-3792
 URL: https://issues.apache.org/jira/browse/HIVE-3792
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.10.0
Reporter: Ashish Singh
Assignee: Ashish Singh
 Fix For: 0.10.0

 Attachments: HIVE-3792.patch


 hive-0.10.0 pom file has missing conf and scope mapping for compile 
 configuration. 



[jira] [Updated] (HIVE-3815) hive table rename fails if filesystem cache is disabled

2012-12-17 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-3815:


Description: 
If fs.<filesystem>.impl.disable.cache (e.g. fs.hdfs.impl.disable.cache) is set to 
true, then table rename fails.


The exception that gets thrown (though not logged!) is 
{quote}
Caused by: InvalidOperationException(message:table new location 
hdfs://host1:8020/apps/hive/warehouse/t2 is on a different file system than the 
old location hdfs://host1:8020/apps/hive/warehouse/t1. This operation is not 
supported)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_result$alter_table_resultStandardScheme.read(ThriftHiveMetastore.java:28825)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_result$alter_table_resultStandardScheme.read(ThriftHiveMetastore.java:28811)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_result.read(ThriftHiveMetastore.java:28753)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_alter_table(ThriftHiveMetastore.java:977)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.alter_table(ThriftHiveMetastore.java:962)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:208)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:74)
at $Proxy7.alter_table(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:373)
... 18 more
{quote}

  was:
If fs.<filesystem>.impl.disable.cache (e.g. fs.hdfs.impl.disable.cache) is set to 
true, then table rename fails.


The exception that gets thrown (though not logged!) is 
{quote}
Caused by: InvalidOperationException(message:table new location 
hdfs://ip-10-40-69-195.ec2.internal:8020/apps/hive/warehouse/t2 is on a 
different file system than the old location 
hdfs://ip-10-40-69-195.ec2.internal:8020/apps/hive/warehouse/t1. This operation 
is not supported)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_result$alter_table_resultStandardScheme.read(ThriftHiveMetastore.java:28825)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_result$alter_table_resultStandardScheme.read(ThriftHiveMetastore.java:28811)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_result.read(ThriftHiveMetastore.java:28753)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_alter_table(ThriftHiveMetastore.java:977)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.alter_table(ThriftHiveMetastore.java:962)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:208)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:74)
at $Proxy7.alter_table(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:373)
... 18 more
{quote}


 hive table rename fails if filesystem cache is disabled
 ---

 Key: HIVE-3815
 URL: https://issues.apache.org/jira/browse/HIVE-3815
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.10.0


 If fs.<filesystem>.impl.disable.cache (e.g. fs.hdfs.impl.disable.cache) is set 
 to true, then table rename fails.
 The exception that gets thrown (though not logged!) is 
 {quote}
 Caused by: InvalidOperationException(message:table new location 
 hdfs://host1:8020/apps/hive/warehouse/t2 is on a different file system than 
 the old location hdfs://host1:8020/apps/hive/warehouse/t1. This operation is 
 not supported)
 at 
 org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_result$alter_table_resultStandardScheme.read(ThriftHiveMetastore.java:28825)
 at 
 

[jira] [Created] (HIVE-3815) hive table rename fails if filesystem cache is disabled

2012-12-17 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-3815:
---

 Summary: hive table rename fails if filesystem cache is disabled
 Key: HIVE-3815
 URL: https://issues.apache.org/jira/browse/HIVE-3815
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.10.0


If fs.<filesystem>.impl.disable.cache (e.g. fs.hdfs.impl.disable.cache) is set to 
true, then table rename fails.


The exception that gets thrown (though not logged!) is 
{quote}
Caused by: InvalidOperationException(message:table new location 
hdfs://ip-10-40-69-195.ec2.internal:8020/apps/hive/warehouse/t2 is on a 
different file system than the old location 
hdfs://ip-10-40-69-195.ec2.internal:8020/apps/hive/warehouse/t1. This operation 
is not supported)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_result$alter_table_resultStandardScheme.read(ThriftHiveMetastore.java:28825)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_result$alter_table_resultStandardScheme.read(ThriftHiveMetastore.java:28811)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_result.read(ThriftHiveMetastore.java:28753)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_alter_table(ThriftHiveMetastore.java:977)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.alter_table(ThriftHiveMetastore.java:962)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:208)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:74)
at $Proxy7.alter_table(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:373)
... 18 more
{quote}



[jira] [Commented] (HIVE-3814) Cannot drop partitions on table when using Oracle metastore

2012-12-17 Thread Deepesh Khandelwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534602#comment-13534602
 ] 

Deepesh Khandelwal commented on HIVE-3814:
--

Added an Apache Review Board entry at https://reviews.apache.org/r/8665/.

 Cannot drop partitions on table when using Oracle metastore
 ---

 Key: HIVE-3814
 URL: https://issues.apache.org/jira/browse/HIVE-3814
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
 Environment: Oracle 11g r2
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
Priority: Critical
 Fix For: 0.10.0

 Attachments: HIVE-3814.patch


 Create a table with a partition. Try to drop the partition or the table 
 containing the partition. The following error is seen:
 FAILED: Error in metadata: 
 MetaException(message:javax.jdo.JDODataStoreException: Error executing JDOQL 
 query SELECT 
 'org.apache.hadoop.hive.metastore.model.MPartitionColumnStatistics' AS 
 NUCLEUS_TYPE,THIS.AVG_COL_LEN,THIS.COLUMN_NAME,THIS.COLUMN_TYPE,THIS.DB_NAME,THIS.DOUBLE_HIGH_VALUE,THIS.DOUBLE_LOW_VALUE,THIS.LAST_ANALYZED,THIS.LONG_HIGH_VALUE,THIS.LONG_LOW_VALUE,THIS.MAX_COL_LEN,THIS.NUM_DISTINCTS,THIS.NUM_FALSES,THIS.NUM_NULLS,THIS.NUM_TRUES,THIS.PARTITION_NAME,THIS.TABLE_NAME,THIS.CS_ID
  FROM PART_COL_STATS THIS LEFT OUTER JOIN PARTITIONS 
 THIS_PARTITION_PARTITION_NAME ON THIS.PART_ID = 
 THIS_PARTITION_PARTITION_NAME.PART_ID WHERE 
 THIS_PARTITION_PARTITION_NAME.PART_NAME = ? AND THIS.DB_NAME = ? AND 
 THIS.TABLE_NAME = ? : ORA-00904: THIS.PARTITION_NAME: invalid 
 identifier
 The problem here is that the column PARTITION_NAME that the query is 
 referring to in table PART_COL_STATS is non-existent. Looking at the hive 
 schema scripts for mysql & derby, this should be PARTITION_NAME. Postgres 
 also suffers from the same problem.



[jira] [Updated] (HIVE-3803) explain dependency should show the dependencies hierarchically in presence of views

2012-12-17 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3803:
-

Attachment: hive.3803.3.patch

 explain dependency should show the dependencies hierarchically in presence of 
 views
 ---

 Key: HIVE-3803
 URL: https://issues.apache.org/jira/browse/HIVE-3803
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3803.1.patch, hive.3803.2.patch, hive.3803.3.patch


 It should also include tables whose partitions are being accessed



[jira] [Commented] (HIVE-3795) NPE in SELECT when WHERE-clause is an and/or/not operation involving null

2012-12-17 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534623#comment-13534623
 ] 

Namit Jain commented on HIVE-3795:
--

+1

running tests

 NPE in SELECT when WHERE-clause is an and/or/not operation involving null
 -

 Key: HIVE-3795
 URL: https://issues.apache.org/jira/browse/HIVE-3795
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Xiao Jiang
Assignee: Xiao Jiang
Priority: Trivial
 Attachments: HIVE-3795.1.patch.txt, HIVE-3795.2.patch.txt, 
 HIVE-3795.3.patch.txt


 Sometimes users forget to quote date constants in queries. For example: 
 SELECT * FROM some_table WHERE ds >= 2012-12-10 and ds <= 2012-12-12; 
 In such cases, if the WHERE clause contains an and/or/not operation, it throws an 
 NPE. That's because PcrExprProcFactory in ql/optimizer forgot to 
 check for null. 
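
As an aside on why the unquoted constant misbehaves at all: without quotes, 2012-12-10 is evaluated as integer arithmetic rather than a date literal. A minimal illustration:

```java
// Unquoted, the "date" 2012-12-10 is just subtraction.
public class UnquotedDate {
    static int evaluate() {
        return 2012 - 12 - 10;
    }

    public static void main(String[] args) {
        System.out.println(evaluate()); // 1990, not a date
    }
}
```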



[jira] [Updated] (HIVE-3795) NPE in SELECT when WHERE-clause is an and/or/not operation involving null

2012-12-17 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3795:
-

Attachment: hive.3795.4.patch

 NPE in SELECT when WHERE-clause is an and/or/not operation involving null
 -

 Key: HIVE-3795
 URL: https://issues.apache.org/jira/browse/HIVE-3795
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Xiao Jiang
Assignee: Xiao Jiang
Priority: Trivial
 Attachments: HIVE-3795.1.patch.txt, HIVE-3795.2.patch.txt, 
 HIVE-3795.3.patch.txt, hive.3795.4.patch


 Sometimes users forget to quote date constants in queries. For example: 
 SELECT * FROM some_table WHERE ds >= 2012-12-10 and ds <= 2012-12-12; 
 In such cases, if the WHERE clause contains an and/or/not operation, it throws an 
 NPE. That's because PcrExprProcFactory in ql/optimizer forgot to 
 check for null. 



[jira] [Updated] (HIVE-3633) sort-merge join does not work with sub-queries

2012-12-17 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3633:
-

Attachment: hive.3633.11.patch

 sort-merge join does not work with sub-queries
 --

 Key: HIVE-3633
 URL: https://issues.apache.org/jira/browse/HIVE-3633
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3633.10.patch, hive.3633.11.patch, 
 hive.3633.1.patch, hive.3633.2.patch, hive.3633.3.patch, hive.3633.4.patch, 
 hive.3633.5.patch, hive.3633.6.patch, hive.3633.7.patch, hive.3633.8.patch, 
 hive.3633.9.patch


 Consider the following query:
 create table smb_bucket_1(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 create table smb_bucket_2(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 -- load the above tables
 set hive.optimize.bucketmapjoin = true;
 set hive.optimize.bucketmapjoin.sortedmerge = true;
 set hive.input.format = 
 org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
 explain
 select count(*) from
 (
 select /*+mapjoin(a)*/ a.key as key1, b.key as key2, a.value as value1, 
 b.value as value2
 from smb_bucket_1 a join smb_bucket_2 b on a.key = b.key)
 subq;
 The above query does not use sort-merge join. This would be very useful as we 
 automatically convert the queries to use sorting and bucketing properties for 
 join.



[jira] [Updated] (HIVE-3633) sort-merge join does not work with sub-queries

2012-12-17 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3633:
-

Status: Patch Available  (was: Open)

addressed comments - made new tests deterministic

 sort-merge join does not work with sub-queries
 --

 Key: HIVE-3633
 URL: https://issues.apache.org/jira/browse/HIVE-3633
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3633.10.patch, hive.3633.11.patch, 
 hive.3633.1.patch, hive.3633.2.patch, hive.3633.3.patch, hive.3633.4.patch, 
 hive.3633.5.patch, hive.3633.6.patch, hive.3633.7.patch, hive.3633.8.patch, 
 hive.3633.9.patch


 Consider the following query:
 create table smb_bucket_1(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 create table smb_bucket_2(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 -- load the above tables
 set hive.optimize.bucketmapjoin = true;
 set hive.optimize.bucketmapjoin.sortedmerge = true;
 set hive.input.format = 
 org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
 explain
 select count(*) from
 (
 select /*+mapjoin(a)*/ a.key as key1, b.key as key2, a.value as value1, 
 b.value as value2
 from smb_bucket_1 a join smb_bucket_2 b on a.key = b.key)
 subq;
 The above query does not use sort-merge join. This would be very useful as we 
 automatically convert the queries to use sorting and bucketing properties for 
 join.



[jira] [Assigned] (HIVE-3810) HiveHistory.log need to replace '\r' with space before writing Entry.value to historyfile

2012-12-17 Thread Mark Grover (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Grover reassigned HIVE-3810:
-

Assignee: Mark Grover

 HiveHistory.log need to replace '\r' with space before writing Entry.value to 
 historyfile
 -

 Key: HIVE-3810
 URL: https://issues.apache.org/jira/browse/HIVE-3810
 Project: Hive
  Issue Type: Bug
  Components: Logging
Reporter: qiangwang
Assignee: Mark Grover
Priority: Minor
 Attachments: HIVE-3810.1.patch


 HiveHistory.log will replace '\n' with space before writing Entry.value to 
 the history file:
 val = val.replace('\n', ' ');
 but HiveHistory.parseHiveHistory uses BufferedReader.readLine, which takes 
 '\n', '\r', and '\r\n' as line delimiters when parsing the history file.
 If val contains '\r', there is a high possibility that HiveHistory.parseLine 
 will fail, in which case RecordTypes.valueOf(recType) will usually throw 
 java.lang.IllegalArgumentException.
 HiveHistory.log needs to replace '\r' with space as well:
 val = val.replace('\n', ' ');
 changed to
 val = val.replaceAll("\r|\n", " ");
 or
 val = val.replace('\r', ' ').replace('\n', ' ');
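
A minimal sketch of the proposed change (hypothetical class and method names; the real fix lives in HiveHistory.log):

```java
// Normalize both CR and LF to spaces so BufferedReader.readLine, which
// treats '\n', '\r', and "\r\n" as line endings, never splits one
// history entry across lines.
public class HistoryValueSanitizer {
    static String sanitize(String val) {
        // replaceAll takes a regex; "\r|\n" matches either terminator.
        return val.replaceAll("\r|\n", " ");
    }

    public static void main(String[] args) {
        System.out.println(sanitize("line1\rline2\nline3")); // prints "line1 line2 line3"
    }
}
```

Note that a literal "\r\n" pair becomes two spaces under this regex; the chained variant val.replace('\r', ' ').replace('\n', ' ') behaves identically.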



[jira] [Updated] (HIVE-3810) HiveHistory.log need to replace '\r' with space before writing Entry.value to historyfile

2012-12-17 Thread Mark Grover (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Grover updated HIVE-3810:
--

Attachment: HIVE-3810.1.patch

 HiveHistory.log need to replace '\r' with space before writing Entry.value to 
 historyfile
 -

 Key: HIVE-3810
 URL: https://issues.apache.org/jira/browse/HIVE-3810
 Project: Hive
  Issue Type: Bug
  Components: Logging
Reporter: qiangwang
Assignee: Mark Grover
Priority: Minor
 Attachments: HIVE-3810.1.patch


 HiveHistory.log will replace '\n' with space before writing Entry.value to 
 the history file:
 val = val.replace('\n', ' ');
 but HiveHistory.parseHiveHistory uses BufferedReader.readLine, which takes 
 '\n', '\r', and '\r\n' as line delimiters when parsing the history file.
 If val contains '\r', there is a high possibility that HiveHistory.parseLine 
 will fail, in which case RecordTypes.valueOf(recType) will usually throw 
 java.lang.IllegalArgumentException.
 HiveHistory.log needs to replace '\r' with space as well:
 val = val.replace('\n', ' ');
 changed to
 val = val.replaceAll("\r|\n", " ");
 or
 val = val.replace('\r', ' ').replace('\n', ' ');



[jira] [Updated] (HIVE-3810) HiveHistory.log need to replace '\r' with space before writing Entry.value to historyfile

2012-12-17 Thread Mark Grover (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Grover updated HIVE-3810:
--

Status: Patch Available  (was: Open)

 HiveHistory.log need to replace '\r' with space before writing Entry.value to 
 historyfile
 -

 Key: HIVE-3810
 URL: https://issues.apache.org/jira/browse/HIVE-3810
 Project: Hive
  Issue Type: Bug
  Components: Logging
Reporter: qiangwang
Assignee: Mark Grover
Priority: Minor
 Attachments: HIVE-3810.1.patch


 HiveHistory.log will replace '\n' with space before writing Entry.value to 
 the history file:
 val = val.replace('\n', ' ');
 but HiveHistory.parseHiveHistory uses BufferedReader.readLine, which takes 
 '\n', '\r', and '\r\n' as line delimiters when parsing the history file.
 If val contains '\r', there is a high possibility that HiveHistory.parseLine 
 will fail, in which case RecordTypes.valueOf(recType) will usually throw 
 java.lang.IllegalArgumentException.
 HiveHistory.log needs to replace '\r' with space as well:
 val = val.replace('\n', ' ');
 changed to
 val = val.replaceAll("\r|\n", " ");
 or
 val = val.replace('\r', ' ').replace('\n', ' ');



[jira] [Updated] (HIVE-3803) explain dependency should show the dependencies hierarchically in presence of views

2012-12-17 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3803:
-

Attachment: hive.3803.4.patch

 explain dependency should show the dependencies hierarchically in presence of 
 views
 ---

 Key: HIVE-3803
 URL: https://issues.apache.org/jira/browse/HIVE-3803
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3803.1.patch, hive.3803.2.patch, hive.3803.3.patch, 
 hive.3803.4.patch


 It should also include tables whose partitions are being accessed



[jira] [Updated] (HIVE-3552) HIVE-3552 performant manner for performing cubes/rollups/grouping sets for a high number of grouping set keys

2012-12-17 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3552:
-

Attachment: hive.3552.5.patch

 HIVE-3552 performant manner for performing cubes/rollups/grouping sets for a 
 high number of grouping set keys
 -

 Key: HIVE-3552
 URL: https://issues.apache.org/jira/browse/HIVE-3552
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3552.1.patch, hive.3552.2.patch, hive.3552.3.patch, 
 hive.3552.4.patch, hive.3552.5.patch


 This is a follow up for HIVE-3433.
 Had an offline discussion with Sambavi - she pointed out a scenario where the
 implementation in HIVE-3433 will not scale. Assume that the user is performing
 a cube on many columns, say 8 columns. Each row would then generate 256 rows
 for the hash table, which may kill the current group by implementation.
 A better implementation would be to add an additional MR job: in the first
 MR job, perform the group by assuming there was no cube. Then add another MR
 job where you perform the cube. The assumption is that the group by would have
 decreased the output data significantly, and the rows would appear in the
 order of the grouping keys, which has a higher probability of hitting the
 hash table.
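The 2^n expansion described above can be sketched stand-alone as follows. This is only an illustration of the blow-up (class, method, and column names are made up), not Hive's actual group-by operator code.

```java
import java.util.ArrayList;
import java.util.List;

public class CubeExpansion {
    // For a CUBE over n grouping columns, each input row expands into 2^n
    // aggregation rows, one per subset of the grouping keys (keys absent
    // from a subset become NULL). Enumerate the subsets with a bitmask.
    static List<String[]> expand(String[] keys) {
        int n = keys.length;
        List<String[]> rows = new ArrayList<>();
        for (int mask = 0; mask < (1 << n); mask++) {
            String[] row = new String[n];
            for (int i = 0; i < n; i++) {
                row[i] = ((mask >> i) & 1) != 0 ? keys[i] : null;
            }
            rows.add(row);
        }
        return rows;
    }

    public static void main(String[] args) {
        // 8 grouping columns -> 256 rows per input row, as noted in the issue.
        String[] keys = {"c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8"};
        System.out.println(expand(keys).size()); // prints 256
    }
}
```

This is why the proposed second MR job helps: the first group-by shrinks the data before the 2^n expansion is applied.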



[jira] [Updated] (HIVE-3552) HIVE-3552 performant manner for performing cubes/rollups/grouping sets for a high number of grouping set keys

2012-12-17 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3552:
-

Status: Patch Available  (was: Open)

comments addressed

 HIVE-3552 performant manner for performing cubes/rollups/grouping sets for a 
 high number of grouping set keys
 -

 Key: HIVE-3552
 URL: https://issues.apache.org/jira/browse/HIVE-3552
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3552.1.patch, hive.3552.2.patch, hive.3552.3.patch, 
 hive.3552.4.patch, hive.3552.5.patch


 This is a follow up for HIVE-3433.
 Had an offline discussion with Sambavi - she pointed out a scenario where the
 implementation in HIVE-3433 will not scale. Assume that the user is performing
 a cube on many columns, say 8 columns. Each row would then generate 256 rows
 for the hash table, which may kill the current group by implementation.
 A better implementation would be to add an additional MR job: in the first
 MR job, perform the group by assuming there was no cube. Then add another MR
 job where you perform the cube. The assumption is that the group by would have
 decreased the output data significantly, and the rows would appear in the
 order of the grouping keys, which has a higher probability of hitting the
 hash table.



[jira] [Commented] (HIVE-3803) explain dependency should show the dependencies hierarchically in presence of views

2012-12-17 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534647#comment-13534647
 ] 

Namit Jain commented on HIVE-3803:
--

I was not able to refresh the phabricator entry because the length limit was 
exceeded (lots of log files).
The attached patch file contains all the changes; the code changes are present 
in the phabricator review.

 explain dependency should show the dependencies hierarchically in presence of 
 views
 ---

 Key: HIVE-3803
 URL: https://issues.apache.org/jira/browse/HIVE-3803
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3803.1.patch, hive.3803.2.patch, hive.3803.3.patch, 
 hive.3803.4.patch


 It should also include tables whose partitions are being accessed



[jira] [Updated] (HIVE-3537) release locks at the end of move tasks

2012-12-17 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3537:
-

Attachment: hive.3537.5.patch

 release locks at the end of move tasks
 --

 Key: HIVE-3537
 URL: https://issues.apache.org/jira/browse/HIVE-3537
 Project: Hive
  Issue Type: Bug
  Components: Locking, Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3537.1.patch, hive.3537.2.patch, hive.3537.3.patch, 
 hive.3537.4.patch, hive.3537.5.patch


 Look at HIVE-3106 for details.
 In order to make sure that concurrency is not an issue for multi-table 
 inserts, the current option is to introduce a dependency task, which thereby
 delays the creation of all partitions. It would be desirable to release the
 locks for the outputs as soon as the move task is completed. That way, for
 multi-table inserts, concurrency can be enabled without delaying any table.
 Currently, the move task contains an input/output, but they do not seem to be
 populated correctly.



[jira] [Updated] (HIVE-3810) HiveHistory.log need to replace '\r' with space before writing Entry.value to historyfile

2012-12-17 Thread Mark Grover (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Grover updated HIVE-3810:
--

Attachment: HIVE-3810.2.patch

Took care of Mac line endings.

 HiveHistory.log need to replace '\r' with space before writing Entry.value to 
 historyfile
 -

 Key: HIVE-3810
 URL: https://issues.apache.org/jira/browse/HIVE-3810
 Project: Hive
  Issue Type: Bug
  Components: Logging
Reporter: qiangwang
Assignee: Mark Grover
Priority: Minor
 Attachments: HIVE-3810.1.patch, HIVE-3810.2.patch


 HiveHistory.log replaces '\n' with a space before writing Entry.value to the 
 history file:
 val = val.replace('\n', ' ');
 but HiveHistory.parseHiveHistory uses BufferedReader.readLine, which treats 
 '\n', '\r', and '\r\n' as line delimiters when parsing the history file.
 If val contains '\r', there is a high probability that HiveHistory.parseLine 
 will fail, in which case RecordTypes.valueOf(recType) will usually throw a 
 java.lang.IllegalArgumentException.
 HiveHistory.log needs to replace '\r' with a space as well:
 val = val.replace('\n', ' ');
 changed to
 val = val.replaceAll("\r|\n", " ");
 or
 val = val.replace('\r', ' ').replace('\n', ' ');
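A minimal stand-alone sketch of the failure mode and the fix (the sample value is hypothetical; this is not Hive's HiveHistory code): BufferedReader.readLine treats a bare '\r' as a line terminator, so a value that only had '\n' replaced still splits into two records.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;

public class CarriageReturnDemo {
    // Count how many lines readLine() sees in s ('\n', '\r', and '\r\n'
    // all terminate a line).
    static int countLines(String s) {
        int lines = 0;
        try (BufferedReader r = new BufferedReader(new StringReader(s))) {
            while (r.readLine() != null) {
                lines++;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return lines;
    }

    public static void main(String[] args) {
        // Hypothetical history-file entry containing an embedded '\r'.
        String val = "QUERY=select\rfrom t";

        // Only '\n' replaced: the '\r' survives, readLine splits the entry
        // into two lines, and the second fragment is no longer a valid record.
        String broken = val.replace('\n', ' ');
        System.out.println(countLines(broken)); // 2

        // Replace '\r' as well, as the issue suggests: one line remains.
        String fixed = val.replace('\r', ' ').replace('\n', ' ');
        System.out.println(countLines(fixed)); // 1
    }
}
```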



[jira] [Commented] (HIVE-3646) Add 'IGNORE PROTECTION' predicate for dropping partitions

2012-12-17 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534660#comment-13534660
 ] 

Namit Jain commented on HIVE-3646:
--

[~chalfant], please add documentation for this change.

 Add 'IGNORE PROTECTION' predicate for dropping partitions
 -

 Key: HIVE-3646
 URL: https://issues.apache.org/jira/browse/HIVE-3646
 Project: Hive
  Issue Type: New Feature
  Components: CLI
Reporter: Andrew Chalfant
Assignee: Andrew Chalfant
Priority: Minor
 Fix For: 0.11

 Attachments: HIVE-3646.1.patch.txt, HIVE-3646.2.patch.txt, 
 HIVE-3646.3.patch.txt

   Original Estimate: 1m
  Remaining Estimate: 1m

 There are cases where it is desirable to move partitions between clusters. 
 Having to undo protection and then re-protect tables in order to delete 
 partitions from a source is a multi-step process and can leave us in a failed, 
 open state where partition and table metadata are dirty. By implementing 'rm 
 -rf'-like functionality, we can perform these operations atomically.



[jira] [Commented] (HIVE-3492) Provide ALTER for partition changing bucket number

2012-12-17 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534661#comment-13534661
 ] 

Namit Jain commented on HIVE-3492:
--

[~navis], please add documentation for this change.

 Provide ALTER for partition changing bucket number 
 ---

 Key: HIVE-3492
 URL: https://issues.apache.org/jira/browse/HIVE-3492
 Project: Hive
  Issue Type: Improvement
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Fix For: 0.11

 Attachments: HIVE-3492.1.patch.txt, HIVE-3492.2.patch.txt, 
 HIVE-3492.D5589.2.patch, HIVE-3492.D5589.3.patch


 As a follow up of HIVE-3283, the bucket number of a partition could be 
 set/changed individually by a query like 'ALTER TABLE srcpart 
 PARTITION(ds='1999') SET BUCKETNUM 5'.



[jira] [Commented] (HIVE-3401) Diversify grammar for split sampling

2012-12-17 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534664#comment-13534664
 ] 

Namit Jain commented on HIVE-3401:
--

[~navis], please add documentation for this change.

 Diversify grammar for split sampling
 

 Key: HIVE-3401
 URL: https://issues.apache.org/jira/browse/HIVE-3401
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-3401.D4821.2.patch, HIVE-3401.D4821.3.patch, 
 HIVE-3401.D4821.4.patch, HIVE-3401.D4821.5.patch, HIVE-3401.D4821.6.patch, 
 HIVE-3401.D4821.7.patch


 Current split sampling only supports grammar like TABLESAMPLE(n PERCENT). But 
 some users want to specify just the size of the input. It can be easily 
 calculated with a few commands, but it seems better to support additional 
 grammar, something like TABLESAMPLE(500M).



[jira] [Commented] (HIVE-3796) Multi-insert involving bucketed/sorted table turns off merging on all outputs

2012-12-17 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534665#comment-13534665
 ] 

Namit Jain commented on HIVE-3796:
--

Hmmm, so you fixed a bug as a side effect.
Let me take a look again.

 Multi-insert involving bucketed/sorted table turns off merging on all outputs
 -

 Key: HIVE-3796
 URL: https://issues.apache.org/jira/browse/HIVE-3796
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.11
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3796.1.patch.txt, HIVE-3796.2.patch.txt, 
 HIVE-3796.3.patch.txt


 When a multi-insert query has at least one output that is bucketed, merging 
 is turned off for all outputs, rather than just the bucketed ones.



[jira] [Commented] (HIVE-3401) Diversify grammar for split sampling

2012-12-17 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534686#comment-13534686
 ] 

Navis commented on HIVE-3401:
-

It's done. I hadn't mentioned it, sorry.

 Diversify grammar for split sampling
 

 Key: HIVE-3401
 URL: https://issues.apache.org/jira/browse/HIVE-3401
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-3401.D4821.2.patch, HIVE-3401.D4821.3.patch, 
 HIVE-3401.D4821.4.patch, HIVE-3401.D4821.5.patch, HIVE-3401.D4821.6.patch, 
 HIVE-3401.D4821.7.patch


 Current split sampling only supports grammar like TABLESAMPLE(n PERCENT). But 
 some users want to specify just the size of the input. It can be easily 
 calculated with a few commands, but it seems better to support additional 
 grammar, something like TABLESAMPLE(500M).



[jira] [Commented] (HIVE-3715) float and double calculation is inaccurate in Hive

2012-12-17 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13534708#comment-13534708
 ] 

Namit Jain commented on HIVE-3715:
--

Have you looked at ArciMath BigDecimal? I have not looked at it myself, but 
casual browsing suggests it might be faster than BigDecimal.

 float and double calculation is inaccurate in Hive
 --

 Key: HIVE-3715
 URL: https://issues.apache.org/jira/browse/HIVE-3715
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
Reporter: Johnny Zhang
Assignee: Johnny Zhang
 Attachments: HIVE-3715.patch.txt


 I found this while debugging the e2e test failures. Hive miscalculates 
 float and double values. Take float calculation as an example:
 hive> select f from all100k limit 1;
 48308.98
 hive> select f/10 from all100k limit 1;
 4830.898046875   -- added 046875 at the end
 hive> select f*1.01 from all100k limit 1;
 48792.0702734375  -- should be 48792.0698
 It might be essentially the same problem as 
 http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm.
 But since the e2e tests compare the results with mysql, and mysql seems to get 
 it right, it is worth fixing in Hive.
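The reported digits are consistent with Java float semantics, which Hive uses for FLOAT columns: 48308.98 is not exactly representable in 32-bit floating point, and the extra digits appear once the value is widened to double for arithmetic. A small stand-alone check (the column name and value come from the example above):

```java
public class FloatInaccuracy {
    public static void main(String[] args) {
        // The nearest 32-bit float to 48308.98 is exactly 48308.98046875
        // (12367099 * 2^-8), which is what surfaces when the value is
        // widened to double before division.
        float f = 48308.98f;
        System.out.println((double) f);       // 48308.98046875
        System.out.println((double) f / 10);  // 4830.898046875
    }
}
```

So the error is introduced when the literal is first stored as a float, not by the division itself.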



[jira] [Updated] (HIVE-3715) float and double calculation is inaccurate in Hive

2012-12-17 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3715:
-

Status: Open  (was: Patch Available)

 float and double calculation is inaccurate in Hive
 --

 Key: HIVE-3715
 URL: https://issues.apache.org/jira/browse/HIVE-3715
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
Reporter: Johnny Zhang
Assignee: Johnny Zhang
 Attachments: HIVE-3715.patch.txt


 I found this while debugging the e2e test failures. Hive miscalculates 
 float and double values. Take float calculation as an example:
 hive> select f from all100k limit 1;
 48308.98
 hive> select f/10 from all100k limit 1;
 4830.898046875   -- added 046875 at the end
 hive> select f*1.01 from all100k limit 1;
 48792.0702734375  -- should be 48792.0698
 It might be essentially the same problem as 
 http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm.
 But since the e2e tests compare the results with mysql, and mysql seems to get 
 it right, it is worth fixing in Hive.



[jira] [Updated] (HIVE-3537) release locks at the end of move tasks

2012-12-17 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3537:
-

Attachment: hive.3537.6.patch

 release locks at the end of move tasks
 --

 Key: HIVE-3537
 URL: https://issues.apache.org/jira/browse/HIVE-3537
 Project: Hive
  Issue Type: Bug
  Components: Locking, Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3537.1.patch, hive.3537.2.patch, hive.3537.3.patch, 
 hive.3537.4.patch, hive.3537.5.patch, hive.3537.6.patch


 Look at HIVE-3106 for details.
 In order to make sure that concurrency is not an issue for multi-table 
 inserts, the current option is to introduce a dependency task, which thereby
 delays the creation of all partitions. It would be desirable to release the
 locks for the outputs as soon as the move task is completed. That way, for
 multi-table inserts, concurrency can be enabled without delaying any table.
 Currently, the move task contains an input/output, but they do not seem to be
 populated correctly.



[jira] [Updated] (HIVE-3721) ALTER TABLE ADD PARTS should check for valid partition spec and throw a SemanticException if part spec is not valid

2012-12-17 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3721:
---

Fix Version/s: (was: 0.10.0)
   0.11

 ALTER TABLE ADD PARTS should check for valid partition spec and throw a 
 SemanticException if part spec is not valid
 ---

 Key: HIVE-3721
 URL: https://issues.apache.org/jira/browse/HIVE-3721
 Project: Hive
  Issue Type: Task
Reporter: Pamela Vagata
Assignee: Pamela Vagata
Priority: Minor
 Fix For: 0.11

 Attachments: HIVE-3721.1.patch.txt, HIVE-3721.2.patch.txt





