[jira] Commented: (HIVE-1330) fatal error check omitted for reducer-side operators

2010-04-29 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12862106#action_12862106
 ] 

Namit Jain commented on HIVE-1330:
--

+1

looks good

 fatal error check omitted for reducer-side operators
 

 Key: HIVE-1330
 URL: https://issues.apache.org/jira/browse/HIVE-1330
 Project: Hadoop Hive
  Issue Type: Bug
Affects Versions: 0.6.0
Reporter: Ning Zhang
Assignee: Ning Zhang
 Fix For: 0.6.0

 Attachments: HIVE-1330.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1328) make mapred.input.dir.recursive work for select *

2010-04-29 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12862217#action_12862217
 ] 

Edward Capriolo commented on HIVE-1328:
---

I find external partitions to be pretty badly broken right now. I am circling 
around one or two other bugs in them that I am about to report. Users (including 
myself) are frustrated because, rather than working with data, they have to work 
around bugs like HIVE-1318. I understand everyone has their own priorities. 
Call it what you will (inconsistency/feature), but we are adding to the 
capabilities of external tables while current features do not even work well. 

In particular, HIVE-1318 is brutal. When working with my data I can make no 
assumptions when querying. I have to do all kinds of shell scripting to ensure 
that partitions exist before I query them, and add extra where clauses to 
carefully select ranges of partitions. 

If you are using external partitions at Facebook, I wonder how you work around 
HIVE-1318. I am also curious whether you experience HIVE-1303, or whether that 
is just something in my environment. The handful of users I have constantly run 
into issues; does everyone there just 'suck it up'?
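
As a concrete illustration of the kind of pre-check scripting described above, 
here is a minimal sketch; the table name, partition value, and query are 
hypothetical, and checking the output of 'show partitions' with grep is just 
one way to guard a query:

{code}
#!/bin/bash
# Hypothetical guard: only run the query if the partition already exists.
# Table name, partition value, and query are made up for illustration.
TABLE=weblogs
DS=2010-04-29

if hive -e "show partitions ${TABLE};" 2>/dev/null | grep -q "^ds=${DS}$"; then
  hive -e "select count(1) from ${TABLE} where ds='${DS}';"
else
  echo "partition ds=${DS} does not exist in ${TABLE}, skipping" >&2
fi
{code}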

 make mapred.input.dir.recursive work for select *
 -

 Key: HIVE-1328
 URL: https://issues.apache.org/jira/browse/HIVE-1328
 Project: Hadoop Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.6.0
Reporter: John Sichi
Assignee: John Sichi
 Fix For: 0.6.0


 For the script below, we would like the behavior from MAPREDUCE-1501 to apply 
 so that the select * returns two rows instead of none.
 create table fact_daily(x int)
 partitioned by (ds string);
 create table fact_tz(x int)
 partitioned by (ds string, hr string, gmtoffset string);
 alter table fact_tz 
 add partition (ds='2010-01-03', hr='1', gmtoffset='-8');
 insert overwrite table fact_tz
 partition (ds='2010-01-03', hr='1', gmtoffset='-8')
 select key+11 from src where key=484;
 alter table fact_tz 
 add partition (ds='2010-01-03', hr='2', gmtoffset='-7');
 insert overwrite table fact_tz
 partition (ds='2010-01-03', hr='2', gmtoffset='-7')
 select key+12 from src where key=484;
 alter table fact_daily
 set tblproperties('EXTERNAL'='TRUE');
 alter table fact_daily
 add partition (ds='2010-01-03')
 location '/user/hive/warehouse/fact_tz/ds=2010-01-03';
 set mapred.input.dir.recursive=true;
 select * from fact_daily where ds='2010-01-03';

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1192) Build fails when hadoop.version=0.20.1

2010-04-29 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12862221#action_12862221
 ] 

Bill Au commented on HIVE-1192:
---

As Carl has pointed out, I have no problem downloading the file.  But ivy fails 
because of the bad checksum.

 Build fails when hadoop.version=0.20.1
 --

 Key: HIVE-1192
 URL: https://issues.apache.org/jira/browse/HIVE-1192
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Build Infrastructure
Reporter: Carl Steinbach
 Attachments: hadoop-0.20.1.tar.gz.md5


 Setting hadoop.version=0.20.1 causes the build to fail since
 mirror.facebook.net/facebook/hive-deps does not have 0.20.1
 (only 0.17.2.1, 0.18.3, 0.19.0, 0.20.0).
 Suggested fix:
 * remove/ignore the hadoop.version configuration parameter
 or
 * Remove the patch numbers from these archives and use only the major.minor 
 numbers specified by the user to locate the appropriate tarball to download, 
 so 0.20.0 and 0.20.1 would both map to hadoop-0.20.tar.gz.
 * Optionally create new tarballs that only contain the components that are 
 actually needed for the build (Hadoop jars), and remove things that aren't 
 needed (all of the source files).
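
For illustration, a minimal shell sketch of the major.minor mapping proposed 
in the second suggested fix above (the version value here is just an example):

{code}
# Sketch of the suggested mapping: strip the patch number so that
# 0.20.0 and 0.20.1 both resolve to the same tarball name.
hadoop_version=0.20.1
major_minor=$(echo "${hadoop_version}" | cut -d. -f1-2)   # 0.20.1 -> 0.20
tarball="hadoop-${major_minor}.tar.gz"                    # hadoop-0.20.tar.gz
{code}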

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HIVE-1327) Group by partition column returns wrong results

2010-04-29 Thread Ning Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ning Zhang resolved HIVE-1327.
--

Resolution: Not A Problem

Oops, this is a false alarm. It only happened in my sandbox. 

 Group by partition column returns wrong results
 ---

 Key: HIVE-1327
 URL: https://issues.apache.org/jira/browse/HIVE-1327
 Project: Hadoop Hive
  Issue Type: Bug
Affects Versions: 0.6.0
Reporter: Ning Zhang
 Fix For: 0.6.0


 hive> show partitions nzhang_part7;
 show partitions nzhang_part7;
 OK
 ds=2010-01-11
 ds=2010-01-23
 ds=2010-04-03
 ds=2010-04-19
 ds=2010-04-22
 Time taken: 0.431 seconds
 [nzh...@dev303 /tmp] dfs -ls /user/facebook/warehouse/nzhang_part7/*
 -rw-r--r--   3 nzhang supergroup   1756123 2010-04-28 11:54 /user/facebook/warehouse/nzhang_part7/ds=2010-01-11/attempt_201004162336_176893_r_00_0.gz
 -rw-r--r--   3 nzhang supergroup   1758227 2010-04-28 11:54 /user/facebook/warehouse/nzhang_part7/ds=2010-01-11/attempt_201004162336_176893_r_01_0.gz
 -rw-r--r--   3 nzhang supergroup   1915969 2010-04-28 11:54 /user/facebook/warehouse/nzhang_part7/ds=2010-01-23/attempt_201004162336_176893_r_00_0.gz
 -rw-r--r--   3 nzhang supergroup   1943830 2010-04-28 11:54 /user/facebook/warehouse/nzhang_part7/ds=2010-01-23/attempt_201004162336_176893_r_01_0.gz
 -rw-r--r--   3 nzhang supergroup   1646739 2010-04-28 11:54 /user/facebook/warehouse/nzhang_part7/ds=2010-04-03/attempt_201004162336_176893_r_00_0.gz
 -rw-r--r--   3 nzhang supergroup   1641052 2010-04-28 11:54 /user/facebook/warehouse/nzhang_part7/ds=2010-04-03/attempt_201004162336_176893_r_01_0.gz
 -rw-r--r--   3 nzhang supergroup     58601 2010-04-28 11:54 /user/facebook/warehouse/nzhang_part7/ds=2010-04-19/attempt_201004162336_176893_r_00_0.gz
 -rw-r--r--   3 nzhang supergroup     57465 2010-04-28 11:54 /user/facebook/warehouse/nzhang_part7/ds=2010-04-19/attempt_201004162336_176893_r_01_0.gz
 -rw-r--r--   3 nzhang supergroup   1064491 2010-04-28 11:54 /user/facebook/warehouse/nzhang_part7/ds=2010-04-22/attempt_201004162336_176893_r_00_0.gz
 -rw-r--r--   3 nzhang supergroup   1070580 2010-04-28 11:54 /user/facebook/warehouse/nzhang_part7/ds=2010-04-22/attempt_201004162336_176893_r_01_0.gz
 hive> select ds, count(1) from nzhang_part7 where ds is not null group by ds;
 2010-04-03    1761129
 Time taken: 187.692 seconds

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1330) fatal error check omitted for reducer-side operators

2010-04-29 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-1330:
-

  Status: Resolved  (was: Patch Available)
Hadoop Flags: [Reviewed]
  Resolution: Fixed

Committed. Thanks, Ning.

 fatal error check omitted for reducer-side operators
 

 Key: HIVE-1330
 URL: https://issues.apache.org/jira/browse/HIVE-1330
 Project: Hadoop Hive
  Issue Type: Bug
Affects Versions: 0.6.0
Reporter: Ning Zhang
Assignee: Ning Zhang
 Fix For: 0.6.0

 Attachments: HIVE-1330.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HIVE-1331) select * does not work if different partitions contain different formats

2010-04-29 Thread Namit Jain (JIRA)
select * does not work if different partitions contain different formats


 Key: HIVE-1331
 URL: https://issues.apache.org/jira/browse/HIVE-1331
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
 Fix For: 0.6.0


Will try to come up with a concrete test - but looks like we are using the 
table's input format
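
Not the concrete test mentioned above, just a hypothetical sketch of one way a 
table can end up with partitions in different file formats (names are made up; 
it assumes that changing the table's file format only affects partitions 
created afterwards):

{code}
# Hypothetical illustration only -- not the concrete test referenced above.
cat > mixed_format.q <<'EOF'
create table mixed_fmt (x int) partitioned by (ds string) stored as textfile;
insert overwrite table mixed_fmt partition (ds='1')
select key + 1 from src where key = 484;
-- partitions created after this pick up the new format,
-- while ds='1' keeps its original text format
alter table mixed_fmt set fileformat sequencefile;
insert overwrite table mixed_fmt partition (ds='2')
select key + 2 from src where key = 484;
select * from mixed_fmt;
EOF
hive -f mixed_format.q
{code}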

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1192) Build fails when hadoop.version=0.20.1

2010-04-29 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12862282#action_12862282
 ] 

Carl Steinbach commented on HIVE-1192:
--

A temporary workaround for this problem is to set the property ivy.checksums 
to an empty value in ivy/ivysettings.xml:

{code}
<property name="ivy.checksums" value=""/>
{code}

This has the effect of disabling checksum checks.

 Build fails when hadoop.version=0.20.1
 --

 Key: HIVE-1192
 URL: https://issues.apache.org/jira/browse/HIVE-1192
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Build Infrastructure
Reporter: Carl Steinbach
 Attachments: hadoop-0.20.1.tar.gz.md5


 Setting hadoop.version=0.20.1 causes the build to fail since
 mirror.facebook.net/facebook/hive-deps does not have 0.20.1
 (only 0.17.2.1, 0.18.3, 0.19.0, 0.20.0).
 Suggested fix:
 * remove/ignore the hadoop.version configuration parameter
 or
 * Remove the patch numbers from these archives and use only the major.minor 
 numbers specified by the user to locate the appropriate tarball to download, 
 so 0.20.0 and 0.20.1 would both map to hadoop-0.20.tar.gz.
 * Optionally create new tarballs that only contain the components that are 
 actually needed for the build (Hadoop jars), and remove things that aren't 
 needed (all of the source files).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1328) make mapred.input.dir.recursive work for select *

2010-04-29 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12862295#action_12862295
 ] 

Namit Jain commented on HIVE-1328:
--

I haven't heard of anyone running into 
https://issues.apache.org/jira/browse/HIVE-1303 at Facebook.



 make mapred.input.dir.recursive work for select *
 -

 Key: HIVE-1328
 URL: https://issues.apache.org/jira/browse/HIVE-1328
 Project: Hadoop Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.6.0
Reporter: John Sichi
Assignee: John Sichi
 Fix For: 0.6.0


 For the script below, we would like the behavior from MAPREDUCE-1501 to apply 
 so that the select * returns two rows instead of none.
 create table fact_daily(x int)
 partitioned by (ds string);
 create table fact_tz(x int)
 partitioned by (ds string, hr string, gmtoffset string);
 alter table fact_tz 
 add partition (ds='2010-01-03', hr='1', gmtoffset='-8');
 insert overwrite table fact_tz
 partition (ds='2010-01-03', hr='1', gmtoffset='-8')
 select key+11 from src where key=484;
 alter table fact_tz 
 add partition (ds='2010-01-03', hr='2', gmtoffset='-7');
 insert overwrite table fact_tz
 partition (ds='2010-01-03', hr='2', gmtoffset='-7')
 select key+12 from src where key=484;
 alter table fact_daily
 set tblproperties('EXTERNAL'='TRUE');
 alter table fact_daily
 add partition (ds='2010-01-03')
 location '/user/hive/warehouse/fact_tz/ds=2010-01-03';
 set mapred.input.dir.recursive=true;
 select * from fact_daily where ds='2010-01-03';

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1192) Build fails when hadoop.version=0.20.1

2010-04-29 Thread John Sichi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12862326#action_12862326
 ] 

John Sichi commented on HIVE-1192:
--

Thanks Carl.  I'm working on getting the corrected checksum uploaded.

Note that in the past, the checksums have actually been needed for detecting 
bad downloads from archive.apache.org when it was overloaded.

Is there a JIRA open for Apache to keep the release directories ivy-friendly?  
Or maybe it has already been corrected for later releases?


 Build fails when hadoop.version=0.20.1
 --

 Key: HIVE-1192
 URL: https://issues.apache.org/jira/browse/HIVE-1192
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Build Infrastructure
Reporter: Carl Steinbach
 Attachments: hadoop-0.20.1.tar.gz.md5


 Setting hadoop.version=0.20.1 causes the build to fail since
 mirror.facebook.net/facebook/hive-deps does not have 0.20.1
 (only 0.17.2.1, 0.18.3, 0.19.0, 0.20.0).
 Suggested fix:
 * remove/ignore the hadoop.version configuration parameter
 or
 * Remove the patch numbers from these archives and use only the major.minor 
 numbers specified by the user to locate the appropriate tarball to download, 
 so 0.20.0 and 0.20.1 would both map to hadoop-0.20.tar.gz.
 * Optionally create new tarballs that only contain the components that are 
 actually needed for the build (Hadoop jars), and remove things that aren't 
 needed (all of the source files).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1303) Adding/selecting many external partitions tables in one session eventually fails

2010-04-29 Thread John Sichi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12862341#action_12862341
 ] 

John Sichi commented on HIVE-1303:
--

Do the MySQL server logs contain any information about why it is giving Access 
denied?  Maybe there is a resource limit that needs to be increased on the 
MySQL server?  Can you reproduce the problem with another metastore db such as 
Derby?
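
For what it's worth, a sketch of pointing the CLI at a throwaway embedded 
Derby metastore instead of MySQL; the javax.jdo.option.* property names are 
the standard metastore settings, while the local database path is made up:

{code}
# Run the repro against an embedded Derby metastore to rule out the
# MySQL server itself (test.q is the script from the description below).
hive -hiveconf javax.jdo.option.ConnectionDriverName=org.apache.derby.jdbc.EmbeddedDriver \
     -hiveconf 'javax.jdo.option.ConnectionURL=jdbc:derby:;databaseName=/tmp/derby_metastore;create=true' \
     -f test.q
{code}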


 Adding/selecting many external partitions tables in one session eventually 
 fails
 

 Key: HIVE-1303
 URL: https://issues.apache.org/jira/browse/HIVE-1303
 Project: Hadoop Hive
  Issue Type: Bug
Affects Versions: 0.5.0
Reporter: Edward Capriolo
Priority: Critical

 echo "create external table if not exists edtest ( dat string ) partitioned by (dummy string) location '/tmp/a';" > test.q
 for i in {1..3000} ; do echo "alter table ed_test add partition (dummy='${i}') location '/tmp/duh';" ; done >> test.q
 hive -f test.q
 Also, there are problems working with this type of table as well. :(
 $ hive -e "explain select * from X_action"
 Hive history file=/tmp/XX/hive_job_log_media6_201004121029_170696698.txt
 FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException: Access 
 denied for user 'hivadm'@'XX' (using password: YES)
 NestedThrowables:
 java.sql.SQLException: Access denied for user 'hivadm'@'XX' (using 
 password: YES)
 Interestingly enough, if we specify some partitions we can dodge this error. I 
 get the feeling that the select * is trying to select too many partitions and 
 that is causing this error.
 2010-04-12 10:33:02,789 ERROR metadata.Hive (Hive.java:getPartition(629)) - 
 javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'rs01
 .sd.pl.pvt' (using password: YES)
 at 
 org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
 at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
 at 
 org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
 at 
 org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
 at 
 org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
 at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
 at 
 org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
 at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
 at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
 NestedThrowablesStackTrace:
 java.sql.SQLException: Access denied for user 
 'hivadm'@'X.domain.whatetever' (using password: YES)
 at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
 at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
 at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
 at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
 at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
 at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
 at com.mysql.jdbc.Connection.init(Connection.java:1555)
 at 
 com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
 at 
 org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
 at 
 

[jira] Updated: (HIVE-1331) select * does not work if different partitions contain different formats

2010-04-29 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-1331:
-

Attachment: hive.1331.1.patch

 select * does not work if different partitions contain different formats
 

 Key: HIVE-1331
 URL: https://issues.apache.org/jira/browse/HIVE-1331
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
 Fix For: 0.6.0

 Attachments: hive.1331.1.patch


 Will try to come up with a concrete test - but looks like we are using the 
 table's input format

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HIVE-1331) select * does not work if different partitions contain different formats

2010-04-29 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain reassigned HIVE-1331:


Assignee: Namit Jain

 select * does not work if different partitions contain different formats
 

 Key: HIVE-1331
 URL: https://issues.apache.org/jira/browse/HIVE-1331
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.6.0

 Attachments: hive.1331.1.patch


 Will try to come up with a concrete test - but looks like we are using the 
 table's input format

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1331) select * does not work if different partitions contain different formats

2010-04-29 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-1331:
-

Status: Patch Available  (was: Open)

 select * does not work if different partitions contain different formats
 

 Key: HIVE-1331
 URL: https://issues.apache.org/jira/browse/HIVE-1331
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.6.0

 Attachments: hive.1331.1.patch


 Will try to come up with a concrete test - but looks like we are using the 
 table's input format

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1331) select * does not work if different partitions contain different formats

2010-04-29 Thread John Sichi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12862413#action_12862413
 ] 

John Sichi commented on HIVE-1331:
--

+1 (I guess we can't add ORDER BY for determinism since we need to test select 
* fastpath)


 select * does not work if different partitions contain different formats
 

 Key: HIVE-1331
 URL: https://issues.apache.org/jira/browse/HIVE-1331
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.6.0

 Attachments: hive.1331.1.patch


 Will try to come up with a concrete test - but looks like we are using the 
 table's input format

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1192) Build fails when hadoop.version=0.20.1

2010-04-29 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12862433#action_12862433
 ] 

Carl Steinbach commented on HIVE-1192:
--

@John: As far as I know 0.20.1 is the only release on archive.apache.org with a 
bogus md5 file. I raised this issue on common-user a couple months ago and got 
no response. I just filed HADOOP-6737 to track this problem.

 Build fails when hadoop.version=0.20.1
 --

 Key: HIVE-1192
 URL: https://issues.apache.org/jira/browse/HIVE-1192
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Build Infrastructure
Reporter: Carl Steinbach
 Attachments: hadoop-0.20.1.tar.gz.md5


 Setting hadoop.version=0.20.1 causes the build to fail since
 mirror.facebook.net/facebook/hive-deps does not have 0.20.1
 (only 0.17.2.1, 0.18.3, 0.19.0, 0.20.0).
 Suggested fix:
 * remove/ignore the hadoop.version configuration parameter
 or
 * Remove the patch numbers from these archives and use only the major.minor 
 numbers specified by the user to locate the appropriate tarball to download, 
 so 0.20.0 and 0.20.1 would both map to hadoop-0.20.tar.gz.
 * Optionally create new tarballs that only contain the components that are 
 actually needed for the build (Hadoop jars), and remove things that aren't 
 needed (all of the source files).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1318) External Tables: Selecting a partition that does not exist produces errors

2010-04-29 Thread John Sichi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12862506#action_12862506
 ] 

John Sichi commented on HIVE-1318:
--

What error did you get (other than abct not existing due to the typo)?

It works for me on trunk (returns no rows), but maybe 0.5 didn't get some patch.


 External Tables: Selecting a partition that does not exist produces errors
 --

 Key: HIVE-1318
 URL: https://issues.apache.org/jira/browse/HIVE-1318
 Project: Hadoop Hive
  Issue Type: Bug
Affects Versions: 0.5.0
Reporter: Edward Capriolo
 Attachments: partdoom.q


 {noformat}
 dfs -mkdir /tmp/a;
 dfs -mkdir /tmp/a/b;
 dfs -mkdir /tmp/a/c;
 create external table abc( key string, val string  )
 partitioned by (part int)
 location '/tmp/a/';
 alter table abc ADD PARTITION (part=1)  LOCATION 'b';
 alter table abc ADD PARTITION (part=2)  LOCATION 'c';
 select key from abc where part=1;
 select key from abct where part=70;
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.