[jira] Updated: (HIVE-1729) Satisfy ASF release management requirements

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1729:
-

Status: Patch Available  (was: Open)

 Satisfy ASF release management requirements
 ---

 Key: HIVE-1729
 URL: https://issues.apache.org/jira/browse/HIVE-1729
 Project: Hive
  Issue Type: Task
  Components: Build Infrastructure, Documentation
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.6.0

 Attachments: HIVE-1729-backport06.1.patch.txt, HIVE-1729.1.patch.txt


 We need to make sure we satisfy the ASF release requirements:
 * http://www.apache.org/dev/release.html
 * http://incubator.apache.org/guides/releasemanagement.html

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1729) Satisfy ASF release management requirements

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1729:
-

Attachment: HIVE-1729-backport06.1.patch.txt
HIVE-1729.1.patch.txt

* Add LICENSE and NOTICE files.
* Modify build to include LICENSE and NOTICE files in the top-level release 
directory and in JAR manifests.
* Add RELEASE_NOTES.txt file with the project release note output from JIRA (text 
format), and modify the build to include it in the top-level release directory.
* For the 0.6 backport, modify the 'tar' target to exclude the unfinished 
xdocs.


 Satisfy ASF release management requirements
 ---

 Key: HIVE-1729
 URL: https://issues.apache.org/jira/browse/HIVE-1729
 Project: Hive
  Issue Type: Task
  Components: Build Infrastructure, Documentation
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.6.0

 Attachments: HIVE-1729-backport06.1.patch.txt, HIVE-1729.1.patch.txt


 We need to make sure we satisfy the ASF release requirements:
 * http://www.apache.org/dev/release.html
 * http://incubator.apache.org/guides/releasemanagement.html

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1638) convert commonly used udfs to generic udfs

2010-10-19 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12922635#action_12922635
 ] 

Namit Jain commented on HIVE-1638:
--

Forgot to mention: I had merged before I saw your last comment.
So, the checked-in patch is HIVE-1638.5.patch.

If you want the remaining changes, can you file a follow-up JIRA with the patch?

 convert commonly used udfs to generic udfs
 --

 Key: HIVE-1638
 URL: https://issues.apache.org/jira/browse/HIVE-1638
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Siying Dong
 Fix For: 0.7.0

 Attachments: HIVE-1638.1.patch, HIVE-1638.2.patch, HIVE-1638.3.patch, 
 HIVE-1638.4.patch, HIVE-1638.5.patch, HIVE-1638.6.patch


 Copying a mail from Joy:
 i did a little bit of profiling of a simple hive group by query today. i was 
 surprised to see that one of the most expensive functions was the conversion 
 performed when bridging the equals udf (i had some simple string filters) to a 
 generic udf (primitiveobjectinspectorconverter.textconverter).
 am i correct in thinking that the fix is to simply port some of the most 
 popular udfs (string equality/comparison etc.) to generic udfs?
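
A minimal sketch of the direction being discussed (not the attached patches): a string-equality GenericUDF that compares its arguments through their ObjectInspectors, so no per-row TextConverter conversion is needed. The class name is hypothetical; the GenericUDF and ObjectInspectorUtils APIs are assumed to be the standard Hive ones.

{code}
// Illustrative sketch only -- not the patch attached to this issue.
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.io.BooleanWritable;

public class GenericUDFStrEquals extends GenericUDF {
  private ObjectInspector leftOI;
  private ObjectInspector rightOI;
  private final BooleanWritable result = new BooleanWritable();

  @Override
  public ObjectInspector initialize(ObjectInspector[] args) throws UDFArgumentException {
    if (args.length != 2) {
      throw new UDFArgumentException("= expects exactly two arguments");
    }
    leftOI = args[0];
    rightOI = args[1];
    return PrimitiveObjectInspectorFactory.writableBooleanObjectInspector;
  }

  @Override
  public Object evaluate(DeferredObject[] args) throws HiveException {
    Object l = args[0].get();
    Object r = args[1].get();
    if (l == null || r == null) {
      return null;
    }
    // Compare through the ObjectInspectors; no per-row conversion to Text.
    result.set(ObjectInspectorUtils.compare(l, leftOI, r, rightOI) == 0);
    return result;
  }

  @Override
  public String getDisplayString(String[] children) {
    return "(" + children[0] + " = " + children[1] + ")";
  }
}
{code}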

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HIVE-1733) Make the bucket size of JDBM configurable

2010-10-19 Thread Liyin Tang (JIRA)
Make the bucket size of JDBM configurable 
--

 Key: HIVE-1733
 URL: https://issues.apache.org/jira/browse/HIVE-1733
 Project: Hive
  Issue Type: Task
  Components: Query Processor
Affects Versions: 0.6.0, 0.7.0
Reporter: Liyin Tang
Assignee: Liyin Tang


Right now the JDBM bucket size is hard-coded as 256.
To make the JDBM component easier to configure and to improve its performance,
the bucket size should be made configurable.
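
A minimal sketch of the kind of change this calls for, assuming a hypothetical configuration property name (hive.mapjoin.jdbm.bucket.size is illustrative, not the name used by any patch):

{code}
// Illustrative sketch: read the JDBM bucket size from the job configuration
// instead of hard coding 256. The property name below is hypothetical.
import org.apache.hadoop.conf.Configuration;

public class JdbmBucketSizeSketch {
  private static final int DEFAULT_JDBM_BUCKET_SIZE = 256;

  public static int getBucketSize(Configuration conf) {
    // Fall back to the current hard-coded value when the property is unset.
    return conf.getInt("hive.mapjoin.jdbm.bucket.size", DEFAULT_JDBM_BUCKET_SIZE);
  }
}
{code}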


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1641) add map joined table to distributed cache

2010-10-19 Thread Liyin Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liyin Tang updated HIVE-1641:
-

Attachment: Hive-1641(4).patch

 add map joined table to distributed cache
 -

 Key: HIVE-1641
 URL: https://issues.apache.org/jira/browse/HIVE-1641
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.7.0
Reporter: Namit Jain
Assignee: Liyin Tang
 Fix For: 0.7.0

 Attachments: Hive-1641(3).txt, Hive-1641(4).patch, Hive-1641.patch


 Currently, the mappers directly read the map-joined table from HDFS, which 
 makes it difficult to scale.
 We end up getting lots of timeouts once the number of mappers is beyond a 
 few thousand, due to the concurrent mappers.
 It would be a good idea to put the mapped file into the distributed cache and 
 read it from there instead.
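
As a rough illustration of the idea (not the attached patches), Hadoop's DistributedCache can ship the map-joined table's file to each task node once, and the mapper can then read the local copy. The class and method names here are made up; only the DistributedCache calls are real API.

{code}
// Illustrative sketch only.
import java.net.URI;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;

public class MapJoinCacheSketch {
  // Client side: register the (hypothetical) dump file of the small table.
  public static void addSmallTable(JobConf job, Path smallTableDump) throws Exception {
    DistributedCache.addCacheFile(new URI(smallTableDump.toString()), job);
  }

  // Task side: locate the local copy the framework has already downloaded,
  // instead of opening the file on HDFS from every mapper.
  public static Path findLocalCopy(JobConf job) throws Exception {
    Path[] localFiles = DistributedCache.getLocalCacheFiles(job);
    return (localFiles != null && localFiles.length > 0) ? localFiles[0] : null;
  }
}
{code}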

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1633) CombineHiveInputFormat fails with cannot find dir for emptyFile

2010-10-19 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12922644#action_12922644
 ] 

Namit Jain commented on HIVE-1633:
--

Yongqiang, can you take a look ?

 CombineHiveInputFormat fails with cannot find dir for emptyFile
 -

 Key: HIVE-1633
 URL: https://issues.apache.org/jira/browse/HIVE-1633
 Project: Hive
  Issue Type: Bug
  Components: Clients
Reporter: Amareshwari Sriramadasu
 Attachments: HIVE-1633.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1660) Change get_partitions_ps to pass partition filter to database

2010-10-19 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-1660:
-

   Resolution: Fixed
Fix Version/s: 0.7.0
 Hadoop Flags: [Reviewed]
   Status: Resolved  (was: Patch Available)

Committed. Thanks Paul

 Change get_partitions_ps to pass partition filter to database
 -

 Key: HIVE-1660
 URL: https://issues.apache.org/jira/browse/HIVE-1660
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.7.0
Reporter: Ajay Kidave
Assignee: Paul Yang
 Fix For: 0.7.0

 Attachments: HIVE-1660.1.patch, HIVE-1660.2.patch, HIVE-1660.3.patch, 
 HIVE-1660.4.patch, HIVE-1660_regex.patch


 Support for doing partition pruning by passing the partition filter to the 
 database was added by HIVE-1609. Changing get_partitions_ps to use this could 
 result in a performance improvement for tables having a large number of 
 partitions. A listPartitionNamesByFilter API might be required for 
 implementing this for use from Hive.
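
A sketch of what such an API might look like; the method name comes from the description above, while the parameters mirror the existing filter-based listing from HIVE-1609 and are otherwise an assumption:

{code}
// Illustrative sketch of the proposed metastore call; the signature is hypothetical.
import java.util.List;

public interface PartitionNameFilterSupport {
  /**
   * Return the names of partitions of dbName.tableName matching the given
   * filter expression (e.g. "ds = '2010-10-19'"), pushing the filtering down
   * to the metastore database instead of fetching all partitions.
   */
  List<String> listPartitionNamesByFilter(String dbName, String tableName,
      String filter, short maxParts) throws Exception;
}
{code}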

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1633) CombineHiveInputFormat fails with cannot find dir for emptyFile

2010-10-19 Thread He Yongqiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12922650#action_12922650
 ] 

He Yongqiang commented on HIVE-1633:


+1 

 CombineHiveInputFormat fails with cannot find dir for emptyFile
 -

 Key: HIVE-1633
 URL: https://issues.apache.org/jira/browse/HIVE-1633
 Project: Hive
  Issue Type: Bug
  Components: Clients
Reporter: Amareshwari Sriramadasu
 Attachments: HIVE-1633.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Build failed in Hudson: Hive-trunk-h0.20 #397

2010-10-19 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/397/changes

Changes:

[namit] spelling mistake in Siying's name

[namit] HIVE-1638. convert commonly used udfs to generic udfs
(Siyong Dong via namit)

[jvs] HIVE-1726. Update README file for 0.6.0 release
(Carl Steinbach via jvs)

[jvs] HIVE-1725. Include metastore upgrade scripts in release tarball
(Carl Steinbach via jvs)

--
[...truncated 15079 lines...]
[junit] POSTHOOK: Output: defa...@src1
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.seq
[junit] Loading data to table src_sequencefile
[junit] POSTHOOK: Output: defa...@src_sequencefile
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/complex.seq
[junit] Loading data to table src_thrift
[junit] POSTHOOK: Output: defa...@src_thrift
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/json.txt
[junit] Loading data to table src_json
[junit] POSTHOOK: Output: defa...@src_json
[junit] OK
[junit] diff 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/build/ql/test/logs/negative/unknown_table1.q.out
 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/ql/src/test/results/compiler/errors/unknown_table1.q.out
[junit] Done query: unknown_table1.q
[junit] Begin query: unknown_table2.q
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table srcpart partition (ds=2008-04-08, hr=11)
[junit] rmr: cannot remove 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/build/ql/test/data/warehouse/srcpart/ds=2008-04-08/hr=11:
 No such file or directory.
[junit] POSTHOOK: Output: defa...@srcpart@ds=2008-04-08/hr=11
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table srcpart partition (ds=2008-04-08, hr=12)
[junit] rmr: cannot remove 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/build/ql/test/data/warehouse/srcpart/ds=2008-04-08/hr=12:
 No such file or directory.
[junit] POSTHOOK: Output: defa...@srcpart@ds=2008-04-08/hr=12
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table srcpart partition (ds=2008-04-09, hr=11)
[junit] rmr: cannot remove 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/build/ql/test/data/warehouse/srcpart/ds=2008-04-09/hr=11:
 No such file or directory.
[junit] POSTHOOK: Output: defa...@srcpart@ds=2008-04-09/hr=11
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table srcpart partition (ds=2008-04-09, hr=12)
[junit] rmr: cannot remove 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/build/ql/test/data/warehouse/srcpart/ds=2008-04-09/hr=12:
 No such file or directory.
[junit] POSTHOOK: Output: defa...@srcpart@ds=2008-04-09/hr=12
[junit] OK
[junit] POSTHOOK: Output: defa...@srcbucket
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket0.txt
[junit] Loading data to table srcbucket
[junit] POSTHOOK: Output: defa...@srcbucket
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket1.txt
[junit] Loading data to table srcbucket
[junit] POSTHOOK: Output: defa...@srcbucket
[junit] OK
[junit] POSTHOOK: Output: defa...@srcbucket2
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket20.txt
[junit] Loading data to table srcbucket2
[junit] POSTHOOK: Output: defa...@srcbucket2
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket21.txt
[junit] Loading data to table srcbucket2
[junit] POSTHOOK: Output: defa...@srcbucket2
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket22.txt
[junit] Loading data to table srcbucket2
[junit] POSTHOOK: Output: defa...@srcbucket2
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket23.txt
[junit] Loading data to table srcbucket2
[junit] POSTHOOK: Output: defa...@srcbucket2
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table src
[junit] POSTHOOK: Output: 

[jira] Updated: (HIVE-1641) add map joined table to distributed cache

2010-10-19 Thread Liyin Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liyin Tang updated HIVE-1641:
-

Attachment: Hive-1641(5).patch

 add map joined table to distributed cache
 -

 Key: HIVE-1641
 URL: https://issues.apache.org/jira/browse/HIVE-1641
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.7.0
Reporter: Namit Jain
Assignee: Liyin Tang
 Fix For: 0.7.0

 Attachments: Hive-1641(3).txt, Hive-1641(4).patch, 
 Hive-1641(5).patch, Hive-1641.patch


 Currently, the mappers directly read the map-joined table from HDFS, which 
 makes it difficult to scale.
 We end up getting lots of timeouts once the number of mappers is beyond a 
 few thousand, due to the concurrent mappers.
 It would be a good idea to put the mapped file into the distributed cache and 
 read it from there instead.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HIVE-1734) Implement map_keys() and map_values() UDFs

2010-10-19 Thread Carl Steinbach (JIRA)
Implement map_keys() and map_values() UDFs
--

 Key: HIVE-1734
 URL: https://issues.apache.org/jira/browse/HIVE-1734
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Reporter: Carl Steinbach


Implement the following UDFs:

array<K> map_keys(map<K, V>)

and

array<V> map_values(map<K, V>)

map_keys() takes a map as input and returns an array consisting of the keys 
of the supplied map.
Similarly, map_values() takes a map as input and returns an array containing 
the map's value fields.
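
A minimal sketch of how map_keys() might be written as a GenericUDF (map_values() would mirror it over the map's values); the class name is hypothetical, and the standard Hive GenericUDF/ObjectInspector APIs are assumed rather than reflecting any particular patch:

{code}
// Illustrative sketch only: return the keys of the input map as an array (list).
import java.util.ArrayList;
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.MapObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;

public class GenericUDFMapKeysSketch extends GenericUDF {
  private MapObjectInspector mapOI;

  @Override
  public ObjectInspector initialize(ObjectInspector[] args) throws UDFArgumentException {
    if (args.length != 1 || !(args[0] instanceof MapObjectInspector)) {
      throw new UDFArgumentException("map_keys() takes a single map argument");
    }
    mapOI = (MapObjectInspector) args[0];
    // The result type is array<K>, i.e. a list of the map's key type.
    return ObjectInspectorFactory.getStandardListObjectInspector(
        mapOI.getMapKeyObjectInspector());
  }

  @Override
  public Object evaluate(DeferredObject[] args) throws HiveException {
    Object map = args[0].get();
    return map == null ? null : new ArrayList<Object>(mapOI.getMap(map).keySet());
  }

  @Override
  public String getDisplayString(String[] children) {
    return "map_keys(" + children[0] + ")";
  }
}
{code}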


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1729) Satisfy ASF release management requirements

2010-10-19 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-1729:
-

  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

Committed to branch and trunk.  Thanks Carl!


 Satisfy ASF release management requirements
 ---

 Key: HIVE-1729
 URL: https://issues.apache.org/jira/browse/HIVE-1729
 Project: Hive
  Issue Type: Task
  Components: Build Infrastructure, Documentation
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.6.0

 Attachments: HIVE-1729-backport06.1.patch.txt, HIVE-1729.1.patch.txt


 We need to make sure we satisfy the ASF release requirements:
 * http://www.apache.org/dev/release.html
 * http://incubator.apache.org/guides/releasemanagement.html

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HIVE-1735) Extend Explode UDTF to handle Maps

2010-10-19 Thread Carl Steinbach (JIRA)
Extend Explode UDTF to handle Maps
--

 Key: HIVE-1735
 URL: https://issues.apache.org/jira/browse/HIVE-1735
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Reporter: Carl Steinbach


The explode() UDTF currently only accepts arrays as input. We should modify it
so that it can also handle map inputs, in which case it will output two columns
corresponding to the key and value fields.
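
A minimal sketch of the map branch this would need, written as a GenericUDTF that forwards one (key, value) row per map entry; the class name is hypothetical and the standard GenericUDTF/ObjectInspector APIs are assumed:

{code}
// Illustrative sketch only -- not an attached patch.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
import org.apache.hadoop.hive.serde2.objectinspector.MapObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;

public class ExplodeMapSketch extends GenericUDTF {
  private MapObjectInspector mapOI;

  @Override
  public StructObjectInspector initialize(ObjectInspector[] args) throws UDFArgumentException {
    if (args.length != 1 || !(args[0] instanceof MapObjectInspector)) {
      throw new UDFArgumentException("explode() on a map takes a single map argument");
    }
    mapOI = (MapObjectInspector) args[0];
    // Two output columns: one for the key, one for the value.
    List<String> names = new ArrayList<String>();
    List<ObjectInspector> ois = new ArrayList<ObjectInspector>();
    names.add("key");
    ois.add(mapOI.getMapKeyObjectInspector());
    names.add("value");
    ois.add(mapOI.getMapValueObjectInspector());
    return ObjectInspectorFactory.getStandardStructObjectInspector(names, ois);
  }

  @Override
  public void process(Object[] args) throws HiveException {
    Map<?, ?> map = mapOI.getMap(args[0]);
    if (map == null) {
      return;
    }
    for (Map.Entry<?, ?> entry : map.entrySet()) {
      forward(new Object[] {entry.getKey(), entry.getValue()});
    }
  }

  @Override
  public void close() throws HiveException {
    // Nothing buffered, so nothing to flush.
  }
}
{code}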

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1641) add map joined table to distributed cache

2010-10-19 Thread He Yongqiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12922808#action_12922808
 ] 

He Yongqiang commented on HIVE-1641:


Had several reviews of the patch offline with Namit, Joy and Liyin.
Will go through the diff again and commit it.

 add map joined table to distributed cache
 -

 Key: HIVE-1641
 URL: https://issues.apache.org/jira/browse/HIVE-1641
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.7.0
Reporter: Namit Jain
Assignee: Liyin Tang
 Fix For: 0.7.0

 Attachments: Hive-1641(3).txt, Hive-1641(4).patch, 
 Hive-1641(5).patch, Hive-1641.patch


 Currently, the mappers directly read the map-joined table from HDFS, which 
 makes it difficult to scale.
 We end up getting lots of timeouts once the number of mappers is beyond a 
 few thousand, due to the concurrent mappers.
 It would be a good idea to put the mapped file into the distributed cache and 
 read it from there instead.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1736) Remove -dev suffix from release package name and generate MD5 checksum using Ant

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1736:
-

Status: Patch Available  (was: Open)

 Remove -dev suffix from release package name and generate MD5 checksum 
 using Ant
 --

 Key: HIVE-1736
 URL: https://issues.apache.org/jira/browse/HIVE-1736
 Project: Hive
  Issue Type: Task
  Components: Build Infrastructure
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.6.0

 Attachments: HIVE-1736-backport.1.patch.txt, HIVE-1736.1.patch.txt




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1736) Remove -dev suffix from release package name and generate MD5 checksum using Ant

2010-10-19 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-1736:
-

  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

+1.

Committed to branch and trunk.  Thanks Carl!


 Remove -dev suffix from release package name and generate MD5 checksum 
 using Ant
 --

 Key: HIVE-1736
 URL: https://issues.apache.org/jira/browse/HIVE-1736
 Project: Hive
  Issue Type: Task
  Components: Build Infrastructure
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.6.0

 Attachments: HIVE-1736-backport.1.patch.txt, HIVE-1736.1.patch.txt




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1124) CREATE VIEW should expand the query text consistently

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1124:
-

Release Note:   (was: HIVE-1124. Create view should expand the query text 
consistently. (John Sichi via zshao))
 Summary: CREATE VIEW should expand the query text consistently  (was: 
create view should expand the query text consistently)

 CREATE VIEW should expand the query text consistently
 -

 Key: HIVE-1124
 URL: https://issues.apache.org/jira/browse/HIVE-1124
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.6.0
Reporter: Zheng Shao
Assignee: John Sichi
 Fix For: 0.6.0

 Attachments: HIVE-1124.1.patch, HIVE-1124.2.patch, HIVE-1124.3.patch


 We should expand the omitted alias in the same way in select and in group 
 by.
 Hive's Group By recognizes group-by expressions by comparing the literal 
 strings.
 {code}
 hive> create view zshao_view as select d, count(1) as cnt from zshao_tt group 
 by d;
 OK
 Time taken: 0.286 seconds
 hive> select * from zshao_view;
 FAILED: Error in semantic analysis: line 1:7 Expression Not In Group By Key d 
 in definition of VIEW zshao_view [
 select d, count(1) as `cnt` from `zshao_tt` group by `zshao_tt`.`d`
 ] used as zshao_view at line 1:14
 {code}
  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1174) Fix job counter error if hive.merge.mapfiles equals true

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1174:
-

Release Note:   (was: HIVE-1174. Fix Job counter error if 
hive.merge.mapfiles equals true. (Yongqiang He via zshao))
 Summary: Fix job counter error if hive.merge.mapfiles equals true  
(was: Job counter error if hive.merge.mapfiles equals true)

 Fix job counter error if hive.merge.mapfiles equals true
 --

 Key: HIVE-1174
 URL: https://issues.apache.org/jira/browse/HIVE-1174
 Project: Hive
  Issue Type: Bug
Reporter: He Yongqiang
Assignee: He Yongqiang
 Fix For: 0.6.0

 Attachments: hive-1174.1.patch, hive-1174.2.patch


 if hive.merge.mapfiles is set to true, the job counter will go to 3.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1204) typedbytes: writing to stderr kills the mapper

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1204:
-

Release Note:   (was: typedbytes: writing to stderr kills the mapper)

 typedbytes: writing to stderr kills the mapper
 --

 Key: HIVE-1204
 URL: https://issues.apache.org/jira/browse/HIVE-1204
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.6.0

 Attachments: hive.1204.1.patch, hive.1204.2.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1253) Fix Date_sub and Date_add in case of daylight saving

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1253:
-

Affects Version/s: (was: 0.6.0)
 Release Note:   (was: HIVE-1253. Fix Date_sub and Date_add in case of 
daylight saving. (Bryan Talbot via zshao))
  Summary: Fix Date_sub and Date_add in case of daylight saving  
(was: date_sub() function returns wrong date because of daylight saving time 
difference)

 Fix Date_sub and Date_add in case of daylight saving
 

 Key: HIVE-1253
 URL: https://issues.apache.org/jira/browse/HIVE-1253
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: mingran wang
Assignee: Bryan Talbot
 Fix For: 0.6.0

 Attachments: HIVE-1253.patch


 date_sub('2010-03-15', 7) returns '2010-03-07'. This is because we have a time 
 shift on 2010-03-14 for daylight saving time.
 Looking at ql/src/java/org/apache/hadoop/hive/ql/udf/UDFDateSub.java, it is 
 getting a calendar instance in the UTC time zone:
 def calendar = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
 and uses calendar.add() to subtract 7 days, then converts the time to 
 'yyyy-MM-dd' format.
 If it simply used the default timezone, the problem would be solved:
 def calendar = Calendar.getInstance();
 When people use date_sub('2010-03-15', 7), I think they mean subtract 7 
 days, instead of subtracting 7*24 hours. So it should be an easy fix. The 
 same changes should go to date_add and date_diff.
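
A small demonstration of the behaviour described above, assuming a DST-observing default timezone such as America/Los_Angeles; it only illustrates the UTC-versus-local calendar arithmetic and is not the UDFDateSub code itself:

{code}
// Illustrative sketch: run with e.g. -Duser.timezone=America/Los_Angeles.
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.TimeZone;

public class DateSubDstSketch {
  public static void main(String[] args) throws Exception {
    SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd"); // default timezone

    // Current behaviour: parse/format in local time, but do the arithmetic in UTC,
    // which is effectively subtracting 7 * 24 hours across the DST change.
    Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
    utc.setTime(fmt.parse("2010-03-15"));
    utc.add(Calendar.DAY_OF_MONTH, -7);
    System.out.println("UTC calendar:     " + fmt.format(utc.getTime())); // 2010-03-07

    // Suggested fix: do the arithmetic in the default timezone as well,
    // so 7 calendar days are subtracted regardless of the DST shift.
    Calendar local = Calendar.getInstance();
    local.setTime(fmt.parse("2010-03-15"));
    local.add(Calendar.DAY_OF_MONTH, -7);
    System.out.println("Default calendar: " + fmt.format(local.getTime())); // 2010-03-08
  }
}
{code}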

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1185) Fix RCFile resource leak when opening a non-RCFile

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1185:
-

Affects Version/s: (was: 0.6.0)
 Release Note:   (was: HIVE-1185. Fix RCFile resource leak when opening 
a non-RCFile. (He Yongqiang via zshao)
)

 Fix RCFile resource leak when opening a non-RCFile
 --

 Key: HIVE-1185
 URL: https://issues.apache.org/jira/browse/HIVE-1185
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Reporter: Zheng Shao
Assignee: He Yongqiang
 Fix For: 0.6.0

 Attachments: hive-1185.1.patch, hive-1185.2.patch


 See HADOOP-5476 for the bug in SequenceFile. We should do the same thing in 
 RCFile.
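
The SequenceFile fix referenced above (HADOOP-5476) closes the underlying stream when the reader's constructor fails part-way through; a generic sketch of that pattern, with hypothetical class and helper names rather than the actual RCFile code:

{code}
// Illustrative sketch of the close-on-constructor-failure pattern.
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReaderSketch {
  private final FSDataInputStream in;

  public ReaderSketch(FileSystem fs, Path file) throws IOException {
    FSDataInputStream stream = fs.open(file);
    boolean ok = false;
    try {
      checkHeader(stream);   // throws IOException when the file is not in the expected format
      ok = true;
    } finally {
      if (!ok) {
        stream.close();      // don't leak the stream when opening a non-RCFile
      }
    }
    this.in = stream;
  }

  private void checkHeader(FSDataInputStream stream) throws IOException {
    // Hypothetical placeholder for the real header/version validation.
    byte[] magic = new byte[3];
    stream.readFully(magic);
  }
}
{code}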

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-990) Incorporate CheckStyle into Hive's build.xml

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-990:


Release Note:   (was: HIVE-990. Incorporate CheckStyle into Hive's 
build.xml. (Carl Steinbach via zshao))

 Incorporate CheckStyle into Hive's build.xml
 

 Key: HIVE-990
 URL: https://issues.apache.org/jira/browse/HIVE-990
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.6.0

 Attachments: checkstyle-errors.html, HIVE-990.patch


 Hadoop and Pig both have CheckStyle integrated into their build. This is 
 useful for catching
 a variety of errors as well as for enforcing a specific coding style and 
 maintaining good code hygiene.
 We just need to snatch Hadoop's checkstyle.xml and integrate it into Hive's 
 build.xml file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-972) Support views

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-972:


Release Note:   (was: HIVE-972. Support views. (John Sichi via zshao))
 Summary: Support views  (was: support views)

 Support views
 -

 Key: HIVE-972
 URL: https://issues.apache.org/jira/browse/HIVE-972
 Project: Hive
  Issue Type: New Feature
  Components: Metastore, Query Processor
Reporter: Namit Jain
Assignee: John Sichi
 Fix For: 0.6.0

 Attachments: HIVE-972.1.patch, HIVE-972.2.patch, HIVE-972.3.patch


 Hive currently does not support views. 
 It would be a very nice feature to have.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-259) Add PERCENTILE aggregate function

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-259:


Release Note:   (was: Add PERCENTILE aggregate function)

 Add PERCENTILE aggregate function
 -

 Key: HIVE-259
 URL: https://issues.apache.org/jira/browse/HIVE-259
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Venky Iyer
Assignee: Jerome Boulon
 Fix For: 0.6.0

 Attachments: HIVE-259-2.patch, HIVE-259-3.patch, HIVE-259.1.patch, 
 HIVE-259.4.patch, HIVE-259.5.patch, HIVE-259.patch, jb2.txt, Percentile.xlsx


 Compute at least the 25th, 50th, and 75th percentiles

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1147) Update Eclipse project configuration to match Checkstyle

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1147:
-

  Component/s: Build Infrastructure
Affects Version/s: (was: 0.6.0)
 Release Note:   (was: HIVE-1147. Update Eclipse project configuration 
to match Checkstyle (Carl Steinbach via zshao))

 Update Eclipse project configuration to match Checkstyle
 

 Key: HIVE-1147
 URL: https://issues.apache.org/jira/browse/HIVE-1147
 Project: Hive
  Issue Type: Task
  Components: Build Infrastructure
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.6.0

 Attachments: HIVE-1147.patch


 We recently made Checkstyle part of the build. We need to update
 the Eclipse project configuration so that it is compatible with Checkstyle.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1137) Fix build.xml for references to IVY_HOME

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1137:
-

Affects Version/s: (was: 0.6.0)
  Summary: Fix build.xml for references to IVY_HOME  (was: build 
references IVY_HOME incorrectly)

 Fix build.xml for references to IVY_HOME
 

 Key: HIVE-1137
 URL: https://issues.apache.org/jira/browse/HIVE-1137
 Project: Hive
  Issue Type: Task
  Components: Build Infrastructure
Reporter: John Sichi
Assignee: Carl Steinbach
 Fix For: 0.6.0

 Attachments: HIVE-1137.patch


 The build references env.IVY_HOME, but doesn't actually import env as it 
 should (via <property environment="env"/>).
 It's not clear what the IVY_HOME reference is for, since the build doesn't 
 even use ivy.home (instead, it installs under the build/ivy directory).
 It looks like someone copied bits and pieces from the "Automatically" section 
 here:
 http://ant.apache.org/ivy/history/latest-milestone/install.html

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1123) Checkstyle fixes

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1123:
-

 Component/s: Build Infrastructure
Release Note:   (was: HIVE-1123. Checkstyle fixes. (Carl Steinbach via 
zshao))

 Checkstyle fixes
 

 Key: HIVE-1123
 URL: https://issues.apache.org/jira/browse/HIVE-1123
 Project: Hive
  Issue Type: Task
  Components: Build Infrastructure
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.6.0

 Attachments: HIVE-1123.checkstyle.patch, HIVE-1123.cli.2.patch, 
 HIVE-1123.cli.patch, HIVE-1123.common.2.patch, HIVE-1123.common.patch, 
 HIVE-1123.contrib.2.patch, HIVE-1123.contrib.patch, HIVE-1123.hwi.2.patch, 
 HIVE-1123.hwi.patch, HIVE-1123.jdbc.2.patch, HIVE-1123.jdbc.patch, 
 HIVE-1123.metastore.2.patch, HIVE-1123.metastore.patch, HIVE-1123.ql.2.patch, 
 HIVE-1123.ql.patch, HIVE-1123.serde.2.patch, HIVE-1123.serde.patch, 
 HIVE-1123.service.2.patch, HIVE-1123.service.patch, HIVE-1123.shims.2.patch, 
 HIVE-1123.shims.patch


 Fix checkstyle errors.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1081) Automated source code cleanup

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1081:
-

Release Note:   (was: HIVE-1081. Automated source code cleanup - Part 3 - 
ql. (Carl Steinbach via zshao))

 Automated source code cleanup
 -

 Key: HIVE-1081
 URL: https://issues.apache.org/jira/browse/HIVE-1081
 Project: Hive
  Issue Type: Task
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.6.0

 Attachments: cleanup-cli.patch, cleanup-common.patch, 
 cleanup-contrib-test.patch, cleanup-contrib.patch, cleanup-hwi-test.patch, 
 cleanup-hwi.patch, cleanup-jdbc-test.patch, cleanup-jdbc.patch, 
 cleanup-metastore-test.patch, cleanup-metastore.patch, cleanup-ql-test.patch, 
 cleanup-ql.2.patch, cleanup-ql.patch, cleanup-serde-test.patch, 
 cleanup-serde.patch, cleanup-service-test.patch, cleanup-service.patch


 Reduce the number of Checkstyle violations by applying Eclipse's automated 
 cleanup
 tools to the Hive code base:
 * Correct indentation
 * Remove trailing white spaces on all lines
 * Format source code
 * Organize imports
 * Remove unnecessary casts
 * Add missing '@Deprecated' annotations
 * Add missing '@Override' annotations
 * Remove unused
 ** local variables
 ** private fields
 ** private types
 ** private constructors
 ** private methods
 ** imports
 * Add final modifier to
 ** local variables (if applicable)
 ** private fields (if applicable)
 * Convert for loops to enhanced for loops
 * Convert control statement bodies to block
 * Remove 'this' qualifier for non-static method accesses.
 * Remove 'this' qualifier for non-static field accesses.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1132) Add metastore API method to get partition by name

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1132:
-

Affects Version/s: (was: 0.6.0)
 Release Note:   (was: HIVE-1132. Add metastore API method to get 
partition by name. (Paul Yang via zshao))

 Add metastore API method to get partition by name
 -

 Key: HIVE-1132
 URL: https://issues.apache.org/jira/browse/HIVE-1132
 Project: Hive
  Issue Type: New Feature
  Components: Metastore
Reporter: Paul Yang
Assignee: Paul Yang
 Fix For: 0.6.0

 Attachments: HIVE-1132.1.patch


 Currently, get_partition_names returns the partition names in an escaped form, 
 i.e. 'ds=2010-02-03/ts=2010-02-03 
 18%3A49%3A26/offset=0-3184760670135/instance=nfs/host=nfs'. In this case, the 
 colons have been replaced by %3A. The escaped form is necessary because the 
 partition column values could contain symbols such as '=' or '/' that would 
 interfere with parsing or have some other unwanted effects. See HIVE-883.
 However, there is no way to directly retrieve a partition using the escaped 
 name because get_partition accepts a List<String> that requires the partition 
 column values to be in their original unescaped form. So the proposal is to 
 add get_partition_by_name(), which directly accepts the partition name in the 
 escaped form.
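
A small sketch of the two representations being contrasted; the escape() helper is a rough stand-in for the metastore's escaping (only ':' is handled here), and the get_partition_by_name() call in the comment is the proposal from this issue, not an existing API at the time:

{code}
// Illustrative sketch only.
import java.util.Arrays;
import java.util.List;

public class PartitionNameSketch {
  public static void main(String[] args) {
    // Unescaped column values, as get_partition() expects them:
    List<String> values = Arrays.asList("2010-02-03", "2010-02-03 18:49:26");

    // Escaped name, as get_partition_names() returns it; ':' becomes %3A.
    String name = "ds=" + escape(values.get(0)) + "/ts=" + escape(values.get(1));
    System.out.println(name); // ds=2010-02-03/ts=2010-02-03 18%3A49%3A26

    // Proposal: look the partition up directly by that escaped name, e.g.
    // Partition p = client.get_partition_by_name(dbName, tableName, name);
  }

  private static String escape(String value) {
    return value.replace(":", "%3A");
  }
}
{code}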

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1178) Enforce bucketing for a table

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1178:
-

Release Note:   (was: HIVE-1178. enforce bucketing for a table.)
 Summary: Enforce bucketing for a table  (was: enforce bucketing for a 
table)

 Enforce bucketing for a table
 -

 Key: HIVE-1178
 URL: https://issues.apache.org/jira/browse/HIVE-1178
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.6.0

 Attachments: hive.1178.1.patch, hive.1178.2.patch, hive.1178.3.patch


 If the table being inserted into is bucketed, currently Hive does not try to 
 enforce that.
 An option should be added for checking that.
 Moreover, the number of buckets can be higher than the maximum number of 
 reducers, in which
 case a single reducer can write to multiple files.
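
A small arithmetic sketch of that last point: if buckets are assigned to reducers by bucket number modulo the number of reducers (an illustrative assumption, not necessarily the scheme used by the patch), then with more buckets than reducers each reducer ends up writing several bucket files.

{code}
// Illustrative arithmetic only: with 32 buckets and 8 reducers, reducer r
// handles buckets r, r+8, r+16, r+24 -- four output files per reducer.
public class BucketAssignmentSketch {
  public static void main(String[] args) {
    int numBuckets = 32;
    int numReducers = 8;
    for (int bucket = 0; bucket < numBuckets; bucket++) {
      int reducer = bucket % numReducers;
      System.out.println("bucket " + bucket + " -> reducer " + reducer);
    }
  }
}
{code}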

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1150) Add comment to explain why we check for dir first in add_partitions().

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1150:
-

Release Note:   (was: HIVE-1150. Add comment to explain why we check for 
dir first in add_partitions(). (Paul Yang via zshao))

 Add comment to explain why we check for dir first in add_partitions().
 --

 Key: HIVE-1150
 URL: https://issues.apache.org/jira/browse/HIVE-1150
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Paul Yang
Assignee: Paul Yang
Priority: Trivial
 Fix For: 0.6.0

 Attachments: HIVE-1150.1.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1136) Add type-checking setters for HiveConf class to match existing getters

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1136:
-

Affects Version/s: (was: 0.6.0)
 Release Note:   (was: HIVE-1136. Add type-checking setters for 
HiveConf class. (John Sichi via zshao))
  Summary: Add type-checking setters for HiveConf class to match 
existing getters  (was: add type-checking setters for HiveConf class to match 
existing getters)

 Add type-checking setters for HiveConf class to match existing getters
 --

 Key: HIVE-1136
 URL: https://issues.apache.org/jira/browse/HIVE-1136
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Reporter: John Sichi
Assignee: John Sichi
 Fix For: 0.6.0

 Attachments: HIVE-1136.1.patch, HIVE-1136.2.patch


 This is a followup from HIVE-1129.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1190) Configure build to download Hadoop tarballs from Facebook mirror instead of Apache

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1190:
-

Release Note:   (was: HIVE-1190. Configure build to download Hadoop 
tarballs from Facebook mirror. (John Sichi via zshao))

 Configure build to download Hadoop tarballs from Facebook mirror instead of 
 Apache
 --

 Key: HIVE-1190
 URL: https://issues.apache.org/jira/browse/HIVE-1190
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure
Affects Versions: 0.5.0
Reporter: John Sichi
Assignee: John Sichi
Priority: Critical
 Fix For: 0.6.0

 Attachments: HIVE-1190.1.patch


 See discussion in HIVE-984.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1311) Bug is use of hadoop supports splittable

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1311:
-

 Description: 
CombineHiveInputFormat: getSplits()
 if (this.mrwork != null && this.mrwork.getHadoopSupportsSplittable()) 


should check if hadoop supports splittable is false

  was:

CombineHiveInputFormat: getSplits()
 if (this.mrwork != null && this.mrwork.getHadoopSupportsSplittable()) 


should check if hadoop supports splittable is false

Release Note:   (was: HIVE-1311. Bug in use of parameter hadoop supports 
splittable. (Namit Jain via zshao))
 Summary: Bug is use of hadoop supports splittable  (was: bug is use of 
hadoop supports splittable)

 Bug is use of hadoop supports splittable
 

 Key: HIVE-1311
 URL: https://issues.apache.org/jira/browse/HIVE-1311
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.6.0

 Attachments: hive.1311.1.patch


 CombineHiveInputFormat: getSplits()
  if (this.mrwork != null && this.mrwork.getHadoopSupportsSplittable()) 
 should check if hadoop supports splittable is false
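
Reading the summary together with the snippet, the intended check is presumably the negated call, roughly:

{code}
// Presumed intent of the fix (sketch, not the attached patch): only take this
// branch when Hadoop does NOT support splittable compressed input.
if (this.mrwork != null && !this.mrwork.getHadoopSupportsSplittable()) {
  // ...
}
{code}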

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1068) CREATE VIEW followup: add a table type enum attribute in metastore's MTable, and also null out irrelevant attributes for MTable instances which describe views

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1068:
-

Affects Version/s: (was: 0.6.0)
 Release Note:   (was: HIVE-1068. CREATE VIEW followup: add a 'table 
type' enum attribute in metastore's MTable. (John Sichi via zshao)
)

 CREATE VIEW followup:  add a table type enum attribute in metastore's 
 MTable, and also null out irrelevant attributes for MTable instances which 
 describe views
 -

 Key: HIVE-1068
 URL: https://issues.apache.org/jira/browse/HIVE-1068
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: John Sichi
Assignee: John Sichi
 Fix For: 0.6.0

 Attachments: HIVE-1068.1.patch, HIVE-1068.2.patch


 Zheng's description:
 5. TODO: Metadata change: We store view definitions in the same metadata 
 table that we store table definitions in.
 Shall we add a 'table type' field so we know whether it's a table, external 
 table, view, or materialized view in the future?
 We should clean up the additional useless fields in view - the test output 
 shows that we are storing some garbage information for views.
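
A minimal sketch of the enum attribute being discussed; the constant names are illustrative guesses, not necessarily what the committed patch uses:

{code}
// Illustrative sketch of the proposed 'table type' enum; names are hypothetical.
public enum TableType {
  MANAGED_TABLE,
  EXTERNAL_TABLE,
  VIRTUAL_VIEW,
  MATERIALIZED_VIEW  // mentioned above only as a possible future type
}
{code}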

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1286) Remove debug message from stdout in ColumnarSerDe

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1286:
-

 Component/s: Serializers/Deserializers
Release Note:   (was: HIVE-1286. Remove debug message from stdout in 
ColumnarSerDe. (Yongqiang He via zshao)
)
 Summary: Remove debug message from stdout in ColumnarSerDe  (was: 
error/info message being emitted on standard output)

 Remove debug message from stdout in ColumnarSerDe
 -

 Key: HIVE-1286
 URL: https://issues.apache.org/jira/browse/HIVE-1286
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Reporter: He Yongqiang
Assignee: He Yongqiang
Priority: Minor
 Fix For: 0.6.0

 Attachments: hive.1286.1.patch, hive.1286.2.patch


 'Found class for org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe'
 should go to stderr where other informational messages are sent.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1280) Add option to CombineHiveInputFormat for non-splittable inputs.

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1280:
-

  Issue Type: New Feature  (was: Bug)
Release Note:   (was: HIVE-1280. Add option to CombineHiveInputFormat for 
non-splittable inputs. (Namit Jain via zshao))
 Summary: Add option to CombineHiveInputFormat for non-splittable 
inputs.  (was: problem in combinehiveinputformat with nested directories)

 Add option to CombineHiveInputFormat for non-splittable inputs.
 ---

 Key: HIVE-1280
 URL: https://issues.apache.org/jira/browse/HIVE-1280
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.6.0

 Attachments: hive.1280.1.patch, hive.1280.2.patch, hive.1280.3.patch, 
 hive.1280.4.patch, hive.1280.5.patch, hive.1280.6.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1142) Fix datanucleus typos in conf/hive-default.xml

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1142:
-

Release Note:   (was: HIVE-1142. Fix datanucleus typos in 
conf/hive-default.xml. (Paul Yang via zshao))
 Summary: Fix datanucleus typos in conf/hive-default.xml  (was: 
datanucleus typos in conf/hive-default.xml)

 Fix datanucleus typos in conf/hive-default.xml
 --

 Key: HIVE-1142
 URL: https://issues.apache.org/jira/browse/HIVE-1142
 Project: Hive
  Issue Type: Bug
  Components: Configuration
Affects Versions: 0.5.0
Reporter: John Sichi
Assignee: Paul Yang
Priority: Trivial
 Fix For: 0.6.0

 Attachments: HIVE-1142.1.patch, HIVE-1142.2.patch


 datanucleus is misspelled as datancucleus and datanuclues.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-675) Add Database/Schema support to Hive QL

2010-10-19 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-675:


Summary: Add Database/Schema support to Hive QL  (was: add database/schema 
support Hive QL)

 Add Database/Schema support to Hive QL
 --

 Key: HIVE-675
 URL: https://issues.apache.org/jira/browse/HIVE-675
 Project: Hive
  Issue Type: New Feature
  Components: Metastore, Query Processor
Reporter: Prasad Chakka
Assignee: Carl Steinbach
 Fix For: 0.6.0

 Attachments: hive-675-2009-9-16.patch, hive-675-2009-9-19.patch, 
 hive-675-2009-9-21.patch, hive-675-2009-9-23.patch, hive-675-2009-9-7.patch, 
 hive-675-2009-9-8.patch, HIVE-675-2010-08-16.patch.txt, 
 HIVE-675-2010-7-16.patch.txt, HIVE-675-2010-8-4.patch.txt, 
 HIVE-675-backport-v6.1.patch.txt, HIVE-675-backport-v6.2.patch.txt, 
 HIVE-675.10.patch.txt, HIVE-675.11.patch.txt, HIVE-675.12.patch.txt, 
 HIVE-675.13.patch.txt


 Currently all Hive tables reside in a single namespace (default). Hive should 
 support multiple namespaces (databases or schemas) such that users can create 
 tables in their specific namespaces. These namespaces can have different 
 warehouse directories (with a default naming scheme) and possibly different 
 properties.
 There is already some support for this in the metastore, but the Hive query 
 parser should have this feature as well.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.