[jira] Updated: (HIVE-1614) UDTF json_tuple should return null row when input is not a valid JSON string

2010-09-03 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-1614:
-

   Status: Resolved  (was: Patch Available)
 Hadoop Flags: [Reviewed]
Fix Version/s: 0.7.0
   Resolution: Fixed

Committed. Thanks Ning

> UDTF json_tuple should return null row when input is not a valid JSON string
> 
>
> Key: HIVE-1614
> URL: https://issues.apache.org/jira/browse/HIVE-1614
> Project: Hadoop Hive
>  Issue Type: Bug
>Affects Versions: 0.7.0
>Reporter: Ning Zhang
>Assignee: Ning Zhang
> Fix For: 0.7.0
>
> Attachments: HIVE-1614.2.patch, HIVE-1614.patch
>
>
> If the input column is not a valid JSON string, json_tuple will not return 
> anything, but this will prevent the downstream operators from accessing the 
> left-hand side table. We should output a NULL row instead, similar to when 
> the input column is a NULL value.
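The requested behavior can be sketched in plain Java. Everything here is hypothetical (a stand-in `extractTuple` helper; the real UDTF uses a proper JSON parser and forwards rows through the operator tree), but it shows the NULL-row contract: on any parse failure, a row of all NULLs comes back instead of nothing.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JsonTupleSketch {
    // Hypothetical stand-in for json_tuple's per-row logic. The real UDTF
    // uses a JSON parser; this sketch only pulls quoted string values out
    // with a regex, which is enough to show the NULL-row contract.
    public static String[] extractTuple(String json, String[] keys) {
        String[] row = new String[keys.length]; // all NULLs to start
        try {
            if (json == null) {
                return row; // NULL input already produced a NULL row
            }
            String t = json.trim();
            if (!t.startsWith("{") || !t.endsWith("}")) {
                throw new IllegalArgumentException("not a JSON object");
            }
            for (int i = 0; i < keys.length; i++) {
                Matcher m = Pattern
                    .compile("\"" + Pattern.quote(keys[i]) + "\"\\s*:\\s*\"([^\"]*)\"")
                    .matcher(t);
                if (m.find()) {
                    row[i] = m.group(1);
                }
            }
        } catch (Throwable e) {
            // The fix: emit one all-NULL row on any failure instead of
            // emitting nothing and starving downstream operators.
            return new String[keys.length];
        }
        return row;
    }
}
```

With the pre-fix behavior (emit nothing), a LATERAL VIEW over json_tuple drops the whole input row; returning NULLs keeps the left-hand side visible.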

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1615) Web Interface JSP needs Refactoring for removed meta store methods

2010-09-03 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated HIVE-1615:
--

Attachment: hive-1615.patch.2.txt

> Web Interface JSP needs Refactoring for removed meta store methods
> --
>
> Key: HIVE-1615
> URL: https://issues.apache.org/jira/browse/HIVE-1615
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 0.7.0
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 0.7.0
>
> Attachments: hive-1615.patch.2.txt, hive-1615.patch.txt
>
>
> Some metastore methods called from JSP have been removed. We should really 
> prioritize compiling JSP into servlet code again.




[jira] Updated: (HIVE-1615) Web Interface JSP needs Refactoring for removed meta store methods

2010-09-03 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated HIVE-1615:
--

   Status: Patch Available  (was: Open)
Affects Version/s: 0.7.0
Fix Version/s: 0.7.0

> Web Interface JSP needs Refactoring for removed meta store methods
> --
>
> Key: HIVE-1615
> URL: https://issues.apache.org/jira/browse/HIVE-1615
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 0.7.0
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 0.7.0
>
> Attachments: hive-1615.patch.txt
>
>
> Some metastore methods called from JSP have been removed. We should really 
> prioritize compiling JSP into servlet code again.




[jira] Updated: (HIVE-1615) Web Interface JSP needs Refactoring for removed meta store methods

2010-09-03 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated HIVE-1615:
--

Summary: Web Interface JSP needs Refactoring for removed meta store methods 
 (was: Web Interface JSP needs Refactoring for deprecated meta store methods)

> Web Interface JSP needs Refactoring for removed meta store methods
> --
>
> Key: HIVE-1615
> URL: https://issues.apache.org/jira/browse/HIVE-1615
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Web UI
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Attachments: hive-1615.patch.txt
>
>
> Some metastore methods called from JSP have been removed. We should really 
> prioritize compiling JSP into servlet code again.




[jira] Updated: (HIVE-1615) Web Interface JSP needs Refactoring for deprecated meta store methods

2010-09-03 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated HIVE-1615:
--

Attachment: hive-1615.patch.txt

> Web Interface JSP needs Refactoring for deprecated meta store methods
> -
>
> Key: HIVE-1615
> URL: https://issues.apache.org/jira/browse/HIVE-1615
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Web UI
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Attachments: hive-1615.patch.txt
>
>
> Some metastore methods called from JSP have been removed. We should really 
> prioritize compiling JSP into servlet code again.




[jira] Created: (HIVE-1615) Web Interface JSP needs Refactoring for deprecated meta store methods

2010-09-03 Thread Edward Capriolo (JIRA)
Web Interface JSP needs Refactoring for deprecated meta store methods
-

 Key: HIVE-1615
 URL: https://issues.apache.org/jira/browse/HIVE-1615
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Web UI
Reporter: Edward Capriolo
Assignee: Edward Capriolo


Some metastore methods called from JSP have been removed. We should really 
prioritize compiling JSP into servlet code again.




[jira] Commented: (HIVE-1546) Ability to plug custom Semantic Analyzers for Hive Grammar

2010-09-03 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906185#action_12906185
 ] 

Namit Jain commented on HIVE-1546:
--

I agree it would be good to have a meeting.

BTW, the hooks I was referring to are the Pre/Post Execution statement-level 
hooks, with the following signature:

public interface PostExecute {

  /**
   * The run command that is called just after the execution of the query.
   *
   * @param sess
   *          The session state.
   * @param inputs
   *          The set of input tables and partitions.
   * @param outputs
   *          The set of output tables, partitions, local and hdfs directories.
   * @param lInfo
   *          The column level lineage information.
   * @param ugi
   *          The user group security information.
   */
  void run(SessionState sess, Set<ReadEntity> inputs,
      Set<WriteEntity> outputs, LineageInfo lInfo,
      UserGroupInformation ugi) throws Exception;
}


Looking at the spec, it looks like only a subset of DDLs needs to be supported, 
which can be easily accomplished via the hook.
If need be, we can pass more info in the hook. There was also a plan to add a 
job-level hook (not yet checked in) through which the configuration etc. can be 
changed.

> Ability to plug custom Semantic Analyzers for Hive Grammar
> --
>
> Key: HIVE-1546
> URL: https://issues.apache.org/jira/browse/HIVE-1546
> Project: Hadoop Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 0.7.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 0.7.0
>
> Attachments: hive-1546-3.patch, hive-1546-4.patch, hive-1546.patch, 
> hive-1546_2.patch
>
>
> It will be useful if Semantic Analysis phase is made pluggable such that 
> other projects can do custom analysis of hive queries before doing metastore 
> operations on them. 




[jira] Updated: (HIVE-192) Add TIMESTAMP column type

2010-09-03 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-192:


Status: Open  (was: Patch Available)

This patch is not ready for commit.


> Add TIMESTAMP column type
> -
>
> Key: HIVE-192
> URL: https://issues.apache.org/jira/browse/HIVE-192
> Project: Hadoop Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Reporter: Johan Oskarsson
>Assignee: Shyam Sundar Sarkar
> Attachments: create_2.q.txt, Hive-192.patch.txt, 
> TIMESTAMP_specification.txt
>
>
> create table something2 (test timestamp);
> ERROR: DDL specifying type timestamp which has not been defined
> java.lang.RuntimeException: specifying type timestamp which has not been 
> defined
>   at 
> org.apache.hadoop.hive.serde2.dynamic_type.thrift_grammar.FieldType(thrift_grammar.java:1879)
>   at 
> org.apache.hadoop.hive.serde2.dynamic_type.thrift_grammar.Field(thrift_grammar.java:1545)
>   at 
> org.apache.hadoop.hive.serde2.dynamic_type.thrift_grammar.FieldList(thrift_grammar.java:1501)
>   at 
> org.apache.hadoop.hive.serde2.dynamic_type.thrift_grammar.Struct(thrift_grammar.java:1171)
>   at 
> org.apache.hadoop.hive.serde2.dynamic_type.thrift_grammar.TypeDefinition(thrift_grammar.java:497)
>   at 
> org.apache.hadoop.hive.serde2.dynamic_type.thrift_grammar.Definition(thrift_grammar.java:439)
>   at 
> org.apache.hadoop.hive.serde2.dynamic_type.thrift_grammar.Start(thrift_grammar.java:101)
>   at 
> org.apache.hadoop.hive.serde2.dynamic_type.DynamicSerDe.initialize(DynamicSerDe.java:97)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:180)
>   at org.apache.hadoop.hive.ql.metadata.Table.initSerDe(Table.java:141)
>   at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:202)
>   at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:641)
>   at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:98)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:215)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:174)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:207)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:305)




[jira] Commented: (HIVE-1614) UDTF json_tuple should return null row when input is not a valid JSON string

2010-09-03 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906135#action_12906135
 ] 

Namit Jain commented on HIVE-1614:
--

+1

will commit if the tests pass

> UDTF json_tuple should return null row when input is not a valid JSON string
> 
>
> Key: HIVE-1614
> URL: https://issues.apache.org/jira/browse/HIVE-1614
> Project: Hadoop Hive
>  Issue Type: Bug
>Affects Versions: 0.7.0
>Reporter: Ning Zhang
>Assignee: Ning Zhang
> Attachments: HIVE-1614.2.patch, HIVE-1614.patch
>
>
> If the input column is not a valid JSON string, json_tuple will not return 
> anything, but this will prevent the downstream operators from accessing the 
> left-hand side table. We should output a NULL row instead, similar to when 
> the input column is a NULL value.




[jira] Commented: (HIVE-716) Web Interface wait/notify, interface changes

2010-09-03 Thread Shrijeet Paliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906129#action_12906129
 ] 

Shrijeet Paliwal commented on HIVE-716:
---

I think we might have a bug in HWISessionItem.java. 
I am referring to this change: 
http://svn.apache.org/viewvc/hadoop/hive/trunk/hwi/src/java/org/apache/hadoop/hive/hwi/HWISessionItem.java?r1=817845&r2=817844&pathrev=817845

Why was this line (343) commented out: res.clear();

This is making a query like "select * from blah limit 1000" return more 
than 1000 results (because results are not being flushed).
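The effect of the missing res.clear() can be reproduced with a toy fetch loop (all names hypothetical; it only mirrors the shape of the HWI code, where the driver appends each fetched batch into a caller-supplied buffer):

```java
import java.util.ArrayList;
import java.util.List;

public class ClearBugSketch {
    // Simulates a driver that appends up to batchSize rows into the caller's
    // buffer per call and reports whether any rows were produced.
    static boolean fetch(List<String> buf, int batchSize, int[] cursor, int total) {
        int n = Math.min(batchSize, total - cursor[0]);
        for (int i = 0; i < n; i++) {
            buf.add("row-" + (cursor[0] + i));
        }
        cursor[0] += n;
        return n > 0;
    }

    // Drains all results; clearBetweenBatches mirrors the commented-out
    // res.clear() call in HWISessionItem.
    public static int drain(boolean clearBetweenBatches, int batchSize, int total) {
        List<String> buf = new ArrayList<>();
        int[] cursor = {0};
        int returned = 0;
        while (fetch(buf, batchSize, cursor, total)) {
            returned += buf.size(); // without clear(), old rows are re-counted
            if (clearBetweenBatches) {
                buf.clear();
            }
        }
        return returned;
    }
}
```

With clearBetweenBatches=false, already-returned rows are counted again on every iteration, which is exactly how a LIMIT 1000 query can appear to return more than 1000 rows.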

> Web Interface wait/notify, interface changes
> 
>
> Key: HIVE-716
> URL: https://issues.apache.org/jira/browse/HIVE-716
> Project: Hadoop Hive
>  Issue Type: Improvement
>  Components: Web UI
> Environment: All
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 0.5.0
>
> Attachments: hive-716-2.diff, hive-716-3.diff, hive-716-4.diff, 
> hive-716-5.diff, hive-716-6.diff, hive-716.diff, hwi_query_box.png
>
>
> In TestHWISessionItem 
> Asserts are backwards
> {noformat}
> assertEquals(  searchItem.getQueryRet(), 0);
> {noformat}
> Should be
> {noformat}
> assertEquals(0, searchItem.getQueryRet());
> {noformat}
> Wait/notify semantics can be added. This is helpful for the end user, and 
> cleaner in the test case.
> {noformat}
> while (user1_item2.getStatus() != 
> HWISessionItem.WebSessionItemStatus.QUERY_COMPLETE) {
>   Thread.sleep(1);
> }
> {noformat}
> {noformat}
> synchronized (user1_item2.runnable) {
>   while (user1_item2.getStatus() != 
> HWISessionItem.WebSessionItemStatus.QUERY_COMPLETE) {
>  user1_item2.runnable.wait();
>   }
> }
> {noformat}
> The text box in the web interface should accept multiple queries separated by 
> ';' like the CLI does. This will improve usability. No need for separate set 
> processor pages. 
> setQuery(String) is replaced by setQueries(List)
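The proposed wait/notify replacement for the sleep loop can be sketched as a self-contained example (a toy Worker stands in for HWISessionItem.runnable; all names are hypothetical):

```java
public class WaitNotifySketch {
    public enum Status { RUNNING, QUERY_COMPLETE }

    static class Worker implements Runnable {
        volatile Status status = Status.RUNNING;

        public void run() {
            // ... query executes here ...
            synchronized (this) {
                status = Status.QUERY_COMPLETE;
                notifyAll(); // wake waiters instead of letting them poll
            }
        }
    }

    public static Status runAndWait() {
        Worker w = new Worker();
        new Thread(w).start();
        synchronized (w) {
            // re-check the condition in a loop to survive spurious wakeups
            while (w.status != Status.QUERY_COMPLETE) {
                try {
                    w.wait();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break; // give up waiting if interrupted
                }
            }
        }
        return w.status;
    }
}
```

The condition is always re-checked under the same monitor the worker uses to change it, which is what makes this safe where the bare sleep loop merely happened to work.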




[jira] Updated: (HIVE-1446) Move Hive Documentation from the wiki to version control

2010-09-03 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1446:
-

Fix Version/s: (was: 0.6.0)

Postponing this work until 0.7.0

> Move Hive Documentation from the wiki to version control
> 
>
> Key: HIVE-1446
> URL: https://issues.apache.org/jira/browse/HIVE-1446
> Project: Hadoop Hive
>  Issue Type: Task
>  Components: Documentation
>Reporter: Carl Steinbach
>Assignee: Carl Steinbach
> Fix For: 0.7.0
>
> Attachments: hive-1446-part-1.diff, hive-1446.diff, hive-logo-wide.png
>
>
> Move the Hive Language Manual (and possibly some other documents) from the 
> Hive wiki to version control. This work needs to be coordinated with the 
> hive-dev and hive-user community in order to avoid missing any edits as well 
> as to avoid or limit unavailability of the docs.




[jira] Updated: (HIVE-675) add database/schema support Hive QL

2010-09-03 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-675:


Fix Version/s: 0.6.0

> add database/schema support Hive QL
> ---
>
> Key: HIVE-675
> URL: https://issues.apache.org/jira/browse/HIVE-675
> Project: Hadoop Hive
>  Issue Type: New Feature
>  Components: Metastore, Query Processor
>Reporter: Prasad Chakka
>Assignee: Carl Steinbach
> Fix For: 0.6.0, 0.7.0
>
> Attachments: hive-675-2009-9-16.patch, hive-675-2009-9-19.patch, 
> hive-675-2009-9-21.patch, hive-675-2009-9-23.patch, hive-675-2009-9-7.patch, 
> hive-675-2009-9-8.patch, HIVE-675-2010-08-16.patch.txt, 
> HIVE-675-2010-7-16.patch.txt, HIVE-675-2010-8-4.patch.txt, 
> HIVE-675.10.patch.txt, HIVE-675.11.patch.txt, HIVE-675.12.patch.txt, 
> HIVE-675.13.patch.txt
>
>
> Currently all Hive tables reside in single namespace (default). Hive should 
> support multiple namespaces (databases or schemas) such that users can create 
> tables in their specific namespaces. These name spaces can have different 
> warehouse directories (with a default naming scheme) and possibly different 
> properties.
> There is already some support for this in metastore but Hive query parser 
> should have this feature as well.




[jira] Updated: (HIVE-1517) ability to select across a database

2010-09-03 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1517:
-

Fix Version/s: 0.6.0
   0.7.0

> ability to select across a database
> ---
>
> Key: HIVE-1517
> URL: https://issues.apache.org/jira/browse/HIVE-1517
> Project: Hadoop Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Carl Steinbach
> Fix For: 0.6.0, 0.7.0
>
>
> After  https://issues.apache.org/jira/browse/HIVE-675, we need a way to be 
> able to select across a database for this feature to be useful.
> For eg:
> use db1
> create table foo();
> use db2
> select .. from db1.foo.




[jira] Reopened: (HIVE-675) add database/schema support Hive QL

2010-09-03 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach reopened HIVE-675:
-


Working on a backport for 0.6.0

> add database/schema support Hive QL
> ---
>
> Key: HIVE-675
> URL: https://issues.apache.org/jira/browse/HIVE-675
> Project: Hadoop Hive
>  Issue Type: New Feature
>  Components: Metastore, Query Processor
>Reporter: Prasad Chakka
>Assignee: Carl Steinbach
> Fix For: 0.7.0
>
> Attachments: hive-675-2009-9-16.patch, hive-675-2009-9-19.patch, 
> hive-675-2009-9-21.patch, hive-675-2009-9-23.patch, hive-675-2009-9-7.patch, 
> hive-675-2009-9-8.patch, HIVE-675-2010-08-16.patch.txt, 
> HIVE-675-2010-7-16.patch.txt, HIVE-675-2010-8-4.patch.txt, 
> HIVE-675.10.patch.txt, HIVE-675.11.patch.txt, HIVE-675.12.patch.txt, 
> HIVE-675.13.patch.txt
>
>
> Currently all Hive tables reside in single namespace (default). Hive should 
> support multiple namespaces (databases or schemas) such that users can create 
> tables in their specific namespaces. These name spaces can have different 
> warehouse directories (with a default naming scheme) and possibly different 
> properties.
> There is already some support for this in metastore but Hive query parser 
> should have this feature as well.




wiki index pages for Howl and security

2010-09-03 Thread John Sichi
I've created two new wiki pages for a couple of topics on which questions are 
coming up often now.  These are just collections of links for now; feel free to 
add more links/resources and expand the pages.

The various security efforts I'm aware of:

http://wiki.apache.org/hadoop/Hive/Security

Howl development:

http://wiki.apache.org/hadoop/Hive/Howl

JVS



[jira] Updated: (HIVE-842) Authentication Infrastructure for Hive

2010-09-03 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-842:


Attachment: HiveSecurityThoughts.pdf

For lack of a better place, I'm uploading this doc from Venkatesh here so I can 
link it from the wiki.


> Authentication Infrastructure for Hive
> --
>
> Key: HIVE-842
> URL: https://issues.apache.org/jira/browse/HIVE-842
> Project: Hadoop Hive
>  Issue Type: New Feature
>  Components: Server Infrastructure
>Reporter: Edward Capriolo
> Attachments: HiveSecurityThoughts.pdf
>
>
> This issue deals with the authentication (user name,password) infrastructure. 
> Not the authorization components that specify what a user should be able to 
> do.




[jira] Commented: (HIVE-1529) Add ANSI SQL covariance aggregate functions: covar_pop and covar_samp.

2010-09-03 Thread John Sichi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906081#action_12906081
 ] 

John Sichi commented on HIVE-1529:
--

BTW, Pierre, I found this wiki page:

http://wiki.apache.org/hadoop/Hive/PoweredBy

Feel free to add Intuit there and note that the company is contributing 
resources to improve Hive.


> Add ANSI SQL covariance aggregate functions: covar_pop and covar_samp.
> --
>
> Key: HIVE-1529
> URL: https://issues.apache.org/jira/browse/HIVE-1529
> Project: Hadoop Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Affects Versions: 0.7.0
>Reporter: Pierre Huyn
>Assignee: Pierre Huyn
> Fix For: 0.7.0
>
> Attachments: HIVE-1529.1.patch, HIVE-1529.2.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Create new built-in aggregate functions covar_pop and covar_samp, functions 
> commonly used in statistical data analyses.
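For reference, the two aggregates differ only in the divisor. A minimal direct (two-pass) computation, assuming the usual ANSI definitions rather than Hive's actual streaming implementation:

```java
public class CovarianceSketch {
    // Population covariance: sum((x - mx)(y - my)) / n
    public static double covarPop(double[] x, double[] y) {
        return sumOfProducts(x, y) / x.length;
    }

    // Sample covariance: sum((x - mx)(y - my)) / (n - 1)
    public static double covarSamp(double[] x, double[] y) {
        return sumOfProducts(x, y) / (x.length - 1);
    }

    // Two-pass sum of centered cross-products; assumes x and y have
    // equal, non-zero length.
    private static double sumOfProducts(double[] x, double[] y) {
        double mx = 0, my = 0;
        for (int i = 0; i < x.length; i++) {
            mx += x[i];
            my += y[i];
        }
        mx /= x.length;
        my /= y.length;
        double s = 0;
        for (int i = 0; i < x.length; i++) {
            s += (x[i] - mx) * (y[i] - my);
        }
        return s;
    }
}
```

A real UDAF computes this incrementally so partial aggregates can be merged across mappers, but its results should agree with the direct formulas above.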




[jira] Commented: (HIVE-1546) Ability to plug custom Semantic Analyzers for Hive Grammar

2010-09-03 Thread John Sichi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906041#action_12906041
 ] 

John Sichi commented on HIVE-1546:
--

@Namit:  the option of putting Howl directly into Hive was previously proposed 
but dropped for the same reasons Alan mentioned above.  Regarding hooks, could 
you point me to the hook you're referring to?  I don't believe Pre/Post have 
enough information currently, do they?

@Carl: I don't think Howl cares about the query processing stuff like {Task and 
FetchTask,QB,QBParseInfo,QBMetaData,QBJoinTree}.  For the others, it's not any 
time we touch them; it's only when we make breaking changes.  And since Howl is 
also open source, it's not like these are opaque dependencies.  We would need 
to do the same impact analysis if we used the contrib approach, right?  I don't 
see a big difference between the two except with contrib we get the convenience 
of immediate compilation errors to tell us something broke.  A continuous 
integration setup for Howl would take us close to that.

@All:  maybe we should set up a f2f meeting to hash this out?


> Ability to plug custom Semantic Analyzers for Hive Grammar
> --
>
> Key: HIVE-1546
> URL: https://issues.apache.org/jira/browse/HIVE-1546
> Project: Hadoop Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 0.7.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 0.7.0
>
> Attachments: hive-1546-3.patch, hive-1546-4.patch, hive-1546.patch, 
> hive-1546_2.patch
>
>
> It will be useful if Semantic Analysis phase is made pluggable such that 
> other projects can do custom analysis of hive queries before doing metastore 
> operations on them. 




[jira] Assigned: (HIVE-1609) Support partition filtering in metastore

2010-09-03 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi reassigned HIVE-1609:


Assignee: Ajay Kidave

> Support partition filtering in metastore
> 
>
> Key: HIVE-1609
> URL: https://issues.apache.org/jira/browse/HIVE-1609
> Project: Hadoop Hive
>  Issue Type: New Feature
>  Components: Metastore
>Reporter: Ajay Kidave
>Assignee: Ajay Kidave
> Fix For: 0.7.0
>
> Attachments: hive_1609.patch, hive_1609_2.patch
>
>
> The metastore needs to have support for returning a list of partitions based 
> on user specified filter conditions. This will be useful for tools which need 
> to do partition pruning. Howl is one such use case. The way partition pruning 
> is done during hive query execution need not be changed.
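The requested API can be sketched as a filter over partition key/value specs (a toy in-memory model with hypothetical names; the actual patch pushes a filter expression down into the metastore query rather than filtering client-side):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PartitionFilterSketch {
    // Returns partitions whose key/value spec satisfies every equality
    // condition in the filter (a toy "k = v AND ..." model).
    public static List<Map<String, String>> listPartitionsByFilter(
            List<Map<String, String>> partitions, Map<String, String> filter) {
        List<Map<String, String>> out = new ArrayList<>();
        for (Map<String, String> p : partitions) {
            boolean match = true;
            for (Map.Entry<String, String> cond : filter.entrySet()) {
                if (!cond.getValue().equals(p.get(cond.getKey()))) {
                    match = false;
                    break;
                }
            }
            if (match) {
                out.add(p);
            }
        }
        return out;
    }
}
```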




[jira] Commented: (HIVE-1609) Support partition filtering in metastore

2010-09-03 Thread John Sichi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906028#action_12906028
 ] 

John Sichi commented on HIVE-1609:
--

@Carl:  looks like Steven Wong and Zheng have been discussing how to get rid of 
the last uses of DynamicSerDe (over on hive-dev), so yeah, maybe we can do that 
once Steven completes the work.


> Support partition filtering in metastore
> 
>
> Key: HIVE-1609
> URL: https://issues.apache.org/jira/browse/HIVE-1609
> Project: Hadoop Hive
>  Issue Type: New Feature
>  Components: Metastore
>Reporter: Ajay Kidave
> Fix For: 0.7.0
>
> Attachments: hive_1609.patch, hive_1609_2.patch
>
>
> The metastore needs to have support for returning a list of partitions based 
> on user specified filter conditions. This will be useful for tools which need 
> to do partition pruning. Howl is one such use case. The way partition pruning 
> is done during hive query execution need not be changed.




[jira] Updated: (HIVE-1609) Support partition filtering in metastore

2010-09-03 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-1609:
-

Status: Open  (was: Patch Available)

> Support partition filtering in metastore
> 
>
> Key: HIVE-1609
> URL: https://issues.apache.org/jira/browse/HIVE-1609
> Project: Hadoop Hive
>  Issue Type: New Feature
>  Components: Metastore
>Reporter: Ajay Kidave
> Fix For: 0.7.0
>
> Attachments: hive_1609.patch, hive_1609_2.patch
>
>
> The metastore needs to have support for returning a list of partitions based 
> on user specified filter conditions. This will be useful for tools which need 
> to do partition pruning. Howl is one such use case. The way partition pruning 
> is done during hive query execution need not be changed.




[jira] Commented: (HIVE-558) describe extended table/partition output is cryptic

2010-09-03 Thread Paul Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906015#action_12906015
 ] 

Paul Yang commented on HIVE-558:


Also, this change will break a lot of unit tests - have you had a chance to run 
the full suite? In many unit tests, we do a describe extended that prints a 
single line with all the attributes. The entire line was usually ignored because 
the location attribute contained a 'file:/' prefix.

With your change, we will now start comparing all the attributes, some of which 
may vary from run to run, e.g. table create time. Those attributes will need to 
be added to the ignore list in QTestUtil.java:checkCliDriverResults().
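One way to handle those run-to-run attributes is the masking approach already used for locations: normalize volatile values before diffing. A sketch (the attribute names and patterns here are examples; the real ignore list in QTestUtil differs):

```java
public class MaskVolatileSketch {
    // Replace run-dependent attribute values (e.g. createTime) with a fixed
    // token so describe-extended output diffs cleanly between runs.
    public static String mask(String line) {
        return line
            .replaceAll("createTime:\\d+", "createTime:#Masked#")
            .replaceAll("transient_lastDdlTime=\\d+", "transient_lastDdlTime=#Masked#");
    }
}
```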

> describe extended table/partition output is cryptic
> ---
>
> Key: HIVE-558
> URL: https://issues.apache.org/jira/browse/HIVE-558
> Project: Hadoop Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Prasad Chakka
>Assignee: Thiruvel Thirumoolan
> Attachments: HIVE-558.patch, HIVE-558.patch, 
> HIVE-558_PrelimPatch.patch, SampleOutputDescribe.txt
>
>
> describe extended table prints out the Thrift metadata object directly. The 
> information from it is not easy to read or parse. Output should be easily 
> read and can be simple parsed to get table location etc by programs.




[jira] Updated: (HIVE-1613) hive --service jar looks for hadoop version but was not defined

2010-09-03 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated HIVE-1613:
--

Status: Patch Available  (was: Open)

> hive --service jar looks for hadoop version but was not defined
> ---
>
> Key: HIVE-1613
> URL: https://issues.apache.org/jira/browse/HIVE-1613
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Clients
>Affects Versions: 0.6.0
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 0.7.0
>
> Attachments: hive-1613.patch.txt
>
>
> hive --service jar fails. I have to open another ticket to clean up the 
> scripts and unify functions like version detection.




[jira] Updated: (HIVE-1613) hive --service jar looks for hadoop version but was not defined

2010-09-03 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated HIVE-1613:
--

Attachment: hive-1613.patch.txt

> hive --service jar looks for hadoop version but was not defined
> ---
>
> Key: HIVE-1613
> URL: https://issues.apache.org/jira/browse/HIVE-1613
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Clients
>Affects Versions: 0.6.0
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 0.7.0
>
> Attachments: hive-1613.patch.txt
>
>
> hive --service jar fails. I have to open another ticket to clean up the 
> scripts and unify functions like version detection.




[jira] Commented: (HIVE-1614) UDTF json_tuple should return null row when input is not a valid JSON string

2010-09-03 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906010#action_12906010
 ] 

Namit Jain commented on HIVE-1614:
--

I will take a look

> UDTF json_tuple should return null row when input is not a valid JSON string
> 
>
> Key: HIVE-1614
> URL: https://issues.apache.org/jira/browse/HIVE-1614
> Project: Hadoop Hive
>  Issue Type: Bug
>Affects Versions: 0.7.0
>Reporter: Ning Zhang
>Assignee: Ning Zhang
> Attachments: HIVE-1614.2.patch, HIVE-1614.patch
>
>
> If the input column is not a valid JSON string, json_tuple will not return 
> anything, but this will prevent the downstream operators from accessing the 
> left-hand side table. We should output a NULL row instead, similar to when 
> the input column is a NULL value.




[jira] Updated: (HIVE-1614) UDTF json_tuple should return null row when input is not a valid JSON string

2010-09-03 Thread Ning Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ning Zhang updated HIVE-1614:
-

Attachment: HIVE-1614.2.patch

Added a catch for all Throwables in the UDTF.

> UDTF json_tuple should return null row when input is not a valid JSON string
> 
>
> Key: HIVE-1614
> URL: https://issues.apache.org/jira/browse/HIVE-1614
> Project: Hadoop Hive
>  Issue Type: Bug
>Affects Versions: 0.7.0
>Reporter: Ning Zhang
>Assignee: Ning Zhang
> Attachments: HIVE-1614.2.patch, HIVE-1614.patch
>
>
> If the input column is not a valid JSON string, json_tuple will not return 
> anything, but this will prevent the downstream operators from accessing the 
> left-hand side table. We should output a NULL row instead, similar to when 
> the input column is a NULL value.




[jira] Updated: (HIVE-1614) UDTF json_tuple should return null row when input is not a valid JSON string

2010-09-03 Thread Ning Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ning Zhang updated HIVE-1614:
-

   Status: Patch Available  (was: Open)
Affects Version/s: 0.7.0

> UDTF json_tuple should return null row when input is not a valid JSON string
> 
>
> Key: HIVE-1614
> URL: https://issues.apache.org/jira/browse/HIVE-1614
> Project: Hadoop Hive
>  Issue Type: Bug
>Affects Versions: 0.7.0
>Reporter: Ning Zhang
>Assignee: Ning Zhang
> Attachments: HIVE-1614.patch
>
>
> If the input column is not a valid JSON string, json_tuple will not return 
> anything, which prevents the downstream operators from accessing the 
> left-hand side table. We should output a NULL row instead, similar to when 
> the input column is a NULL value. 




[jira] Updated: (HIVE-1614) UDTF json_tuple should return null row when input is not a valid JSON string

2010-09-03 Thread Ning Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ning Zhang updated HIVE-1614:
-

Attachment: HIVE-1614.patch

> UDTF json_tuple should return null row when input is not a valid JSON string
> 
>
> Key: HIVE-1614
> URL: https://issues.apache.org/jira/browse/HIVE-1614
> Project: Hadoop Hive
>  Issue Type: Bug
>Affects Versions: 0.7.0
>Reporter: Ning Zhang
>Assignee: Ning Zhang
> Attachments: HIVE-1614.patch
>
>
> If the input column is not a valid JSON string, json_tuple will not return 
> anything, which prevents the downstream operators from accessing the 
> left-hand side table. We should output a NULL row instead, similar to when 
> the input column is a NULL value. 




[jira] Created: (HIVE-1614) UDTF json_tuple should return null row when input is not a valid JSON string

2010-09-03 Thread Ning Zhang (JIRA)
UDTF json_tuple should return null row when input is not a valid JSON string


 Key: HIVE-1614
 URL: https://issues.apache.org/jira/browse/HIVE-1614
 Project: Hadoop Hive
  Issue Type: Bug
Reporter: Ning Zhang
Assignee: Ning Zhang


If the input column is not a valid JSON string, json_tuple will not return 
anything, which prevents the downstream operators from accessing the left-hand 
side table. We should output a NULL row instead, similar to when the input 
column is a NULL value. 




[jira] Created: (HIVE-1613) hive --service jar looks for hadoop version but was not defined

2010-09-03 Thread Edward Capriolo (JIRA)
hive --service jar looks for hadoop version but was not defined
---

 Key: HIVE-1613
 URL: https://issues.apache.org/jira/browse/HIVE-1613
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Clients
Affects Versions: 0.6.0
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Fix For: 0.7.0


hive --service jar fails. I have to open another ticket to clean up the scripts 
and unify functions like version detection.




RE: Deserializing map column via JDBC (HIVE-1378)

2010-09-03 Thread Steven Wong
OK, I had mistakenly assumed that the input and output serdes for a script must 
be the same type.


-Original Message-
From: Zheng Shao [mailto:zs...@facebook.com] 
Sent: Friday, September 03, 2010 6:17 AM
To: Steven Wong; hive-dev@hadoop.apache.org
Subject: RE: Deserializing map column via JDBC (HIVE-1378)

No. DelimitedJSONSerDe does not need any deserialization capability (the method 
can contain a single "throw new ..." line). After that, let's run the unit 
tests. There might be several small places to fix, but I am pretty sure they 
will be easy to find.

Zheng
-Original Message-
From: Steven Wong [mailto:sw...@netflix.com] 
Sent: Friday, September 03, 2010 2:39 PM
To: Zheng Shao; hive-dev@hadoop.apache.org
Subject: RE: Deserializing map column via JDBC (HIVE-1378)

> The simplest thing to do is to:
> 1. Rename "useJSONforLazy" to "useDelimitedJSON";
> 2. Use "DelimitedJSONSerDe" when useDelimitedJSON = true;

So, DelimitedJSONSerDe will need the same deserialization capability as 
LazySimpleSerDe?


-Original Message-
From: Zheng Shao [mailto:zs...@facebook.com] 
Sent: Thursday, September 02, 2010 7:19 PM
To: Steven Wong; hive-dev@hadoop.apache.org
Subject: RE: Deserializing map column via JDBC (HIVE-1378)

Earlier there was no multi-level delimited format; the only option was 
first-level delimited fields, with JSON below that. Some legacy scripts/apps 
were written to work with that.

Later we introduced the multi-level delimited format and added the hack to put 
the two together.

Zheng
-Original Message-
From: Steven Wong [mailto:sw...@netflix.com] 
Sent: Friday, September 03, 2010 10:17 AM
To: Zheng Shao; hive-dev@hadoop.apache.org
Subject: RE: Deserializing map column via JDBC (HIVE-1378)

Why was/is useJSONforLazy needed? What's the historical background?


-Original Message-
From: Zheng Shao [mailto:zs...@facebook.com] 
Sent: Thursday, September 02, 2010 7:11 PM
To: Steven Wong; hive-dev@hadoop.apache.org
Subject: RE: Deserializing map column via JDBC (HIVE-1378)

The simplest thing to do is to:
1. Rename "useJSONforLazy" to "useDelimitedJSON";
2. Use "DelimitedJSONSerDe" when useDelimitedJSON = true;
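
The two steps above amount to a small dispatch on the renamed property; a minimal sketch (Python for illustration, and the helper itself is hypothetical, though the property and serde names follow the proposal):

```python
def choose_output_serde(table_props):
    """Hypothetical helper sketching the proposal: after renaming the flag,
    the JSON-emitting serde is selected only when the property is true."""
    flag = table_props.get("useDelimitedJSON", "false")
    if flag.lower() == "true":
        return "DelimitedJSONSerDe"   # serialization-only, for final output
    return "LazySimpleSerDe"          # plain multi-level delimited serde

print(choose_output_serde({"useDelimitedJSON": "true"}))  # DelimitedJSONSerDe
print(choose_output_serde({}))                            # LazySimpleSerDe
```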

Zheng
-Original Message-
From: Steven Wong [mailto:sw...@netflix.com] 
Sent: Friday, September 03, 2010 10:05 AM
To: Zheng Shao; hive-dev@hadoop.apache.org
Subject: RE: Deserializing map column via JDBC (HIVE-1378)

Zheng,

In LazySimpleSerDe.initSerdeParams:

String useJsonSerialize = tbl
    .getProperty(Constants.SERIALIZATION_USE_JSON_OBJECTS);
serdeParams.jsonSerialize = (useJsonSerialize != null
    && useJsonSerialize.equalsIgnoreCase("true"));

SERIALIZATION_USE_JSON_OBJECTS is set to true in PlanUtils.getTableDesc:

// It is not a very clean way, and should be modified later - due to
// compatibility reasons, user sees the results as json for custom scripts
// and has no way of specifying that.
// Right now, it is hard-coded in the code
if (useJSONForLazy) {
  properties.setProperty(Constants.SERIALIZATION_USE_JSON_OBJECTS, "true");
}

useJSONForLazy is true in the following 2 calls to PlanUtils.getTableDesc:

SemanticAnalyzer.genScriptPlan -> PlanUtils.getTableDesc
SemanticAnalyzer.genScriptPlan -> SemanticAnalyzer.getTableDescFromSerDe -> 
PlanUtils.getTableDesc

What is it all about and how should we untangle it (ideally get rid of 
SERIALIZATION_USE_JSON_OBJECTS)?

Thanks.
Steven


-Original Message-
From: Zheng Shao [mailto:zs...@facebook.com] 
Sent: Wednesday, September 01, 2010 6:45 PM
To: Steven Wong; hive-dev@hadoop.apache.org; John Sichi
Cc: Jerome Boulon
Subject: RE: Deserializing map column via JDBC (HIVE-1378)

Hi Steven,

As far as I remember, the only use case of the JSON logic in LazySimpleSerDe is 
FetchTask. Even if there are other cases, we should be able to catch them in 
unit tests.

The potential risk is small enough, and the benefit of cleaning it up is pretty 
big - it makes the code much easier to understand.

Thanks for getting to it Steven!  I am very happy to see that this finally gets 
cleaned up!

Zheng
-Original Message-
From: Steven Wong [mailto:sw...@netflix.com] 
Sent: Thursday, September 02, 2010 7:45 AM
To: Zheng Shao; hive-dev@hadoop.apache.org; John Sichi
Cc: Jerome Boulon
Subject: RE: Deserializing map column via JDBC (HIVE-1378)

Your suggestion is in line with my earlier proposal of fixing FetchTask. The 
only major difference is moving the JSON-related logic from LazySimpleSerDe to 
a new serde called DelimitedJSONSerDe.

Is it safe to get rid of the JSON-related logic in LazySimpleSerDe? Sounds like 
you're implying that it is safe, but I'd like to confirm with you. I don't 
really know whether there are components other than FetchTask that rely on 
LazySimpleSerDe and its JSON capability (the useJSONSerialize flag doesn't have 
to be true for LazySimpleSerDe to use JSON).

If it is safe, I am totally fine with introducing DelimitedJSONSerDe.


[jira] Resolved: (HIVE-1580) cleanup ExecDriver.progress

2010-09-03 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain resolved HIVE-1580.
--

 Hadoop Flags: [Reviewed]
Fix Version/s: 0.7.0
   Resolution: Fixed

Committed. Thanks Joy

> cleanup ExecDriver.progress
> ---
>
> Key: HIVE-1580
> URL: https://issues.apache.org/jira/browse/HIVE-1580
> Project: Hadoop Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Joydeep Sen Sarma
>Assignee: Joydeep Sen Sarma
> Fix For: 0.7.0
>
> Attachments: hive-1580.1.patch
>
>
> a few problems:
> - if a job is retired, then counters cannot be obtained and a stack trace is 
> printed out (from the history code); this confuses users
> - too many calls to getCounters: after a job has been detected to be 
> finished, there are quite a few more calls to get the job status and the 
> counters. We need to figure out a way to curtail this; in busy clusters the 
> gap between the job getting finished and the hive client noticing it is very 
> perceptible and impacts user experience.
> Calls to getCounters are very expensive in 0.20, as they grab a 
> jobtracker-global lock (something we have fixed internally at FB).
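
One common way to curtail that kind of post-completion polling is exponential backoff between status calls; a sketch (illustrative only, the real client's JobTracker calls are not shown and the helper name is hypothetical):

```python
import itertools

def poll_delays(initial=1.0, cap=30.0):
    """Yield exponentially increasing delays, capped at `cap` seconds,
    to space out repeated getJobStatus/getCounters-style polling."""
    delay = initial
    while True:
        yield delay
        delay = min(delay * 2, cap)

# First few delays (seconds) between successive status calls:
print(list(itertools.islice(poll_delays(), 6)))  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```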




Re: Build Crashing on Hive 0.5 Release

2010-09-03 Thread Stephen Watt
Thanks for the suggestion, Ed. I tried that, but it doesn't help; the build 
still crashes in the same place with the same message. That is, it doesn't 
complete and report errors; it crashes and gives the following error:

 [junit] Test org.apache.hadoop.hive.cli.TestCliDriver FAILED

BUILD FAILED
/home/hive/hive-0.5.0-build/hive-0.5.0-dev/src/build.xml:151: The 
following error occurred while executing this line:
/home/hive/hive-0.5.0-build/hive-0.5.0-dev/src/build.xml:91: The following 
error occurred while executing this line:
/home/hive/hive-0.5.0-build/hive-0.5.0-dev/src/build-common.xml:327: Tests 
failed

Is there a specific point release of Java 1.6 I should be using? 

Also, I do see this in the ant output; I'm not sure if it's affecting Hive:

[echo] Compiling shims against hadoop 0.20.0 
(/home/hive/hive-0.5.0-build/hive-0.5.0-dev/src/build/hadoopcore/hadoop-0.20.0)
[javac] Compiling 2 source files to 
/home/hive/hive-0.5.0-build/hive-0.5.0-dev/src/build/shims/classes
[javac] Note: 
/home/hive/hive-0.5.0-build/hive-0.5.0-dev/src/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java
 
uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: 
/home/hive/hive-0.5.0-build/hive-0.5.0-dev/src/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java
 
uses unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

Regards
Steve Watt 



From: Edward Capriolo
To: hive-dev@hadoop.apache.org
Date: 09/02/2010 04:33 PM
Subject: Re: Build Crashing on Hive 0.5 Release



On Thu, Sep 2, 2010 at 5:12 PM, Stephen Watt  wrote:
> Hi Folks
>
> I'm a Hadoop contributor and am presently working to get both Hadoop and
> Hive running on alternate JREs such as Apache Harmony and IBM Java.
>
> I noticed when building and running the functional tests ("clean test
> tar") for the Hive 0.5 release (i.e. not a nightly build), the build
> crashes right after running
> org.apache.hadoop.hive.ql.tool.TestLineageInfo. In addition, the
> TestCLIDriver test case fails as well. This is all using Sun JDK 1.6.0_14.
> I'm running on a SLES 10 system.
>
> This is a little odd, given that this is a release and not a nightly
> build. Although it's not uncommon for me to see Hudson pass tests that
> fail when running locally. Can someone confirm the build works for them?
>
> This is my build script:
>
> #!/bin/sh
>
> # Set Build Dependencies
> set PATH=$PATH:/home/hive/Java-Versions/jdk1.6.0_14/bin/
> export ANT_HOME=/home/hive/Test-Dependencies/apache-ant-1.7.1
> export JAVA_HOME=/home/hive/Java-Versions/jdk1.6.0_14
> export BUILD_DIR=/home/hive/hive-0.5.0-build
> export HIVE_BUILD=$BUILD_DIR/build
> export HIVE_INSTALL=$BUILD_DIR/hive-0.5.0-dev/
> export HIVE_SRC=$HIVE_INSTALL/src
> export PATH=$PATH:$ANT_HOME/bin
>
> # Define Hadoop Version to Use
> HADOOP_VER=0.20.2
>
> # Run Build and Unit Test
> cd $HIVE_SRC
> ant -Dtarget.dir=$HIVE_BUILD -Dhadoop.version=$HADOOP_VER clean test tar 
>
> $BUILD_DIR/hiveSUN32Build.out
>
>
> Regards
> Steve Watt

I seem to remember there were some older bugs when specifying the
minor versions of the 0.20 branch.
Can you try:

HADOOP_VER=0.20.0

rather than:

HADOOP_VER=0.20.2




[jira] Work started: (HIVE-558) describe extended table/partition output is cryptic

2010-09-03 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-558 started by Thiruvel Thirumoolan.

> describe extended table/partition output is cryptic
> ---
>
> Key: HIVE-558
> URL: https://issues.apache.org/jira/browse/HIVE-558
> Project: Hadoop Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Prasad Chakka
>Assignee: Thiruvel Thirumoolan
> Attachments: HIVE-558.patch, HIVE-558.patch, 
> HIVE-558_PrelimPatch.patch, SampleOutputDescribe.txt
>
>
> describe extended table prints out the Thrift metadata object directly. The 
> information in it is not easy to read or parse. The output should be easy to 
> read and simple for programs to parse, e.g. to get the table location.




[jira] Updated: (HIVE-558) describe extended table/partition output is cryptic

2010-09-03 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HIVE-558:
--

Attachment: HIVE-558.patch

Thanks Paul, attaching patch with bug fixed.

> describe extended table/partition output is cryptic
> ---
>
> Key: HIVE-558
> URL: https://issues.apache.org/jira/browse/HIVE-558
> Project: Hadoop Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Prasad Chakka
>Assignee: Thiruvel Thirumoolan
> Attachments: HIVE-558.patch, HIVE-558.patch, 
> HIVE-558_PrelimPatch.patch, SampleOutputDescribe.txt
>
>
> describe extended table prints out the Thrift metadata object directly. The 
> information in it is not easy to read or parse. The output should be easy to 
> read and simple for programs to parse, e.g. to get the table location.




[jira] Created: (HIVE-1612) Cannot build hive for hadoop 0.21.0

2010-09-03 Thread AJ Pahl (JIRA)
Cannot build hive for hadoop 0.21.0
---

 Key: HIVE-1612
 URL: https://issues.apache.org/jira/browse/HIVE-1612
 Project: Hadoop Hive
  Issue Type: Bug
Affects Versions: 0.7.0
Reporter: AJ Pahl


The current trunk (0.7.0) does not support building Hive against the Hadoop 
0.21.0 release.

