[ https://issues.apache.org/jira/browse/HIVE-493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12730913#action_12730913 ]
Edward Capriolo commented on HIVE-493:
--------------------------------------

Prasad, by 'virtual read-only schema' I mean that MySQL exposes its metadata as ordinary tables (the information_schema database), so you can run queries such as "select column_name from information_schema.columns where table_name = 'tablea'". Hive has some support for this, such as 'show partitions', but some metadata can only be read through the MetaStore API.

As a use case, I want to write my compact utility. As a user, I am faced with options for how to do this. The best way would be a tightly integrated tool:

{noformat}
hive --service compact --table tablea
{noformat}

This tool would be smart: it would rebuild indexes and warn or error on external tables or bucketed columns. On the other end of the spectrum, one could write a map/reduce job with hard-coded paths into the warehouse that works at the file level, but that assumes I know a lot about the underpinnings of Hive.

Users need to be able to explore the metastore table structure and the warehouse so their utilities can make informed decisions. Should they link against the MetaStore API, or should the HQL language support most of these operations so the whole process could be done from an HQL script? (A sketch contrasting the two metadata styles follows at the end of this message.)

> automatically infer existing partitions of table from HDFS files.
> -----------------------------------------------------------------
>
>                 Key: HIVE-493
>                 URL: https://issues.apache.org/jira/browse/HIVE-493
>             Project: Hadoop Hive
>          Issue Type: New Feature
>          Components: Metastore, Query Processor
>    Affects Versions: 0.3.0, 0.3.1, 0.4.0
>            Reporter: Prasad Chakka
>
> Initially, the partition list for a table was inferred from the HDFS directory structure instead of being looked up in the metastore (where partitions are created using 'alter table ... add partition'). This automatic inference was removed in favor of the latter approach while the metastore checker feature was being checked in, and also to facilitate external partitions.
> Joydeep and Frederick mentioned that it would be simpler for users to create the HDFS directory and let Hive infer the partition rather than add it explicitly. But doing that raises the following issues:
> 1) External partitions -- we would have to mix both approaches, so the partition list becomes a merge of inferred and registered partitions, and duplicates have to be resolved.
> 2) Partition-level schemas can't be supported. Which schema do we choose for an inferred partition: the table schema at the time the inferred partition is created, or the latest table schema? How would we even know the table schema at the time the inferred partition was created?
> 3) If partitions have to be registered, a partition can be disabled without actually deleting the data. This feature is not supported today and may not be that useful, but nevertheless it can't be supported with inferred partitions.
> 4) Indexes are being added. If partitions are not registered, then indexes for those partitions cannot be maintained automatically.
> I would like to know what the general thinking about this is among Hive users. If inferred partitions are preferred, can we live with the restricted functionality that this imposes?
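To make the contrast concrete, here is a minimal sketch of the two metadata styles discussed above. The MySQL query uses the standard information_schema views; the Hive statements are the dedicated metadata commands. The table name tablea is just the running example from the comment.

{noformat}
-- MySQL: metadata is itself queryable as ordinary tables
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'tablea';

-- Hive: metadata is reachable only through dedicated statements;
-- anything beyond these requires the MetaStore API
SHOW TABLES;
SHOW PARTITIONS tablea;
DESCRIBE tablea;
{noformat}

And for the registration question in the issue description, a sketch of explicit registration versus inference; the partition key ds and the warehouse path are illustrative assumptions, not taken from the issue:

{noformat}
-- Explicit: the partition exists because the metastore says so
ALTER TABLE tablea ADD PARTITION (ds='2009-07-13');

-- Inferred: the partition would exist merely because a directory such as
--   /user/hive/warehouse/tablea/ds=2009-07-13
-- exists, leaving no metastore record to attach a schema or index to.
{noformat}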