[jira] Updated: (HIVE-1413) bring a table/partition offline
[ https://issues.apache.org/jira/browse/HIVE-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Namit Jain updated HIVE-1413:
-----------------------------
        Status: Resolved  (was: Patch Available)
  Hadoop Flags: [Reviewed]
    Resolution: Fixed

committed. Thanks Siying

> bring a table/partition offline
> ---
>
>                 Key: HIVE-1413
>                 URL: https://issues.apache.org/jira/browse/HIVE-1413
>             Project: Hadoop Hive
>          Issue Type: New Feature
>          Components: Metastore, Query Processor
>            Reporter: Namit Jain
>            Assignee: Siying Dong
>             Fix For: 0.7.0
>
>         Attachments: HIVE-1413.1.patch, HIVE-1413.2.patch, HIVE-1413.3.patch, HIVE-1413.4.patch
>
> There should be a way to bring a table/partition offline.
> At that time, no read/write operations should be supported on that table.
> It would be very useful for housekeeping operations

--
This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
[jira] Commented: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895537#action_12895537 ]

Ning Zhang commented on HIVE-1509:
----------------------------------

Yes, I checked that the svn diff in my working directory is the same as HIVE-1509.4.patch.

> Monitor the working set of the number of files
> ---
>
>                 Key: HIVE-1509
>                 URL: https://issues.apache.org/jira/browse/HIVE-1509
>             Project: Hadoop Hive
>          Issue Type: Bug
>          Components: Query Processor
>    Affects Versions: 0.7.0
>            Reporter: Namit Jain
>            Assignee: Ning Zhang
>         Attachments: HIVE-1509.2.patch, HIVE-1509.3.patch, HIVE-1509.4.patch, HIVE-1509.patch
[jira] Commented: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895534#action_12895534 ]

Joydeep Sen Sarma commented on HIVE-1509:
-----------------------------------------

Strange - let me retry. Can you check the patch one last time? (Perhaps it's not up to date with the contents of your tree?)
[jira] Updated: (HIVE-1413) bring a table/partition offline
[ https://issues.apache.org/jira/browse/HIVE-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Siying Dong updated HIVE-1413:
------------------------------
    Attachment: HIVE-1413.4.patch

The previous one missed two test output files...
[jira] Commented: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895526#action_12895526 ]

Ning Zhang commented on HIVE-1509:
----------------------------------

The TestNegativeCliDriver run succeeded. In terms of the number of queries in the .q files in neg, I think as long as there are no queries after the first exception, it should be fine. At least this is what dyn_part[12].q was doing (only the last query throws the expected exception).
[jira] Commented: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895523#action_12895523 ]

Ning Zhang commented on HIVE-1509:
----------------------------------

It's strange. I didn't get this diff on either 17 or 20. I'll run TestNegativeCliDriver again.
[jira] Commented: (HIVE-675) add database/scheme support Hive QL
[ https://issues.apache.org/jira/browse/HIVE-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895507#action_12895507 ]

Namit Jain commented on HIVE-675:
---------------------------------

Thanks Carl, I will take a look.

> add database/scheme support Hive QL
> ---
>
>                 Key: HIVE-675
>                 URL: https://issues.apache.org/jira/browse/HIVE-675
>             Project: Hadoop Hive
>          Issue Type: New Feature
>          Components: Metastore, Query Processor
>            Reporter: Prasad Chakka
>            Assignee: Carl Steinbach
>             Fix For: 0.6.0, 0.7.0
>
>         Attachments: hive-675-2009-9-16.patch, hive-675-2009-9-19.patch, hive-675-2009-9-21.patch, hive-675-2009-9-23.patch, hive-675-2009-9-7.patch, hive-675-2009-9-8.patch, HIVE-675-2010-7-16.patch.txt, HIVE-675-2010-8-4.patch.txt
>
> Currently all Hive tables reside in a single namespace (default). Hive should support multiple namespaces (databases or schemas) such that users can create tables in their specific namespaces. These namespaces can have different warehouse directories (with a default naming scheme) and possibly different properties.
> There is already some support for this in the metastore, but the Hive query parser should have this feature as well.
[jira] Commented: (HIVE-1413) bring a table/partition offline
[ https://issues.apache.org/jira/browse/HIVE-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895505#action_12895505 ]

Namit Jain commented on HIVE-1413:
----------------------------------

+1

will commit if the tests pass
[jira] Updated: (HIVE-1413) bring a table/partition offline
[ https://issues.apache.org/jira/browse/HIVE-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Siying Dong updated HIVE-1413:
------------------------------
    Status: Patch Available  (was: Open)
[jira] Updated: (HIVE-1413) bring a table/partition offline
[ https://issues.apache.org/jira/browse/HIVE-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Siying Dong updated HIVE-1413:
------------------------------
    Attachment: HIVE-1413.3.patch

Used a more consolidated approach: try to check it in validate() instead of everywhere. Changed the syntax a little bit. Added some more unit test cases.
[jira] Updated: (HIVE-1512) Need to get hive_hbase-handler to work with hbase versions 0.20.4 0.20.5 and cloudera CDH3 version
[ https://issues.apache.org/jira/browse/HIVE-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jimmy Hu updated HIVE-1512:
---------------------------
    Attachment: HIVE-1512.patch

Patch for 3 files to make the hive-hbase-handler compatible with HBase 0.20.4, 0.20.5, and the Cloudera CDH3 version.

> Need to get hive_hbase-handler to work with hbase versions 0.20.4 0.20.5 and cloudera CDH3 version
> ---
>
>                 Key: HIVE-1512
>                 URL: https://issues.apache.org/jira/browse/HIVE-1512
>             Project: Hadoop Hive
>          Issue Type: Improvement
>          Components: HBase Handler
>    Affects Versions: 0.7.0
>            Reporter: Jimmy Hu
>             Fix For: 0.7.0
>
>         Attachments: HIVE-1512.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> the current trunk hive_hbase-handler only works with hbase 0.20.3, we need to get it to work with hbase versions 0.20.4 0.20.5 and cloudera CDH3 version
[jira] Created: (HIVE-1512) Need to get hive_hbase-handler to work with hbase versions 0.20.4 0.20.5 and cloudera CDH3 version
Need to get hive_hbase-handler to work with hbase versions 0.20.4 0.20.5 and cloudera CDH3 version
---

                 Key: HIVE-1512
                 URL: https://issues.apache.org/jira/browse/HIVE-1512
             Project: Hadoop Hive
          Issue Type: Improvement
          Components: HBase Handler
    Affects Versions: 0.7.0
            Reporter: Jimmy Hu
             Fix For: 0.7.0

The current trunk hive_hbase-handler only works with hbase 0.20.3; we need to get it to work with hbase versions 0.20.4, 0.20.5, and the cloudera CDH3 version.
[jira] Commented: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895472#action_12895472 ]

Joydeep Sen Sarma commented on HIVE-1509:
-----------------------------------------

The test result for dyn_part3.q is not matching the one provided in the patch. It seems that TestNegativeCliDriver is not executing anything but the first query in the .q file:

[junit] diff -a -I file: -I pfile: -I /tmp/ -I invalidscheme: -I lastUpdateTime -I lastAccessTime -I owner -I transient_lastDdlTime -I java.lang.RuntimeException -I at org -I at sun -I at java -I at junit -I Caused by: -I [.][.][.] [0-9]* more /data/users/jssarma/hive_trunk/build/ql/test/logs/clientnegative/dyn_part3.q.out
[junit] 9a10,27
[junit] > PREHOOK: query: create table nzhang_part( key string) partitioned by (value string)
[junit] > PREHOOK: type: CREATETABLE
[junit] > POSTHOOK: query: create table nzhang_part( key string) partitioned by (value string)
[junit] > POSTHOOK: type: CREATETABLE
[junit] > POSTHOOK: Output: default@nzhang_part
[junit] > PREHOOK: query: insert overwrite table nzhang_part partition(value) select key, value from src
[junit] > PREHOOK: type: QUERY
[junit] > PREHOOK: Input: default@src
[junit] > FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
[junit] > PREHOOK: query: create table nzhang_part( key string) partitioned by (value string)
[junit] > PREHOOK: type: CREATETABLE
[junit] > POSTHOOK: query: create table nzhang_part( key string) partitioned by (value string)
[junit] > POSTHOOK: type: CREATETABLE
[junit] > POSTHOOK: Output: default@nzhang_part
[junit] > PREHOOK: query: insert overwrite table nzhang_part partition(value) select key, value from src
[junit] > PREHOOK: type: QUERY
[junit] > PREHOOK: Input: default@src
[junit] > FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
Re: can anybody help to check in this code into hive trunk ?
Please create a Hive JIRA issue and attach a patch file.

http://wiki.apache.org/hadoop/Hive/HowToContribute#Creating_a_patch

JVS

On Aug 4, 2010, at 4:23 PM, Jinsong Hu wrote:

> Please help to check in this code into
>
> http://svn.apache.org/repos/asf/hadoop/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase
>
> The change is needed for the hive_hbase-handler.jar to be compatible with the cloudera CDH3 version of hbase.
> They are also needed to be compatible with hbase 0.20.4 and 0.20.5. The change is very minimal compared
> to today's trunk: only two type casts, and one change to use the getValue() API that exists in all versions of hbase.
>
> Jimmy.
can anybody help to check in this code into hive trunk ?
Please help to check in this code into

http://svn.apache.org/repos/asf/hadoop/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase

The change is needed for the hive_hbase-handler.jar to be compatible with the cloudera CDH3 version of hbase. They are also needed to be compatible with hbase 0.20.4 and 0.20.5. The change is very minimal compared to today's trunk: only two type casts, and one change to use the getValue() API that exists in all versions of hbase.

Jimmy.
[jira] Commented: (HIVE-675) add database/scheme support Hive QL
[ https://issues.apache.org/jira/browse/HIVE-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895457#action_12895457 ]

HBase Review Board commented on HIVE-675:
-----------------------------------------

Message from: "Carl Steinbach"

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/508/
-----------------------------------------------------------

Review request for Hive Developers.

Summary
-------

Database/Scheme support for Hive.

* Implemented 'SHOW DATABASES' command
* Refactored TestHiveMetaStore and enabled tests for remote metastore client.
* Added launch configurations for TestHiveMetaStore and TestHiveMetaStoreRemote

This addresses bug HIVE-675.
    http://issues.apache.org/jira/browse/HIVE-675

Diffs
-----

  build-common.xml d4ff895
  eclipse-templates/TestHive.launchtemplate 24efc12
  eclipse-templates/TestHiveMetaStore.launchtemplate PRE-CREATION
  eclipse-templates/TestHiveMetaStoreRemote.launchtemplate PRE-CREATION
  metastore/if/hive_metastore.thrift 478d0af
  metastore/src/gen-cpp/ThriftHiveMetastore.h e2538fb
  metastore/src/gen-cpp/ThriftHiveMetastore.cpp f945a3a
  metastore/src/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp ed2bb99
  metastore/src/gen-cpp/hive_metastore_types.h 1b0c706
  metastore/src/gen-cpp/hive_metastore_types.cpp b5a403d
  metastore/src/gen-javabean/org/apache/hadoop/hive/metastore/api/Database.java 78c78d9
  metastore/src/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java 25408d9
  metastore/src/gen-php/ThriftHiveMetastore.php ea4add5
  metastore/src/gen-php/hive_metastore_types.php 61872a0
  metastore/src/gen-py/hive_metastore/ThriftHiveMetastore-remote fc06cba
  metastore/src/gen-py/hive_metastore/ThriftHiveMetastore.py 4a0bc67
  metastore/src/gen-py/hive_metastore/ttypes.py ea7269e
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java 39dbd52
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 4fb296a
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java c6541af
  metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java 6013644
  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java 0818689
  metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java a06384c
  metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java 4951bd6
  metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java 4488f94
  metastore/src/model/org/apache/hadoop/hive/metastore/model/MDatabase.java b3e098d
  metastore/src/model/package.jdo 206ba75
  metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java fff6aad
  metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStoreBase.java PRE-CREATION
  metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStoreRemote.java bc950b9
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java bc268a4
  ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java d59f48c
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 04dd9e3
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java 2ecda01
  ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java eedf9e3
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 0484c91
  ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g 02bf926
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 70cd05f
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java eb079aa
  ql/src/java/org/apache/hadoop/hive/ql/plan/CreateDatabaseDesc.java PRE-CREATION
  ql/src/java/org/apache/hadoop/hive/ql/plan/DDLWork.java ed4ed22
  ql/src/java/org/apache/hadoop/hive/ql/plan/DropDatabaseDesc.java PRE-CREATION
  ql/src/java/org/apache/hadoop/hive/ql/plan/ShowDatabasesDesc.java PRE-CREATION
  ql/src/java/org/apache/hadoop/hive/ql/plan/SwitchDatabaseDesc.java PRE-CREATION
  ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java b4651a2
  ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java ab39ca4
  ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreChecker.java 26cc71a
  ql/src/test/queries/clientnegative/create_database_bad_name.q PRE-CREATION
  ql/src/test/queries/clientnegative/database_switch_test.q PRE-CREATION
  ql/src/test/queries/clientpositive/database.q PRE-CREATION
  ql/src/test/results/clientnegative/create_database_bad_name.q.out PRE-CREATION
  ql/src/test/results/clientnegative/database_switch_test.q.out PRE-CREATION
  ql/src/test/results/clientpositive/database.q.out PRE-CREATION

Diff: http://review.cloudera.org/r/508/diff

Testing
-------

Thanks,

Carl
[jira] Updated: (HIVE-675) add database/scheme support Hive QL
[ https://issues.apache.org/jira/browse/HIVE-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Carl Steinbach updated HIVE-675:
--------------------------------
    Status: Patch Available  (was: Open)

Posted on reviewboard: https://review.cloudera.org/r/508/
[jira] Updated: (HIVE-675) add database/scheme support Hive QL
[ https://issues.apache.org/jira/browse/HIVE-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Carl Steinbach updated HIVE-675:
--------------------------------
    Attachment: HIVE-675-2010-8-4.patch.txt

HIVE-675-2010-8-4.patch.txt:
* Implemented 'SHOW DATABASES' command
* Refactored TestHiveMetaStore and enabled tests for remote metastore client.
* Added launch configurations for TestHiveMetaStore and TestHiveMetaStoreRemote
Review Request: HIVE-675: Add database/scheme support Hive QL
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/508/
-----------------------------------------------------------

Review request for Hive Developers.

Summary
-------

Database/Scheme support for Hive.

* Implemented 'SHOW DATABASES' command
* Refactored TestHiveMetaStore and enabled tests for remote metastore client.
* Added launch configurations for TestHiveMetaStore and TestHiveMetaStoreRemote

This addresses bug HIVE-675.
    http://issues.apache.org/jira/browse/HIVE-675

Diffs
-----

  build-common.xml d4ff895
  eclipse-templates/TestHive.launchtemplate 24efc12
  eclipse-templates/TestHiveMetaStore.launchtemplate PRE-CREATION
  eclipse-templates/TestHiveMetaStoreRemote.launchtemplate PRE-CREATION
  metastore/if/hive_metastore.thrift 478d0af
  metastore/src/gen-cpp/ThriftHiveMetastore.h e2538fb
  metastore/src/gen-cpp/ThriftHiveMetastore.cpp f945a3a
  metastore/src/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp ed2bb99
  metastore/src/gen-cpp/hive_metastore_types.h 1b0c706
  metastore/src/gen-cpp/hive_metastore_types.cpp b5a403d
  metastore/src/gen-javabean/org/apache/hadoop/hive/metastore/api/Database.java 78c78d9
  metastore/src/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java 25408d9
  metastore/src/gen-php/ThriftHiveMetastore.php ea4add5
  metastore/src/gen-php/hive_metastore_types.php 61872a0
  metastore/src/gen-py/hive_metastore/ThriftHiveMetastore-remote fc06cba
  metastore/src/gen-py/hive_metastore/ThriftHiveMetastore.py 4a0bc67
  metastore/src/gen-py/hive_metastore/ttypes.py ea7269e
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java 39dbd52
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 4fb296a
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java c6541af
  metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java 6013644
  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java 0818689
  metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java a06384c
  metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java 4951bd6
  metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java 4488f94
  metastore/src/model/org/apache/hadoop/hive/metastore/model/MDatabase.java b3e098d
  metastore/src/model/package.jdo 206ba75
  metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java fff6aad
  metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStoreBase.java PRE-CREATION
  metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStoreRemote.java bc950b9
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java bc268a4
  ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java d59f48c
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 04dd9e3
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java 2ecda01
  ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java eedf9e3
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 0484c91
  ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g 02bf926
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 70cd05f
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java eb079aa
  ql/src/java/org/apache/hadoop/hive/ql/plan/CreateDatabaseDesc.java PRE-CREATION
  ql/src/java/org/apache/hadoop/hive/ql/plan/DDLWork.java ed4ed22
  ql/src/java/org/apache/hadoop/hive/ql/plan/DropDatabaseDesc.java PRE-CREATION
  ql/src/java/org/apache/hadoop/hive/ql/plan/ShowDatabasesDesc.java PRE-CREATION
  ql/src/java/org/apache/hadoop/hive/ql/plan/SwitchDatabaseDesc.java PRE-CREATION
  ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java b4651a2
  ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java ab39ca4
  ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreChecker.java 26cc71a
  ql/src/test/queries/clientnegative/create_database_bad_name.q PRE-CREATION
  ql/src/test/queries/clientnegative/database_switch_test.q PRE-CREATION
  ql/src/test/queries/clientpositive/database.q PRE-CREATION
  ql/src/test/results/clientnegative/create_database_bad_name.q.out PRE-CREATION
  ql/src/test/results/clientnegative/database_switch_test.q.out PRE-CREATION
  ql/src/test/results/clientpositive/database.q.out PRE-CREATION

Diff: http://review.cloudera.org/r/508/diff

Testing
-------

Thanks,

Carl
[jira] Commented: (HIVE-1510) HiveCombineInputFormat should not use prefix matching to find the partitionDesc for a given path
[ https://issues.apache.org/jira/browse/HIVE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895428#action_12895428 ]

Ning Zhang commented on HIVE-1510:
----------------------------------

It's fine with me if you feel strongly about it. My concern (besides har+CHIF support) is the performance implication of using CHIF to merge a large number of small files inside a partition. Siying has a use case where pathToPartitionInfo is very large and the number of files in the splits is also very large, so determining the partitionDesc for each input path takes a long time. In your patch, you have another HashMap for the path part of pathToPartitionInfo (which trades memory for speed), but introduced another loop for comparing parents of paths. It would be nice (better performance) if you could avoid this loop by simply appending '/' at the end. But if it doesn't hurt performance, or appending '/' doesn't work, the current patch is fine with me too.

As an aside, we should find out why pathToPartitionInfo in some cases contains paths only rather than the full URI. The ideal case is that it always contains the full URI so that we don't rely on heuristics. But this could be another JIRA.
> HiveCombineInputFormat should not use prefix matching to find the partitionDesc for a given path
> ---
>
>                 Key: HIVE-1510
>                 URL: https://issues.apache.org/jira/browse/HIVE-1510
>             Project: Hadoop Hive
>          Issue Type: Bug
>            Reporter: He Yongqiang
>            Assignee: He Yongqiang
>         Attachments: hive-1510.1.patch
>
> set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
> drop table combine_3_srcpart_seq_rc;
> create table combine_3_srcpart_seq_rc (key int , value string) partitioned by (ds string, hr string) stored as sequencefile;
> insert overwrite table combine_3_srcpart_seq_rc partition (ds="2010-08-03", hr="00") select * from src;
> alter table combine_3_srcpart_seq_rc set fileformat rcfile;
> insert overwrite table combine_3_srcpart_seq_rc partition (ds="2010-08-03", hr="001") select * from src;
> desc extended combine_3_srcpart_seq_rc partition(ds="2010-08-03", hr="00");
> desc extended combine_3_srcpart_seq_rc partition(ds="2010-08-03", hr="001");
> select * from combine_3_srcpart_seq_rc where ds="2010-08-03" order by key;
> drop table combine_3_srcpart_seq_rc;
> will fail.
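[Editorial note] The trailing-'/' idea discussed in the comment above can be sketched as follows. This is a hypothetical standalone class (not the actual CombineHiveInputFormat code, and the names are invented for illustration): plain startsWith prefix matching wrongly matches the hr=00 partition path against files in the hr=001 partition, while appending '/' to the directory prefix before comparing avoids that ambiguity without an extra loop over parent paths.

```java
// Sketch only: illustrates why bare prefix matching of partition directory
// paths is ambiguous, and how a trailing '/' disambiguates it.
public class PrefixMatchSketch {

    // Naive approach: treats "/...hr=00" as a prefix of "/...hr=001/file".
    static boolean naiveMatch(String partitionPath, String filePath) {
        return filePath.startsWith(partitionPath);
    }

    // Trailing-slash approach: only matches files strictly under the directory.
    static boolean slashMatch(String partitionPath, String filePath) {
        String prefix = partitionPath.endsWith("/") ? partitionPath : partitionPath + "/";
        return filePath.equals(partitionPath) || filePath.startsWith(prefix);
    }

    public static void main(String[] args) {
        String hr00 = "/warehouse/t/ds=2010-08-03/hr=00";
        String fileInHr001 = "/warehouse/t/ds=2010-08-03/hr=001/part-00000";
        System.out.println(naiveMatch(hr00, fileInHr001)); // true  (wrong partition matched)
        System.out.println(slashMatch(hr00, fileInHr001)); // false (correct)
    }
}
```

This mirrors the hr=00 vs. hr=001 collision from the repro queries above; the real fix in the patch also has to deal with schemes/authorities and har paths, which this sketch ignores.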
[jira] Commented: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895411#action_12895411 ]

Joydeep Sen Sarma commented on HIVE-1509:
-----------------------------------------

OK - I will run tests on 20 and commit if all clear.
[jira] Updated: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ning Zhang updated HIVE-1509:
-----------------------------
    Attachment: HIVE-1509.4.patch

Attaching HIVE-1509.4.patch to change 10 to 10L. I think it passed because we have the parameter set up in hive-default.xml as well.

All unit tests passed on 0.20 except index_compact_2.q. It is strange that this particular test passed when I ran it individually. It also passed when I ran the whole test suite again.
Re: hive_hbase-handler.jar
Under the source for hive/hbase-handler/lib, delete the hbase-0.20.3 jars there, replace them with the jars matching your HBase version from CDH3, and then run "ant package" in hive.

JVS

On Aug 4, 2010, at 12:03 PM, Jinsong Hu wrote:

> Hi, There:
> I took the latest code from the hive trunk and compiled it against the cloudera CDH3 release of hbase/hadoop. I was able to get to the step where I can create an external table successfully, but when I select * from the table, the hive CLI console just hangs and nothing happens.
> I waited for a long time and it still goes that way. I wonder if anybody has gotten hive_hbase-handler.jar to work.
> I understand that the current hive_hbase-handler.jar only works with hbase 0.20.3. I debugged the code and there is only some simple change needed for it to compile against cloudera CDH3. I wonder if there is anything I can do to figure out why the select * doesn't work.
>
> Jimmy.
[jira] Commented: (HIVE-1510) HiveCombineInputFormat should not use prefix matching to find the partitionDesc for a given path
[ https://issues.apache.org/jira/browse/HIVE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895386#action_12895386 ] He Yongqiang commented on HIVE-1510: Will also test against har. Even the old code removed the scheme and authority parts when trying to match the partitionDesc. No offense -- the old code is not very clear, and not efficient. The new code does the same thing with simplified logic. > HiveCombineInputFormat should not use prefix matching to find the > partitionDesc for a given path > > > Key: HIVE-1510 > URL: https://issues.apache.org/jira/browse/HIVE-1510 > Project: Hadoop Hive > Issue Type: Bug >Reporter: He Yongqiang >Assignee: He Yongqiang > Attachments: hive-1510.1.patch > > > set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat; > drop table combine_3_srcpart_seq_rc; > create table combine_3_srcpart_seq_rc (key int , value string) partitioned by > (ds string, hr string) stored as sequencefile; > insert overwrite table combine_3_srcpart_seq_rc partition (ds="2010-08-03", > hr="00") select * from src; > alter table combine_3_srcpart_seq_rc set fileformat rcfile; > insert overwrite table combine_3_srcpart_seq_rc partition (ds="2010-08-03", > hr="001") select * from src; > desc extended combine_3_srcpart_seq_rc partition(ds="2010-08-03", hr="00"); > desc extended combine_3_srcpart_seq_rc partition(ds="2010-08-03", hr="001"); > select * from combine_3_srcpart_seq_rc where ds="2010-08-03" order by key; > drop table combine_3_srcpart_seq_rc; > will fail.
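The scheme-and-authority stripping mentioned above can be illustrated with java.net.URI; the table locations here are made up for the sketch:

```java
import java.net.URI;

public class StripSchemeDemo {
    // Keep only the path component, mirroring what the matcher does before
    // comparing a split's location against the pathToPartitionInfo keys.
    static String pathOnly(String location) {
        return URI.create(location).getPath();
    }

    public static void main(String[] args) {
        // Once the scheme and authority are gone, an hdfs:// and a har://
        // location reduce to bare paths -- which is why HAR URIs need care.
        System.out.println(pathOnly("hdfs://namenode:8020/user/hive/warehouse/t/ds=2010-08-03"));
        System.out.println(pathOnly("har://namenode:8020/user/warehouse/t.har/ds=2010-08-03"));
    }
}
```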
[jira] Commented: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895384#action_12895384 ] Joydeep Sen Sarma commented on HIVE-1509: - Let me know once the tests pass on 0.20 and I can commit. One more question: +MAXCREATEDFILES("hive.exec.max.created.files", 10), I think you may have to append an 'L' to the 10, since you later do: + long upperLimit = HiveConf.getLongVar(job, HiveConf.ConfVars.MAXCREATEDFILES); (or switch to using getIntVar). I am a little surprised this is working, because the 10 would be interpreted as an Integer and go to the integer constructor, which should leave the long default at -1. (Or I guess I have forgotten how this works.)
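The overload-resolution point above can be shown with a simplified stand-in for the HiveConf.ConfVars pattern (this enum is a sketch, not the real class): the literal 10 selects the int constructor, so the long default is never set.

```java
public class ConfVarDemo {
    enum ConfVars {
        MAX_CREATED_FILES_INT("hive.exec.max.created.files", 10),    // picks the int overload
        MAX_CREATED_FILES_LONG("hive.exec.max.created.files", 10L);  // picks the long overload

        final String varname;
        int intDefault = -1;
        long longDefault = -1L;

        ConfVars(String varname, int def)  { this.varname = varname; this.intDefault = def; }
        ConfVars(String varname, long def) { this.varname = varname; this.longDefault = def; }
    }

    public static void main(String[] args) {
        // With 10 the long default stays at -1 (what a getLongVar-style read
        // would return); with 10L the long default is set as intended.
        System.out.println(ConfVars.MAX_CREATED_FILES_INT.longDefault);   // -1
        System.out.println(ConfVars.MAX_CREATED_FILES_LONG.longDefault);  // 10
    }
}
```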
[jira] Commented: (HIVE-1510) HiveCombineInputFormat should not use prefix matching to find the partitionDesc for a given path
[ https://issues.apache.org/jira/browse/HIVE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895381#action_12895381 ] Ning Zhang commented on HIVE-1510: -- In HiveFileFormatUtils, removing the scheme and authority from the URI and retaining only the path part may cause problems when the URI is har:// rather than hdfs://. This is one of the bugs that Paul fixed in hadoop for HAR to be able to work with CombineHiveInputFormat. As a general comment, would it be easier to just modify pathToPartitionInfo to append a Path.SEPARATOR at the end? Then you wouldn't need to introduce the recursive checking and could still use prefix matching.
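The separator suggestion works because bare prefix matching confuses sibling partitions like hr=00 and hr=001; a minimal sketch (paths are illustrative):

```java
public class PrefixMatchDemo {
    // Naive prefix matching, as the old lookup effectively did.
    static boolean prefixMatches(String partitionDir, String filePath) {
        return filePath.startsWith(partitionDir);
    }

    public static void main(String[] args) {
        String hr00  = "/warehouse/t/ds=2010-08-03/hr=00";
        String hr001 = "/warehouse/t/ds=2010-08-03/hr=001";
        String fileInHr001 = hr001 + "/000000_0";

        System.out.println(prefixMatches(hr00, fileInHr001));        // true -- wrong partition!
        // Appending a path separator to the stored key removes the ambiguity:
        System.out.println(prefixMatches(hr00 + "/", fileInHr001));  // false
        System.out.println(prefixMatches(hr001 + "/", fileInHr001)); // true
    }
}
```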
[jira] Commented: (HIVE-1510) HiveCombineInputFormat should not use prefix matching to find the partitionDesc for a given path
[ https://issues.apache.org/jira/browse/HIVE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895370#action_12895370 ] Namit Jain commented on HIVE-1510: -- Some minor comments on TestHiveFileFormatUtils: 1. Use a different PartitionDesc each time instead of partDesc_1 for all the partitions. 2. Spelling mistake: "forth group". Otherwise, it looks good to me. Ning, can you also OK it, since we spent a lot of time debugging this in the past? Also, before checking it in, can you try the following 4 types of queries (with CombineHiveInputFormat): 1. hadoop 17 normal query 2. hadoop 17 sampling query 3. hadoop 20 normal query 4. hadoop 20 sampling query
hive_hbase-handleer.jar
Hi, There: I took the latest code from the hive trunk and compiled it against the cloudera CDH3 release of hbase/hadoop. I got as far as creating an external table successfully, but when I select * from the table, the hive CLI console just hangs and nothing happens. I waited for a long time and it stayed that way. I wonder if anybody has gotten hive_hbase-handleer.jar to work. I understand that the current hive_hbase-handleer.jar only works with hbase 0.20.3; I debugged the code, and only a simple change is needed for it to compile against cloudera CDH3. I wonder if there is anything I can do to figure out why the select * doesn't work. Jimmy.
[jira] Updated: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ning Zhang updated HIVE-1509: - Attachment: HIVE-1509.3.patch Uploading HIVE-1509.3.patch, which fixes a bug. It passed the hadoop 0.17 tests; I'm running the 0.20 tests now.
[jira] Commented: (HIVE-1511) Hive plan serialization is slow
[ https://issues.apache.org/jira/browse/HIVE-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895352#action_12895352 ] Edward Capriolo commented on HIVE-1511: --- There may also be a clever way to remove duplicate expressions that evaluate to the same result, such as the repeated key=0 predicates. > Hive plan serialization is slow > --- > > Key: HIVE-1511 > URL: https://issues.apache.org/jira/browse/HIVE-1511 > Project: Hadoop Hive > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Ning Zhang > > As reported by Edward Capriolo: > For reference I did this as a test case > SELECT * FROM src where > key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 > OR key=0 OR key=0 OR key=0 OR > key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 > OR key=0 OR key=0 OR key=0 OR > ...(100 more of these) > No OOM but I gave up after the test case did not go anywhere for about > 2 minutes.
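One cheap realization of the idea above is to collapse textually identical children of an OR before building the operator tree; a real optimizer would compare canonicalized expression trees rather than strings, so this is only a sketch:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

public class DedupDemo {
    // Drop duplicate OR children while preserving the order of first
    // occurrence (LinkedHashSet keeps insertion order).
    static List<String> dedupOrChildren(List<String> children) {
        return new ArrayList<String>(new LinkedHashSet<String>(children));
    }

    public static void main(String[] args) {
        List<String> preds = Arrays.asList("key=0", "key=0", "key=0", "value='x'");
        System.out.println(dedupOrChildren(preds));  // [key=0, value='x']
    }
}
```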
[jira] Commented: (HIVE-1482) Not all jdbc calls are threadsafe.
[ https://issues.apache.org/jira/browse/HIVE-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895344#action_12895344 ] John Sichi commented on HIVE-1482: -- Yes, synchronized is the way to go. I think the synchronization has to be at the connection level. For example, HiveStatement also needs to make calls on the thrift interface. It's not just DatabaseMetaData. So we should add a new data member to HiveConnection: Object connectionMutex = new Object(); Then pass connectionMutex to constructors of sub-objects which need to participate in synchronization. They can then do synchronized(connectionMutex) { ... } around their critical sections. Creating a separate object for this purpose allows us to keep control over synchronization (e.g. so it doesn't get mixed up with user-level or thrift-level synchronization code later). We'll also need to be able to skip synchronization in the case of asynchronous cancel, but that's a separate task. We should also review to see if there is any client-side state which needs protection. > Not all jdbc calls are threadsafe. > -- > > Key: HIVE-1482 > URL: https://issues.apache.org/jira/browse/HIVE-1482 > Project: Hadoop Hive > Issue Type: Bug > Components: Drivers >Affects Versions: 0.7.0 >Reporter: Bennie Schut > Fix For: 0.7.0 > > > As per jdbc spec they should be threadsafe: > http://download.oracle.com/docs/cd/E17476_01/javase/1.3/docs/guide/jdbc/spec/jdbc-spec.frame9.html
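A runnable sketch of the proposed scheme; the class shapes and the fake thrift call are illustrative stand-ins, not the actual Hive JDBC source:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ConnectionMutexDemo {
    static class HiveConnection {
        final Object connectionMutex = new Object();
        private boolean busy = false;  // stand-in for the non-thread-safe thrift client

        void thriftCall() {
            // NOT synchronized itself: callers are expected to hold connectionMutex.
            if (busy) throw new IllegalStateException("concurrent thrift call");
            busy = true;
            Thread.yield();  // widen the race window if a caller forgets the lock
            busy = false;
        }

        HiveStatement createStatement() { return new HiveStatement(this, connectionMutex); }
    }

    static class HiveStatement {
        private final HiveConnection conn;
        private final Object connectionMutex;  // same lock DatabaseMetaData would share

        HiveStatement(HiveConnection conn, Object mutex) {
            this.conn = conn;
            this.connectionMutex = mutex;
        }

        void execute() {
            synchronized (connectionMutex) {  // critical section around thrift traffic
                conn.thriftCall();
            }
        }
    }

    // Hammer one connection from several threads; without the shared mutex the
    // busy flag would eventually trip. Returns true if no concurrent call occurred.
    static boolean stress() {
        final HiveConnection conn = new HiveConnection();
        final AtomicBoolean failed = new AtomicBoolean(false);
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    try {
                        for (int j = 0; j < 1000; j++) conn.createStatement().execute();
                    } catch (IllegalStateException e) {
                        failed.set(true);
                    }
                }
            });
            threads[i].start();
        }
        try {
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            return false;
        }
        return !failed.get();
    }

    public static void main(String[] args) {
        System.out.println(stress());
    }
}
```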
[jira] Commented: (HIVE-1511) Hive plan serialization is slow
[ https://issues.apache.org/jira/browse/HIVE-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895343#action_12895343 ] Ning Zhang commented on HIVE-1511: -- The issue seems to be that we serialize the plan by writing directly to an HDFS file. We should probably buffer it locally and then write it out to HDFS.
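The buffering idea can be sketched as below; the HashMap stands in for the real plan object and the final HDFS write is only indicated in a comment (Hive serialized plans with java.beans.XMLEncoder at the time):

```java
import java.beans.XMLEncoder;
import java.io.ByteArrayOutputStream;
import java.util.HashMap;

public class BufferedPlanWrite {
    // Serialize into a local in-memory buffer first, then hand the finished
    // bytes to the filesystem in one call, instead of letting XMLEncoder issue
    // many tiny writes directly against a remote HDFS stream.
    static byte[] serializeLocally(Object plan) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        XMLEncoder enc = new XMLEncoder(buf);
        enc.writeObject(plan);
        enc.close();
        return buf.toByteArray();  // then: one fs.create(planPath).write(bytes)
    }

    public static void main(String[] args) {
        HashMap<String, String> plan = new HashMap<String, String>();
        plan.put("query", "SELECT * FROM src WHERE key=0");
        System.out.println(serializeLocally(plan).length > 0);  // true
    }
}
```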
[jira] Created: (HIVE-1511) Hive plan serialization is slow
Hive plan serialization is slow --- Key: HIVE-1511 URL: https://issues.apache.org/jira/browse/HIVE-1511 Project: Hadoop Hive Issue Type: Improvement Affects Versions: 0.7.0 Reporter: Ning Zhang As reported by Edward Capriolo: For reference I did this as a test case SELECT * FROM src where key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR key=0 OR ...(100 more of these) No OOM but I gave up after the test case did not go anywhere for about 2 minutes.
[jira] Commented: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895327#action_12895327 ] Ning Zhang commented on HIVE-1509: -- Found the bug. I'll upload a new patch after running the tests.
[jira] Commented: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895322#action_12895322 ] Joydeep Sen Sarma commented on HIVE-1509: - Can you try bucketmapjoin2.q in clientpositive? It's failing for me.
[jira] Updated: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ning Zhang updated HIVE-1509: - Attachment: HIVE-1509.2.patch Good points, Joy. Uploading a new patch with these changes.
[jira] Commented: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895312#action_12895312 ] Joydeep Sen Sarma commented on HIVE-1509: - A couple of comments: - Use ProgressCounter.CREATED_FILES directly instead of valueOf("CREATED_FILES"). - Can we move the check on the total number of created files inside checkFatalErrors? We are duplicating some code (for example, we just fixed a problem where getCounters() can return null, and that case is already handled inside checkFatalErrors).
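The first point can be illustrated with a stand-in enum (this is not the real Operator.ProgressCounter): a direct reference is checked at compile time, while valueOf("...") defers any typo to a runtime IllegalArgumentException.

```java
public class CounterDemo {
    // Illustrative stand-in for the counter enum discussed above.
    enum ProgressCounter { CREATED_FILES }

    public static void main(String[] args) {
        ProgressCounter direct = ProgressCounter.CREATED_FILES;          // compile-time checked
        ProgressCounter viaName = ProgressCounter.valueOf("CREATED_FILES"); // checked only at runtime
        System.out.println(direct == viaName);  // true: both resolve to the same constant
    }
}
```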
[jira] Updated: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ning Zhang updated HIVE-1509: - Status: Patch Available (was: Open)
[jira] Updated: (HIVE-1509) Monitor the working set of the number of files
[ https://issues.apache.org/jira/browse/HIVE-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ning Zhang updated HIVE-1509: - Attachment: HIVE-1509.patch
SAXParseException on local mode?
I seem to get this error when hive decides to use local mode. If I disable it with "set hive.exec.mode.local.auto=false;" the problem goes away. I was running a large integration test, so I'm not exactly sure which calls reproduce this, but perhaps someone else knows what's going on? (Log lines from a concurrently running query were interleaved with the trace; they are separated out below.)

java.lang.RuntimeException: org.xml.sax.SAXParseException: Content is not allowed in trailing section.
 at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1168)
 at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1040)
 at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:980)
 at org.apache.hadoop.conf.Configuration.get(Configuration.java:382)
 at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:1662)
 at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:215)
 at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:93)
 at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:373)
 at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:800)
 at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730)
 at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:602)
 at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:1021)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.xml.sax.SAXParseException: Content is not allowed in trailing section.
 at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:249)
 at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
 at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:124)
 at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1092)
 ... 16 more

Interleaved log lines:
10/08/03 15:40:36 INFO parse.ParseDriver: Parsing command: show functions loglinecleanup
10/08/03 15:40:36 INFO parse.ParseDriver: Parse Completed
10/08/03 15:40:36 INFO ql.Driver: Semantic Analysis Completed
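For what it's worth, "Content is not allowed in trailing section" is the error Xerces raises when non-whitespace bytes follow the closing root tag of an XML document, e.g. a config file that got corrupted or appended to. A small reproduction (the element name is arbitrary):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class TrailingContentDemo {
    // Returns true when the XML fails to parse -- trailing content after the
    // root element triggers the same SAXParseException as in the trace above.
    static boolean failsToParse(String xml) {
        try {
            DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return false;
        } catch (Exception e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(failsToParse("<configuration></configuration>"));        // false
        System.out.println(failsToParse("<configuration></configuration>garbage")); // true
    }
}
```

So one thing worth checking is whether any of the *-site.xml or hive-*.xml files on the classpath has stray bytes after its final closing tag.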
[jira] Commented: (HIVE-1482) Not all jdbc calls are threadsafe.
[ https://issues.apache.org/jira/browse/HIVE-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12895161#action_12895161 ] Bennie Schut commented on HIVE-1482: Ok, I guess the first questions would be: Do we want to go for as much concurrent work as possible, or are we OK with some synchronization on the client calls? I would say synchronization is preferred over creating new connections in this case, right? And do we want to solve this at the "ThriftHive" level, since that's the part which doesn't allow multi-threaded calls, or more at the "HiveDatabaseMetaData" level, since that's the part which has the thread-safety requirement? Any ideas are welcome.