[jira] [Created] (HIVE-18047) Support dynamic service discovery for HiveMetaStore
Bing Li created HIVE-18047: -- Summary: Support dynamic service discovery for HiveMetaStore Key: HIVE-18047 URL: https://issues.apache.org/jira/browse/HIVE-18047 Project: Hive Issue Type: Bug Components: Metastore Reporter: Bing Li Assignee: Bing Li Similar to what Hive does for HiveServer2 (HIVE-7935), a HiveMetaStore client could dynamically resolve a HiveMetaStore service to connect to via ZooKeeper. *High Level Design:* Whether dynamic service discovery is supported or not can be configured by setting HIVE_METASTORE_SUPPORT_DYNAMIC_SERVICE_DISCOVERY. * This property should ONLY take effect when the HiveMetaStore service is in remote mode. * When an instance of HiveMetaStore comes up, it adds itself as a znode to ZooKeeper under a configurable namespace (HIVE_METASTORE_ZOOKEEPER_NAMESPACE, e.g. hivemetastore). * A thrift client specifies the ZooKeeper ensemble in its connection string, instead of pointing to a specific HiveMetaStore instance. The ZooKeeper ensemble will pick an instance of HiveMetaStore to connect to for the session. * When an instance is removed from ZooKeeper, its existing client sessions continue until completion. When the last client session completes, the instance shuts down. * All new client connections pick one of the available HiveMetaStore URIs from ZooKeeper. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
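By analogy with HiveServer2's service discovery (HIVE-7935), the resulting configuration might look like the hive-site.xml fragment below. This is an illustrative sketch only: the first two property names simply mirror the constants proposed above and are not from any committed patch; hive.zookeeper.quorum is the existing Hive property for the ZooKeeper ensemble.

```xml
<!-- Sketch only: property names for the first two entries are assumptions
     mirroring the constants in the design above. -->
<property>
  <name>hive.metastore.support.dynamic.service.discovery</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.zookeeper.namespace</name>
  <value>hivemetastore</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
```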
[jira] [Created] (HIVE-14156) Problem with Chinese characters as partition value when using MySQL
Bing Li created HIVE-14156: -- Summary: Problem with Chinese characters as partition value when using MySQL Key: HIVE-14156 URL: https://issues.apache.org/jira/browse/HIVE-14156 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 2.0.0, 1.2.1 Reporter: Bing Li Steps to reproduce: create table t1 (name string, age int) partitioned by (city string) row format delimited fields terminated by ','; load data local inpath '/tmp/chn-partition.txt' overwrite into table t1 partition (city='北京'); The content of /tmp/chn-partition.txt: 小明,20 小红,15 张三,36 李四,50 When checking the partition value in MySQL, it shows ?? instead of "北京". When running "drop table t1", it hangs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
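The "??" rendering typically means the MySQL columns holding partition values were created with a latin1 character set, so the UTF-8 bytes are lost on write. A commonly cited workaround (an assumption here, not part of this report) is to switch the relevant metastore columns to utf8; a sketch against the standard metastore schema:

```sql
-- Sketch only: table/column names follow the standard Hive metastore schema;
-- verify lengths and names against your metastore version before running.
ALTER TABLE PARTITIONS MODIFY PART_NAME VARCHAR(767) CHARACTER SET utf8;
ALTER TABLE PARTITION_KEY_VALS MODIFY PART_KEY_VAL VARCHAR(256) CHARACTER SET utf8;
```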
[jira] [Created] (HIVE-13850) File name conflict when multiple INSERT INTO queries run in parallel
Bing Li created HIVE-13850: -- Summary: File name conflict when multiple INSERT INTO queries run in parallel Key: HIVE-13850 URL: https://issues.apache.org/jira/browse/HIVE-13850 Project: Hive Issue Type: Bug Reporter: Bing Li Assignee: Bing Li We have an application which connects to HiveServer2 via JDBC and executes "INSERT INTO" queries against the same table. If many users run the application at the same time, some of the INSERTs can fail. In the hive log: org.apache.hadoop.hive.ql.metadata.HiveException: copyFiles: error while moving files!!! Cannot move hdfs://node:8020/apps/hive/warehouse/metadata.db/scalding_stats/.hive-staging_hive_2016-05-10_18-46-23_642_2056172497900766879-3321/-ext-1/00_0 to hdfs://node:8020/apps/hive/warehouse/metadata.db/scalding_stats/00_0_copy_9014 at org.apache.hadoop.hive.ql.metadata.Hive.copyFiles(Hive.java:2719) at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1645) In the hadoop log: WARN hdfs.StateChange (FSDirRenameOp.java:unprotectedRenameTo(174)) - DIR* FSDirectory.unprotectedRenameTo: failed to rename /apps/hive/warehouse/metadata.db/scalding_stats/.hive-staging_hive_2016-05-10_18-46-23_642_2056172497900766879-3321/-ext-1/00_0 to /apps/hive/warehouse/metadata.db/scalding_stats/00_0_copy_9014 because destination exists -- This message was sent by Atlassian JIRA (v6.3.4#6332)
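The _copy_N suffix in the failing rename suggests a check-then-rename race: each session scans the target directory for a free destination name, then moves its staging file, but nothing makes the two steps atomic, so concurrent sessions can pick the same suffix. A minimal stdlib sketch of that pattern (file names and the helper below are illustrative; Hive's actual logic lives in Hive.copyFiles):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CopySuffixRace {
    // Simplified stand-in for Hive's destination-name selection: if "base"
    // exists, try base_copy_1, base_copy_2, ... The existence check and the
    // later rename are separate operations, so two sessions that scan before
    // either renames will both pick the same "free" name.
    static Path nextFreeName(Path dir, String base) {
        Path candidate = dir.resolve(base);
        int copy = 1;
        while (Files.exists(candidate)) {
            candidate = dir.resolve(base + "_copy_" + copy++);
        }
        return candidate;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("warehouse");
        Files.createFile(dir.resolve("000000_0"));
        // Two "sessions" compute a target before either renames: same answer.
        Path a = nextFreeName(dir, "000000_0");
        Path b = nextFreeName(dir, "000000_0");
        System.out.println(a.getFileName());                 // 000000_0_copy_1
        System.out.println(a.getFileName().equals(b.getFileName())); // true
    }
}
```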
[jira] [Created] (HIVE-13384) Failed to create HiveMetaStoreClient object with proxy user when Kerberos enabled
Bing Li created HIVE-13384: -- Summary: Failed to create HiveMetaStoreClient object with proxy user when Kerberos enabled Key: HIVE-13384 URL: https://issues.apache.org/jira/browse/HIVE-13384 Project: Hive Issue Type: Improvement Components: Metastore Affects Versions: 1.2.1, 1.2.0 Reporter: Bing Li I wrote a Java client to talk to HiveMetaStore (Hive 1.2.0), but found that it can't create a HiveMetaStoreClient object successfully via a proxy user in a Kerberos environment. === 15/10/13 00:14:38 ERROR transport.TSaslTransport: SASL negotiation failure javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) == When debugging Hive, I found that the error came from the open() method in the HiveMetaStoreClient class. Around line 406, transport = UserGroupInformation.getCurrentUser().doAs(new PrivilegedExceptionAction() { //FAILS, because the current user doesn't have the credential But it works if I change the above line to transport = UserGroupInformation.getCurrentUser().getRealUser().doAs(new PrivilegedExceptionAction() { //PASSES I found that DRILL-3413 fixes this error on the Drill side as a workaround. But if I submit a MapReduce job via Pig/HCatalog, it runs into the same issue again when initializing the object via HCatalog. It would be better to fix this issue on the Hive side. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-11201) HCatalog is ignoring user specified avro schema in the table definition
Bing Li created HIVE-11201: -- Summary: HCatalog is ignoring user specified avro schema in the table definition Key: HIVE-11201 URL: https://issues.apache.org/jira/browse/HIVE-11201 Project: Hive Issue Type: Bug Components: HCatalog Affects Versions: 1.2.0 Reporter: Bing Li Assignee: Bing Li Priority: Critical HCatalog ignores the user-specified Avro schema in the table definition and instead generates its own schema from the Hive metastore. Generating its own schema leads to mismatched names; Avro field names, for example, are case sensitive. It also means an incorrect schema is written to the Avro file, which makes subsequent SELECTs fail on read. Furthermore, even if the user-specified schema does not allow null, data written through HCatalog gets a schema that does allow null. For example, the user specified the schema with all CAPITAL letters and the record name LINEITEM. That schema should be written as-is; instead HCatalog ignores it and generates its own Avro schema from the (lower-cased) Hive table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
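For context, the usual way to pin an exact Avro schema (including case and nullability) at table-definition time is the avro.schema.literal table property; the point of this report is that HCatalog writes should honor it. An illustrative DDL, with a made-up two-field schema standing in for the reporter's LINEITEM table:

```sql
-- Illustrative example: field names and types are invented, not taken from
-- the original report.
CREATE TABLE lineitem
  ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
  STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
  TBLPROPERTIES ('avro.schema.literal'='{
    "type": "record",
    "name": "LINEITEM",
    "fields": [
      {"name": "L_ORDERKEY", "type": "int"},
      {"name": "L_COMMENT",  "type": "string"}
    ]
  }');
```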
[jira] [Created] (HIVE-11020) support partial scan for analyze command - Avro
Bing Li created HIVE-11020: -- Summary: support partial scan for analyze command - Avro Key: HIVE-11020 URL: https://issues.apache.org/jira/browse/HIVE-11020 Project: Hive Issue Type: Improvement Reporter: Bing Li Assignee: Bing Li This is a follow-up on HIVE-3958. We already have two similar JIRAs: - support partial scan for analyze command - ORC https://issues.apache.org/jira/browse/HIVE-4177 - [Parquet] Support Analyze Table with partial scan https://issues.apache.org/jira/browse/HIVE-9491 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-11019) Can't create an Avro table with uniontype column correctly
Bing Li created HIVE-11019: -- Summary: Can't create an Avro table with uniontype column correctly Key: HIVE-11019 URL: https://issues.apache.org/jira/browse/HIVE-11019 Project: Hive Issue Type: Bug Affects Versions: 1.2.0 Reporter: Bing Li I tried the example in https://cwiki.apache.org/confluence/display/Hive/AvroSerDe and found that it can't create an Avro table with a uniontype column correctly: hive> create table avro_union(union1 uniontype)STORED AS AVRO; OK Time taken: 0.083 seconds hive> describe avro_union; OK union1 uniontype Time taken: 0.058 seconds, Fetched: 1 row(s) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
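Note that a uniontype column normally carries its member types in angle brackets, which appear to have been stripped from the snippet above by the mail archiver; an illustrative form of such a DDL would be:

```sql
-- Illustrative only: these member types are not necessarily the ones from
-- the wiki example quoted above.
CREATE TABLE avro_union (union1 UNIONTYPE<FLOAT, BOOLEAN, STRING>)
STORED AS AVRO;
```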
[jira] [Created] (HIVE-10982) Make the value of java.sql.Statement.setFetchSize customizable in the Hive JDBC driver
Bing Li created HIVE-10982: -- Summary: Make the value of java.sql.Statement.setFetchSize customizable in the Hive JDBC driver Key: HIVE-10982 URL: https://issues.apache.org/jira/browse/HIVE-10982 Project: Hive Issue Type: Improvement Components: JDBC Affects Versions: 1.2.0 Reporter: Bing Li Assignee: Bing Li Priority: Critical The current Hive JDBC driver hard-codes the value of setFetchSize to 50, which can be a performance bottleneck. Pentaho filed this issue as http://jira.pentaho.com/browse/PDI-11511, whose status is open. It has also been discussed in http://forums.pentaho.com/showthread.php?158381-Hive-JDBC-Query-too-slow-too-many-fetches-after-query-execution-Kettle-Xform and http://mail-archives.apache.org/mod_mbox/hive-user/201307.mbox/%3ccacq46vevgrfqg5rwxnr1psgyz7dcf07mvlo8mm2qit3anm1...@mail.gmail.com%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-10948) Slf4j warning in HiveCLI due to spark
Bing Li created HIVE-10948: -- Summary: Slf4j warning in HiveCLI due to spark Key: HIVE-10948 URL: https://issues.apache.org/jira/browse/HIVE-10948 Project: Hive Issue Type: Bug Components: CLI Affects Versions: 1.2.0 Reporter: Bing Li Assignee: Bing Li Priority: Minor The spark-assembly-1.3.1.jar is added to the Hive classpath: ./hive.distro: export SPARK_HOME=$sparkHome ./hive.distro: sparkAssemblyPath=`ls ${SPARK_HOME}/lib/spark-assembly-*.jar` ./hive.distro: CLASSPATH="${CLASSPATH}:${sparkAssemblyPath}" When launching HiveCLI, we see the following messages: SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/.../hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/.../spark/lib/spark-assembly-1.3.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] WARNING: Use "yarn jar" to launch YARN applications. SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/.../hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/.../spark/lib/spark-assembly-1.3.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
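The warning appears because the classpath ends up with two SLF4J bindings: hadoop's slf4j-log4j12 and the one bundled inside spark-assembly. One shape a fix could take (purely a sketch: the HIVE_ADD_SPARK_ASSEMBLY variable and the paths below are invented for illustration) is to make the launcher script append the Spark assembly only on explicit opt-in:

```shell
# Sketch of a guarded classpath append for the hive launcher script.
# HIVE_ADD_SPARK_ASSEMBLY and the example paths are hypothetical.
CLASSPATH="/opt/hadoop/lib/slf4j-log4j12-1.7.10.jar"
SPARK_HOME="/opt/spark"
HIVE_ADD_SPARK_ASSEMBLY="false"   # default: keep Spark's jar out of HiveCLI

if [ "$HIVE_ADD_SPARK_ASSEMBLY" = "true" ] && [ -d "$SPARK_HOME/lib" ]; then
  sparkAssemblyPath=$(ls "$SPARK_HOME"/lib/spark-assembly-*.jar 2>/dev/null)
  CLASSPATH="${CLASSPATH}:${sparkAssemblyPath}"
fi
echo "$CLASSPATH"
```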
[jira] [Created] (HIVE-10495) Hive index creation code throws NPE if index table is null
Bing Li created HIVE-10495: -- Summary: Hive index creation code throws NPE if index table is null Key: HIVE-10495 URL: https://issues.apache.org/jira/browse/HIVE-10495 Project: Hive Issue Type: Bug Affects Versions: 1.0.0 Reporter: Bing Li Assignee: Bing Li The stack trace would be: Caused by: java.lang.NullPointerException at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.add_index(HiveMetaStore.java:2870) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37) at java.lang.reflect.Method.invoke(Method.java:611) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102) at $Proxy9.add_index(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createIndex(HiveMetaStoreClient.java:962) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-9169) UT: set hive.support.concurrency to true for spark UTs
[ https://issues.apache.org/jira/browse/HIVE-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li reassigned HIVE-9169: - Assignee: Bing Li > UT: set hive.support.concurrency to true for spark UTs > -- > > Key: HIVE-9169 > URL: https://issues.apache.org/jira/browse/HIVE-9169 > Project: Hive > Issue Type: Sub-task > Components: Tests >Affects Versions: spark-branch >Reporter: Thomas Friedrich >Assignee: Bing Li >Priority: Minor > > The test cases > lock1 > lock2 > lock3 > lock4 > are failing because the flag hive.support.concurrency is set to false in the > hive-site.xml for the spark tests. > This value was set to true in trunk with HIVE-1293 when these test cases were > introduced to Hive. > After setting the value to true and generating the output files, the test > cases are successful. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-7292) Hive on Spark
[ https://issues.apache.org/jira/browse/HIVE-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li reassigned HIVE-7292: - Assignee: Bing Li (was: Xuefu Zhang) > Hive on Spark > - > > Key: HIVE-7292 > URL: https://issues.apache.org/jira/browse/HIVE-7292 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Xuefu Zhang >Assignee: Bing Li > Labels: Spark-M1, Spark-M2, Spark-M3, Spark-M4, Spark-M5 > Attachments: Hive-on-Spark.pdf > > > Spark as an open-source data analytics cluster computing framework has gained > significant momentum recently. Many Hive users already have Spark installed > as their computing backbone. To take advantages of Hive, they still need to > have either MapReduce or Tez on their cluster. This initiative will provide > user a new alternative so that those user can consolidate their backend. > Secondly, providing such an alternative further increases Hive's adoption as > it exposes Spark users to a viable, feature-rich de facto standard SQL tools > on Hadoop. > Finally, allowing Hive to run on Spark also has performance benefits. Hive > queries, especially those involving multiple reducer stages, will run faster, > thus improving user experience as Tez does. > This is an umbrella JIRA which will cover many coming subtask. Design doc > will be attached here shortly, and will be on the wiki as well. Feedback from > the community is greatly appreciated! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-6727) Table level stats for external tables are set incorrectly
[ https://issues.apache.org/jira/browse/HIVE-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-6727: -- Attachment: (was: HIVE-6727.1.patch) > Table level stats for external tables are set incorrectly > - > > Key: HIVE-6727 > URL: https://issues.apache.org/jira/browse/HIVE-6727 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 0.13.0, 0.13.1 >Reporter: Harish Butani >Assignee: Bing Li > Fix For: 0.14.0 > > Attachments: HIVE-6727.2.patch > > > if you do the following: > {code} > CREATE EXTERNAL TABLE anaylyze_external (a INT) LOCATION > 'data/files/ext_test'; > describe formatted anaylyze_external; > {code} > The table level stats are: > {noformat} > Table Parameters: > COLUMN_STATS_ACCURATE true > EXTERNALTRUE > numFiles0 > numRows 6 > rawDataSize 6 > totalSize 0 > {noformat} > numFiles and totalSize is always 0. > Issue is: > MetaStoreUtils:updateUnpartitionedTableStatsFast attempts to set table level > stats from FileStatus. But it doesn't account for External tables, it always > calls Warehouse.getFileStatusesForUnpartitionedTable -- This message was sent by Atlassian JIRA (v6.3.4#6332)
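The gist of the quoted issue is that numFiles and totalSize should be computed from the table's actual location, not only for paths under the Hive warehouse directory. A stdlib sketch of that computation (java.nio stands in for Hadoop's FileSystem.listStatus; the class and method names here are illustrative, not Hive's):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class QuickStats {
    // Returns {numFiles, totalSize} for the files directly under "location".
    // The point: derive stats from the table's own location, whether or not
    // it lives under the warehouse directory.
    static long[] numFilesAndTotalSize(Path location) throws IOException {
        long[] acc = {0, 0};
        try (Stream<Path> files = Files.list(location)) {
            files.filter(Files::isRegularFile).forEach(p -> {
                acc[0]++;
                try {
                    acc[1] += Files.size(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
        return acc;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("ext_test");
        Files.write(dir.resolve("part-0"), new byte[]{1, 2, 3});
        long[] stats = numFilesAndTotalSize(dir);
        System.out.println(stats[0] + " " + stats[1]);  // prints "1 3"
    }
}
```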
[jira] [Updated] (HIVE-6727) Table level stats for external tables are set incorrectly
[ https://issues.apache.org/jira/browse/HIVE-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-6727: -- Attachment: HIVE-6727.2.patch Fix the error in HIVE-6727.1.patch > Table level stats for external tables are set incorrectly > - > > Key: HIVE-6727 > URL: https://issues.apache.org/jira/browse/HIVE-6727 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 0.13.0, 0.13.1 >Reporter: Harish Butani >Assignee: Bing Li > Fix For: 0.14.0 > > Attachments: HIVE-6727.1.patch, HIVE-6727.2.patch > > > if you do the following: > {code} > CREATE EXTERNAL TABLE anaylyze_external (a INT) LOCATION > 'data/files/ext_test'; > describe formatted anaylyze_external; > {code} > The table level stats are: > {noformat} > Table Parameters: > COLUMN_STATS_ACCURATE true > EXTERNALTRUE > numFiles0 > numRows 6 > rawDataSize 6 > totalSize 0 > {noformat} > numFiles and totalSize is always 0. > Issue is: > MetaStoreUtils:updateUnpartitionedTableStatsFast attempts to set table level > stats from FileStatus. But it doesn't account for External tables, it always > calls Warehouse.getFileStatusesForUnpartitionedTable -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-6727) Table level stats for external tables are set incorrectly
[ https://issues.apache.org/jira/browse/HIVE-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-6727: -- Attachment: HIVE-6727.1.patch This patch is generated based on the latest trunk code > Table level stats for external tables are set incorrectly > - > > Key: HIVE-6727 > URL: https://issues.apache.org/jira/browse/HIVE-6727 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 0.13.0, 0.13.1 >Reporter: Harish Butani >Assignee: Bing Li > Fix For: 0.14.0 > > Attachments: HIVE-6727.1.patch > > > if you do the following: > {code} > CREATE EXTERNAL TABLE anaylyze_external (a INT) LOCATION > 'data/files/ext_test'; > describe formatted anaylyze_external; > {code} > The table level stats are: > {noformat} > Table Parameters: > COLUMN_STATS_ACCURATE true > EXTERNALTRUE > numFiles0 > numRows 6 > rawDataSize 6 > totalSize 0 > {noformat} > numFiles and totalSize is always 0. > Issue is: > MetaStoreUtils:updateUnpartitionedTableStatsFast attempts to set table level > stats from FileStatus. But it doesn't account for External tables, it always > calls Warehouse.getFileStatusesForUnpartitionedTable -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-6727) Table level stats for external tables are set incorrectly
[ https://issues.apache.org/jira/browse/HIVE-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-6727: -- Status: Patch Available (was: Open) The patch is generated based on the latest trunk code. > Table level stats for external tables are set incorrectly > - > > Key: HIVE-6727 > URL: https://issues.apache.org/jira/browse/HIVE-6727 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 0.13.1, 0.13.0 >Reporter: Harish Butani >Assignee: Bing Li > Fix For: 0.14.0 > > Attachments: HIVE-6727.1.patch > > > if you do the following: > {code} > CREATE EXTERNAL TABLE anaylyze_external (a INT) LOCATION > 'data/files/ext_test'; > describe formatted anaylyze_external; > {code} > The table level stats are: > {noformat} > Table Parameters: > COLUMN_STATS_ACCURATE true > EXTERNALTRUE > numFiles0 > numRows 6 > rawDataSize 6 > totalSize 0 > {noformat} > numFiles and totalSize is always 0. > Issue is: > MetaStoreUtils:updateUnpartitionedTableStatsFast attempts to set table level > stats from FileStatus. But it doesn't account for External tables, it always > calls Warehouse.getFileStatusesForUnpartitionedTable -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-6727) Table level stats for external tables are set incorrectly
[ https://issues.apache.org/jira/browse/HIVE-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-6727: -- Fix Version/s: 0.14.0 > Table level stats for external tables are set incorrectly > - > > Key: HIVE-6727 > URL: https://issues.apache.org/jira/browse/HIVE-6727 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 0.13.0, 0.13.1 >Reporter: Harish Butani >Assignee: Bing Li > Fix For: 0.14.0 > > Attachments: HIVE-6727.1.patch > > > if you do the following: > {code} > CREATE EXTERNAL TABLE anaylyze_external (a INT) LOCATION > 'data/files/ext_test'; > describe formatted anaylyze_external; > {code} > The table level stats are: > {noformat} > Table Parameters: > COLUMN_STATS_ACCURATE true > EXTERNALTRUE > numFiles0 > numRows 6 > rawDataSize 6 > totalSize 0 > {noformat} > numFiles and totalSize is always 0. > Issue is: > MetaStoreUtils:updateUnpartitionedTableStatsFast attempts to set table level > stats from FileStatus. But it doesn't account for External tables, it always > calls Warehouse.getFileStatusesForUnpartitionedTable -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-6727) Table level stats for external tables are set incorrectly
[ https://issues.apache.org/jira/browse/HIVE-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-6727: -- Affects Version/s: 0.13.0 0.13.1 > Table level stats for external tables are set incorrectly > - > > Key: HIVE-6727 > URL: https://issues.apache.org/jira/browse/HIVE-6727 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 0.13.0, 0.13.1 >Reporter: Harish Butani >Assignee: Bing Li > Fix For: 0.14.0 > > Attachments: HIVE-6727.1.patch > > > if you do the following: > {code} > CREATE EXTERNAL TABLE anaylyze_external (a INT) LOCATION > 'data/files/ext_test'; > describe formatted anaylyze_external; > {code} > The table level stats are: > {noformat} > Table Parameters: > COLUMN_STATS_ACCURATE true > EXTERNALTRUE > numFiles0 > numRows 6 > rawDataSize 6 > totalSize 0 > {noformat} > numFiles and totalSize is always 0. > Issue is: > MetaStoreUtils:updateUnpartitionedTableStatsFast attempts to set table level > stats from FileStatus. But it doesn't account for External tables, it always > calls Warehouse.getFileStatusesForUnpartitionedTable -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-6727) Table level stats for external tables are set incorrectly
[ https://issues.apache.org/jira/browse/HIVE-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-6727: -- Component/s: Metastore > Table level stats for external tables are set incorrectly > - > > Key: HIVE-6727 > URL: https://issues.apache.org/jira/browse/HIVE-6727 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 0.13.0, 0.13.1 >Reporter: Harish Butani >Assignee: Bing Li > Fix For: 0.14.0 > > Attachments: HIVE-6727.1.patch > > > if you do the following: > {code} > CREATE EXTERNAL TABLE anaylyze_external (a INT) LOCATION > 'data/files/ext_test'; > describe formatted anaylyze_external; > {code} > The table level stats are: > {noformat} > Table Parameters: > COLUMN_STATS_ACCURATE true > EXTERNALTRUE > numFiles0 > numRows 6 > rawDataSize 6 > totalSize 0 > {noformat} > numFiles and totalSize is always 0. > Issue is: > MetaStoreUtils:updateUnpartitionedTableStatsFast attempts to set table level > stats from FileStatus. But it doesn't account for External tables, it always > calls Warehouse.getFileStatusesForUnpartitionedTable -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-6727) Table level stats for external tables are set incorrectly
[ https://issues.apache.org/jira/browse/HIVE-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14176731#comment-14176731 ] Bing Li commented on HIVE-6727: --- This issue also happens when the table is managed but specified a location which is not in hive warehouse directory on hdfs. > Table level stats for external tables are set incorrectly > - > > Key: HIVE-6727 > URL: https://issues.apache.org/jira/browse/HIVE-6727 > Project: Hive > Issue Type: Bug >Reporter: Harish Butani >Assignee: Bing Li > > if you do the following: > {code} > CREATE EXTERNAL TABLE anaylyze_external (a INT) LOCATION > 'data/files/ext_test'; > describe formatted anaylyze_external; > {code} > The table level stats are: > {noformat} > Table Parameters: > COLUMN_STATS_ACCURATE true > EXTERNALTRUE > numFiles0 > numRows 6 > rawDataSize 6 > totalSize 0 > {noformat} > numFiles and totalSize is always 0. > Issue is: > MetaStoreUtils:updateUnpartitionedTableStatsFast attempts to set table level > stats from FileStatus. But it doesn't account for External tables, it always > calls Warehouse.getFileStatusesForUnpartitionedTable -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-6727) Table level stats for external tables are set incorrectly
[ https://issues.apache.org/jira/browse/HIVE-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li reassigned HIVE-6727: - Assignee: Bing Li > Table level stats for external tables are set incorrectly > - > > Key: HIVE-6727 > URL: https://issues.apache.org/jira/browse/HIVE-6727 > Project: Hive > Issue Type: Bug >Reporter: Harish Butani >Assignee: Bing Li > > if you do the following: > {code} > CREATE EXTERNAL TABLE anaylyze_external (a INT) LOCATION > 'data/files/ext_test'; > describe formatted anaylyze_external; > {code} > The table level stats are: > {noformat} > Table Parameters: > COLUMN_STATS_ACCURATE true > EXTERNALTRUE > numFiles0 > numRows 6 > rawDataSize 6 > totalSize 0 > {noformat} > numFiles and totalSize is always 0. > Issue is: > MetaStoreUtils:updateUnpartitionedTableStatsFast attempts to set table level > stats from FileStatus. But it doesn't account for External tables, it always > calls Warehouse.getFileStatusesForUnpartitionedTable -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-8213) TestHWISessionManager failed due to missing hadoop2 dependencies
[ https://issues.apache.org/jira/browse/HIVE-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li resolved HIVE-8213. --- Resolution: Duplicate This failure is fixed in HIVE-6880 > TestHWISessionManager failed due to missing hadoop2 dependencies > - > > Key: HIVE-8213 > URL: https://issues.apache.org/jira/browse/HIVE-8213 > Project: Hive > Issue Type: Test > Components: Testing Infrastructure >Affects Versions: 0.13.1 >Reporter: Bing Li >Assignee: Bing Li > Fix For: 0.14.0 > > > Error: > == > java.io.IOException: Cannot initialize Cluster. Please check your > configuration for mapreduce.framework.name and the correspond server > addresses. > at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120) > at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:82) > at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:75) > at org.apache.hadoop.mapred.JobClient.init(JobClient.java:470) > at org.apache.hadoop.mapred.JobClient.(JobClient.java:449) > at > org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:397) > at > org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:740) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) > at java.lang.reflect.Method.invoke(Method.java:619) > at org.apache.hadoop.util.RunJar.main(RunJar.java:212) > Job Submission failed with exception 'java.io.IOException(Cannot initialize > Cluster. Please check your configuration for mapreduce.framework.name and the > correspond server addresses.)' > java.io.IOException: Cannot initialize Cluster. Please check your > configuration for mapreduce.framework.name and the > correspond server > addresses.
> at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120) > at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:82) > at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:75) > at org.apache.hadoop.mapred.JobClient.init(JobClient.java:470) > at org.apache.hadoop.mapred.JobClient.(JobClient.java:449) > at > org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:397) > at > org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:740) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) > at java.lang.reflect.Method.invoke(Method.java:619) > at org.apache.hadoop.util.RunJar.main(RunJar.java:212) > Job Submission failed with exception 'java.io.IOException(Cannot initialize > Cluster. Please check your configuration for mapreduce.framework.name and the > correspond server addresses.)' -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-8213) TestHWISessionManager failed due to missing hadoop2 dependencies
Bing Li created HIVE-8213: - Summary: TestHWISessionManager failed due to missing hadoop2 dependencies Key: HIVE-8213 URL: https://issues.apache.org/jira/browse/HIVE-8213 Project: Hive Issue Type: Test Components: Testing Infrastructure Affects Versions: 0.13.1 Reporter: Bing Li Assignee: Bing Li Fix For: 0.14.0 Error: == java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses. at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120) at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:82) at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:75) at org.apache.hadoop.mapred.JobClient.init(JobClient.java:470) at org.apache.hadoop.mapred.JobClient.(JobClient.java:449) at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:397) at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:740) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) at java.lang.reflect.Method.invoke(Method.java:619) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)' java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120) at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:82) at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:75) at org.apache.hadoop.mapred.JobClient.init(JobClient.java:470) at org.apache.hadoop.mapred.JobClient.(JobClient.java:449) at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:397) at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:740) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) at java.lang.reflect.Method.invoke(Method.java:619) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) Job Submission failed with exception 'java.io.IOException(Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.)' -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8070) TestHWIServer failed due to wrong references to war and properties file
[ https://issues.apache.org/jira/browse/HIVE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-8070: -- Attachment: HIVE-8070.1.patch > TestHWIServer failed due to wrong references to war and properties file > --- > > Key: HIVE-8070 > URL: https://issues.apache.org/jira/browse/HIVE-8070 > Project: Hive > Issue Type: Test > Components: Tests >Affects Versions: 0.13.1 >Reporter: Bing Li >Assignee: Bing Li > Fix For: 0.14.0 > > Attachments: HIVE-8070.1.patch > > > In testServerInit() method of that test class, it's still using > build.properties to retrieve the version # for the war file name -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HIVE-8070) TestHWIServer failed due to wrong references to war and properties file
[ https://issues.apache.org/jira/browse/HIVE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HIVE-8070 started by Bing Li.

> TestHWIServer failed due to wrong references to war and properties file
[jira] [Updated] (HIVE-8070) TestHWIServer failed due to wrong references to war and properties file
[ https://issues.apache.org/jira/browse/HIVE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-8070:
--
Status: Patch Available (was: In Progress)

The patch is generated for trunk

> TestHWIServer failed due to wrong references to war and properties file
[jira] [Commented] (HIVE-8070) TestHWIServer failed due to wrong references to war and properties file
[ https://issues.apache.org/jira/browse/HIVE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14131159#comment-14131159 ]

Bing Li commented on HIVE-8070:
---
This JIRA is blocked by HIVE-7233

> TestHWIServer failed due to wrong references to war and properties file
[jira] [Created] (HIVE-8070) TestHWIServer failed due to wrong references to war and properties file
Bing Li created HIVE-8070:
--
Summary: TestHWIServer failed due to wrong references to war and properties file
Key: HIVE-8070
URL: https://issues.apache.org/jira/browse/HIVE-8070
Project: Hive
Issue Type: Test
Components: Tests
Affects Versions: 0.13.1
Reporter: Bing Li
Assignee: Bing Li
Fix For: 0.14.0

In testServerInit() method of that test class, it's still using build.properties to retrieve the version # for the war file name
[jira] [Updated] (HIVE-4118) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully qualified table name
[ https://issues.apache.org/jira/browse/HIVE-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-4118:
--
Status: Patch Available (was: Open)

HIVE-4118.2.patch is generated based on the latest trunk

> ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully
> qualified table name
> --------------------------------------------------------------------------------------------
>
> Key: HIVE-4118
> URL: https://issues.apache.org/jira/browse/HIVE-4118
> Project: Hive
> Issue Type: Bug
> Components: Statistics
> Affects Versions: 0.10.0
> Reporter: Lenni Kuff
> Assignee: Bing Li
> Fix For: 0.14.0
>
> Attachments: HIVE-4118.1.patch, HIVE-4118.2.patch
>
> Computing column stats fails when using fully qualified table name. Issuing a
> "USE db" and using only the table name succeeds.
> {code}
> hive -e "ANALYZE TABLE somedb.some_table COMPUTE STATISTICS FOR COLUMNS int_col"
> org.apache.hadoop.hive.ql.metadata.HiveException: NoSuchObjectException(message:Table somedb.some_table for which stats is gathered doesn't exist.)
>     at org.apache.hadoop.hive.ql.metadata.Hive.updateTableColumnStatistics(Hive.java:2201)
>     at org.apache.hadoop.hive.ql.exec.ColumnStatsTask.persistTableStats(ColumnStatsTask.java:325)
>     at org.apache.hadoop.hive.ql.exec.ColumnStatsTask.execute(ColumnStatsTask.java:336)
>     at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:138)
>     at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
>     at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1352)
>     at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1138)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:951)
>     at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
>     at $Proxy9.updateTableColumnStatistics(Unknown Source)
>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.update_table_column_statistics(HiveMetaStore.java:3171)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
>     at $Proxy10.update_table_column_statistics(Unknown Source)
>     at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.updateTableColumnStatistics(HiveMetaStoreClient.java:973)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:74)
>     at $Proxy11.updateTableColumnStatistics(Unknown Source)
>     at org.apache.hadoop.hive.ql.metadata.Hive.updateTableColumnStatistics(Hive.java:2198)
>     ... 18 more
> {code}
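As a quick illustration of the behavior described in the quoted report (somedb, some_table and int_col are the reporter's placeholder names): the fully qualified form fails, while switching databases first succeeds.

```sql
-- Reported failure: fully qualified table name
ANALYZE TABLE somedb.some_table COMPUTE STATISTICS FOR COLUMNS int_col;

-- Workaround noted in the description: select the database first
USE somedb;
ANALYZE TABLE some_table COMPUTE STATISTICS FOR COLUMNS int_col;
```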
[jira] [Updated] (HIVE-4118) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully qualified table name
[ https://issues.apache.org/jira/browse/HIVE-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-4118:
--
Status: Open (was: Patch Available)

> ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully
> qualified table name
[jira] [Updated] (HIVE-4118) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully qualified table name
[ https://issues.apache.org/jira/browse/HIVE-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-4118:
--
Attachment: HIVE-4118.2.patch

Re-created the patch against the latest trunk.

> ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully
> qualified table name
[jira] [Commented] (HIVE-4118) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully qualified table name
[ https://issues.apache.org/jira/browse/HIVE-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041737#comment-14041737 ]

Bing Li commented on HIVE-4118:
---
Re-assign this JIRA to myself.

> ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully
> qualified table name
[jira] [Assigned] (HIVE-4118) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully qualified table name
[ https://issues.apache.org/jira/browse/HIVE-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li reassigned HIVE-4118:
--
Assignee: Bing Li (was: Shreepadma Venugopalan)

> ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully
> qualified table name
[jira] [Updated] (HIVE-4118) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully qualified table name
[ https://issues.apache.org/jira/browse/HIVE-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-4118:
--
Assignee: Shreepadma Venugopalan (was: Bing Li)

> ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully
> qualified table name
[jira] [Assigned] (HIVE-4118) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully qualified table name
[ https://issues.apache.org/jira/browse/HIVE-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li reassigned HIVE-4118:
--
Assignee: Bing Li (was: Shreepadma Venugopalan)

> ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully
> qualified table name
[jira] [Commented] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
[ https://issues.apache.org/jira/browse/HIVE-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041657#comment-14041657 ]

Bing Li commented on HIVE-4577:
---
Hi, [~thejas]
Thank you for your comments.
I tried StrTokenizer, seems it only can handle part of scenarios, like
dfs -mkdir "hello world"   // StrTokenizer(cmd,splitDel,doubleQuo)
dfs -mkdir 'hello world'   // StrTokenizer(cmd,splitDel,singleQuo)
But can't handle the wrong input, like
dfs -mkdir "abd'db"abe' "  // " and ' are not matched
Let me know if I missed something. Thank you!

> hive CLI can't handle hadoop dfs command with space and quotes.
> ----------------------------------------------------------------
>
> Key: HIVE-4577
> URL: https://issues.apache.org/jira/browse/HIVE-4577
> Project: Hive
> Issue Type: Bug
> Components: CLI
> Affects Versions: 0.9.0, 0.10.0
> Reporter: Bing Li
> Assignee: Bing Li
> Fix For: 0.14.0
>
> Attachments: HIVE-4577.1.patch, HIVE-4577.2.patch, HIVE-4577.3.patch.txt
>
> As design, hive could support hadoop dfs command in hive shell, like
> hive> dfs -mkdir /user/biadmin/mydir;
> but has different behavior with hadoop if the path contains space and quotes
> hive> dfs -mkdir "hello";
> drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:40 /user/biadmin/"hello"
> hive> dfs -mkdir 'world';
> drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:43 /user/biadmin/'world'
> hive> dfs -mkdir "bei jing";
> drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:44 /user/biadmin/"bei
> drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:44 /user/biadmin/jing"
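The mismatched-quote case discussed in the comment above can be made concrete. The following is a hypothetical sketch only, not Hive's actual CLI code and not Commons Lang's StrTokenizer: a minimal quote-aware splitter that keeps quoted whitespace in one token and rejects unbalanced quotes instead of silently mis-splitting them.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the tokenization problem described in the
// comment: split a dfs-style command line on whitespace, honoring both
// single and double quotes, and fail loudly on unbalanced quotes.
class QuoteAwareSplitter {
    static List<String> split(String cmd) {
        List<String> tokens = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        char quote = 0;  // 0 = not inside a quoted region
        for (int i = 0; i < cmd.length(); i++) {
            char c = cmd.charAt(i);
            if (quote != 0) {
                if (c == quote) quote = 0;  // closing quote of the same kind
                else cur.append(c);         // keep everything inside, incl. spaces
            } else if (c == '"' || c == '\'') {
                quote = c;                  // opening quote
            } else if (Character.isWhitespace(c)) {
                if (cur.length() > 0) { tokens.add(cur.toString()); cur.setLength(0); }
            } else {
                cur.append(c);
            }
        }
        if (quote != 0)  // e.g. dfs -mkdir "abd'db"abe' " from the comment
            throw new IllegalArgumentException("Unbalanced quote in: " + cmd);
        if (cur.length() > 0) tokens.add(cur.toString());
        return tokens;
    }
}
```

With this approach, `dfs -mkdir "bei jing"` yields the three tokens `dfs`, `-mkdir`, `bei jing`, and the mismatched input from the comment raises an exception rather than producing wrong tokens.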
[jira] [Updated] (HIVE-4118) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully qualified table name
[ https://issues.apache.org/jira/browse/HIVE-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-4118:
--
Fix Version/s: 0.13.1
               0.14.0

> ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully
> qualified table name
[jira] [Commented] (HIVE-4118) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully qualified table name
[ https://issues.apache.org/jira/browse/HIVE-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14001403#comment-14001403 ]

Bing Li commented on HIVE-4118:
-------------------------------

Generated HIVE-4118.1.patch against the trunk
[jira] [Updated] (HIVE-4118) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully qualified table name
[ https://issues.apache.org/jira/browse/HIVE-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-4118:
--------------------------
    Status: Patch Available  (was: Reopened)
[jira] [Updated] (HIVE-4118) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully qualified table name
[ https://issues.apache.org/jira/browse/HIVE-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-4118:
--------------------------
    Attachment: HIVE-4118.1.patch
[jira] [Commented] (HIVE-4118) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully qualified table name
[ https://issues.apache.org/jira/browse/HIVE-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13993458#comment-13993458 ]

Bing Li commented on HIVE-4118:
-------------------------------

Hi [~shreepadma], I ran into this error in Hive 0.12.0. Do you plan to fix it?
[jira] [Updated] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one
[ https://issues.apache.org/jira/browse/HIVE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-6990:
--------------------------
    Attachment: HIVE-6990.3.patch

> Direct SQL fails when the explicit schema setting is different from the
> default one
> ---------------------------------------------------------------------------
>
>                 Key: HIVE-6990
>                 URL: https://issues.apache.org/jira/browse/HIVE-6990
>             Project: Hive
>          Issue Type: Bug
>          Components: Query Processor
>    Affects Versions: 0.12.0
>         Environment: hive + derby
>            Reporter: Bing Li
>            Assignee: Bing Li
>             Fix For: 0.14.0
>
>         Attachments: HIVE-6990.1.patch, HIVE-6990.2.patch, HIVE-6990.3.patch
>
>
> I got the following ERROR in hive.log
> 2014-04-23 17:30:23,331 ERROR metastore.ObjectStore (ObjectStore.java:handleDirectSqlError(1756)) - Direct SQL failed, falling back to ORM
> javax.jdo.JDODataStoreException: Error executing SQL query "select PARTITIONS.PART_ID from PARTITIONS inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID inner join DBS on TBLS.DB_ID = DBS.DB_ID inner join PARTITION_KEY_VALS as FILTER0 on FILTER0.PART_ID = PARTITIONS.PART_ID and FILTER0.INTEGER_IDX = 0 where TBLS.TBL_NAME = ? and DBS.NAME = ? and ((FILTER0.PART_KEY_VAL = ?))".
> 	at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
> 	at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:321)
> 	at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionsViaSqlFilterInternal(MetaStoreDirectSql.java:181)
> 	at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionsViaSqlFilter(MetaStoreDirectSql.java:98)
> 	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:1833)
> 	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:1806)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> 	at java.lang.reflect.Method.invoke(Method.java:619)
> 	at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:124)
> 	at com.sun.proxy.$Proxy11.getPartitionsByFilter(Unknown Source)
> 	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:3310)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> 	at java.lang.reflect.Method.invoke(Method.java:619)
> 	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
> 	at com.sun.proxy.$Proxy12.get_partitions_by_filter(Unknown Source)
> Reproduce steps:
> 1. set the following properties in hive-site.xml
> <property>
>   <name>javax.jdo.mapping.Schema</name>
>   <value>HIVE</value>
> </property>
> <property>
>   <name>javax.jdo.option.ConnectionUserName</name>
>   <value>user1</value>
> </property>
> 2. execute hive queries
> hive> create table mytbl ( key int, value string);
> hive> load data local inpath 'examples/files/kv1.txt' overwrite into table mytbl;
> hive> select * from mytbl;
> hive> create view myview partitioned on (value) as select key, value from mytbl where key=98;
> hive> alter view myview add partition (value='val_98') partition (value='val_xyz');
> hive> alter view myview drop partition (value='val_xyz');

--
This message was sent by Atlassian JIRA
(v6.2#6252)
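[Editorial note] The failing direct SQL references unqualified metastore table names (PARTITIONS, TBLS, DBS), which Derby resolves against the connection user's default schema. With javax.jdo.mapping.Schema set to a different schema, those tables live elsewhere. As a hedged illustration only (not the actual patch; the HIVE schema name is taken from the reproduce steps above), a schema-qualified form of the query would look like:

{code}
-- Hypothetical schema-qualified rewrite of the failing metastore query.
select HIVE.PARTITIONS.PART_ID
from HIVE.PARTITIONS
  inner join HIVE.TBLS on HIVE.PARTITIONS.TBL_ID = HIVE.TBLS.TBL_ID
  inner join HIVE.DBS on HIVE.TBLS.DB_ID = HIVE.DBS.DB_ID
  inner join HIVE.PARTITION_KEY_VALS as FILTER0
    on FILTER0.PART_ID = HIVE.PARTITIONS.PART_ID and FILTER0.INTEGER_IDX = 0
where HIVE.TBLS.TBL_NAME = ? and HIVE.DBS.NAME = ? and ((FILTER0.PART_KEY_VAL = ?))
{code}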
[jira] [Updated] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one
[ https://issues.apache.org/jira/browse/HIVE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-6990:
--------------------------
    Attachment: HIVE-6990.2.patch

patch based on the latest trunk
[jira] [Commented] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one
[ https://issues.apache.org/jira/browse/HIVE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990480#comment-13990480 ]

Bing Li commented on HIVE-6990:
-------------------------------

Hi [~sershe],
The failures in build #88 are not related to this patch.
If we don't set javax.jdo.mapping.Schema in hive-site.xml, the value of the schema is empty, and I can't get the table schema info from the database either. Do you know of a good way to get this info? Thank you!
[jira] [Updated] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one
[ https://issues.apache.org/jira/browse/HIVE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-6990:
--------------------------
    Attachment: HIVE-6990.1.patch

The patch is generated based on trunk
[jira] [Updated] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one
[ https://issues.apache.org/jira/browse/HIVE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-6990:
--------------------------
    Status: Patch Available  (was: In Progress)
[jira] [Work started] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one
[ https://issues.apache.org/jira/browse/HIVE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-6990 started by Bing Li.
> Direct SQL fails when the explicit schema setting is different from the default one
> ---
>
> Key: HIVE-6990
> URL: https://issues.apache.org/jira/browse/HIVE-6990
> Project: Hive
> Issue Type: Bug
> Components: Query Processor
> Affects Versions: 0.12.0
> Environment: hive + derby
> Reporter: Bing Li
> Assignee: Bing Li
> Fix For: 0.14.0
>
> I got the following ERROR in hive.log:
> 2014-04-23 17:30:23,331 ERROR metastore.ObjectStore (ObjectStore.java:handleDirectSqlError(1756)) - Direct SQL failed, falling back to ORM
> javax.jdo.JDODataStoreException: Error executing SQL query "select PARTITIONS.PART_ID from PARTITIONS inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID inner join DBS on TBLS.DB_ID = DBS.DB_ID inner join PARTITION_KEY_VALS as FILTER0 on FILTER0.PART_ID = PARTITIONS.PART_ID and FILTER0.INTEGER_IDX = 0 where TBLS.TBL_NAME = ? and DBS.NAME = ? and ((FILTER0.PART_KEY_VAL = ?))".
> at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
> at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:321)
> at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionsViaSqlFilterInternal(MetaStoreDirectSql.java:181)
> at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionsViaSqlFilter(MetaStoreDirectSql.java:98)
> at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:1833)
> at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:1806)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:619)
> at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:124)
> at com.sun.proxy.$Proxy11.getPartitionsByFilter(Unknown Source)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:3310)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:619)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
> at com.sun.proxy.$Proxy12.get_partitions_by_filter(Unknown Source)
>
> Reproduce steps:
> 1. set the following properties in hive-site.xml:
> <property>
>   <name>javax.jdo.mapping.Schema</name>
>   <value>HIVE</value>
> </property>
> <property>
>   <name>javax.jdo.option.ConnectionUserName</name>
>   <value>user1</value>
> </property>
> 2. execute hive queries:
> hive> create table mytbl (key int, value string);
> hive> load data local inpath 'examples/files/kv1.txt' overwrite into table mytbl;
> hive> select * from mytbl;
> hive> create view myview partitioned on (value) as select key, value from mytbl where key=98;
> hive> alter view myview add partition (value='val_98') partition (value='val_xyz');
> hive> alter view myview drop partition (value='val_xyz');
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one
[ https://issues.apache.org/jira/browse/HIVE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985150#comment-13985150 ] Bing Li commented on HIVE-6990:
---
The error message is similar to the one in HIVE-5128.
> Direct SQL fails when the explicit schema setting is different from the default one
> ---
>
> Key: HIVE-6990
> URL: https://issues.apache.org/jira/browse/HIVE-6990
> Project: Hive
> Issue Type: Bug
> Components: Query Processor
> Affects Versions: 0.12.0
> Environment: hive + derby
> Reporter: Bing Li
> Assignee: Bing Li
> Fix For: 0.14.0
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-6990) Direct SQL fails when the explicit schema setting is different from the default one
Bing Li created HIVE-6990:
-
Summary: Direct SQL fails when the explicit schema setting is different from the default one
Key: HIVE-6990
URL: https://issues.apache.org/jira/browse/HIVE-6990
Project: Hive
Issue Type: Bug
Components: Query Processor
Affects Versions: 0.12.0
Environment: hive + derby
Reporter: Bing Li
Assignee: Bing Li
Fix For: 0.14.0
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-3574) Allow Hive to Submit MapReduce jobs via the MapReduce API (instead of using Hadoop BIN)
[ https://issues.apache.org/jira/browse/HIVE-3574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-3574:
--
Assignee: (was: Bing Li)
> Allow Hive to Submit MapReduce jobs via the MapReduce API (instead of using Hadoop BIN)
> ---
>
> Key: HIVE-3574
> URL: https://issues.apache.org/jira/browse/HIVE-3574
> Project: Hive
> Issue Type: Improvement
> Components: Query Processor, SQL
> Affects Versions: 0.3.0, 0.4.0, 0.4.1, 0.5.0, 0.6.0, 0.7.0, 0.7.1, 0.8.0, 0.8.1, 0.9.0, 0.9.1, 0.10.0
> Environment: All environments would be affected by this
> Reporter: Jeremy A. Lucas
> Priority: Minor
> Labels: feature, test
>
> The current behavior of the MapRedTask is to start a process that invokes the "hadoop jar" command, passing each additional jobconf property as an argument to this Hadoop CLI.
> Having Hive submit generated jobs to an M/R cluster via the MapReduce API would allow for potentially greater compatibility across platforms, in addition to allowing these jobs to be run easily against pseudo-clusters in tests (think MiniMRCluster).
> This kind of change could involve something as simple as using a Hadoop Configuration object with a generic ToolRunner or something similar to run jobs.
> Specifically, this kind of change would most likely occur in the execute() method of org.apache.hadoop.hive.ql.exec.MapRedTask.
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-3685) TestCliDriver (script_pipe.q) failed with IBM JDK
[ https://issues.apache.org/jira/browse/HIVE-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-3685:
--
Fix Version/s: 0.14.0
> TestCliDriver (script_pipe.q) failed with IBM JDK
> -
>
> Key: HIVE-3685
> URL: https://issues.apache.org/jira/browse/HIVE-3685
> Project: Hive
> Issue Type: Bug
> Components: Query Processor
> Affects Versions: 0.7.1, 0.8.0, 0.9.0, 0.11.0
> Environment: ant-1.8.2
> IBM JDK 1.6
> Reporter: Bing Li
> Assignee: Bing Li
> Fix For: 0.14.0
>
> Attachments: HIVE-3685.1.patch-trunk.txt, HIVE_3685.patch
>
> 1 failed: TestCliDriver (script_pipe.q)
> [junit] Begin query: script_pipe.q
> [junit] java.io.IOException: No such file or directory
> [junit] at java.io.FileOutputStream.writeBytes(Native Method)
> [junit] at java.io.FileOutputStream.write(FileOutputStream.java:293)
> [junit] at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:76)
> [junit] at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:134)
> [junit] at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:135)
> [junit] at java.io.DataOutputStream.flush(DataOutputStream.java:117)
> [junit] at org.apache.hadoop.hive.ql.exec.TextRecordWriter.close(TextRecordWriter.java:48)
> [junit] at org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:365)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303)
> [junit] at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:473)
> [junit] at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411)
> [junit] at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216)
> [junit] org.apache.hadoop.hive.ql.metadata.HiveException: Hit error while closing ..
> [junit] at org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:452)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303)
> [junit] at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:473)
> [junit] at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411)
> [junit] at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216)
> [junit] org.apache.hadoop.hive.ql.metadata.HiveException: Hit error while closing ..
> [junit] at org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:452)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303)
> [junit] at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:473)
> [junit] at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411)
> [junit] at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216)
> [junit] org.apache.hadoop.hive.ql.metadata.HiveException: Hit error while closing ..
> [junit] at org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:452)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303)
> [junit] at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:473)
> [junit] at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411)
> [junit] at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216)
> [junit] Ended Job = job_local_0001 with errors
> [junit] Error during job, obtaining debugging information...
> [junit] Exception: Client Execution failed with error code = 9
> [junit] See build/ql/tmp/hive.lo
[jira] [Updated] (HIVE-6820) HiveServer(2) ignores HIVE_OPTS
[ https://issues.apache.org/jira/browse/HIVE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-6820:
--
Fix Version/s: 0.14.0
> HiveServer(2) ignores HIVE_OPTS
> ---
>
> Key: HIVE-6820
> URL: https://issues.apache.org/jira/browse/HIVE-6820
> Project: Hive
> Issue Type: Bug
> Components: HiveServer2
> Affects Versions: 0.12.0
> Reporter: Richard Ding
> Assignee: Bing Li
> Priority: Minor
> Fix For: 0.14.0
>
> Attachments: HIVE-6820.1.patch
>
> In hiveserver2.sh:
> {code}
> exec $HADOOP jar $JAR $CLASS "$@"
> {code}
> While cli.sh has:
> {code}
> exec $HADOOP jar ${HIVE_LIB}/hive-cli-*.jar $CLASS $HIVE_OPTS "$@"
> {code}
> Hence some hive commands that run properly in the Hive shell fail in HiveServer.
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
[ https://issues.apache.org/jira/browse/HIVE-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980845#comment-13980845 ] Bing Li commented on HIVE-4577:
---
Hi, Thejas. I noticed this fix hasn't been included in the 0.12 release. I updated the fix for 0.14. Thank you.
> hive CLI can't handle hadoop dfs command with space and quotes.
>
> Key: HIVE-4577
> URL: https://issues.apache.org/jira/browse/HIVE-4577
> Project: Hive
> Issue Type: Bug
> Components: CLI
> Affects Versions: 0.9.0, 0.10.0
> Reporter: Bing Li
> Assignee: Bing Li
> Fix For: 0.14.0
>
> Attachments: HIVE-4577.1.patch, HIVE-4577.2.patch, HIVE-4577.3.patch.txt
>
> As designed, hive supports hadoop dfs commands in the hive shell, like
> hive> dfs -mkdir /user/biadmin/mydir;
> but it behaves differently from hadoop if the path contains spaces and quotes:
> hive> dfs -mkdir "hello";
> drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:40 /user/biadmin/"hello"
> hive> dfs -mkdir 'world';
> drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:43 /user/biadmin/'world'
> hive> dfs -mkdir "bei jing";
> drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:44 /user/biadmin/"bei
> drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:44 /user/biadmin/jing"
-- This message was sent by Atlassian JIRA (v6.2#6252)
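The split directory names in the listing above suggest the CLI tokenizes the dfs command on whitespace before handing it to Hadoop, leaving the quote characters in the path tokens. The difference can be sketched in plain shell; the helper names below are hypothetical, not taken from the Hive source.

```shell
# naive_split: plain whitespace word-splitting, as the buggy CLI appears to do.
# Quote characters stay literal and "bei jing" becomes two tokens.
naive_split() {
  set -- $1                      # unquoted expansion: split on IFS, quotes kept
  printf '%s\n' "$@"
}

# shell_split: shell-style parsing, where quotes delimit a single argument
# and are then removed.
shell_split() {
  eval "set -- $1"               # shell parsing: "bei jing" stays one token
  printf '%s\n' "$@"
}

naive_split '-mkdir "bei jing"'  # 3 tokens: -mkdir / "bei / jing"
shell_split '-mkdir "bei jing"'  # 2 tokens: -mkdir / bei jing
```

The two extra directories in the report correspond to the naive split producing `"bei` and `jing"` as separate mkdir arguments.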
[jira] [Updated] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
[ https://issues.apache.org/jira/browse/HIVE-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4577:
--
Fix Version/s: 0.12.1
> hive CLI can't handle hadoop dfs command with space and quotes.
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
[ https://issues.apache.org/jira/browse/HIVE-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4577:
--
Fix Version/s: (was: 0.12.1) 0.14.0
> hive CLI can't handle hadoop dfs command with space and quotes.
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6820) HiveServer(2) ignores HIVE_OPTS
[ https://issues.apache.org/jira/browse/HIVE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-6820:
--
Status: Patch Available (was: In Progress)
The patch is created based on the trunk branch.
> HiveServer(2) ignores HIVE_OPTS
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6820) HiveServer(2) ignores HIVE_OPTS
[ https://issues.apache.org/jira/browse/HIVE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-6820:
--
Attachment: HIVE-6820.1.patch
Append $HIVE_OPTS to hiveserver2.sh and hiveserver.sh
> HiveServer(2) ignores HIVE_OPTS
-- This message was sent by Atlassian JIRA (v6.2#6252)
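The effect of appending $HIVE_OPTS to the launch line can be simulated without Hadoop. The jar and class names below are placeholders standing in for the real launch commands, and count_args stands in for the JVM receiving its argument list.

```shell
# A pair of -hiveconf options, as a user might export before starting a server.
HIVE_OPTS="-hiveconf hive.cli.print.header=true -hiveconf mapred.job.queue.name=q1"

# Stand-in for the launched process: report how many arguments actually arrive.
count_args() { echo "$#"; }

# hiveserver2.sh style before the patch: HIVE_OPTS never reaches the JVM.
count_args jar hive-service.jar HiveServer2        # prints 3

# cli.sh style (and hiveserver2.sh after the patch): the unquoted $HIVE_OPTS
# is word-split into individual -hiveconf arguments.
count_args jar hive-cli.jar CliDriver $HIVE_OPTS   # prints 7
```

This is why settings passed through HIVE_OPTS worked in the Hive shell but were silently ignored by HiveServer(2).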
[jira] [Work started] (HIVE-6820) HiveServer(2) ignores HIVE_OPTS
[ https://issues.apache.org/jira/browse/HIVE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-6820 started by Bing Li.
> HiveServer(2) ignores HIVE_OPTS
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-6865) Failed to load data into Hive from Pig using HCatStorer()
Bing Li created HIVE-6865:
-
Summary: Failed to load data into Hive from Pig using HCatStorer()
Key: HIVE-6865
URL: https://issues.apache.org/jira/browse/HIVE-6865
Project: Hive
Issue Type: Bug
Components: HCatalog
Affects Versions: 0.12.0
Reporter: Bing Li
Assignee: Bing Li
Reproduce steps:
1. create a hive table
hive> create table t1 (c1 int, c2 int, c3 int);
2. start pig shell
grunt> register $HIVE_HOME/lib/*.jar
grunt> register $HIVE_HOME/hcatalog/share/hcatalog/*.jar
grunt> A = load 'pig.txt' as (c1:int, c2:int, c3:int);
grunt> store A into 't1' using org.apache.hive.hcatalog.HCatStorer();
Error Message:
ERROR [main] org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to recreate exception from backend error: org.apache.hcatalog.common.HCatException : 2004 : HCatOutputFormat not initialized, setOutput has to be called
at org.apache.hcatalog.mapreduce.HCatBaseOutputFormat.getJobInfo(HCatBaseOutputFormat.java:111)
at org.apache.hcatalog.mapreduce.HCatBaseOutputFormat.getJobInfo(HCatBaseOutputFormat.java:97)
at org.apache.hcatalog.mapreduce.HCatBaseOutputFormat.getOutputFormat(HCatBaseOutputFormat.java:85)
at org.apache.hcatalog.mapreduce.HCatBaseOutputFormat.checkOutputSpecs(HCatBaseOutputFormat.java:75)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.checkOutputSpecsHelper(PigOutputFormat.java:207)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.checkOutputSpecs(PigOutputFormat.java:187)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:1000)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:963)
at java.security.AccessController.doPrivileged(AccessController.java:310)
at javax.security.auth.Subject.doAs(Subject.java:573)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1502)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:963)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:616)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:336)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
at java.lang.Thread.run(Thread.java:738)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Assigned] (HIVE-6820) HiveServer(2) ignores HIVE_OPTS
[ https://issues.apache.org/jira/browse/HIVE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li reassigned HIVE-6820:
-
Assignee: Bing Li
> HiveServer(2) ignores HIVE_OPTS
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-5124) group by without map aggregation lead to mapreduce exception
[ https://issues.apache.org/jira/browse/HIVE-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-5124:
--
Assignee: (was: Bing Li)
> group by without map aggregation lead to mapreduce exception
>
> Key: HIVE-5124
> URL: https://issues.apache.org/jira/browse/HIVE-5124
> Project: Hive
> Issue Type: Bug
> Components: Query Processor
> Affects Versions: 0.11.0
> Reporter: cyril liao
>
> On my environment, the same query produces different results depending on whether hive.map.aggr is set to true or false.
> With hive.map.aggr=false, the tasktracker reports the following exception:
> java.lang.RuntimeException: Error in configuring object
> at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
> at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
> at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
> at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:485)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
> at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
> at org.apache.hadoop.mapred.Child.main(Child.java:249)
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
> ... 9 more
> Caused by: java.lang.RuntimeException: Reduce operator initialization failed
> at org.apache.hadoop.hive.ql.exec.ExecReducer.configure(ExecReducer.java:160)
> ... 14 more
> Caused by: java.lang.RuntimeException: cannot find field value from [0:_col0, 1:_col1, 2:_col2, 3:_col3]
> at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:366)
> at org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.getStructFieldRef(StandardStructObjectInspector.java:143)
> at org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.initialize(ExprNodeColumnEvaluator.java:82)
> at org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:299)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:451)
> at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:407)
> at org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:62)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:451)
> at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:407)
> at org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:438)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
> at org.apache.hadoop.hive.ql.exec.ExecReducer.configure(ExecReducer.java:153)
-- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-5132) Can't access to hwi due to "No Java compiler available"
[ https://issues.apache.org/jira/browse/HIVE-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-5132:
--
Status: Patch Available (was: Open)
The patch is generated against the latest trunk.
> Can't access to hwi due to "No Java compiler available"
> ---
>
> Key: HIVE-5132
> URL: https://issues.apache.org/jira/browse/HIVE-5132
> Project: Hive
> Issue Type: Bug
> Affects Versions: 0.11.0, 0.10.0
> Environment: JDK1.6, hadoop 2.0.4-alpha
> Reporter: Bing Li
> Assignee: Bing Li
> Priority: Critical
> Attachments: HIVE-5132-01.patch
>
> I want to use hwi to submit hive queries, but after starting hwi successfully, I can't open its web page.
> I noticed that someone also met the same issue in hive-0.10.
> Reproduce steps:
> --
> 1. start hwi
> bin/hive --config $HIVE_CONF_DIR --service hwi
> 2. access http://<host>:<port>/hwi via a browser
> got the following error message:
> HTTP ERROR 500
> Problem accessing /hwi/. Reason:
> No Java compiler available
> Caused by:
> java.lang.IllegalStateException: No Java compiler available
> at org.apache.jasper.JspCompilationContext.createCompiler(JspCompilationContext.java:225)
> at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:560)
> at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:299)
> at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:315)
> at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:265)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
> at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
> at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:327)
> at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:126)
> at org.mortbay.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:503)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
> at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
> at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> at org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49)
> at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> at org.mortbay.jetty.Server.handle(Server.java:326)
> at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
> at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-5132) Can't access to hwi due to "No Java compiler available"
[ https://issues.apache.org/jira/browse/HIVE-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-5132:
--------------------------
    Attachment: HIVE-5132-01.patch

Add ant.jar and ant-launcher.jar as runtime dependencies of Hive.
[jira] [Commented] (HIVE-5132) Can't access to hwi due to "No Java compiler available"
[ https://issues.apache.org/jira/browse/HIVE-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13749963#comment-13749963 ]

Bing Li commented on HIVE-5132:
-------------------------------
The root cause of this failure is that ANT_LIB is not set for the HWI server. I can work around it by copying the following two Ant jars into $HIVE_HOME/lib:
- ant-launcher.jar
- ant.jar

I think we can add Ant as a runtime dependency of Hive.
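The workaround above can be sketched as a small shell step. The ANT_HOME and HIVE_HOME defaults here are assumptions, not values from the issue; adjust them for your installation:

```shell
#!/bin/sh
# Sketch: stage Ant's jars onto the HWI classpath so Jasper can find a
# Java compiler when it compiles JSPs. The default paths are assumptions.
ANT_HOME="${ANT_HOME:-/usr/share/ant}"
HIVE_HOME="${HIVE_HOME:-/usr/lib/hive}"
JARS="ant.jar ant-launcher.jar"

for jar in $JARS; do
  src="$ANT_HOME/lib/$jar"
  if [ -f "$src" ] && [ -d "$HIVE_HOME/lib" ]; then
    cp "$src" "$HIVE_HOME/lib/"
    echo "copied $jar into $HIVE_HOME/lib"
  else
    echo "warning: $src not found; copy $jar into $HIVE_HOME/lib manually" >&2
  fi
done
```

Restart HWI after copying so the new jars are picked up.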
[jira] [Updated] (HIVE-5132) Can't access to hwi due to "No Java compiler available"
[ https://issues.apache.org/jira/browse/HIVE-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-5132:
--------------------------
    Assignee: Bing Li
[jira] [Commented] (HIVE-5132) Can't access to hwi due to "No Java compiler available"
[ https://issues.apache.org/jira/browse/HIVE-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745798#comment-13745798 ]

Bing Li commented on HIVE-5132:
-------------------------------
I already set the following properties in hive-site.xml:
- hive.hwi.listen.host
- hive.hwi.listen.port
- hive.hwi.war.file

I also copied two Jasper jars into hive/lib:
- jasper-compiler-5.5.23.jar
- jasper-runtime-5.5.23.jar

Neither step fixed this issue.
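For reference, the hive-site.xml entries mentioned above look like this. The host, port, and war-file values are illustrative, not values from the issue:

```xml
<!-- Example HWI settings in hive-site.xml; values are illustrative. -->
<property>
  <name>hive.hwi.listen.host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>hive.hwi.listen.port</name>
  <value>9999</value>
</property>
<property>
  <name>hive.hwi.war.file</name>
  <value>lib/hive-hwi-0.11.0.war</value>
</property>
```

Note that, as the later comments show, these settings alone do not resolve the "No Java compiler available" error; the Ant jars are also required on the classpath.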
[jira] [Updated] (HIVE-5132) Can't access to hwi due to
[ https://issues.apache.org/jira/browse/HIVE-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-5132:
--------------------------
    Summary: Can't access to hwi due to  (was: Can't access to hwi)
[jira] [Updated] (HIVE-5132) Can't access to hwi due to "No Java compiler available"
[ https://issues.apache.org/jira/browse/HIVE-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li updated HIVE-5132:
--------------------------
    Summary: Can't access to hwi due to "No Java compiler available"  (was: Can't access to hwi due to)
[jira] [Created] (HIVE-5132) Can't access to hwi
Bing Li created HIVE-5132:
--------------------------
             Summary: Can't access to hwi
                 Key: HIVE-5132
                 URL: https://issues.apache.org/jira/browse/HIVE-5132
             Project: Hive
          Issue Type: Bug
    Affects Versions: 0.11.0, 0.10.0
         Environment: JDK 1.6, Hadoop 2.0.4-alpha
            Reporter: Bing Li
            Priority: Critical
[jira] [Assigned] (HIVE-5124) group by without map aggregation lead to mapreduce exception
[ https://issues.apache.org/jira/browse/HIVE-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bing Li reassigned HIVE-5124:
-----------------------------
    Assignee: Bing Li

> group by without map aggregation lead to mapreduce exception
> ------------------------------------------------------------
>
>                 Key: HIVE-5124
>                 URL: https://issues.apache.org/jira/browse/HIVE-5124
>             Project: Hive
>          Issue Type: Bug
>          Components: Query Processor
>    Affects Versions: 0.11.0
>            Reporter: cyril liao
>            Assignee: Bing Li
>
> In my environment, the same query produces different results depending on whether hive.map.aggr is set to true or false.
> With hive.map.aggr=false, the tasktracker reports the following exception:
>
> java.lang.RuntimeException: Error in configuring object
>     at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
>     at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
>     at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
>     at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:485)
>     at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
>     at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
>     at org.apache.hadoop.mapred.Child.main(Child.java:249)
> Caused by: java.lang.reflect.InvocationTargetException
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
>     ... 9 more
> Caused by: java.lang.RuntimeException: Reduce operator initialization failed
>     at org.apache.hadoop.hive.ql.exec.ExecReducer.configure(ExecReducer.java:160)
>     ... 14 more
> Caused by: java.lang.RuntimeException: cannot find field value from [0:_col0, 1:_col1, 2:_col2, 3:_col3]
>     at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:366)
>     at org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.getStructFieldRef(StandardStructObjectInspector.java:143)
>     at org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.initialize(ExprNodeColumnEvaluator.java:82)
>     at org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:299)
>     at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
>     at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:451)
>     at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:407)
>     at org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:62)
>     at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
>     at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:451)
>     at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:407)
>     at org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:438)
>     at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
>     at org.apache.hadoop.hive.ql.exec.ExecReducer.configure(ExecReducer.java:153)
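A minimal way to reproduce the comparison described above. The table and column names are placeholders, not taken from the issue; only the hive.map.aggr setting differs between the two runs:

```sql
-- Map-side aggregation on: mappers compute partial aggregates first.
SET hive.map.aggr=true;
SELECT city, COUNT(*) FROM t1 GROUP BY city;

-- Map-side aggregation off: all grouping happens in the reducer's
-- GroupByOperator, the code path that failed with
-- "cannot find field value" in the trace above.
SET hive.map.aggr=false;
SELECT city, COUNT(*) FROM t1 GROUP BY city;
```

If the two runs return different results (or the second one fails as reported), the reduce-side GroupByOperator initialization is the place to look.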
[jira] [Commented] (HIVE-3685) TestCliDriver (script_pipe.q) failed with IBM JDK
[ https://issues.apache.org/jira/browse/HIVE-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698638#comment-13698638 ] Bing Li commented on HIVE-3685:
--
Hi [~owen.omalley], could you help review the latest patch, which is based on Hive 0.11? Note that HIVE-3685.1.patch-trunk.txt is based on Hive 0.8.0 and Hive 0.9.0. Thank you.

> TestCliDriver (script_pipe.q) failed with IBM JDK
>
> Key: HIVE-3685
> URL: https://issues.apache.org/jira/browse/HIVE-3685
> Project: Hive
> Issue Type: Bug
> Components: Query Processor
> Affects Versions: 0.7.1, 0.8.0, 0.9.0, 0.11.0
> Environment: ant-1.8.2, IBM JDK 1.6
> Reporter: Bing Li
> Assignee: Bing Li
> Fix For: 0.8.0, 0.9.0, 0.12.0
> Attachments: HIVE-3685.1.patch-trunk.txt, HIVE_3685.patch
>
> 1 failed: TestCliDriver (script_pipe.q)
> [junit] Begin query: script_pipe.q
> [junit] java.io.IOException: No such file or directory
> [junit] at java.io.FileOutputStream.writeBytes(Native Method)
> [junit] at java.io.FileOutputStream.write(FileOutputStream.java:293)
> [junit] at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:76)
> [junit] at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:134)
> [junit] at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:135)
> [junit] at java.io.DataOutputStream.flush(DataOutputStream.java:117)
> [junit] at org.apache.hadoop.hive.ql.exec.TextRecordWriter.close(TextRecordWriter.java:48)
> [junit] at org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:365)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303)
> [junit] at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:473)
> [junit] at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411)
> [junit] at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216)
> [junit] org.apache.hadoop.hive.ql.metadata.HiveException: Hit error while closing ..
> [junit] at org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:452)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
> [junit] at org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303)
> [junit] at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:473)
> [junit] at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411)
> [junit] at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216)
> [junit] (the same HiveException trace is repeated twice more in the log)
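The trace shows TextRecordWriter.close failing while flushing the script's stdin. A small Python sketch of that failure mode (an assumption about the cause, not taken from the issue; a broken pipe stands in here for the IBM-JDK-specific "No such file or directory" message in the junit log):

```python
import os

# Simulate the exited script process: create a pipe and close the reading
# end, then try to write one more record. The write fails with an OSError
# (a broken pipe on POSIX), which is the kind of error the close() path
# in the trace surfaces as an IOException.
r, w = os.pipe()
os.close(r)  # the reading side is gone, like the exited script process
try:
    os.write(w, b"one more row\n")
except OSError:
    print("writing after the reader exited fails, as in the trace")
finally:
    os.close(w)
```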
[jira] [Updated] (HIVE-3685) TestCliDriver (script_pipe.q) failed with IBM JDK
[ https://issues.apache.org/jira/browse/HIVE-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-3685:
--
Fix Version/s: 0.12.0

> TestCliDriver (script_pipe.q) failed with IBM JDK
> Key: HIVE-3685
> URL: https://issues.apache.org/jira/browse/HIVE-3685
> Affects Versions: 0.7.1, 0.8.0, 0.9.0, 0.11.0
> Fix For: 0.8.0, 0.9.0, 0.12.0
> Attachments: HIVE-3685.1.patch-trunk.txt, HIVE_3685.patch
>
> [junit] Ended Job = job_local_0001 with errors
> [junit] Error during job, obtaining debugging information...
> [junit] Exception: Client Execution failed with error code = 9
> [junit] See bu
[jira] [Commented] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
[ https://issues.apache.org/jira/browse/HIVE-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698625#comment-13698625 ] Bing Li commented on HIVE-4577:
--
[~appodictic], could you help review the new patch? Thank you.

> hive CLI can't handle hadoop dfs command with space and quotes.
>
> Key: HIVE-4577
> URL: https://issues.apache.org/jira/browse/HIVE-4577
> Project: Hive
> Issue Type: Bug
> Components: CLI
> Affects Versions: 0.9.0, 0.10.0
> Reporter: Bing Li
> Assignee: Bing Li
> Fix For: 0.12.0
> Attachments: HIVE-4577.1.patch, HIVE-4577.2.patch
>
> As designed, Hive supports hadoop dfs commands in the hive shell, e.g.
> hive> dfs -mkdir /user/biadmin/mydir;
> but behaves differently from hadoop when the path contains spaces or quotes:
> hive> dfs -mkdir "hello";
> drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:40 /user/biadmin/"hello"
> hive> dfs -mkdir 'world';
> drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:43 /user/biadmin/'world'
> hive> dfs -mkdir "bei jing";
> drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:44 /user/biadmin/"bei
> drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:44 /user/biadmin/jing"
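The mis-handling comes down to tokenizing the dfs command without honoring quotes. A small Python sketch of the difference (shlex is only an illustration of quote-aware splitting; it is not what the Hive patch uses):

```python
import shlex

command = '-mkdir "bei jing"'

# Naive whitespace split: the quoted path breaks into two arguments that
# keep their literal quote characters -- which is how `dfs -mkdir "bei jing"`
# ends up creating the two directories `"bei` and `jing"`.
print(command.split())        # ['-mkdir', '"bei', 'jing"']

# Quote-aware split: the quoted path stays one argument, quotes stripped.
print(shlex.split(command))   # ['-mkdir', 'bei jing']
```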
[jira] [Updated] (HIVE-4589) Hive Load command failed when inpath contains space or any restricted characters
[ https://issues.apache.org/jira/browse/HIVE-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4589:
--
Status: Patch Available (was: Open)

I added a new test case for this defect. To run it (-Dtestcase=TestCliDriver -Dqfile=load_fs3.q), apply the patch for HIVE-4577 first.

> Hive Load command failed when inpath contains space or any restricted characters
>
> Key: HIVE-4589
> URL: https://issues.apache.org/jira/browse/HIVE-4589
> Project: Hive
> Issue Type: Bug
> Components: CLI
> Affects Versions: 0.9.0, 0.10.0
> Reporter: Bing Li
> Assignee: Bing Li
> Attachments: HIVE-4589.patch
>
> 0) Create a simple text file with some strings; see the attached uk.cities.
> 1) Create a directory in Hadoop whose name contains a space:
> hadoop fs -mkdir '/testdir/bri tain/'
> hadoop fs -copyFromLocal /tmp/uk.cities '/testdir/bri tain/uk.cities'
> 2) create table partspace (city string) partitioned by (country string) row format delimited FIELDS TERMINATED BY '$' stored as textfile;
> 3) load data inpath '/testdir/bri tain/uk.cities' into table partspace partition (country='britain');
> The load then fails with: Load failed with message "Wrong file format. Please check the file's format"
[jira] [Commented] (HIVE-4589) Hive Load command failed when inpath contains space or any restricted characters
[ https://issues.apache.org/jira/browse/HIVE-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664032#comment-13664032 ] Bing Li commented on HIVE-4589:
--
In order to run this test case (-Dtestcase=TestCliDriver -Dqfile=load_fs3.q), you should apply the patch for HIVE-4577 first.
[jira] [Updated] (HIVE-4589) Hive Load command failed when inpath contains space or any restricted characters
[ https://issues.apache.org/jira/browse/HIVE-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4589:
--
Attachment: HIVE-4589.patch
[jira] [Updated] (HIVE-4589) Hive Load command failed when inpath contains space or any restricted characters
[ https://issues.apache.org/jira/browse/HIVE-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4589:
--
Attachment: (was: HIVE-4589.patch)
[jira] [Updated] (HIVE-4589) Hive Load command failed when inpath contains space or any restricted characters
[ https://issues.apache.org/jira/browse/HIVE-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4589:
--
Attachment: HIVE-4589.patch
[jira] [Updated] (HIVE-4589) Hive Load command failed when inpath contains space or any restricted characters
[ https://issues.apache.org/jira/browse/HIVE-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4589:
--
Attachment: (was: HIVE-4589.patch)
[jira] [Updated] (HIVE-4589) Hive Load command failed when inpath contains space or any restricted characters
[ https://issues.apache.org/jira/browse/HIVE-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4589:
--
Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-4589) Hive Load command failed when inpath contains space or any restricted characters
[ https://issues.apache.org/jira/browse/HIVE-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4589:
--
Attachment: HIVE-4589.patch
[jira] [Updated] (HIVE-4589) Hive Load command failed when inpath contains space or any restricted characters
[ https://issues.apache.org/jira/browse/HIVE-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4589:
--
Status: Patch Available (was: In Progress)
[jira] [Work started] (HIVE-4589) Hive Load command failed when inpath contains space or any restricted characters
[ https://issues.apache.org/jira/browse/HIVE-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-4589 started by Bing Li.
[jira] [Created] (HIVE-4589) Hive Load command failed when inpath contains space or any restricted characters
Bing Li created HIVE-4589:
--
Summary: Hive Load command failed when inpath contains space or any restricted characters
Key: HIVE-4589
URL: https://issues.apache.org/jira/browse/HIVE-4589
Project: Hive
Issue Type: Bug
Components: CLI
Affects Versions: 0.9.0, 0.10.0
Reporter: Bing Li
Assignee: Bing Li

0) Create a simple text file with some strings; see the attached uk.cities.
1) Create a directory in Hadoop whose name contains a space:
hadoop fs -mkdir '/testdir/bri tain/'
hadoop fs -copyFromLocal /tmp/uk.cities '/testdir/bri tain/uk.cities'
2) create table partspace (city string) partitioned by (country string) row format delimited FIELDS TERMINATED BY '$' stored as textfile;
3) load data inpath '/testdir/bri tain/uk.cities' into table partspace partition (country='britain');
The load then fails with: Load failed with message "Wrong file format. Please check the file's format"
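A likely contributing factor (an assumption for illustration, not confirmed in the issue) is that a raw space is not legal in the URI form of a Hadoop path, so the inpath has to be percent-encoded somewhere along the way. A short Python sketch:

```python
from urllib.parse import quote, unquote

# A raw space is illegal in a URI, so a load path like this one must be
# percent-encoded before it can travel through URI-based filesystem APIs.
raw = "/testdir/bri tain/uk.cities"
encoded = quote(raw)            # '/' is kept unencoded by default
print(encoded)                  # /testdir/bri%20tain/uk.cities

# Decoding the encoded form round-trips back to the on-disk name.
assert unquote(encoded) == raw
```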
[jira] [Updated] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
[ https://issues.apache.org/jira/browse/HIVE-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4577:
--
Affects Version/s: 0.10.0
[jira] [Updated] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
[ https://issues.apache.org/jira/browse/HIVE-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4577:
--
Status: Patch Available (was: In Progress)

Added a query file as a unit test for my patch.
[jira] [Commented] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
[ https://issues.apache.org/jira/browse/HIVE-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662680#comment-13662680 ] Bing Li commented on HIVE-4577:
--
Hi Edward, I updated the patch file with a simple query file. The commands are like:
dfs -mkdir "hello";
dfs -mkdir 'world';
dfs -mkdir "bei jing";
dfs -rmr 'hello';
dfs -rmr "world";
dfs -rmr 'bei jing';
[jira] [Work started] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
[ https://issues.apache.org/jira/browse/HIVE-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-4577 started by Bing Li.
[jira] [Updated] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
[ https://issues.apache.org/jira/browse/HIVE-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4577: -- Attachment: HIVE-4577.2.patch (adds a query file for JUnit)
[jira] [Updated] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
[ https://issues.apache.org/jira/browse/HIVE-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4577: -- Status: Patch Available (was: In Progress)
[jira] [Updated] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
[ https://issues.apache.org/jira/browse/HIVE-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-4577: -- Attachment: HIVE-4577.1.patch
[jira] [Commented] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
[ https://issues.apache.org/jira/browse/HIVE-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661811#comment-13661811 ] Bing Li commented on HIVE-4577: --- The root cause is in the DfsProcessor class. Hive parses the command with a simple split("\\s+"), which neither keeps a quoted argument together across spaces nor strips the quote characters (" or ') from the path.
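The root cause described in the comment above can be illustrated with a small sketch. DfsTokenizer, naiveSplit, and tokenize are hypothetical names for illustration, not the actual DfsProcessor fix: split("\\s+") breaks a double-quoted argument apart and leaves the quote characters in the path, while a quote-aware tokenizer yields hadoop-style arguments.

```java
import java.util.ArrayList;
import java.util.List;

public class DfsTokenizer {
    // Naive split, as described in the comment: breaks on any whitespace,
    // so "bei jing" becomes two tokens and the quotes stay in the path.
    public static String[] naiveSplit(String cmd) {
        return cmd.trim().split("\\s+");
    }

    // Quote-aware tokenizer: treats a single- or double-quoted run as one
    // token and strips the surrounding quotes.
    public static List<String> tokenize(String cmd) {
        List<String> tokens = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        char quote = 0;            // 0 = not inside quotes
        boolean inToken = false;
        for (char c : cmd.toCharArray()) {
            if (quote != 0) {                  // inside a quoted run
                if (c == quote) quote = 0;     // closing quote, drop it
                else cur.append(c);
            } else if (c == '"' || c == '\'') {
                quote = c;                     // opening quote, drop it
                inToken = true;
            } else if (Character.isWhitespace(c)) {
                if (inToken) {                 // token boundary
                    tokens.add(cur.toString());
                    cur.setLength(0);
                    inToken = false;
                }
            } else {
                cur.append(c);
                inToken = true;
            }
        }
        if (inToken) tokens.add(cur.toString());
        return tokens;
    }
}
```

With this, `dfs -mkdir "bei jing"` tokenizes to three arguments (`dfs`, `-mkdir`, `bei jing`), whereas the naive split produces four, which is exactly the misbehavior shown in the issue description.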
[jira] [Work started] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
[ https://issues.apache.org/jira/browse/HIVE-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-4577 started by Bing Li.
[jira] [Created] (HIVE-4577) hive CLI can't handle hadoop dfs command with space and quotes.
Bing Li created HIVE-4577: - Summary: hive CLI can't handle hadoop dfs command with space and quotes. Key: HIVE-4577 URL: https://issues.apache.org/jira/browse/HIVE-4577 Project: Hive Issue Type: Bug Components: CLI Affects Versions: 0.9.0 Reporter: Bing Li Assignee: Bing Li As designed, Hive supports hadoop dfs commands in the Hive shell, like hive> dfs -mkdir /user/biadmin/mydir; but it behaves differently from hadoop when the path contains spaces and quotes: hive> dfs -mkdir "hello"; drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:40 /user/biadmin/"hello" hive> dfs -mkdir 'world'; drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:43 /user/biadmin/'world' hive> dfs -mkdir "bei jing"; drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:44 /user/biadmin/"bei drwxr-xr-x - biadmin supergroup 0 2013-04-23 09:44 /user/biadmin/jing"
[jira] [Created] (HIVE-4402) Support UPDATE statement
Bing Li created HIVE-4402: - Summary: Support UPDATE statement Key: HIVE-4402 URL: https://issues.apache.org/jira/browse/HIVE-4402 Project: Hive Issue Type: New Feature Reporter: Bing Li It would be good if Hive supported the UPDATE statement like common databases do, e.g. updating rows in place (edit rows and save them back): update "DB2ADMIN"."EMP" set "SALARY"=? where "EMPNO"=? and "DEPTNO"=? and "SALARY"=?
[jira] [Created] (HIVE-4401) Support quoted schema and table names in Hive /Hive JDBC Driver
Bing Li created HIVE-4401: - Summary: Support quoted schema and table names in Hive / Hive JDBC Driver Key: HIVE-4401 URL: https://issues.apache.org/jira/browse/HIVE-4401 Project: Hive Issue Type: Improvement Affects Versions: 0.9.0 Reporter: Bing Li Assignee: Bing Li The Hive driver cannot handle quoted table and schema names, which DB2 and almost all other databases accept, e.g. SELECT * FROM "gosales"."branch"
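The request above amounts to honoring SQL delimited identifiers. A minimal sketch of quote-aware name handling, assuming double quotes as the delimiter per the SQL standard; QuotedName is a hypothetical helper, not Hive's actual parser:

```java
import java.util.ArrayList;
import java.util.List;

public class QuotedName {
    // Strips one pair of surrounding double quotes from an SQL
    // delimited identifier, e.g. "gosales" -> gosales.
    public static String unquote(String ident) {
        if (ident.length() >= 2
                && ident.charAt(0) == '"'
                && ident.charAt(ident.length() - 1) == '"') {
            return ident.substring(1, ident.length() - 1);
        }
        return ident;
    }

    // Splits a qualified name like "gosales"."branch" into its parts,
    // honoring quotes so a dot inside quotes is not treated as a separator.
    public static String[] splitQualified(String name) {
        List<String> parts = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuotes = false;
        for (char c : name.toCharArray()) {
            if (c == '"') {
                inQuotes = !inQuotes;
                cur.append(c);
            } else if (c == '.' && !inQuotes) {
                parts.add(unquote(cur.toString()));
                cur.setLength(0);
            } else {
                cur.append(c);
            }
        }
        parts.add(unquote(cur.toString()));
        return parts.toArray(new String[0]);
    }
}
```

So `"gosales"."branch"` resolves to schema `gosales` and table `branch`, matching how DB2 and other databases interpret the query in the report.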
[jira] [Created] (HIVE-3779) An empty value to hive.logquery.location can't disable the creation of hive history log files
Bing Li created HIVE-3779: - Summary: An empty value for hive.querylog.location can't disable the creation of hive history log files Key: HIVE-3779 URL: https://issues.apache.org/jira/browse/HIVE-3779 Project: Hive Issue Type: Bug Components: Documentation Affects Versions: 0.9.0 Reporter: Bing Li Priority: Minor In AdminManual Configuration (https://cwiki.apache.org/Hive/adminmanual-configuration.html), the description of hive.querylog.location says that if the variable is set to an empty string, the structured log will not be created. But it fails when hive.querylog.location is set to an empty value. It seems that it does NOT get the empty value from HiveConf.ConfVars.HIVEHISTORYFILELOC, but the default value instead.
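The documented contract amounts to a blank-means-disabled check on the raw configured value before any default is applied. A hypothetical sketch of that check; HistoryConfig and historyEnabled are illustrative names, not Hive's actual API:

```java
public class HistoryConfig {
    // Blank-means-disabled: the documented behavior is that an empty
    // hive.querylog.location suppresses history file creation, so the
    // raw configured value must be inspected before a default kicks in.
    public static boolean historyEnabled(String configuredLocation) {
        return configuredLocation != null && !configuredLocation.trim().isEmpty();
    }
}
```

The bug report suggests HiveConf substitutes the default location before any such check runs, which is why the empty setting never takes effect.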
[jira] [Updated] (HIVE-3685) TestCliDriver (script_pipe.q) failed with IBM JDK
[ https://issues.apache.org/jira/browse/HIVE-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-3685: -- Status: Patch Available (was: Open) the patch would resolve this failure on IBM JDK and won't affect the results on Sun's > TestCliDriver (script_pipe.q) failed with IBM JDK > - > > Key: HIVE-3685 > URL: https://issues.apache.org/jira/browse/HIVE-3685 > Project: Hive > Issue Type: Bug > Components: Query Processor >Affects Versions: 0.9.0, 0.8.0, 0.7.1 > Environment: ant-1.8.2 > IBM JDK 1.6 >Reporter: Bing Li >Assignee: Bing Li > Fix For: 0.9.0, 0.8.0 > > Attachments: HIVE-3685.1.patch-trunk.txt > > > 1 failed: TestCliDriver (script_pipe.q) > [junit] Begin query: script_pipe.q > [junit] java.io.IOException: No such file or directory > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:293) > [junit] at > java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:76) > [junit] at > java.io.BufferedOutputStream.flush(BufferedOutputStream.java:134) > [junit] at > java.io.BufferedOutputStream.flush(BufferedOutputStream.java:135) > [junit] at java.io.DataOutputStream.flush(DataOutputStream.java:117) > [junit] at > org.apache.hadoop.hive.ql.exec.TextRecordWriter.close(TextRecordWriter.java:48) > [junit] at > org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:365) > [junit] at > org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566) > [junit] at > org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566) > [junit] at > org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566) > [junit] at > org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303) > [junit] at > org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:473) > [junit] at > org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411) > [junit] at > 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216) > [junit] org.apache.hadoop.hive.ql.metadata.HiveException: Hit error while > closing .. > [junit] at > org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:452) > [junit] at > org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566) > [junit] at > org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566) > [junit] at > org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566) > [junit] at > org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303) > [junit] at > org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:473) > [junit] at > org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411) > [junit] at > org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216) > [junit] org.apache.hadoop.hive.ql.metadata.HiveException: Hit error while > closing .. > [junit] at > org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:452) > [junit] at > org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566) > [junit] at > org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566) > [junit] at > org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566) > [junit] at > org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303) > [junit] at > org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:473) > [junit] at > org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411) > [junit] at > org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216) > [junit] org.apache.hadoop.hive.ql.metadata.HiveException: Hit error while > closing .. 
> [junit] at > org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:452) > [junit] at > org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566) > [junit] at > org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566) > [junit] at > org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566) > [junit] at > org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303) > [junit] at > org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:473) > [junit] at > org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411) > [junit] at > org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216) > [junit] Ended Job = job_local_0001 with errors > [junit] Error during job, obtaining debugging information... > [
[jira] [Updated] (HIVE-3691) TestDynamicSerDe failed with IBM JDK
[ https://issues.apache.org/jira/browse/HIVE-3691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-3691: -- Status: Patch Available (was: Open) Replace HashMap with LinkedHashMap in the test case to avoid a different output order on different JDKs. > TestDynamicSerDe failed with IBM JDK > > > Key: HIVE-3691 > URL: https://issues.apache.org/jira/browse/HIVE-3691 > Project: Hive > Issue Type: Bug > Affects Versions: 0.9.0, 0.8.0, 0.7.1 > Environment: ant-1.8.2, IBM JDK 1.6 > Reporter: Bing Li > Assignee: Bing Li > Priority: Minor > Attachments: HIVE-3691.1.patch-trunk.txt, HIVE-3691.1.patch.txt > > > The order of the output in the golden file differs between JDKs. > The root cause is the HashMap implementation in the JDK.
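The fix works because LinkedHashMap guarantees insertion-order iteration, whereas HashMap's iteration order is unspecified and may differ between the Sun and IBM JDKs. A minimal sketch with illustrative names (OrderDemo is not the actual test code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    // Joins the map's keys in iteration order, which is what ends up
    // serialized into a golden file in a test like TestDynamicSerDe.
    public static String keyOrder(Map<String, Integer> m) {
        return String.join(",", m.keySet());
    }

    // LinkedHashMap preserves insertion order, so the serialized output
    // is identical on any conforming JDK; with HashMap the order would
    // depend on the hash implementation and could differ per vendor.
    public static Map<String, Integer> insertionOrdered() {
        Map<String, Integer> m = new LinkedHashMap<>();
        m.put("zebra", 1);
        m.put("apple", 2);
        m.put("mango", 3);
        return m;
    }
}
```

Any test whose expected output embeds map iteration order needs an order-preserving map (or explicit sorting) to stay JDK-independent.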
[jira] [Updated] (HIVE-3685) TestCliDriver (script_pipe.q) failed with IBM JDK
[ https://issues.apache.org/jira/browse/HIVE-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bing Li updated HIVE-3685: -- Status: Open (was: Patch Available)