[jira] [Commented] (HIVE-6910) Invalid column access info for partitioned table
[ https://issues.apache.org/jira/browse/HIVE-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14008059#comment-14008059 ] Ashutosh Chauhan commented on HIVE-6910: +1 Invalid column access info for partitioned table Key: HIVE-6910 URL: https://issues.apache.org/jira/browse/HIVE-6910 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.11.0, 0.12.0, 0.13.0 Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-6910.1.patch.txt, HIVE-6910.2.patch.txt, HIVE-6910.3.patch.txt, HIVE-6910.4.patch.txt, HIVE-6910.5.patch.txt, HIVE-6910.6.patch.txt From http://www.mail-archive.com/user@hive.apache.org/msg11324.html neededColumnIDs in TS is only for non-partition columns. But ColumnAccessAnalyzer is calculating it on all columns. -- This message was sent by Atlassian JIRA (v6.2#6252)
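The mismatch described above — neededColumnIDs indexing only the non-partition columns, while ColumnAccessAnalyzer resolves them against all columns — can be sketched as follows (a minimal illustration with hypothetical names, not Hive source):

```python
# Illustrative sketch only: neededColumnIDs in a TableScan index into the
# table's NON-partition columns, so resolving them against the full column
# list (data + partition columns) yields wrong column names.
def resolve_accessed_columns(needed_column_ids, data_columns, partition_columns):
    """Map neededColumnIDs to names using only non-partition columns."""
    # A buggy variant would index into data_columns + partition_columns.
    return [data_columns[i] for i in needed_column_ids]

data_cols = ["key", "value"]   # non-partition columns
part_cols = ["ds", "hr"]       # partition columns
# A query reading column id 1 accesses "value", regardless of partition columns.
print(resolve_accessed_columns([1], data_cols, part_cols))  # -> ['value']
```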
[jira] [Updated] (HIVE-6910) Invalid column access info for partitioned table
[ https://issues.apache.org/jira/browse/HIVE-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6910: --- Status: Patch Available (was: Open) Invalid column access info for partitioned table Key: HIVE-6910 URL: https://issues.apache.org/jira/browse/HIVE-6910 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.13.0, 0.12.0, 0.11.0 Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-6910.1.patch.txt, HIVE-6910.2.patch.txt, HIVE-6910.3.patch.txt, HIVE-6910.4.patch.txt, HIVE-6910.5.patch.txt, HIVE-6910.6.patch.txt From http://www.mail-archive.com/user@hive.apache.org/msg11324.html neededColumnIDs in TS is only for non-partition columns. But ColumnAccessAnalyzer is calculating it on all columns. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-4561) Column stats : LOW_VALUE (or HIGH_VALUE) will always be 0.0000, if all the column values are larger than 0.0 (or if all column values are smaller than 0.0)
[ https://issues.apache.org/jira/browse/HIVE-4561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-4561: Status: Patch Available (was: Open)

Column stats : LOW_VALUE (or HIGH_VALUE) will always be 0.0000, if all the column values are larger than 0.0 (or if all column values are smaller than 0.0)

Key: HIVE-4561 URL: https://issues.apache.org/jira/browse/HIVE-4561 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.12.0 Reporter: caofangkun Assignee: Navis Attachments: HIVE-4561.1.patch, HIVE-4561.2.patch, HIVE-4561.3.patch, HIVE-4561.4.patch.txt, HIVE-4561.5.patch.txt

If all column values are larger than 0.0, DOUBLE_LOW_VALUE will always be 0.0; if all column values are less than 0.0, DOUBLE_HIGH_VALUE will always be 0.0.

hive (default)> create table src_test (price double);
hive (default)> load data local inpath './test.txt' into table src_test;
hive (default)> select * from src_test;
OK
1.0
2.0
3.0
Time taken: 0.313 seconds, Fetched: 3 row(s)
hive (default)> analyze table src_test compute statistics for columns price;

mysql> select * from TAB_COL_STATS \G;
CS_ID: 16
DB_NAME: default
TABLE_NAME: src_test
COLUMN_NAME: price
COLUMN_TYPE: double
TBL_ID: 2586
LONG_LOW_VALUE: 0
LONG_HIGH_VALUE: 0
DOUBLE_LOW_VALUE: 0.0000 # Wrong result! Expected is 1.0000
DOUBLE_HIGH_VALUE: 3.0000
BIG_DECIMAL_LOW_VALUE: NULL
BIG_DECIMAL_HIGH_VALUE: NULL
NUM_NULLS: 0
NUM_DISTINCTS: 1
AVG_COL_LEN: 0.0
MAX_COL_LEN: 0
NUM_TRUES: 0
NUM_FALSES: 0
LAST_ANALYZED: 1368596151
2 rows in set (0.00 sec)

-- This message was sent by Atlassian JIRA (v6.2#6252)
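The symptom above is consistent with a min/max aggregation whose running low/high is seeded at 0.0 rather than at the first value seen. A minimal sketch (illustrative only, not the actual GenericUDAFComputeStats code):

```python
# Illustrative sketch of the bug class: seeding the running min/max at 0.0
# biases an all-positive column's low (and an all-negative column's high) to 0.
def buggy_low_high(values):
    low, high = 0.0, 0.0            # bug: biased toward 0.0
    for v in values:
        low, high = min(low, v), max(high, v)
    return low, high

def fixed_low_high(values):
    low = high = None               # fix: seed from the data itself
    for v in values:
        low = v if low is None else min(low, v)
        high = v if high is None else max(high, v)
    return low, high

print(buggy_low_high([1.0, 2.0, 3.0]))   # (0.0, 3.0) -- low is wrong
print(fixed_low_high([1.0, 2.0, 3.0]))   # (1.0, 3.0)
```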
[jira] [Updated] (HIVE-4561) Column stats : LOW_VALUE (or HIGH_VALUE) will always be 0.0000, if all the column values are larger than 0.0 (or if all column values are smaller than 0.0)
[ https://issues.apache.org/jira/browse/HIVE-4561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-4561: Attachment: HIVE-4561.5.patch.txt

Column stats : LOW_VALUE (or HIGH_VALUE) will always be 0.0000, if all the column values are larger than 0.0 (or if all column values are smaller than 0.0)

Key: HIVE-4561 URL: https://issues.apache.org/jira/browse/HIVE-4561 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.12.0 Reporter: caofangkun Assignee: Navis Attachments: HIVE-4561.1.patch, HIVE-4561.2.patch, HIVE-4561.3.patch, HIVE-4561.4.patch.txt, HIVE-4561.5.patch.txt

If all column values are larger than 0.0, DOUBLE_LOW_VALUE will always be 0.0; if all column values are less than 0.0, DOUBLE_HIGH_VALUE will always be 0.0.

hive (default)> create table src_test (price double);
hive (default)> load data local inpath './test.txt' into table src_test;
hive (default)> select * from src_test;
OK
1.0
2.0
3.0
Time taken: 0.313 seconds, Fetched: 3 row(s)
hive (default)> analyze table src_test compute statistics for columns price;

mysql> select * from TAB_COL_STATS \G;
CS_ID: 16
DB_NAME: default
TABLE_NAME: src_test
COLUMN_NAME: price
COLUMN_TYPE: double
TBL_ID: 2586
LONG_LOW_VALUE: 0
LONG_HIGH_VALUE: 0
DOUBLE_LOW_VALUE: 0.0000 # Wrong result! Expected is 1.0000
DOUBLE_HIGH_VALUE: 3.0000
BIG_DECIMAL_LOW_VALUE: NULL
BIG_DECIMAL_HIGH_VALUE: NULL
NUM_NULLS: 0
NUM_DISTINCTS: 1
AVG_COL_LEN: 0.0
MAX_COL_LEN: 0
NUM_TRUES: 0
NUM_FALSES: 0
LAST_ANALYZED: 1368596151
2 rows in set (0.00 sec)

-- This message was sent by Atlassian JIRA (v6.2#6252)
Review Request 21886: Column stats : LOW_VALUE (or HIGH_VALUE) will always be 0.0000, if all the column values are larger than 0.0 (or if all column values are smaller than 0.0)
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/21886/ ---

Review request for hive.

Bugs: HIVE-4561
https://issues.apache.org/jira/browse/HIVE-4561

Repository: hive-git

Description
---
If all column values are larger than 0.0, DOUBLE_LOW_VALUE will always be 0.0; if all column values are less than 0.0, DOUBLE_HIGH_VALUE will always be 0.0.

hive (default)> create table src_test (price double);
hive (default)> load data local inpath './test.txt' into table src_test;
hive (default)> select * from src_test;
OK
1.0
2.0
3.0
Time taken: 0.313 seconds, Fetched: 3 row(s)
hive (default)> analyze table src_test compute statistics for columns price;

mysql> select * from TAB_COL_STATS \G;
CS_ID: 16
DB_NAME: default
TABLE_NAME: src_test
COLUMN_NAME: price
COLUMN_TYPE: double
TBL_ID: 2586
LONG_LOW_VALUE: 0
LONG_HIGH_VALUE: 0
DOUBLE_LOW_VALUE: 0.0000 # Wrong result! Expected is 1.0000
DOUBLE_HIGH_VALUE: 3.0000
BIG_DECIMAL_LOW_VALUE: NULL
BIG_DECIMAL_HIGH_VALUE: NULL
NUM_NULLS: 0
NUM_DISTINCTS: 1
AVG_COL_LEN: 0.0
MAX_COL_LEN: 0
NUM_TRUES: 0
NUM_FALSES: 0
LAST_ANALYZED: 1368596151
2 rows in set (0.00 sec)

Diffs
-
metastore/if/hive_metastore.thrift eef1b80
metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h 43869c2
metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp 9e440bb
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DecimalColumnStatsData.java 5661252
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DoubleColumnStatsData.java d3f3f68
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/LongColumnStatsData.java 2cf4380
metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py c4b583b
metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb 79b7a1a
metastore/src/java/org/apache/hadoop/hive/metastore/StatObjectConverter.java dc0e266
metastore/src/model/org/apache/hadoop/hive/metastore/model/MPartitionColumnStatistics.java f61cdf0
metastore/src/model/org/apache/hadoop/hive/metastore/model/MTableColumnStatistics.java 85f6427
ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsTask.java 3dc02f0
ql/src/java/org/apache/hadoop/hive/ql/optimizer/StatsOptimizer.java ee4d56c
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFComputeStats.java 3b063eb
ql/src/test/queries/clientpositive/metadata_only_queries.q b549a56
ql/src/test/results/clientpositive/compute_stats_empty_table.q.out 50d6c8d
ql/src/test/results/clientpositive/compute_stats_long.q.out 2f5cbdd
ql/src/test/results/clientpositive/metadata_only_queries.q.out 531ea41

Diff: https://reviews.apache.org/r/21886/diff/

Testing
---

Thanks,

Navis Ryu
[jira] [Commented] (HIVE-4790) MapredLocalTask task does not make virtual columns
[ https://issues.apache.org/jira/browse/HIVE-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14008075#comment-14008075 ] Hive QA commented on HIVE-4790:
---

{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12646426/HIVE-4790.8.patch.txt

{color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 5533 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketsortoptimize_insert_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketsortoptimize_insert_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_vc
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_optimization
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_load_dyn_part1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_dml
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_root_dir_external_table
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimal
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimalX
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimalXY
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/277/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/277/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-277/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}

This message is automatically generated.
ATTACHMENT ID: 12646426

MapredLocalTask task does not make virtual columns
--
Key: HIVE-4790 URL: https://issues.apache.org/jira/browse/HIVE-4790 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Navis Assignee: Navis Priority: Minor Attachments: D11511.3.patch, D11511.4.patch, HIVE-4790.5.patch.txt, HIVE-4790.6.patch.txt, HIVE-4790.7.patch.txt, HIVE-4790.8.patch.txt, HIVE-4790.D11511.1.patch, HIVE-4790.D11511.2.patch

From mailing list, http://www.mail-archive.com/user@hive.apache.org/msg08264.html
{noformat}
SELECT *,b.BLOCK__OFFSET__INSIDE__FILE FROM a JOIN b ON b.rownumber = a.number;

fails with this error:

SELECT *,b.BLOCK__OFFSET__INSIDE__FILE FROM a JOIN b ON b.rownumber = a.number;
Automatically selecting local only mode for query
Total MapReduce jobs = 1
setting HADOOP_USER_NAME pmarron
13/06/25 10:52:56 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
Execution log at: /tmp/pmarron/.log
2013-06-25 10:52:56 Starting to launch local task to process map join; maximum memory = 932118528
java.lang.RuntimeException: cannot find field block__offset__inside__file from [0:rownumber, 1:offset]
    at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:366)
    at org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.getStructFieldRef(LazySimpleStructObjectInspector.java:168)
    at org.apache.hadoop.hive.serde2.objectinspector.DelegatedStructObjectInspector.getStructFieldRef(DelegatedStructObjectInspector.java:74)
    at org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.initialize(ExprNodeColumnEvaluator.java:57)
    at org.apache.hadoop.hive.ql.exec.JoinUtil.getObjectInspectorsFromEvaluators(JoinUtil.java:68)
    at org.apache.hadoop.hive.ql.exec.HashTableSinkOperator.initializeOp(HashTableSinkOperator.java:222)
    at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
    at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:451)
    at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:407)
    at org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:186)
    at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
    at org.apache.hadoop.hive.ql.exec.MapredLocalTask.initializeOperators(MapredLocalTask.java:394)
    at org.apache.hadoop.hive.ql.exec.MapredLocalTask.executeFromChildJVM(MapredLocalTask.java:277)
    at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:676)
    at
{noformat}
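The RuntimeException above comes from resolving a virtual column against a row schema that contains only the table's declared columns. A minimal sketch of the failing lookup (illustrative only, not Hive source):

```python
# Illustrative sketch: the local map-join task builds its row schema from the
# declared columns only; virtual columns such as BLOCK__OFFSET__INSIDE__FILE
# are absent, so the field lookup fails much like getStandardStructFieldRef.
VIRTUAL_COLUMNS = ["block__offset__inside__file", "input__file__name"]

def get_struct_field_ref(field_name, fields):
    for i, f in enumerate(fields):
        if f == field_name:
            return i
    raise RuntimeError("cannot find field %s from %s"
                       % (field_name, [f"{i}:{f}" for i, f in enumerate(fields)]))

fields = ["rownumber", "offset"]          # declared columns only
try:
    get_struct_field_ref("block__offset__inside__file", fields)
except RuntimeError as e:
    print(e)

# A fix along the lines of the patch: include virtual columns in the schema
# the local task initializes its evaluators against.
print(get_struct_field_ref("block__offset__inside__file",
                           fields + VIRTUAL_COLUMNS))  # -> 2
```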
Re: Review Request 16034: Add explain authorize for checking privileges
On May 24, 2014, 12:51 a.m., Ashutosh Chauhan wrote:
> ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java, line 325
> https://reviews.apache.org/r/16034/diff/4/?file=545573#file545573line325
>
> This if condition can never be false, right?

Simple explain needs full authorization. This should be true only for explain authorize.

On May 24, 2014, 12:51 a.m., Ashutosh Chauhan wrote:
> ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java, line 302
> https://reviews.apache.org/r/16034/diff/4/?file=545573#file545573line302
>
> Better name: collectReferedEntitiesNDoAuth()

Renamed to collectAuthRelatedEntities. This just collects (and ignores) authorization exceptions.

On May 24, 2014, 12:51 a.m., Ashutosh Chauhan wrote:
> ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g, line 297
> https://reviews.apache.org/r/16034/diff/4/?file=545576#file545576line297
>
> We need to add this to the nonReserved list in IdentifiersParser.g as well.

ah, sure.

On May 24, 2014, 12:51 a.m., Ashutosh Chauhan wrote:
> ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g, line 298
> https://reviews.apache.org/r/16034/diff/4/?file=545576#file545576line298
>
> Also I think "explain authorization select * from T" is more clear than "explain authorize select * from T".

done.

On May 24, 2014, 12:51 a.m., Ashutosh Chauhan wrote:
> ql/src/java/org/apache/hadoop/hive/ql/Driver.java, line 496
> https://reviews.apache.org/r/16034/diff/4/?file=545572#file545572line496
>
> This could be HiveAuthorizationProvider authorizer = AuthorizationFactory.create(ss.getAuthorizer()); See my corresponding comment in create(). I want to avoid DelegtableAuthProvider in the non-explain case, if possible.

done.

On May 24, 2014, 12:51 a.m., Ashutosh Chauhan wrote:
> ql/src/java/org/apache/hadoop/hive/ql/security/authorization/AuthorizationFactory.java, line 40
> https://reviews.apache.org/r/16034/diff/4/?file=545580#file545580line40
>
> This should instead be "return delegated;". That will avoid creating DelegatableAuthProvider in the non-explain case.

done.
On May 24, 2014, 12:51 a.m., Ashutosh Chauhan wrote:
> ql/src/java/org/apache/hadoop/hive/ql/security/authorization/AuthorizationFactory.java, lines 50-56
> https://reviews.apache.org/r/16034/diff/4/?file=545580#file545580line50
>
> Not able to understand this. Can you add comments on what's happening here?

It's not related to this issue. There was an authorization issue on views, which should be authorized by the owner of the view rather than the current user; that was fixed long before this (yes, in my version). I should file that as a follow-up issue and will remove this part for it.

On May 24, 2014, 12:51 a.m., Ashutosh Chauhan wrote:
> ql/src/java/org/apache/hadoop/hive/ql/security/authorization/AuthorizationFactory.java, line 104
> https://reviews.apache.org/r/16034/diff/4/?file=545580#file545580line104
>
> Can you add comments on when this class is used and for what?

Same as above.

- Navis

--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16034/#review43887 ---

On April 2, 2014, 9:07 a.m., Navis Ryu wrote:
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16034/ ---
(Updated April 2, 2014, 9:07 a.m.)

Review request for hive.
Bugs: HIVE-5961
https://issues.apache.org/jira/browse/HIVE-5961

Repository: hive-git

Description
---
For easy checking of needed privileges for a query:
{noformat}
explain authorize select * from src join srcpart
INPUTS:
default@srcpart
default@srcpart@ds=2008-04-08/hr=11
default@srcpart@ds=2008-04-08/hr=12
default@srcpart@ds=2008-04-09/hr=11
default@srcpart@ds=2008-04-09/hr=12
default@src
OUTPUTS:
file:/home/navis/apache/oss-hive/itests/qtest/target/tmp/localscratchdir/hive_2013-12-04_21-57-53_748_5323811717799107868-1/-mr-1
CURRENT_USER: hive_test_user
OPERATION: QUERY
AUTHORIZATION_FAILURES:
No privilege 'Select' found for inputs { database:default, table:srcpart, columnName:key}
No privilege 'Select' found for inputs { database:default, table:src, columnName:key}
No privilege 'Select' found for inputs { database:default, table:src, columnName:key}
{noformat}
Hopefully good for debugging of authorization, which is in progress on HIVE-5837.

Diffs
-
ql/src/java/org/apache/hadoop/hive/ql/Driver.java d42895a
ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java 35f4fa9
ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java db9fa74
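The mechanism under review — wrapping the configured authorizer so that EXPLAIN AUTHORIZE can report every missing privilege instead of aborting on the first — might be sketched like this (hypothetical names; not Hive's actual AuthorizationFactory API):

```python
# Illustrative sketch: a delegating authorizer that catches authorization
# failures and records them, so an EXPLAIN-style run can list them all.
class CollectingAuthorizer:
    def __init__(self, delegate):
        self.delegate = delegate
        self.failures = []

    def authorize(self, entity, privilege):
        try:
            self.delegate.authorize(entity, privilege)
        except PermissionError as e:      # collect instead of raising
            self.failures.append(str(e))

class DenyAll:
    """Stand-in for a real provider that throws on missing privileges."""
    def authorize(self, entity, privilege):
        raise PermissionError(f"No privilege '{privilege}' found for {entity}")

auth = CollectingAuthorizer(DenyAll())
for table in ["default@src", "default@srcpart"]:
    auth.authorize(table, "Select")
print(auth.failures)   # both failures reported, none raised
```

A plain run would use the delegate directly (failing fast); only the explain path pays for the wrapper.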
Re: Review Request 16034: Add explain authorize for checking privileges
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16034/ --- (Updated May 24, 2014, 8:45 a.m.) Review request for hive. Changes --- Addressed comments Bugs: HIVE-5961 https://issues.apache.org/jira/browse/HIVE-5961 Repository: hive-git Description --- For easy checking of needed privileges for a query, {noformat} explain authorize select * from src join srcpart INPUTS: default@srcpart default@srcpart@ds=2008-04-08/hr=11 default@srcpart@ds=2008-04-08/hr=12 default@srcpart@ds=2008-04-09/hr=11 default@srcpart@ds=2008-04-09/hr=12 default@src OUTPUTS: file:/home/navis/apache/oss-hive/itests/qtest/target/tmp/localscratchdir/hive_2013-12-04_21-57-53_748_5323811717799107868-1/-mr-1 CURRENT_USER: hive_test_user OPERATION: QUERY AUTHORIZATION_FAILURES: No privilege 'Select' found for inputs { database:default, table:srcpart, columnName:key} No privilege 'Select' found for inputs { database:default, table:src, columnName:key} No privilege 'Select' found for inputs { database:default, table:src, columnName:key} {noformat} Hopefully good for debugging of authorization, which is in progress on HIVE-5837.
Diffs (updated)
-
ql/src/java/org/apache/hadoop/hive/ql/Driver.java d57dabb
ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java 35f4fa9
ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java db9fa74
ql/src/java/org/apache/hadoop/hive/ql/parse/ExplainSemanticAnalyzer.java 26863f1
ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 38e8e25
ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g af3ecd0
ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 5406412
ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java b6f3748
ql/src/java/org/apache/hadoop/hive/ql/plan/ExplainWork.java d7140ca
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/AuthorizationFactory.java PRE-CREATION
ql/src/test/queries/clientpositive/authorization_explain.q PRE-CREATION
ql/src/test/results/clientpositive/authorization_explain.q.out PRE-CREATION

Diff: https://reviews.apache.org/r/16034/diff/

Testing
---

Thanks,

Navis Ryu
[jira] [Updated] (HIVE-5961) Add explain authorize for checking privileges
[ https://issues.apache.org/jira/browse/HIVE-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-5961: Status: Patch Available (was: Open) Add explain authorize for checking privileges - Key: HIVE-5961 URL: https://issues.apache.org/jira/browse/HIVE-5961 Project: Hive Issue Type: Improvement Components: Authorization Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-5961.1.patch.txt, HIVE-5961.2.patch.txt, HIVE-5961.3.patch.txt, HIVE-5961.4.patch.txt, HIVE-5961.5.patch.txt, HIVE-5961.6.patch.txt For easy checking of needed privileges for a query, {noformat} explain authorize select * from src join srcpart INPUTS: default@srcpart default@srcpart@ds=2008-04-08/hr=11 default@srcpart@ds=2008-04-08/hr=12 default@srcpart@ds=2008-04-09/hr=11 default@srcpart@ds=2008-04-09/hr=12 default@src OUTPUTS: file:/home/navis/apache/oss-hive/itests/qtest/target/tmp/localscratchdir/hive_2013-12-04_21-57-53_748_5323811717799107868-1/-mr-1 CURRENT_USER: hive_test_user OPERATION: QUERY AUTHORIZATION_FAILURES: No privilege 'Select' found for inputs { database:default, table:srcpart, columnName:key} No privilege 'Select' found for inputs { database:default, table:src, columnName:key} No privilege 'Select' found for inputs { database:default, table:src, columnName:key} {noformat} Hopefully good for debugging of authorization, which is in progress on HIVE-5837. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-5961) Add explain authorize for checking privileges
[ https://issues.apache.org/jira/browse/HIVE-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-5961: Attachment: HIVE-5961.6.patch.txt Add explain authorize for checking privileges - Key: HIVE-5961 URL: https://issues.apache.org/jira/browse/HIVE-5961 Project: Hive Issue Type: Improvement Components: Authorization Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-5961.1.patch.txt, HIVE-5961.2.patch.txt, HIVE-5961.3.patch.txt, HIVE-5961.4.patch.txt, HIVE-5961.5.patch.txt, HIVE-5961.6.patch.txt For easy checking of needed privileges for a query, {noformat} explain authorize select * from src join srcpart INPUTS: default@srcpart default@srcpart@ds=2008-04-08/hr=11 default@srcpart@ds=2008-04-08/hr=12 default@srcpart@ds=2008-04-09/hr=11 default@srcpart@ds=2008-04-09/hr=12 default@src OUTPUTS: file:/home/navis/apache/oss-hive/itests/qtest/target/tmp/localscratchdir/hive_2013-12-04_21-57-53_748_5323811717799107868-1/-mr-1 CURRENT_USER: hive_test_user OPERATION: QUERY AUTHORIZATION_FAILURES: No privilege 'Select' found for inputs { database:default, table:srcpart, columnName:key} No privilege 'Select' found for inputs { database:default, table:src, columnName:key} No privilege 'Select' found for inputs { database:default, table:src, columnName:key} {noformat} Hopefully good for debugging of authorization, which is in progress on HIVE-5837. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-3907) Hive should support adding multiple resources at once
[ https://issues.apache.org/jira/browse/HIVE-3907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14008107#comment-14008107 ] Hive QA commented on HIVE-3907:
---

{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12646433/HIVE-3907.2.patch.txt

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 5458 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_root_dir_external_table
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_udf_nonexistent_resource
org.apache.hadoop.hive.common.metrics.TestMetrics.testScopeConcurrency
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimal
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimalX
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimalXY
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/278/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/278/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-278/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12646433

Hive should support adding multiple resources at once
-
Key: HIVE-3907 URL: https://issues.apache.org/jira/browse/HIVE-3907 Project: Hive Issue Type: Improvement Components: CLI Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-3907.2.patch.txt, HIVE-3907.D7971.1.patch

Currently Hive adds resources one by one. And for JAR resources, one classloader is created for each jar file, which does not seem like a good idea.

-- This message was sent by Atlassian JIRA (v6.2#6252)
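The proposed improvement — registering a batch of resources in one call, backed by a single loader, rather than one classloader per jar — can be sketched as follows (an illustrative registry with hypothetical names, not Hive's SessionState API):

```python
# Illustrative sketch: add several JAR resources at once, deduplicated,
# with one loader refresh for the whole batch instead of one per jar.
class ResourceRegistry:
    def __init__(self):
        self.jars = []           # stands in for one shared classloader's path
        self.loaders_created = 0

    def add_jars(self, *paths):
        new = [p for p in paths if p not in self.jars]
        self.jars.extend(new)
        self.loaders_created += 1    # one loader (refresh) per batch
        return new

reg = ResourceRegistry()
reg.add_jars("a.jar", "b.jar", "c.jar")
print(reg.jars)             # ['a.jar', 'b.jar', 'c.jar']
print(reg.loaders_created)  # 1, not 3
```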
Re: Review Request 16034: Add explain authorize for checking privileges
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16034/#review43899 --- Ship it! Ship It! - Ashutosh Chauhan On May 24, 2014, 8:45 a.m., Navis Ryu wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16034/ --- (Updated May 24, 2014, 8:45 a.m.) Review request for hive. Bugs: HIVE-5961 https://issues.apache.org/jira/browse/HIVE-5961 Repository: hive-git Description --- For easy checking of needed privileges for a query, {noformat} explain authorize select * from src join srcpart INPUTS: default@srcpart default@srcpart@ds=2008-04-08/hr=11 default@srcpart@ds=2008-04-08/hr=12 default@srcpart@ds=2008-04-09/hr=11 default@srcpart@ds=2008-04-09/hr=12 default@src OUTPUTS: file:/home/navis/apache/oss-hive/itests/qtest/target/tmp/localscratchdir/hive_2013-12-04_21-57-53_748_5323811717799107868-1/-mr-1 CURRENT_USER: hive_test_user OPERATION: QUERY AUTHORIZATION_FAILURES: No privilege 'Select' found for inputs { database:default, table:srcpart, columnName:key} No privilege 'Select' found for inputs { database:default, table:src, columnName:key} No privilege 'Select' found for inputs { database:default, table:src, columnName:key} {noformat} Hopefully good for debugging of authorization, which is in progress on HIVE-5837.
Diffs
-
ql/src/java/org/apache/hadoop/hive/ql/Driver.java d57dabb
ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java 35f4fa9
ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java db9fa74
ql/src/java/org/apache/hadoop/hive/ql/parse/ExplainSemanticAnalyzer.java 26863f1
ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 38e8e25
ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g af3ecd0
ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 5406412
ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java b6f3748
ql/src/java/org/apache/hadoop/hive/ql/plan/ExplainWork.java d7140ca
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/AuthorizationFactory.java PRE-CREATION
ql/src/test/queries/clientpositive/authorization_explain.q PRE-CREATION
ql/src/test/results/clientpositive/authorization_explain.q.out PRE-CREATION

Diff: https://reviews.apache.org/r/16034/diff/

Testing
---

Thanks,

Navis Ryu
[jira] [Commented] (HIVE-5961) Add explain authorize for checking privileges
[ https://issues.apache.org/jira/browse/HIVE-5961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14008137#comment-14008137 ] Ashutosh Chauhan commented on HIVE-5961: +1 Add explain authorize for checking privileges - Key: HIVE-5961 URL: https://issues.apache.org/jira/browse/HIVE-5961 Project: Hive Issue Type: Improvement Components: Authorization Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-5961.1.patch.txt, HIVE-5961.2.patch.txt, HIVE-5961.3.patch.txt, HIVE-5961.4.patch.txt, HIVE-5961.5.patch.txt, HIVE-5961.6.patch.txt For easy checking of needed privileges for a query, {noformat} explain authorize select * from src join srcpart INPUTS: default@srcpart default@srcpart@ds=2008-04-08/hr=11 default@srcpart@ds=2008-04-08/hr=12 default@srcpart@ds=2008-04-09/hr=11 default@srcpart@ds=2008-04-09/hr=12 default@src OUTPUTS: file:/home/navis/apache/oss-hive/itests/qtest/target/tmp/localscratchdir/hive_2013-12-04_21-57-53_748_5323811717799107868-1/-mr-1 CURRENT_USER: hive_test_user OPERATION: QUERY AUTHORIZATION_FAILURES: No privilege 'Select' found for inputs { database:default, table:srcpart, columnName:key} No privilege 'Select' found for inputs { database:default, table:src, columnName:key} No privilege 'Select' found for inputs { database:default, table:src, columnName:key} {noformat} Hopefully good for debugging of authorization, which is in progress on HIVE-5837. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7048) CompositeKeyHBaseFactory should not use FamilyFilter
[ https://issues.apache.org/jira/browse/HIVE-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14008143#comment-14008143 ] Hive QA commented on HIVE-7048:
---

{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12646450/HIVE-7048.3.patch.txt

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 5534 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_exists_having
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_insert1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_dml
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_root_dir_external_table
org.apache.hadoop.hive.common.metrics.TestMetrics.testScopeConcurrency
org.apache.hadoop.hive.hbase.TestHBaseSerDe.testHBaseSerDeCompositeKeyWithoutSeparator
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testDefaults
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes
org.apache.hive.hcatalog.pig.TestOrcHCatLoader.testReadDataPrimitiveTypes
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimal
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimalX
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimalXY
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/280/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/280/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-280/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.
ATTACHMENT ID: 12646450 CompositeKeyHBaseFactory should not use FamilyFilter Key: HIVE-7048 URL: https://issues.apache.org/jira/browse/HIVE-7048 Project: Hive Issue Type: Improvement Components: HBase Handler Affects Versions: 0.14.0 Reporter: Swarnim Kulkarni Assignee: Swarnim Kulkarni Priority: Blocker Attachments: HIVE-7048.1.patch.txt, HIVE-7048.2.patch.txt, HIVE-7048.3.patch.txt HIVE-6411 introduced a more generic way to provide composite key implementations via custom factory implementations. However it seems like the CompositeHBaseKeyFactory implementation uses a FamilyFilter for row key scans which doesn't seem appropriate. This should be investigated further and if possible replaced with a RowRangeScanFilter. -- This message was sent by Atlassian JIRA (v6.2#6252)
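Why a FamilyFilter is the wrong tool here: it matches on column-family names, while a composite row-key scan needs to bound the row keys themselves. A minimal sketch of the two behaviors (illustrative only, not the HBase client API):

```python
# Illustrative sketch: a FamilyFilter keeps rows by COLUMN FAMILY, saying
# nothing about the row key; a row-range scan bounds the row keys themselves
# (in the spirit of HBase's startRow/stopRow scan semantics).
rows = {
    b"user1_2014": {b"cf1": b"x"},
    b"user2_2014": {b"cf1": b"y"},
    b"user3_2014": {b"cf2": b"z"},
}

def family_filter(rows, family):
    # keeps any row that has the family, regardless of its key
    return [k for k, fams in rows.items() if family in fams]

def row_range_scan(rows, start, stop):
    # start inclusive, stop exclusive, like an HBase scan
    return [k for k in sorted(rows) if start <= k < stop]

print(family_filter(rows, b"cf1"))               # [b'user1_2014', b'user2_2014']
print(row_range_scan(rows, b"user2", b"user3"))  # [b'user2_2014']
```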
[jira] [Updated] (HIVE-6936) Provide table properties to InputFormats
[ https://issues.apache.org/jira/browse/HIVE-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen O'Malley updated HIVE-6936: Attachment: HIVE-6936.patch I had to add java quoting to the values of the table properties for the unit tests. Provide table properties to InputFormats Key: HIVE-6936 URL: https://issues.apache.org/jira/browse/HIVE-6936 Project: Hive Issue Type: Bug Components: File Formats Reporter: Owen O'Malley Assignee: Owen O'Malley Fix For: 0.14.0 Attachments: HIVE-6936.patch, HIVE-6936.patch, HIVE-6936.patch, HIVE-6936.patch, HIVE-6936.patch, HIVE-6936.patch, HIVE-6936.patch Some advanced file formats need the table properties made available to them. Additionally, it would be convenient to provide a unique id for fetch operators and the complete list of directories. -- This message was sent by Atlassian JIRA (v6.2#6252)
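The "java quoting" mentioned above can be illustrated with a small escape helper (a hedged sketch of Java-string-style escaping; the exact escaping the patch applies is in the patch itself, not shown here):

```python
# Illustrative sketch: table property values containing control characters
# (e.g. a tab used as field.delim) are escaped Java-string style before being
# serialized, so test output stays printable and round-trippable.
_JAVA_ESCAPES = {"\\": "\\\\", "\t": "\\t", "\n": "\\n", "\r": "\\r", '"': '\\"'}

def java_escape(value: str) -> str:
    return "".join(_JAVA_ESCAPES.get(ch, ch) for ch in value)

print(java_escape("field.delim=\t"))   # field.delim=\t  (literal backslash-t)
```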
[jira] [Updated] (HIVE-7048) CompositeKeyHBaseFactory should not use FamilyFilter
[ https://issues.apache.org/jira/browse/HIVE-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swarnim Kulkarni updated HIVE-7048: --- Attachment: HIVE-7048.4.patch.txt Updating patch to address one related failure in HBaseSerDe. CompositeKeyHBaseFactory should not use FamilyFilter Key: HIVE-7048 URL: https://issues.apache.org/jira/browse/HIVE-7048 Project: Hive Issue Type: Improvement Components: HBase Handler Affects Versions: 0.14.0 Reporter: Swarnim Kulkarni Assignee: Swarnim Kulkarni Priority: Blocker Attachments: HIVE-7048.1.patch.txt, HIVE-7048.2.patch.txt, HIVE-7048.3.patch.txt, HIVE-7048.4.patch.txt HIVE-6411 introduced a more generic way to provide composite key implementations via custom factory implementations. However, it seems that the CompositeHBaseKeyFactory implementation uses a FamilyFilter for row key scans, which does not seem appropriate. This should be investigated further and, if possible, replaced with a RowRangeScanFilter. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-7122) Storage format for create like table
Vasanth kumar RJ created HIVE-7122: -- Summary: Storage format for create like table Key: HIVE-7122 URL: https://issues.apache.org/jira/browse/HIVE-7122 Project: Hive Issue Type: New Feature Components: Query Processor Reporter: Vasanth kumar RJ Fix For: 0.14.0 With create table ... like, the user can specify the storage format of the new table. Example: create table table1 like table2 stored as ORC; -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7122) Storage format for create like table
[ https://issues.apache.org/jira/browse/HIVE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vasanth kumar RJ updated HIVE-7122: --- Attachment: HIVE-7122.patch Storage format for create like table Key: HIVE-7122 URL: https://issues.apache.org/jira/browse/HIVE-7122 Project: Hive Issue Type: New Feature Components: Query Processor Reporter: Vasanth kumar RJ Fix For: 0.14.0 Attachments: HIVE-7122.patch Using create like table user can specify the table storage format. Example: create table table1 like table2 stored as ORC; -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7122) Storage format for create like table
[ https://issues.apache.org/jira/browse/HIVE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008164#comment-14008164 ] Vasanth kumar RJ commented on HIVE-7122: Sorry, I am not able to assign this JIRA to myself. Can anyone assign it to me? Storage format for create like table Key: HIVE-7122 URL: https://issues.apache.org/jira/browse/HIVE-7122 Project: Hive Issue Type: New Feature Components: Query Processor Reporter: Vasanth kumar RJ Fix For: 0.14.0 Attachments: HIVE-7122.patch With create table ... like, the user can specify the storage format of the new table. Example: create table table1 like table2 stored as ORC; -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7122) Storage format for create like table
[ https://issues.apache.org/jira/browse/HIVE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008163#comment-14008163 ] Vasanth kumar RJ commented on HIVE-7122: I am not able to assign this JIRA to myself. Can anyone assign it to me? Storage format for create like table Key: HIVE-7122 URL: https://issues.apache.org/jira/browse/HIVE-7122 Project: Hive Issue Type: New Feature Components: Query Processor Reporter: Vasanth kumar RJ Fix For: 0.14.0 Attachments: HIVE-7122.patch With create table ... like, the user can specify the storage format of the new table. Example: create table table1 like table2 stored as ORC; -- This message was sent by Atlassian JIRA (v6.2#6252)
Review Request 21887: Storage format for create like table
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/21887/ --- Review request for hive. Bugs: HIVE-7122 https://issues.apache.org/jira/browse/HIVE-7122 Repository: hive-git Description --- Using create like table user can specify the table storage format. Example: create table table1 like table2 stored as ORC; Diffs - ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java bbc6105 ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g af3ecd0 ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 49eb83f ql/src/java/org/apache/hadoop/hive/ql/plan/CreateTableLikeDesc.java cb5d64c ql/src/test/queries/clientpositive/create_like.q 13539a6 ql/src/test/results/clientpositive/create_like.q.out 62254fe Diff: https://reviews.apache.org/r/21887/diff/ Testing --- Thanks, Vasanth kumar RJ
[jira] [Updated] (HIVE-7122) Storage format for create like table
[ https://issues.apache.org/jira/browse/HIVE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vasanth kumar RJ updated HIVE-7122: --- Status: Patch Available (was: Open) Storage format for create like table Key: HIVE-7122 URL: https://issues.apache.org/jira/browse/HIVE-7122 Project: Hive Issue Type: New Feature Components: Query Processor Reporter: Vasanth kumar RJ Fix For: 0.14.0 Attachments: HIVE-7122.patch Using create like table user can specify the table storage format. Example: create table table1 like table2 stored as ORC; -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6367) Implement Decimal in ParquetSerde
[ https://issues.apache.org/jira/browse/HIVE-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008166#comment-14008166 ] Brock Noland commented on HIVE-6367: +1 Implement Decimal in ParquetSerde - Key: HIVE-6367 URL: https://issues.apache.org/jira/browse/HIVE-6367 Project: Hive Issue Type: Sub-task Components: Serializers/Deserializers Affects Versions: 0.13.0 Reporter: Brock Noland Assignee: Xuefu Zhang Labels: Parquet Attachments: HIVE-6367.patch, dec.parq Some code in the Parquet Serde deals with decimal and other does not. For example in ETypeConverter we convert Decimal to double (which is invalid) whereas in DataWritableWriter and other locations we throw an exception if decimal is used. This JIRA is to implement decimal support. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7109) Resource leak in HBaseStorageHandler
[ https://issues.apache.org/jira/browse/HIVE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008190#comment-14008190 ] Hive QA commented on HIVE-7109: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12646456/HIVE-7109.1.patch.txt {color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 5458 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_partitioned org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_root_dir_external_table org.apache.hadoop.hive.common.metrics.TestMetrics.testScopeConcurrency org.apache.hadoop.hive.metastore.TestRetryingHMSHandler.testRetryingHMSHandler org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimal org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimalX org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimalXY org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/282/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/282/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-282/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 8 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12646456 Resource leak in HBaseStorageHandler Key: HIVE-7109 URL: https://issues.apache.org/jira/browse/HIVE-7109 Project: Hive Issue Type: Bug Components: HBase Handler Affects Versions: 0.13.0 Reporter: Swarnim Kulkarni Assignee: Swarnim Kulkarni Attachments: HIVE-7109.1.patch.txt The preCreateTable method in the HBaseStorageHandler checks that the HBase table is still online by creating a new instance of HTable {code} // ensure the table is online new HTable(hbaseConf, tableDesc.getName()); {code} However this instance is never closed. So if this test succeeds, we would have a resource leak in the code. -- This message was sent by Atlassian JIRA (v6.2#6252)
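[Editor's note] The leak described above is a generic Java pattern: an AutoCloseable constructed only for its side effect must still be closed. A minimal sketch of the fix with try-with-resources, using a hypothetical `Resource` class standing in for `HTable` (the real fix would close the `HTable` instance after the liveness check):

```java
// Hypothetical stand-in for HTable: "connecting" in the constructor
// proves the table is online, but the handle must still be released.
class Resource implements AutoCloseable {
    static int openCount = 0;
    Resource() { openCount++; }                       // acquire on construction
    @Override public void close() { openCount--; }    // release
}

public class LeakDemo {
    // Mirrors the reported code: created for the side effect, never closed.
    static void leakyCheck() {
        new Resource(); // ensure the table is online -- leaks the handle
    }

    // Fixed: try-with-resources closes the handle even if the check throws.
    static void safeCheck() {
        try (Resource r = new Resource()) {
            // reaching here means the table is online
        }
    }

    public static void main(String[] args) {
        leakyCheck();
        System.out.println("open after leakyCheck: " + Resource.openCount); // 1
        Resource.openCount = 0;
        safeCheck();
        System.out.println("open after safeCheck: " + Resource.openCount);  // 0
    }
}
```

With HBase's real `HTable`, the same shape applies because `HTable` implements `Closeable`; the instance created in `preCreateTable` just needs to be closed once the check succeeds.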
[jira] [Updated] (HIVE-7087) Remove lineage information after query completion
[ https://issues.apache.org/jira/browse/HIVE-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7087: --- Status: Open (was: Patch Available) [~navis] Hive QA didn't run because the patch is not appropriately named. Was that intentional? Remove lineage information after query completion - Key: HIVE-7087 URL: https://issues.apache.org/jira/browse/HIVE-7087 Project: Hive Issue Type: Bug Components: Logging Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-7087.1.patch.txt, HIVE-7087.2.patch.txt, HIVE-7087.3a.patch.txt Lineage information accumulates in the session and is not cleared until the session is closed. That also leaves redundant lineage logs in q.out files for all queries after any insert, which should appear only for insert queries. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-4561) Column stats : LOW_VALUE (or HIGH_VALUE) will always be 0.0000 ,if all the column values larger than 0.0 (or if all column values smaller than 0.0)
[ https://issues.apache.org/jira/browse/HIVE-4561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008199#comment-14008199 ] Ashutosh Chauhan commented on HIVE-4561: +1 Column stats : LOW_VALUE (or HIGH_VALUE) will always be 0.0000, if all the column values larger than 0.0 (or if all column values smaller than 0.0) Key: HIVE-4561 URL: https://issues.apache.org/jira/browse/HIVE-4561 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.12.0 Reporter: caofangkun Assignee: Navis Attachments: HIVE-4561.1.patch, HIVE-4561.2.patch, HIVE-4561.3.patch, HIVE-4561.4.patch.txt, HIVE-4561.5.patch.txt If all column values are larger than 0.0, DOUBLE_LOW_VALUE will always be 0.0; if all column values are less than 0.0, DOUBLE_HIGH_VALUE will always be 0.0. hive (default)> create table src_test (price double); hive (default)> load data local inpath './test.txt' into table src_test; hive (default)> select * from src_test; OK 1.0 2.0 3.0 Time taken: 0.313 seconds, Fetched: 3 row(s) hive (default)> analyze table src_test compute statistics for columns price; mysql> select * from TAB_COL_STATS \G; CS_ID: 16 DB_NAME: default TABLE_NAME: src_test COLUMN_NAME: price COLUMN_TYPE: double TBL_ID: 2586 LONG_LOW_VALUE: 0 LONG_HIGH_VALUE: 0 DOUBLE_LOW_VALUE: 0.0000 # Wrong result! Expected is 1.0000 DOUBLE_HIGH_VALUE: 3.0000 BIG_DECIMAL_LOW_VALUE: NULL BIG_DECIMAL_HIGH_VALUE: NULL NUM_NULLS: 0 NUM_DISTINCTS: 1 AVG_COL_LEN: 0.0 MAX_COL_LEN: 0 NUM_TRUES: 0 NUM_FALSES: 0 LAST_ANALYZED: 1368596151 2 rows in set (0.00 sec) -- This message was sent by Atlassian JIRA (v6.2#6252)
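[Editor's note] The symptom reported above is consistent with a common bug class: the low/high accumulators being seeded with 0.0 instead of the identity element or the first observed value. A hedged sketch of the difference in plain Java (not Hive's actual statistics code):

```java
public class ColumnStatsDemo {
    // Buggy aggregation: low starts at 0.0, so an all-positive column
    // reports low = 0.0 (symmetrically, an all-negative column would
    // report high = 0.0).
    static double buggyLow(double[] values) {
        double low = 0.0;
        for (double v : values) low = Math.min(low, v);
        return low;
    }

    // Correct aggregation: seed with the identity element for min
    // (or with the first value), so the true minimum survives.
    static double fixedLow(double[] values) {
        double low = Double.POSITIVE_INFINITY;
        for (double v : values) low = Math.min(low, v);
        return low;
    }

    public static void main(String[] args) {
        double[] prices = {1.0, 2.0, 3.0}; // the src_test column from the report
        System.out.println(buggyLow(prices)); // 0.0 -- the wrong DOUBLE_LOW_VALUE
        System.out.println(fixedLow(prices)); // 1.0 -- the expected value
    }
}
```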
[jira] [Commented] (HIVE-4719) EmbeddedLockManager should be shared to all clients
[ https://issues.apache.org/jira/browse/HIVE-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008202#comment-14008202 ] Ashutosh Chauhan commented on HIVE-4719: [~navis] Can you create an RB entry for this? EmbeddedLockManager should be shared to all clients --- Key: HIVE-4719 URL: https://issues.apache.org/jira/browse/HIVE-4719 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-4719.5.patch.txt, HIVE-4719.6.patch.txt, HIVE-4719.D11229.1.patch, HIVE-4719.D11229.2.patch, HIVE-4719.D11229.3.patch, HIVE-4719.D11229.4.patch Currently, EmbeddedLockManager is created per Driver instance, so locking has no meaning. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-3925) dependencies of fetch task are not shown by explain
[ https://issues.apache.org/jira/browse/HIVE-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-3925: --- Status: Open (was: Patch Available) Seems like tests didn't run on the previous patch; it needs a rebase. dependencies of fetch task are not shown by explain --- Key: HIVE-3925 URL: https://issues.apache.org/jira/browse/HIVE-3925 Project: Hive Issue Type: Bug Components: Query Processor Reporter: Namit Jain Assignee: Navis Attachments: HIVE-3925.4.patch.txt, HIVE-3925.5.patch.txt, HIVE-3925.6.patch.txt, HIVE-3925.D8577.1.patch, HIVE-3925.D8577.2.patch, HIVE-3925.D8577.3.patch A simple query like: hive> explain select * from src order by key; OK ABSTRACT SYNTAX TREE: (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME src))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR TOK_ALLCOLREF)) (TOK_ORDERBY (TOK_TABSORTCOLNAMEASC (TOK_TABLE_OR_COL key) STAGE DEPENDENCIES: Stage-1 is a root stage Stage-0 is a root stage Stage: Stage-0 Fetch Operator limit: -1 Stage-0 is not a root stage and depends on stage-1. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7109) Resource leak in HBaseStorageHandler
[ https://issues.apache.org/jira/browse/HIVE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7109: --- Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) Committed to trunk. Thanks, Swarnim! Resource leak in HBaseStorageHandler Key: HIVE-7109 URL: https://issues.apache.org/jira/browse/HIVE-7109 Project: Hive Issue Type: Bug Components: HBase Handler Affects Versions: 0.13.0 Reporter: Swarnim Kulkarni Assignee: Swarnim Kulkarni Fix For: 0.14.0 Attachments: HIVE-7109.1.patch.txt The preCreateTable method in the HBaseStorageHandler checks that the HBase table is still online by creating a new instance of HTable {code} // ensure the table is online new HTable(hbaseConf, tableDesc.getName()); {code} However this instance is never closed. So if this test succeeds, we would have a resource leak in the code. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7122) Storage format for create like table
[ https://issues.apache.org/jira/browse/HIVE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7122: --- Assignee: Vasanth kumar RJ Storage format for create like table Key: HIVE-7122 URL: https://issues.apache.org/jira/browse/HIVE-7122 Project: Hive Issue Type: New Feature Components: Query Processor Reporter: Vasanth kumar RJ Assignee: Vasanth kumar RJ Fix For: 0.14.0 Attachments: HIVE-7122.patch Using create like table user can specify the table storage format. Example: create table table1 like table2 stored as ORC; -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Assigned] (HIVE-7077) Hive contrib compilation maybe broken with removal of org.apache.hadoop.record
[ https://issues.apache.org/jira/browse/HIVE-7077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan reassigned HIVE-7077: -- Assignee: Ashutosh Chauhan Hive contrib compilation maybe broken with removal of org.apache.hadoop.record -- Key: HIVE-7077 URL: https://issues.apache.org/jira/browse/HIVE-7077 Project: Hive Issue Type: Bug Components: Contrib Affects Versions: 0.12.0, 0.13.0 Environment: Hadoop 2.4.0.5 and beyond Reporter: Viraj Bhat Assignee: Ashutosh Chauhan Fix For: 0.13.0, 0.12.1 Attachments: HIVE-7077.patch Hadoop decided to move record to hadoop-streaming so the compilation of the contrib code will be broken if we do not include this jar. {quote} compile: [echo] Project: contrib [javac] Compiling 39 source files to /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/build/contrib/classes [javac] /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesWritableOutput.java:47: error: package org.apache.hadoop.record does not exist [javac] import org.apache.hadoop.record.Record; [javac]^ [javac] /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesOutput.java:30: error: package org.apache.hadoop.record does not exist [javac] import org.apache.hadoop.record.Buffer; [javac]^ [javac] /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesWritableOutput.java:224: error: cannot find symbol [javac] public void writeRecord(Record r) throws IOException { [javac] ^ [javac] symbol: class Record [javac] location: class TypedBytesWritableOutput [javac] 
/home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesInput.java:29: error: package org.apache.hadoop.record does not exist [javac] import org.apache.hadoop.record.Buffer; [javac]^ [javac] /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesRecordInput.java:24: error: package org.apache.hadoop.record does not exist [javac] import org.apache.hadoop.record.Buffer; [javac]^ {quote} Besides this, https://issues.apache.org/jira/browse/HADOOP-10485 removes most of these classes. This Jira is being created to track this. Viraj -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7077) Hive contrib compilation maybe broken with removal of org.apache.hadoop.record
[ https://issues.apache.org/jira/browse/HIVE-7077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7077: --- Status: Patch Available (was: Open) Hive contrib compilation maybe broken with removal of org.apache.hadoop.record -- Key: HIVE-7077 URL: https://issues.apache.org/jira/browse/HIVE-7077 Project: Hive Issue Type: Bug Components: Contrib Affects Versions: 0.13.0, 0.12.0 Environment: Hadoop 2.4.0.5 and beyond Reporter: Viraj Bhat Assignee: Ashutosh Chauhan Fix For: 0.12.1, 0.13.0 Attachments: HIVE-7077.patch Hadoop decided to move record to hadoop-streaming so the compilation of the contrib code will be broken if we do not include this jar. {quote} compile: [echo] Project: contrib [javac] Compiling 39 source files to /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/build/contrib/classes [javac] /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesWritableOutput.java:47: error: package org.apache.hadoop.record does not exist [javac] import org.apache.hadoop.record.Record; [javac]^ [javac] /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesOutput.java:30: error: package org.apache.hadoop.record does not exist [javac] import org.apache.hadoop.record.Buffer; [javac]^ [javac] /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesWritableOutput.java:224: error: cannot find symbol [javac] public void writeRecord(Record r) throws IOException { [javac] ^ [javac] symbol: class Record [javac] location: class TypedBytesWritableOutput [javac] 
/home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesInput.java:29: error: package org.apache.hadoop.record does not exist [javac] import org.apache.hadoop.record.Buffer; [javac]^ [javac] /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesRecordInput.java:24: error: package org.apache.hadoop.record does not exist [javac] import org.apache.hadoop.record.Buffer; [javac]^ {quote} Besides this, https://issues.apache.org/jira/browse/HADOOP-10485 removes most of these classes. This Jira is being created to track this. Viraj -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7077) Hive contrib compilation maybe broken with removal of org.apache.hadoop.record
[ https://issues.apache.org/jira/browse/HIVE-7077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7077: --- Attachment: HIVE-7077.patch Hive contrib compilation maybe broken with removal of org.apache.hadoop.record -- Key: HIVE-7077 URL: https://issues.apache.org/jira/browse/HIVE-7077 Project: Hive Issue Type: Bug Components: Contrib Affects Versions: 0.12.0, 0.13.0 Environment: Hadoop 2.4.0.5 and beyond Reporter: Viraj Bhat Fix For: 0.13.0, 0.12.1 Attachments: HIVE-7077.patch Hadoop decided to move record to hadoop-streaming so the compilation of the contrib code will be broken if we do not include this jar. {quote} compile: [echo] Project: contrib [javac] Compiling 39 source files to /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/build/contrib/classes [javac] /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesWritableOutput.java:47: error: package org.apache.hadoop.record does not exist [javac] import org.apache.hadoop.record.Record; [javac]^ [javac] /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesOutput.java:30: error: package org.apache.hadoop.record does not exist [javac] import org.apache.hadoop.record.Buffer; [javac]^ [javac] /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesWritableOutput.java:224: error: cannot find symbol [javac] public void writeRecord(Record r) throws IOException { [javac] ^ [javac] symbol: class Record [javac] location: class TypedBytesWritableOutput [javac] /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesInput.java:29: error: 
package org.apache.hadoop.record does not exist [javac] import org.apache.hadoop.record.Buffer; [javac]^ [javac] /home/y/var/builds/thread2/workspace/Cloud-Hive-branch-0.12-Hadoop2-Component-JDK7/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesRecordInput.java:24: error: package org.apache.hadoop.record does not exist [javac] import org.apache.hadoop.record.Buffer; [javac]^ {quote} Besides this, https://issues.apache.org/jira/browse/HADOOP-10485 removes most of these classes. This Jira is being created to track this. Viraj -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7019) Hive cannot build against Hadoop branch-2 after YARN-1553
[ https://issues.apache.org/jira/browse/HIVE-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7019: --- Status: Open (was: Patch Available) The compilation problem has been fixed via HIVE-6900, although the functionality of fetching failed task logs is still missing, awaiting MAPREDUCE-5857. Hive cannot build against Hadoop branch-2 after YARN-1553 - Key: HIVE-7019 URL: https://issues.apache.org/jira/browse/HIVE-7019 Project: Hive Issue Type: Bug Components: Shims Affects Versions: 0.13.0 Reporter: Fengdong Yu Attachments: HIVE-7019.patch Hive cannot build against Hadoop branch-2 after YARN-1553; I'll upload a patch later. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HIVE-7019) Hive cannot build against Hadoop branch-2 after YARN-1553
[ https://issues.apache.org/jira/browse/HIVE-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan resolved HIVE-7019. Resolution: Not a Problem Hive cannot build against Hadoop branch-2 after YARN-1553 - Key: HIVE-7019 URL: https://issues.apache.org/jira/browse/HIVE-7019 Project: Hive Issue Type: Bug Components: Shims Affects Versions: 0.13.0 Reporter: Fengdong Yu Attachments: HIVE-7019.patch Hive cannot build against Hadoop branch-2 after YARN-1553; I'll upload a patch later. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6934) PartitionPruner doesn't handle top level constant expression correctly
[ https://issues.apache.org/jira/browse/HIVE-6934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6934: --- Status: Open (was: Patch Available) Failed tests need to be looked at PartitionPruner doesn't handle top level constant expression correctly -- Key: HIVE-6934 URL: https://issues.apache.org/jira/browse/HIVE-6934 Project: Hive Issue Type: Bug Reporter: Harish Butani Assignee: Hari Sankar Sivarama Subramaniyan Attachments: HIVE-6934.1.patch, HIVE-6934.2.patch You hit this error indirectly, because of how we handle invalid constant comparisons. Consider: {code} create table x(key int, value string) partitioned by (dt int, ts string); -- both these queries hit this issue select * from x where key = 'abc'; select * from x where dt = 'abc'; -- the issue is the comparison gets converted to the constant false -- and the PartitionPruner doesn't handle top level constant exprs correctly {code} Thanks to [~hsubramaniyan] for uncovering this as part of adding tests for HIVE-5376 -- This message was sent by Atlassian JIRA (v6.2#6252)
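[Editor's note] A rough illustration of the failure mode described above, under the assumption (from the issue text) that a literal which cannot be coerced to the column's type folds the whole comparison to a constant at compile time, leaving the pruner with a top-level constant instead of a column predicate. Plain Java, not Hive's actual expression classes:

```java
public class ConstantFoldDemo {
    // Mimics the described handling of "intCol = <literal>": a literal that
    // cannot be coerced to int folds the comparison to constant false, so the
    // pruner sees a bare constant rather than a column predicate.
    static Boolean foldIntEquals(String literal) {
        try {
            Integer.parseInt(literal);
            return null; // coercible: the predicate survives, nothing to fold
        } catch (NumberFormatException e) {
            return Boolean.FALSE; // incomparable: folds to constant false
        }
    }

    public static void main(String[] args) {
        // "where key = 'abc'" and "where dt = 'abc'" both fold away:
        System.out.println(foldIntEquals("abc")); // false -- top-level constant
        System.out.println(foldIntEquals("42"));  // null  -- real predicate remains
    }
}
```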
[jira] [Commented] (HIVE-7117) Partitions not inheriting table permissions after alter rename partition
[ https://issues.apache.org/jira/browse/HIVE-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008233#comment-14008233 ] Hive QA commented on HIVE-7117: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12646460/HIVE-7117.4.patch {color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 5463 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_decimal1 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_root_dir_external_table org.apache.hadoop.hive.metastore.TestMetastoreVersion.testDefaults org.apache.hadoop.hive.ql.security.TestFolderPermissions.testAlterPartitionPerms org.apache.hive.hcatalog.pig.TestOrcHCatLoader.testReadDataPrimitiveTypes org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimal org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimalX org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimalXY {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/283/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/283/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-283/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 8 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12646460 Partitions not inheriting table permissions after alter rename partition Key: HIVE-7117 URL: https://issues.apache.org/jira/browse/HIVE-7117 Project: Hive Issue Type: Bug Components: Security Reporter: Ashish Kumar Singh Assignee: Ashish Kumar Singh Attachments: HIVE-7117.2.patch, HIVE-7117.3.patch, HIVE-7117.4.patch, HIVE-7117.patch On altering/renaming a partition it must inherit permission of the parent directory, if the flag hive.warehouse.subdir.inherit.perms is set. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6923) Use slf4j For Logging Everywhere
[ https://issues.apache.org/jira/browse/HIVE-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6923: --- Status: Open (was: Patch Available) I think we should go the other way: replace slf4j references in the Hive codebase with commons-logging so that we have one consistent facade. For one, this should result in a much smaller patch; more importantly, most other components in the Hadoop ecosystem use commons-logging, and it's important that we align with Hadoop in our choice of facade to make logging easier to configure. Use slf4j For Logging Everywhere Key: HIVE-6923 URL: https://issues.apache.org/jira/browse/HIVE-6923 Project: Hive Issue Type: Improvement Components: HiveServer2 Reporter: Nick White Assignee: Nick White Fix For: 0.14.0 Attachments: HIVE-6923.patch Hive uses a mixture of slf4j (backed by log4j) and commons-logging. I've attached a patch to tidy this up, by just using slf4j for all loggers. This means that applications using the JDBC driver can make Hive log through their own slf4j implementation consistently. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6893) out of sequence error in HiveMetastore server
[ https://issues.apache.org/jira/browse/HIVE-6893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008240#comment-14008240 ] Ashutosh Chauhan commented on HIVE-6893: The patch in its current form sidesteps the underlying problem. I think it's worth spending time to figure out what's going on. FWIW, I have seen the {{TestRetryingHMSHandler.testRetryingHMSHandler}} test fail on trunk in HiveQA runs with the following trace: {code} org.apache.thrift.TApplicationException: create_database failed: out of sequence response at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:76) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_create_database(ThriftHiveMetastore.java:511) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.create_database(ThriftHiveMetastore.java:498) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createDatabase(HiveMetaStoreClient.java:534) at org.apache.hadoop.hive.metastore.TestRetryingHMSHandler.testRetryingHMSHandler(TestRetryingHMSHandler.java:76) {code} The above test failure, which is independent of HS2, provides an independent repro test case, though it fails only rarely in Hive QA runs, hinting at some race condition. out of sequence error in HiveMetastore server - Key: HIVE-6893 URL: https://issues.apache.org/jira/browse/HIVE-6893 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.12.0 Reporter: Romain Rigaux Assignee: Naveen Gangam Fix For: 0.14.0 Attachments: HIVE-6893.1.patch Calls listing databases or tables fail. It seems to be a concurrency problem. 
{code} 2014-03-06 05:34:00,785 ERROR hive.log: org.apache.thrift.TApplicationException: get_databases failed: out of sequence response at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:76) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_databases(ThriftHiveMetastore.java:472) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_databases(ThriftHiveMetastore.java:459) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabases(HiveMetaStoreClient.java:648) at org.apache.hive.service.cli.operation.GetSchemasOperation.run(GetSchemasOperation.java:66) at org.apache.hive.service.cli.session.HiveSessionImpl.getSchemas(HiveSessionImpl.java:278) at sun.reflect.GeneratedMethodAccessor323.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:62) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408) at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:582) at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:57) at com.sun.proxy.$Proxy9.getSchemas(Unknown Source) at org.apache.hive.service.cli.CLIService.getSchemas(CLIService.java:192) at org.apache.hive.service.cli.thrift.ThriftCLIService.GetSchemas(ThriftCLIService.java:263) at org.apache.hive.service.cli.thrift.TCLIService$Processor$GetSchemas.getResult(TCLIService.java:1433) at org.apache.hive.service.cli.thrift.TCLIService$Processor$GetSchemas.getResult(TCLIService.java:1418) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at 
org.apache.hive.service.cli.thrift.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:38) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:724) {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
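The "out of sequence response" above is the classic symptom of sharing a single non-thread-safe Thrift client (or its underlying transport) across threads: `TServiceClient` keeps one sequence-id counter and expects each reply to carry the id of the request it just sent. The following is a minimal sketch of that failure mode under simplifying assumptions, not Thrift code itself; the class and method names are illustrative.

```java
// Illustrative model of why a shared, non-thread-safe RPC client produces
// "out of sequence response": send() bumps a shared seqid, receive() checks
// that the reply it reads matches the seqid it expects.
public class SeqIdDemo {
    static class UnsafeClient {
        private int seqid = 0;
        // Stand-in for the wire: the "server" echoes seqids back in arrival order.
        private final java.util.ArrayDeque<Integer> wire = new java.util.ArrayDeque<>();

        int send() {                 // like TServiceClient.sendBase
            int id = ++seqid;
            wire.addLast(id);
            return id;
        }

        void receive(int expected) { // like TServiceClient.receiveBase
            int got = wire.removeFirst();
            if (got != expected) {
                throw new IllegalStateException(
                        "out of sequence response: expected " + expected + " got " + got);
            }
        }
    }

    public static void main(String[] args) {
        UnsafeClient c = new UnsafeClient();
        // Thread A sends, then thread B sends before A has read its reply.
        int a = c.send();   // seqid 1
        int b = c.send();   // seqid 2
        // B reads first and consumes A's reply, so the seqid check fails.
        try {
            c.receive(b);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // out of sequence response: expected 2 got 1
        }
    }
}
```

Synchronizing all calls on the shared client, or giving each thread its own client and transport, removes the interleaving.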
[jira] [Commented] (HIVE-6853) show create table for hbase tables should exclude LOCATION
[ https://issues.apache.org/jira/browse/HIVE-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008242#comment-14008242 ] Ashutosh Chauhan commented on HIVE-6853: +1 show create table for hbase tables should exclude LOCATION --- Key: HIVE-6853 URL: https://issues.apache.org/jira/browse/HIVE-6853 Project: Hive Issue Type: Bug Components: StorageHandler Affects Versions: 0.10.0 Reporter: Miklos Christine Attachments: HIVE-6853-0.patch, HIVE-6853.patch If you create a table on top of hbase in hive and issue a show create table hbase_table, it gives a bad DDL. It should not show LOCATION: [hive]$ cat /tmp/test_create.sql CREATE EXTERNAL TABLE nba_twitter.hbase2( key string COMMENT 'from deserializer', name string COMMENT 'from deserializer', pdt string COMMENT 'from deserializer', service string COMMENT 'from deserializer', term string COMMENT 'from deserializer', update1 string COMMENT 'from deserializer') ROW FORMAT SERDE 'org.apache.hadoop.hive.hbase.HBaseSerDe' STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ( 'serialization.format'='1', 'hbase.columns.mapping'=':key,srv:name,srv:pdt,srv:service,srv:term,srv:update') LOCATION 'hdfs://nameservice1/user/hive/warehouse/nba_twitter.db/hbase' TBLPROPERTIES ( 'hbase.table.name'='NBATwitter', 'transient_lastDdlTime'='1386172188') Trying to create a table using the above fails: [hive]$ hive -f /tmp/test_create.sql cli -f /tmp/test_create.sql Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hive/lib/hive-common-0.10.0-cdh4.4.0.jar!/hive-log4j.properties FAILED: Error in metadata: MetaException(message:LOCATION may not be specified for HBase.) FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask However, if I remove the LOCATION, then the DDL is valid. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6853) show create table for hbase tables should exclude LOCATION
[ https://issues.apache.org/jira/browse/HIVE-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6853: --- Assignee: Miklos Christine show create table for hbase tables should exclude LOCATION --- Key: HIVE-6853 URL: https://issues.apache.org/jira/browse/HIVE-6853 Project: Hive Issue Type: Bug Components: StorageHandler Affects Versions: 0.10.0 Reporter: Miklos Christine Assignee: Miklos Christine Attachments: HIVE-6853-0.patch, HIVE-6853.patch If you create a table on top of hbase in hive and issue a show create table hbase_table, it gives a bad DDL. It should not show LOCATION: [hive]$ cat /tmp/test_create.sql CREATE EXTERNAL TABLE nba_twitter.hbase2( key string COMMENT 'from deserializer', name string COMMENT 'from deserializer', pdt string COMMENT 'from deserializer', service string COMMENT 'from deserializer', term string COMMENT 'from deserializer', update1 string COMMENT 'from deserializer') ROW FORMAT SERDE 'org.apache.hadoop.hive.hbase.HBaseSerDe' STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ( 'serialization.format'='1', 'hbase.columns.mapping'=':key,srv:name,srv:pdt,srv:service,srv:term,srv:update') LOCATION 'hdfs://nameservice1/user/hive/warehouse/nba_twitter.db/hbase' TBLPROPERTIES ( 'hbase.table.name'='NBATwitter', 'transient_lastDdlTime'='1386172188') Trying to create a table using the above fails: [hive]$ hive -f /tmp/test_create.sql cli -f /tmp/test_create.sql Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hive/lib/hive-common-0.10.0-cdh4.4.0.jar!/hive-log4j.properties FAILED: Error in metadata: MetaException(message:LOCATION may not be specified for HBase.) FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask However, if I remove the LOCATION, then the DDL is valid. -- This message was sent by Atlassian JIRA (v6.2#6252)
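The fix direction here is for SHOW CREATE TABLE to omit the LOCATION clause when the table is backed by a storage handler, since CREATE TABLE ... STORED BY rejects an explicit LOCATION. A hedged sketch of that check follows; the method names are illustrative, not the committed patch.

```java
// Sketch: emit LOCATION only for tables NOT managed by a storage handler,
// so the generated DDL round-trips. Names are hypothetical glue code.
public class ShowCreateSketch {
    static String locationClause(boolean hasStorageHandler, String location) {
        // Storage-handler (non-native) tables: omit LOCATION entirely,
        // because the metastore rejects it for HBase-backed tables.
        return hasStorageHandler ? "" : "LOCATION '" + location + "'\n";
    }

    public static void main(String[] args) {
        System.out.print(locationClause(true, "hdfs://nameservice1/warehouse/t"));  // prints nothing
        System.out.print(locationClause(false, "hdfs://nameservice1/warehouse/t")); // prints the clause
    }
}
```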
[jira] [Updated] (HIVE-6367) Implement Decimal in ParquetSerde
[ https://issues.apache.org/jira/browse/HIVE-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-6367: --- Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) Thank you Xuefu! I have committed this to trunk. Implement Decimal in ParquetSerde - Key: HIVE-6367 URL: https://issues.apache.org/jira/browse/HIVE-6367 Project: Hive Issue Type: Sub-task Components: Serializers/Deserializers Affects Versions: 0.13.0 Reporter: Brock Noland Assignee: Xuefu Zhang Labels: Parquet Fix For: 0.14.0 Attachments: HIVE-6367.patch, dec.parq Some code in the Parquet SerDe deals with decimal and other code does not. For example, in ETypeConverter we convert Decimal to double (which is invalid), whereas in DataWritableWriter and other locations we throw an exception if decimal is used. This JIRA is to implement decimal support. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6765) ASTNodeOrigin unserializable leads to fail when join with view
[ https://issues.apache.org/jira/browse/HIVE-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6765: --- Resolution: Won't Fix Fix Version/s: (was: 0.12.1) Status: Resolved (was: Patch Available) Resolving as won't fix, since 0.13 has been released, which uses Kryo-based serialization and doesn't have this problem. ASTNodeOrigin unserializable leads to fail when join with view -- Key: HIVE-6765 URL: https://issues.apache.org/jira/browse/HIVE-6765 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Adrian Wang Attachments: HIVE-6765.patch.1 When a view contains a UDF and the view participates in a JOIN, Hive encounters a bug with a stack trace like:
{code}
Caused by: java.lang.InstantiationException: org.apache.hadoop.hive.ql.parse.ASTNodeOrigin
	at java.lang.Class.newInstance0(Class.java:359)
	at java.lang.Class.newInstance(Class.java:327)
	at sun.reflect.GeneratedMethodAccessor84.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
{code}
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-7123) Follow-up of HIVE-6367
Xuefu Zhang created HIVE-7123: - Summary: Follow-up of HIVE-6367 Key: HIVE-7123 URL: https://issues.apache.org/jira/browse/HIVE-7123 Project: Hive Issue Type: Bug Components: Serializers/Deserializers Affects Versions: 0.14.0 Reporter: Xuefu Zhang Assignee: Xuefu Zhang HIVE-6367 provides initial decimal support in the Parquet SerDe. There are a few minor items left over: 1. parquet_decimal.q seems to be failing; 2. use fixed-length binary to encode decimal instead of variable-length binary. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7123) Follow-up of HIVE-6367
[ https://issues.apache.org/jira/browse/HIVE-7123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated HIVE-7123: -- Status: Patch Available (was: Open) Follow-up of HIVE-6367 -- Key: HIVE-7123 URL: https://issues.apache.org/jira/browse/HIVE-7123 Project: Hive Issue Type: Bug Components: Serializers/Deserializers Affects Versions: 0.14.0 Reporter: Xuefu Zhang Assignee: Xuefu Zhang Attachments: HIVE-7123.patch HIVE-6367 provides initial decimal support in the Parquet SerDe. There are a few minor items left over: 1. parquet_decimal.q seems to be failing; 2. use fixed-length binary to encode decimal instead of variable-length binary. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7123) Follow-up of HIVE-6367
[ https://issues.apache.org/jira/browse/HIVE-7123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated HIVE-7123: -- Attachment: HIVE-7123.patch Follow-up of HIVE-6367 -- Key: HIVE-7123 URL: https://issues.apache.org/jira/browse/HIVE-7123 Project: Hive Issue Type: Bug Components: Serializers/Deserializers Affects Versions: 0.14.0 Reporter: Xuefu Zhang Assignee: Xuefu Zhang Attachments: HIVE-7123.patch HIVE-6367 provides initial decimal support in the Parquet SerDe. There are a few minor items left over: 1. parquet_decimal.q seems to be failing; 2. use fixed-length binary to encode decimal instead of variable-length binary. -- This message was sent by Atlassian JIRA (v6.2#6252)
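Item 2 above refers to encoding a decimal's unscaled value as a fixed-length two's-complement byte array whose width is derived from the declared precision, which is the scheme Parquet uses for DECIMAL stored as FIXED_LEN_BYTE_ARRAY. A sketch of that encoding follows; the class and method names are illustrative, not the patch itself.

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Arrays;

// Sketch of fixed-length decimal encoding: size the byte array from the
// precision, then sign-extend the unscaled value into it (big-endian).
public class DecimalFixedLen {
    // Smallest byte count whose signed two's-complement range covers 10^precision - 1.
    static int bytesForPrecision(int precision) {
        BigInteger max = BigInteger.TEN.pow(precision).subtract(BigInteger.ONE);
        int bytes = 1;
        while (max.compareTo(BigInteger.ONE.shiftLeft(8 * bytes - 1).subtract(BigInteger.ONE)) > 0) {
            bytes++;
        }
        return bytes;
    }

    // The unscaled value of d, sign-extended to exactly `bytes` bytes.
    static byte[] encode(BigDecimal d, int bytes) {
        byte[] raw = d.unscaledValue().toByteArray(); // minimal two's complement
        byte[] out = new byte[bytes];
        Arrays.fill(out, (byte) (d.signum() < 0 ? 0xFF : 0x00)); // sign padding
        System.arraycopy(raw, 0, out, bytes - raw.length, raw.length);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(bytesForPrecision(9)); // 4
        System.out.println(Arrays.toString(encode(new BigDecimal("3.25"), 4)));
    }
}
```

Because every DECIMAL(p, s) value then occupies the same number of bytes, readers can seek within a column chunk without parsing variable-length prefixes.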
[jira] [Commented] (HIVE-6756) alter table set fileformat should set serde too
[ https://issues.apache.org/jira/browse/HIVE-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008250#comment-14008250 ] Ashutosh Chauhan commented on HIVE-6756: Patch looks good, except for the textfile/seqfile case: serde = conf.getVar(HiveConf.ConfVars.HIVESCRIPTSERDE); the script-serde config is used for other purposes. I think it's better just to do serde = LazySimpleSerDe.class.getName(), since that's the equivalent behavior of create table ... stored as textfile / sequencefile. Looks good otherwise. alter table set fileformat should set serde too --- Key: HIVE-6756 URL: https://issues.apache.org/jira/browse/HIVE-6756 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Owen O'Malley Assignee: Chinna Rao Lalam Attachments: HIVE-6756.1.patch, HIVE-6756.2.patch, HIVE-6756.patch Currently doing alter table set fileformat doesn't change the serde. This is unexpected by customers because the serdes are largely file-format specific. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6756) alter table set fileformat should set serde too
[ https://issues.apache.org/jira/browse/HIVE-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6756: --- Status: Open (was: Patch Available) alter table set fileformat should set serde too --- Key: HIVE-6756 URL: https://issues.apache.org/jira/browse/HIVE-6756 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Owen O'Malley Assignee: Chinna Rao Lalam Attachments: HIVE-6756.1.patch, HIVE-6756.2.patch, HIVE-6756.patch Currently doing alter table set fileformat doesn't change the serde. This is unexpected by customers because the serdes are largely file format specific. -- This message was sent by Atlassian JIRA (v6.2#6252)
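The reviewer's suggestion amounts to mapping each file format to the default SerDe that CREATE TABLE ... STORED AS would pick, with LazySimpleSerDe for TEXTFILE and SEQUENCEFILE. A hypothetical sketch: the class-name strings are Hive's real SerDes, but the mapping function itself is illustrative glue, not the patch.

```java
// Sketch: default SerDe per file format, mirroring CREATE TABLE ... STORED AS,
// instead of reading the unrelated hive.script.serde configuration.
public class SerdeForFormat {
    static String serdeFor(String fileFormat) {
        switch (fileFormat.toUpperCase()) {
            case "TEXTFILE":
            case "SEQUENCEFILE":
                // what STORED AS TEXTFILE / SEQUENCEFILE uses by default
                return "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe";
            case "ORC":
                return "org.apache.hadoop.hive.ql.io.orc.OrcSerde";
            case "PARQUET":
                return "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe";
            default:
                return null; // unknown format: leave the serde unchanged
        }
    }

    public static void main(String[] args) {
        System.out.println(serdeFor("textfile"));
    }
}
```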
[jira] [Updated] (HIVE-6853) show create table for hbase tables should exclude LOCATION
[ https://issues.apache.org/jira/browse/HIVE-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6853: --- Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) Committed to trunk. Thanks, Miklos! show create table for hbase tables should exclude LOCATION --- Key: HIVE-6853 URL: https://issues.apache.org/jira/browse/HIVE-6853 Project: Hive Issue Type: Bug Components: StorageHandler Affects Versions: 0.10.0 Reporter: Miklos Christine Assignee: Miklos Christine Fix For: 0.14.0 Attachments: HIVE-6853-0.patch, HIVE-6853.patch If you create a table on top of hbase in hive and issue a show create table hbase_table, it gives a bad DDL. It should not show LOCATION: [hive]$ cat /tmp/test_create.sql CREATE EXTERNAL TABLE nba_twitter.hbase2( key string COMMENT 'from deserializer', name string COMMENT 'from deserializer', pdt string COMMENT 'from deserializer', service string COMMENT 'from deserializer', term string COMMENT 'from deserializer', update1 string COMMENT 'from deserializer') ROW FORMAT SERDE 'org.apache.hadoop.hive.hbase.HBaseSerDe' STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ( 'serialization.format'='1', 'hbase.columns.mapping'=':key,srv:name,srv:pdt,srv:service,srv:term,srv:update') LOCATION 'hdfs://nameservice1/user/hive/warehouse/nba_twitter.db/hbase' TBLPROPERTIES ( 'hbase.table.name'='NBATwitter', 'transient_lastDdlTime'='1386172188') Trying to create a table using the above fails: [hive]$ hive -f /tmp/test_create.sql cli -f /tmp/test_create.sql Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hive/lib/hive-common-0.10.0-cdh4.4.0.jar!/hive-log4j.properties FAILED: Error in metadata: MetaException(message:LOCATION may not be specified for HBase.) FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask However, if I remove the LOCATION, then the DDL is valid. 
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6936) Provide table properties to InputFormats
[ https://issues.apache.org/jira/browse/HIVE-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008262#comment-14008262 ] Hive QA commented on HIVE-6936: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12646677/HIVE-6936.patch {color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 5539 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_multi_single_reducer3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_decimal1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_optimization
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_load_dyn_part1
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_root_dir_external_table
org.apache.hadoop.hive.common.metrics.TestMetrics.testScopeConcurrency
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimal
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimalX
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimalXY
{noformat}
Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/284/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/284/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-284/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12646677 Provide table properties to InputFormats Key: HIVE-6936 URL: https://issues.apache.org/jira/browse/HIVE-6936 Project: Hive Issue Type: Bug Components: File Formats Reporter: Owen O'Malley Assignee: Owen O'Malley Fix For: 0.14.0 Attachments: HIVE-6936.patch, HIVE-6936.patch, HIVE-6936.patch, HIVE-6936.patch, HIVE-6936.patch, HIVE-6936.patch, HIVE-6936.patch Some advanced file formats need the table properties made available to them. Additionally, it would be convenient to provide a unique id for fetch operators and the complete list of directories. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6367) Implement Decimal in ParquetSerde
[ https://issues.apache.org/jira/browse/HIVE-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008274#comment-14008274 ] Lefty Leverenz commented on HIVE-6367: -- This needs to be documented in the wiki with a 0.14.0 note: * [Parquet - Limitations | https://cwiki.apache.org/confluence/display/Hive/Parquet#Parquet-Limitations] ** Binary, timestamp, date, char, varchar or decimal support are pending (HIVE-6384) But the doc should wait until 0.14.0 is released, so please add a release note as a reminder. Implement Decimal in ParquetSerde - Key: HIVE-6367 URL: https://issues.apache.org/jira/browse/HIVE-6367 Project: Hive Issue Type: Sub-task Components: Serializers/Deserializers Affects Versions: 0.13.0 Reporter: Brock Noland Assignee: Xuefu Zhang Labels: Parquet Fix For: 0.14.0 Attachments: HIVE-6367.patch, dec.parq Some code in the Parquet SerDe deals with decimal and other code does not. For example, in ETypeConverter we convert Decimal to double (which is invalid), whereas in DataWritableWriter and other locations we throw an exception if decimal is used. This JIRA is to implement decimal support. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6967) Hive transaction manager fails when SQLServer is used as an RDBMS
[ https://issues.apache.org/jira/browse/HIVE-6967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14008286#comment-14008286 ] Hive QA commented on HIVE-6967: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12646620/HIVE-6967.patch {color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 5462 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_decimal1
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_root_dir_external_table
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testDefaults
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimal
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimalX
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer.testWriteDecimalXY
{noformat}
Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/286/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/286/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-286/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12646620 Hive transaction manager fails when SQLServer is used as an RDBMS - Key: HIVE-6967 URL: https://issues.apache.org/jira/browse/HIVE-6967 Project: Hive Issue Type: Bug Components: Locking Affects Versions: 0.13.0 Reporter: Alan Gates Assignee: Alan Gates Attachments: HIVE-6967.patch When using SQLServer as an RDBMS for the metastore, any transaction or DbLockMgr operations fail with: {code} MetaException(message:Unable to select from transaction database com.microsoft.sqlserver.jdbc.SQLServerException: Line 1: FOR UPDATE clause allowed only for DECLARE CURSOR. {code} The issue is that SQLServer does not support the FOR UPDATE clause in SELECT. -- This message was sent by Atlassian JIRA (v6.2#6252)
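Since SQL Server rejects SELECT ... FOR UPDATE, a metastore that must run on several RDBMSs has to choose the locking-read syntax per dialect; SQL Server expresses the same intent with the UPDLOCK table hint. The following sketch shows one way to branch on the dialect. The method and dialect names are hypothetical, not Hive's actual transaction-manager code.

```java
// Sketch: build a locking SELECT for the backing RDBMS. Most databases
// accept FOR UPDATE; SQL Server instead takes a table hint.
public class LockingSelect {
    static String lockingRead(String table, String dialect) {
        switch (dialect) {
            case "mssql":
                // UPDLOCK acquires an update lock held until the transaction ends
                return "SELECT * FROM " + table + " WITH (UPDLOCK)";
            case "derby":
            case "mysql":
            case "oracle":
            case "postgres":
                return "SELECT * FROM " + table + " FOR UPDATE";
            default:
                throw new IllegalArgumentException("unknown dialect: " + dialect);
        }
    }

    public static void main(String[] args) {
        System.out.println(lockingRead("TXNS", "mssql"));
    }
}
```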