[jira] [Created] (HIVE-22114) insert query for partitioned table failing when all buckets are empty, s3 storage location
Aswathy Chellammal Sreekumar created HIVE-22114:
---
Summary: insert query for partitioned table failing when all buckets are empty, s3 storage location
Key: HIVE-22114
URL: https://issues.apache.org/jira/browse/HIVE-22114
Project: Hive
Issue Type: Bug
Components: Hive
Affects Versions: 3.1.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Vineet Garg

The following insert query fails when all buckets are empty:

{noformat}
create table src_emptybucket_partitioned_1 (name string, age int, gpa decimal(3,2))
partitioned by (year int)
clustered by (age) sorted by (age) into 100 buckets
stored as orc;

insert into table src_emptybucket_partitioned_1 partition(year=2015)
select * from studenttab10k limit 0;
{noformat}

Error:

{noformat}
ERROR : Job Commit failed with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.FileNotFoundException: No such file or directory: s3a://warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015)'
org.apache.hadoop.hive.ql.metadata.HiveException: java.io.FileNotFoundException: No such file or directory: s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1403)
	at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:798)
	at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
	at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:590)
	at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:327)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2335)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2002)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1674)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1372)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1366)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226)
	at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: No such file or directory: s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
	at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2805)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2694)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2587)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2388)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2367)
	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2367)
	at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1880)
	at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1922)
	at org.apache.hadoop.hive.ql.exec.Utilities.getMmDirectoryCandidates(Utilities.java:4185)
	at org.apache.hadoop.hive.ql.exec.Utilities.handleMmTableFinalPath(Utilities.java:4386)
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1397)
	... 26 more
ERROR : FAILED: Execution Error, return
{noformat}
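The trace shows the job-close path calling FileSystem.listStatus on the partition directory, which on S3A does not exist when zero bucket files were written (object stores have no empty directories). A minimal sketch of the kind of guard that path would need, using a hypothetical helper name and the local filesystem as a stand-in for S3A:

```python
import os
import tempfile

def list_partition_files(partition_dir):
    """Treat a missing partition directory as an empty partition instead
    of failing. Hypothetical helper for illustration only; the real code
    path is Utilities.getMmDirectoryCandidates / FileSinkOperator.jobCloseOp."""
    if not os.path.isdir(partition_dir):
        return []  # nothing was ever written: empty partition
    return sorted(os.listdir(partition_dir))

# A partition directory that was never created, as when `limit 0`
# writes no buckets at all:
missing = os.path.join(tempfile.mkdtemp(), "year=2015")
print(list_partition_files(missing))  # -> [] rather than FileNotFoundError
```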
[jira] [Created] (HIVE-19292) More than one materialized view in DB affecting query rewrite
Aswathy Chellammal Sreekumar created HIVE-19292:
---
Summary: More than one materialized view in DB affecting query rewrite
Key: HIVE-19292
URL: https://issues.apache.org/jira/browse/HIVE-19292
Project: Hive
Issue Type: Bug
Components: Hive
Affects Versions: 3.0.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Jesus Camacho Rodriguez

When there is more than one materialized view, query rewrite fails to pick the materialized view that it picks otherwise:

{noformat}
1: jdbc:hive2://> show materialized views;
INFO : Compiling command(queryId=hive_20180424204708_e39107e4-ae65-4e3e-a73f-19e0519b515c): show materialized views
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from deserializer)], properties:null)
INFO : Completed compiling command(queryId=hive_20180424204708_e39107e4-ae65-4e3e-a73f-19e0519b515c); Time taken: 0.021 seconds
INFO : Executing command(queryId=hive_20180424204708_e39107e4-ae65-4e3e-a73f-19e0519b515c): show materialized views
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=hive_20180424204708_e39107e4-ae65-4e3e-a73f-19e0519b515c); Time taken: 0.174 seconds
INFO : OK
+--+
| tab_name |
+--+
| cmv_mat_view |
| mv_agg |
| source_table_001_mv |
+--+
3 rows selected (0.3 seconds)
1: jdbc:hive2://> drop materialized view cmv_mat_view;
INFO : Compiling command(queryId=hive_20180424204724_5d4f3aaf-ed22-4828-a1a8-d8fe9f6bd9af): drop materialized view cmv_mat_view
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20180424204724_5d4f3aaf-ed22-4828-a1a8-d8fe9f6bd9af); Time taken: 0.029 seconds
INFO : Executing command(queryId=hive_20180424204724_5d4f3aaf-ed22-4828-a1a8-d8fe9f6bd9af): drop materialized view cmv_mat_view
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=hive_20180424204724_5d4f3aaf-ed22-4828-a1a8-d8fe9f6bd9af); Time taken: 0.312 seconds
INFO : OK
No rows affected (0.369 seconds)
1: jdbc:hive2://> explain
. . . . . . . . . . . . . . . . . . . . . . .> select
. . . . . . . . . . . . . . . . . . . . . . .> SUM(A.DOWN_VOLUME) AS DOWNLOAD_VOLUME_BYTES,
. . . . . . . . . . . . . . . . . . . . . . .> FLOOR(A.MY_DATE to hour),A.MY_ID2,A.ENVIRONMENT
. . . . . . . . . . . . . . . . . . . . . . .> FROM source_table_001 AS A
. . . . . . . . . . . . . . . . . . . . . . .> group by A.MY_ID,A.MY_ID2,A.ENVIRONMENT,FLOOR(A.MY_DATE to hour);
INFO : Compiling command(queryId=hive_20180424204736_76958a4d-0f08-4e22-93c6-67e3a1493b92): explain select SUM(A.DOWN_VOLUME) AS DOWNLOAD_VOLUME_BYTES, FLOOR(A.MY_DATE to hour),A.MY_ID2,A.ENVIRONMENT FROM source_table_001 AS A group by A.MY_ID,A.MY_ID2,A.ENVIRONMENT,FLOOR(A.MY_DATE to hour)
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:Explain, type:string, comment:null)], properties:null)
INFO : Completed compiling command(queryId=hive_20180424204736_76958a4d-0f08-4e22-93c6-67e3a1493b92); Time taken: 0.374 seconds
INFO : Executing command(queryId=hive_20180424204736_76958a4d-0f08-4e22-93c6-67e3a1493b92): explain select SUM(A.DOWN_VOLUME) AS DOWNLOAD_VOLUME_BYTES, FLOOR(A.MY_DATE to hour),A.MY_ID2,A.ENVIRONMENT FROM source_table_001 AS A group by A.MY_ID,A.MY_ID2,A.ENVIRONMENT,FLOOR(A.MY_DATE to hour)
INFO : Starting task [Stage-3:EXPLAIN] in serial mode
INFO : Completed executing command(queryId=hive_20180424204736_76958a4d-0f08-4e22-93c6-67e3a1493b92); Time taken: 0.006 seconds
INFO : OK
++
| Explain |
++
| Plan optimized by CBO. |
| |
| Vertex dependency in root stage |
| Reducer 2 <- Map 1 (SIMPLE_EDGE) |
| |
| Stage-0 |
| Fetch Operator |
| limit:-1 |
| Stage-1 |
| Reducer 2 vectorized, llap |
| File Output Operator [FS_13] |
| Select Operator [SEL_12] (rows=1 width=143) |
| Output:["_col0","_col1","_col2","_col3"] |
| Group By Operator [GBY_11] (rows=1 width=151) |
| Output:["_col0","_col1","_col2","_col3","_col4"],aggregations:["sum(VALUE._col0)"],keys:KEY._col0,
{noformat}
[jira] [Created] (HIVE-19001) ADD CONSTRAINT support for DEFAULT, NOT NULL constraints
Aswathy Chellammal Sreekumar created HIVE-19001:
---
Summary: ADD CONSTRAINT support for DEFAULT, NOT NULL constraints
Key: HIVE-19001
URL: https://issues.apache.org/jira/browse/HIVE-19001
Project: Hive
Issue Type: Improvement
Components: Hive
Affects Versions: 3.0.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Vineet Garg

Add ALTER TABLE ADD CONSTRAINT support for DEFAULT and NOT NULL constraints. Currently we are able to add them only via a CREATE TABLE statement or ALTER TABLE CHANGE.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Created] (HIVE-18950) DESCRIBE EXTENDED missing details of default constraint
Aswathy Chellammal Sreekumar created HIVE-18950:
---
Summary: DESCRIBE EXTENDED missing details of default constraint
Key: HIVE-18950
URL: https://issues.apache.org/jira/browse/HIVE-18950
Project: Hive
Issue Type: Bug
Components: Hive
Affects Versions: 3.0.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Vineet Garg
Fix For: 3.0.0

The DESCRIBE EXTENDED output is missing the default constraint details:

{noformat}
0: jdbc:hive2://ctr-e138-1518143905142-95188-> create table t1(j int constraint c1 default 4);
INFO : Compiling command(queryId=hive_20180313202851_de315f0e-4064-467d-9dcc-f8dd7f737318): create table t1(j int constraint c1 default 4)
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20180313202851_de315f0e-4064-467d-9dcc-f8dd7f737318); Time taken: 0.015 seconds
INFO : Executing command(queryId=hive_20180313202851_de315f0e-4064-467d-9dcc-f8dd7f737318): create table t1(j int constraint c1 default 4)
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=hive_20180313202851_de315f0e-4064-467d-9dcc-f8dd7f737318); Time taken: 0.048 seconds
INFO : OK
No rows affected (0.087 seconds)
{noformat}

{noformat}
0: jdbc:hive2://ctr-e138-1518143905142-95188-> DESCRIBE EXTENDED t1;
INFO : Compiling command(queryId=hive_20180313215805_0596cea8-918c-46f7-bd9a-8611972eb3cc): DESCRIBE EXTENDED t1
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:col_name, type:string, comment:from deserializer), FieldSchema(name:data_type, type:string, comment:from deserializer), FieldSchema(name:comment, type:string, comment:from deserializer)], properties:null)
INFO : Completed compiling command(queryId=hive_20180313215805_0596cea8-918c-46f7-bd9a-8611972eb3cc); Time taken: 0.029 seconds
INFO : Executing command(queryId=hive_20180313215805_0596cea8-918c-46f7-bd9a-8611972eb3cc): DESCRIBE EXTENDED t1
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=hive_20180313215805_0596cea8-918c-46f7-bd9a-8611972eb3cc); Time taken: 0.03 seconds
INFO : OK
+-++--+
| col_name | data_type | comment |
+-++--+
| j | int | |
| | NULL | NULL |
| Detailed Table Information | Table(tableName:t1, dbName:default, owner:hrt_qa, createTime:1520972931, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:j, type:int, comment:null)], location:hdfs://mycluster/apps/hive/warehouse/t1, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{totalSize=0, numRows=0, rawDataSize=0, transactional_properties=insert_only, COLUMN_STATS_ACCURATE={\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"j\":\"true\"}}, numFiles=0, transient_lastDdlTime=1520972931, transactional=true}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, rewriteEnabled:false) | |
+-++--+
3 rows selected (0.099 seconds)
{noformat}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Created] (HIVE-18687) Triggers failing to result in event in HA clusters
Aswathy Chellammal Sreekumar created HIVE-18687:
---
Summary: Triggers failing to result in event in HA clusters
Key: HIVE-18687
URL: https://issues.apache.org/jira/browse/HIVE-18687
Project: Hive
Issue Type: Bug
Components: HiveServer2
Affects Versions: 3.0.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Prasanth Jayachandran
Fix For: 3.0.0

Triggers in the active plan fail to get picked up in some cases in an HA cluster. In an HA environment, when the query that activates the plan and the test query (which we expect the trigger to kill) end up on different HiveServer2 instances in the same cluster, the trigger fails to kick in and kill the query.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Created] (HIVE-18656) Trigger with counter TOTAL_TASKS fails to result in an event even when condition is met
Aswathy Chellammal Sreekumar created HIVE-18656:
---
Summary: Trigger with counter TOTAL_TASKS fails to result in an event even when condition is met
Key: HIVE-18656
URL: https://issues.apache.org/jira/browse/HIVE-18656
Project: Hive
Issue Type: Bug
Components: HiveServer2
Affects Versions: 3.0.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Prasanth Jayachandran

A trigger involving the counter TOTAL_TASKS fails to fire the event in its definition even when the trigger condition is met.

Trigger definition:

{noformat}
++
| line |
++
| plan_1[status=ACTIVE,parallelism=null,defaultPool=default] |
| + default[allocFraction=1.0,schedulingPolicy=null,parallelism=4] |
| | mapped for default |
| + |
| | trigger limit_task_per_vertex_trigger: if (TOTAL_TASKS > 5) { KILL } |
++
{noformat}

The query finishes fine even though one vertex has 29 tasks:

{noformat}
INFO : Query ID = hive_20180208193705_73642730-2c6b-4d4d-a608-a849b147bc37
INFO : Total jobs = 1
INFO : Launching Job 1 out of 1
INFO : Starting task [Stage-1:MAPRED] in serial mode
INFO : Subscribed to counters: [TOTAL_TASKS] for queryId: hive_20180208193705_73642730-2c6b-4d4d-a608-a849b147bc37
INFO : Tez session hasn't been created yet. Opening session
INFO : Dag name: with ssales as (select c_last_name...ssales) (Stage-1)
INFO : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
INFO : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
INFO : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
INFO : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
INFO : Status: Running (Executing on YARN cluster with App id application_151782410_0199)
--
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
--
Map 6 .. container SUCCEEDED 1 100 0 0
Map 8 .. container SUCCEEDED 1 100 0 0
Map 7 .. container SUCCEEDED 1 100 0 0
Map 9 .. container SUCCEEDED 1 100 0 0
Map 10 . container SUCCEEDED 3 300 0 0
Map 11 . container SUCCEEDED 1 100 0 0
Map 12 . container SUCCEEDED 1 100 0 0
Map 13 . container SUCCEEDED 3 300 0 0
Map 1 .. container SUCCEEDED 9 900 0 0
Reducer 2 .. container SUCCEEDED 2 200 0 0
Reducer 4 .. container SUCCEEDED 29 2900 0 0
Reducer 5 .. container SUCCEEDED 1 100 0 0
Reducer 3 container SUCCEEDED 0 000 0 0
--
VERTICES: 12/13 [==>>] 100% ELAPSED TIME: 21.15 s
--
INFO : Status: DAG finished successfully in 21.07 seconds
{noformat}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
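For reference, the trigger predicate described in this report is trivially satisfied by the run shown: a hedged sketch of the condition check, re-implemented here for illustration only (this is not Hive's Workload Manager code):

```python
def trigger_fires(counters, counter_name, limit):
    """Evaluate an `if (COUNTER > limit) { KILL }` style condition.
    Simplified stand-in for the real trigger evaluation."""
    return counters.get(counter_name, 0) > limit

# Reducer 4 alone completed 29 tasks, so the DAG-wide TOTAL_TASKS
# counter is at least 29 -- well above the configured limit of 5.
print(trigger_fires({"TOTAL_TASKS": 29}, "TOTAL_TASKS", 5))  # -> True, yet no KILL occurred
```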
[jira] [Created] (HIVE-18638) Triggers for multi-pool move, failing to initiate the move event
Aswathy Chellammal Sreekumar created HIVE-18638:
---
Summary: Triggers for multi-pool move, failing to initiate the move event
Key: HIVE-18638
URL: https://issues.apache.org/jira/browse/HIVE-18638
Project: Hive
Issue Type: Bug
Components: HiveServer2
Affects Versions: 3.0.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Prasanth Jayachandran

A resource plan with multiple pools and a trigger set to move jobs across those pools fails to do so.

Resource plan:

{noformat}
1: jdbc:hive2://ctr-e137-1514896590304-51538-> show resource plan plan_2;
INFO : Compiling command(queryId=hive_20180202220823_2fb8bca7-5b7a-48cf-8ff9-8d5f3548d334): show resource plan plan_2
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:line, type:string, comment:from deserializer)], properties:null)
INFO : Completed compiling command(queryId=hive_20180202220823_2fb8bca7-5b7a-48cf-8ff9-8d5f3548d334); Time taken: 0.008 seconds
INFO : Executing command(queryId=hive_20180202220823_2fb8bca7-5b7a-48cf-8ff9-8d5f3548d334): show resource plan plan_2
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=hive_20180202220823_2fb8bca7-5b7a-48cf-8ff9-8d5f3548d334); Time taken: 0.196 seconds
INFO : OK
++
| line |
++
| plan_2[status=ACTIVE,parallelism=null,defaultPool=pool2] |
| + pool2[allocFraction=0.5,schedulingPolicy=default,parallelism=3] |
| | trigger too_large_write_triger: if (HDFS_BYTES_WRITTEN > 10kb) { MOVE TO pool1 } |
| | mapped for default |
| + pool1[allocFraction=0.3,schedulingPolicy=default,parallelism=5] |
| | trigger slow_pool_trigger: if (ELAPSED_TIME > 3) { MOVE TO pool3 } |
| + pool3[allocFraction=0.2,schedulingPolicy=default,parallelism=3] |
| + default[allocFraction=0.0,schedulingPolicy=null,parallelism=4] |
++
8 rows selected (0.25 seconds)
{noformat}

Workload Manager Events Summary from the query run:

{noformat}
INFO : {
  "queryId" : "hive_20180202213425_9633d7af-4242-4e95-a391-2cd3823e3eac",
  "queryStartTime" : 1517607265395,
  "queryEndTime" : 1517607321648,
  "queryCompleted" : true,
  "queryWmEvents" : [ {
    "wmTezSessionInfo" : { "sessionId" : "21f8a4ab-511e-4828-a2dd-1d5f2932c492", "poolName" : "pool2", "clusterPercent" : 50.0 },
    "eventStartTimestamp" : 1517607269660,
    "eventEndTimestamp" : 1517607269661,
    "eventType" : "GET",
    "elapsedTime" : 1
  }, {
    "wmTezSessionInfo" : { "sessionId" : "21f8a4ab-511e-4828-a2dd-1d5f2932c492", "poolName" : null, "clusterPercent" : 0.0 },
    "eventStartTimestamp" : 1517607321663,
    "eventEndTimestamp" : 1517607321663,
    "eventType" : "RETURN",
    "elapsedTime" : 0
  } ],
  "appliedTriggers" : [ {
    "name" : "too_large_write_triger",
    "expression" : { "counterLimit" : { "limit" : 10240, "name" : "HDFS_BYTES_WRITTEN" }, "predicate" : "GREATER_THAN" },
    "action" : { "type" : "MOVE_TO_POOL", "poolName" : "pool1" },
    "violationMsg" : null
  } ],
  "subscribedCounters" : [ "HDFS_BYTES_WRITTEN" ],
  "currentCounters" : { "HDFS_BYTES_WRITTEN" : 33306829 },
  "elapsedTime" : 56284
}
{noformat}

From the Workload Manager Event Summary it can be seen that the 'MOVE' event didn't happen, even though the limit (10240) for the counter HDFS_BYTES_WRITTEN was exceeded.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
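Re-checking the applied trigger against currentCounters from the event summary confirms the condition was met. The data below is copied from the report; the evaluation function is a simplified stand-in for Workload Manager's own logic, not Hive code:

```python
# Trigger and counters as reported in the Workload Manager event summary.
summary = {
    "appliedTriggers": [{
        "name": "too_large_write_triger",
        "expression": {
            "counterLimit": {"limit": 10240, "name": "HDFS_BYTES_WRITTEN"},
            "predicate": "GREATER_THAN",
        },
        "action": {"type": "MOVE_TO_POOL", "poolName": "pool1"},
    }],
    "currentCounters": {"HDFS_BYTES_WRITTEN": 33306829},
}

def violated(trigger, counters):
    """GREATER_THAN predicate over the trigger's counter limit."""
    lim = trigger["expression"]["counterLimit"]
    return counters.get(lim["name"], 0) > lim["limit"]

trigger = summary["appliedTriggers"][0]
print(violated(trigger, summary["currentCounters"]))  # -> True: the MOVE to pool1 was due
```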
[jira] [Created] (HIVE-18637) WorkloadManagent Event Summary leaving subscribedCounters and currentCounters fields empty
Aswathy Chellammal Sreekumar created HIVE-18637:
---
Summary: WorkloadManagent Event Summary leaving subscribedCounters and currentCounters fields empty
Key: HIVE-18637
URL: https://issues.apache.org/jira/browse/HIVE-18637
Project: Hive
Issue Type: Bug
Components: HiveServer2
Affects Versions: 3.0.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Harish Jaiprakash

The subscribedCounters and currentCounters values are empty when a trigger results in a MOVE event.

WorkloadManager Events Summary:

{noformat}
INFO : {
  "queryId" : "hive_20180205214449_d2955891-e3b2-4ac3-bca9-5d2a53feb8c0",
  "queryStartTime" : 1517867089060,
  "queryEndTime" : 1517867144341,
  "queryCompleted" : true,
  "queryWmEvents" : [ {
    "wmTezSessionInfo" : { "sessionId" : "157866e5-ed1c-4abd-9846-db76b91c1124", "poolName" : "pool2", "clusterPercent" : 30.0 },
    "eventStartTimestamp" : 1517867094797,
    "eventEndTimestamp" : 1517867094798,
    "eventType" : "GET",
    "elapsedTime" : 1
  }, {
    "wmTezSessionInfo" : { "sessionId" : "157866e5-ed1c-4abd-9846-db76b91c1124", "poolName" : "pool1", "clusterPercent" : 70.0 },
    "eventStartTimestamp" : 1517867139886,
    "eventEndTimestamp" : 1517867139887,
    "eventType" : "MOVE",
    "elapsedTime" : 1
  }, {
    "wmTezSessionInfo" : { "sessionId" : "157866e5-ed1c-4abd-9846-db76b91c1124", "poolName" : null, "clusterPercent" : 0.0 },
    "eventStartTimestamp" : 1517867144360,
    "eventEndTimestamp" : 1517867144360,
    "eventType" : "RETURN",
    "elapsedTime" : 0
  } ],
  "appliedTriggers" : [ {
    "name" : "too_large_write_triger",
    "expression" : { "counterLimit" : { "limit" : 10240, "name" : "HDFS_BYTES_WRITTEN" }, "predicate" : "GREATER_THAN" },
    "action" : { "type" : "MOVE_TO_POOL", "poolName" : "pool1" },
    "violationMsg" : "Trigger { name: too_large_write_triger, expression: HDFS_BYTES_WRITTEN > 10240, action: MOVE TO pool1 } violated. Current value: 5096345"
  } ],
  "subscribedCounters" : [ ],
  "currentCounters" : { },
  "elapsedTime" : 55304
}
{noformat}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Created] (HIVE-16181) Make logic for hdfs directory location extraction more generic, in webhcat test driver
Aswathy Chellammal Sreekumar created HIVE-16181:
---
Summary: Make logic for hdfs directory location extraction more generic, in webhcat test driver
Key: HIVE-16181
URL: https://issues.apache.org/jira/browse/HIVE-16181
Project: Hive
Issue Type: Test
Components: WebHCat
Reporter: Aswathy Chellammal Sreekumar
Priority: Minor

Patch to make the regular expression for directory location lookup in setLocationPermGroup of TestDriverCurl more generic, so it accommodates patterns without a port number, such as hdfs://mycluster//hive/warehouse/.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
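The requested generalization amounts to making the `:port` part of the authority optional. A sketch with a hypothetical pattern (the actual Perl regex lives in TestDriverCurl):

```python
import re

# ':port' is optional, so authorities with and without a port both match;
# group 1 captures the path portion of the location.
LOCATION_RE = re.compile(r"hdfs://[^/:]+(?::\d+)?(/.*)")

for loc in ("hdfs://namenode.example.com:8020/apps/hive/warehouse/t1",  # with port
            "hdfs://mycluster//hive/warehouse/"):                        # nameservice, no port
    match = LOCATION_RE.match(loc)
    print(match.group(1))
```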
[jira] [Created] (HIVE-15919) Row count mismatch for count * query
Aswathy Chellammal Sreekumar created HIVE-15919:
---
Summary: Row count mismatch for count * query
Key: HIVE-15919
URL: https://issues.apache.org/jira/browse/HIVE-15919
Project: Hive
Issue Type: Bug
Components: HiveServer2
Reporter: Aswathy Chellammal Sreekumar
Attachments: table_14.q, table_6.q

The following query returns different output when run against Hive and Postgres.

Query:

{noformat}
SELECT COUNT (*) FROM
(SELECT LAG(COALESCE(t2.int_col_14, t1.int_col_80),22) OVER (ORDER BY t1.tinyint_col_52 DESC) AS int_col
 FROM table_6 t1
 INNER JOIN table_14 t2 ON ((t2.decimal0101_col_55) = (t1.decimal0101_col_9))) AS FOO;
{noformat}

From Hive: 0
From Postgres: 66903279

Attaching ddl and data files for the tables.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
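Why Postgres's answer is the plausible one: LAG is a window function, so it produces exactly one output row per input row, and COUNT(*) over the derived table must equal the join's cardinality regardless of the LAG values. A small model of that semantics (toy data, not the attached tables):

```python
def lag(values, offset, default=None):
    """SQL LAG(x, offset) over an ordered window: the value from
    `offset` rows earlier, or `default` when out of range."""
    return [values[i - offset] if i >= offset else default
            for i in range(len(values))]

# LAG keeps one output row per input row, so the outer COUNT(*) must
# match the join's row count -- it cannot legitimately collapse to 0.
rows = list(range(100))        # stand-in for the joined result
print(len(lag(rows, 22)))      # -> 100
```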
[jira] [Created] (HIVE-15904) select query throwing Null Pointer Exception from org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan
Aswathy Chellammal Sreekumar created HIVE-15904:
---
Summary: select query throwing Null Pointer Exception from org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan
Key: HIVE-15904
URL: https://issues.apache.org/jira/browse/HIVE-15904
Project: Hive
Issue Type: Bug
Components: HiveServer2
Reporter: Aswathy Chellammal Sreekumar
Assignee: Jason Dere

The following query fails with a NullPointerException from org.apache.hadoop.hive.ql.optimizer.DynamicPartitionPruningOptimization.generateSemiJoinOperatorPlan.

Attaching create table statements for table_1 and table_18.

Query:

{noformat}
SELECT COALESCE(498, LEAD(COALESCE(-973, -684, 515)) OVER (PARTITION BY (t2.int_col_10 + t1.smallint_col_50) ORDER BY (t2.int_col_10 + t1.smallint_col_50), FLOOR(t1.double_col_16) DESC), 524) AS int_col,
 (t2.int_col_10) + (t1.smallint_col_50) AS int_col_1,
 FLOOR(t1.double_col_16) AS float_col,
 COALESCE(SUM(COALESCE(62, -380, -435)) OVER (PARTITION BY (t2.int_col_10 + t1.smallint_col_50) ORDER BY (t2.int_col_10 + t1.smallint_col_50) DESC, FLOOR(t1.double_col_16) DESC ROWS BETWEEN UNBOUNDED PRECEDING AND 48 FOLLOWING), 704) AS int_col_2
FROM table_1 t1
INNER JOIN table_18 t2 ON (((t2.tinyint_col_15) = (t1.bigint_col_7)) AND ((t2.decimal2709_col_9) = (t1.decimal2016_col_26))) AND ((t2.tinyint_col_20) = (t1.tinyint_col_3))
WHERE (t2.smallint_col_19) IN (SELECT COALESCE(-92, -994) AS int_col FROM table_1 tt1 INNER JOIN table_18 tt2 ON (tt2.decimal1911_col_16) = (tt1.decimal2612_col_77) WHERE (t1.timestamp_col_9) = (tt2.timestamp_col_18));
{noformat}

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[jira] [Created] (HIVE-15902) Select query involving date throwing Hive 2 Internal error: unsupported conversion from type: date
Aswathy Chellammal Sreekumar created HIVE-15902:
---
Summary: Select query involving date throwing Hive 2 Internal error: unsupported conversion from type: date
Key: HIVE-15902
URL: https://issues.apache.org/jira/browse/HIVE-15902
Project: Hive
Issue Type: Bug
Components: HiveServer2
Affects Versions: 2.1.0
Reporter: Aswathy Chellammal Sreekumar
Assignee: Jason Dere

The following query throws "Hive 2 Internal error: unsupported conversion from type: date".

Query:

{noformat}
create table table_one (ts timestamp, dt date) stored as orc;
insert into table_one values ('2034-08-04 17:42:59','2038-07-01');
insert into table_one values ('2031-02-07 13:02:38','2072-10-19');

create table table_two (ts timestamp, dt date) stored as orc;
insert into table_two values ('2069-04-01 09:05:54','1990-10-12');
insert into table_two values ('2031-02-07 13:02:38','2072-10-19');

create table table_three as select count(*) from table_one group by ts,dt having dt in (select dt from table_two);
{noformat}

Error:

{noformat}
Error while running task ( failure ) : attempt_1486991777989_0184_18_02_00_0:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:211)
	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
	at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1833)
	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
	at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
	at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
	at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:95)
	at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:70)
	at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:420)
	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:185)
	... 15 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
	at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:883)
	at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:86)
	... 18 more
Caused by: java.lang.RuntimeException: Hive 2 Internal error: unsupported conversion from type: date
	at org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.getLong(PrimitiveObjectInspectorUtils.java:770)
	at org.apache.hadoop.hive.ql.exec.vector.expressions.gen.FilterLongColumnBetweenDynamicValue.evaluate(FilterLongColumnBetweenDynamicValue.java:82)
	at org.apache.hadoop.hive.ql.exec.vector.expressions.FilterExprAndExpr.evaluate(FilterExprAndExpr.java:39)
	at org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:112)
	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:883)
	at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
	at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:783)
	... 19 more
{noformat}

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[jira] [Created] (HIVE-15900) Tez job progress included in stdout instead of stderr
Aswathy Chellammal Sreekumar created HIVE-15900:
---
Summary: Tez job progress included in stdout instead of stderr
Key: HIVE-15900
URL: https://issues.apache.org/jira/browse/HIVE-15900
Project: Hive
Issue Type: Bug
Components: HiveServer2
Affects Versions: 2.1.0
Reporter: Aswathy Chellammal Sreekumar

Tez job progress messages are being written to stdout instead of stderr. Attaching the output file, with the Tez job status printed, for the command below:

/usr/hdp/current/hive-server2-hive2/bin/beeline -n -p -u "stdout

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
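The expected behavior is the usual CLI stream separation: progress chatter on stderr, so stdout carries only query results and stays safe to redirect or parse. A minimal sketch of that separation (illustrative only, not beeline's actual implementation):

```python
import sys

def report_progress(message):
    """Status and progress belong on stderr."""
    print(message, file=sys.stderr)

def emit_result(row):
    """Query results belong on stdout."""
    print(row)

report_progress("Map 1: 1/1  Reducer 2: 0/2")  # goes to stderr
emit_result("row-1")                           # goes to stdout
```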
[jira] [Created] (HIVE-15004) Query for parquet tables failing with java.lang.IllegalArgumentException: FilterPredicate column: f's declared type (java.lang.Double) does not match the schema found in
Aswathy Chellammal Sreekumar created HIVE-15004: --- Summary: Query for parquet tables failing with java.lang.IllegalArgumentException: FilterPredicate column: f's declared type (java.lang.Double) does not match the schema found in file metadata. Key: HIVE-15004 URL: https://issues.apache.org/jira/browse/HIVE-15004 Project: Hive Issue Type: Bug Components: Hive, HiveServer2 Affects Versions: 1.2.1 Reporter: Aswathy Chellammal Sreekumar Assignee: Jason Dere Fix For: 1.2.1 Queries involving the float data type fail when run against parquet tables:
{noformat}
hive> desc extended all100k;
OK
t	tinyint
si	smallint
i	int
b	bigint
f	float
d	double
s	string
dc	decimal(38,18)
bo	boolean
v	varchar(25)
c	char(25)
ts	timestamp
dt	date

Detailed Table Information	Table(tableName:all100k, dbName:default, owner:hrt_qa, createTime:1476765150, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:t, type:tinyint, comment:null), FieldSchema(name:si, type:smallint, comment:null), FieldSchema(name:i, type:int, comment:null), FieldSchema(name:b, type:bigint, comment:null), FieldSchema(name:f, type:float, comment:null), FieldSchema(name:d, type:double, comment:null), FieldSchema(name:s, type:string, comment:null), FieldSchema(name:dc, type:decimal(38,18), comment:null), FieldSchema(name:bo, type:boolean, comment:null), FieldSchema(name:v, type:varchar(25), comment:null), FieldSchema(name:c, type:char(25), comment:null), FieldSchema(name:ts, type:timestamp, comment:null), FieldSchema(name:dt, type:date, comment:null)], location:hdfs://ctr-e45-1475874954070-9012-01-08.hwx.site:8020/apps/hive/warehouse/all100k, inputFormat:org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{numFiles=1, transient_lastDdlTime=1476765184, COLUMN_STATS_ACCURATE={"COLUMN_STATS":{"t":"true","si":"true","i":"true","b":"true","f":"true","d":"true","s":"true","dc":"true","bo":"true","v":"true","c":"true","ts":"true"},"BASIC_STATS":"true"}, totalSize=6564143, numRows=10, rawDataSize=130}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE)
Time taken: 0.54 seconds, Fetched: 15 row(s)
hive> select t from all100k
    > where t<>0 and s<>0 and b<>0 and (f<>0 or d<>0);
OK
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Failed with exception java.io.IOException:java.lang.IllegalArgumentException: FilterPredicate column: f's declared type (java.lang.Double) does not match the schema found in file metadata. Column f is of type: FLOAT Valid types for this column are: [class java.lang.Float]
Time taken: 0.919 seconds
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
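Two hedged workaround sketches, not fixes taken from this issue: disabling predicate pushdown should keep Hive from handing a FilterPredicate to the Parquet reader at all, and casting the float column makes both sides of the comparison double. `hive.optimize.ppd` and `hive.optimize.index.filter` are standard Hive settings, but their effect on this particular failure is untested here:

```sql
-- Hypothetical workaround 1: stop pushing filters into the Parquet
-- reader, so no FilterPredicate is built against column f.
set hive.optimize.index.filter=false;
set hive.optimize.ppd=false;

-- Hypothetical workaround 2: cast f explicitly so the predicate is
-- planned against double on both sides instead of a Double predicate
-- over a FLOAT Parquet column.
select t from all100k
where t <> 0 and s <> 0 and b <> 0
  and (cast(f as double) <> 0 or d <> 0);
```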
[jira] [Created] (HIVE-12697) Remove deprecated post option from webhcat test files
Aswathy Chellammal Sreekumar created HIVE-12697: --- Summary: Remove deprecated post option from webhcat test files Key: HIVE-12697 URL: https://issues.apache.org/jira/browse/HIVE-12697 Project: Hive Issue Type: Test Components: WebHCat Affects Versions: 2.0.0 Reporter: Aswathy Chellammal Sreekumar Assignee: Aswathy Chellammal Sreekumar The tests still use the deprecated POST option user.name. It needs to be removed and passed in the query string instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-10910) Alter table drop partition queries in encrypted zone failing to remove data from HDFS
Aswathy Chellammal Sreekumar created HIVE-10910: --- Summary: Alter table drop partition queries in encrypted zone failing to remove data from HDFS Key: HIVE-10910 URL: https://issues.apache.org/jira/browse/HIVE-10910 Project: Hive Issue Type: Bug Components: Hive Affects Versions: 1.2.0 Reporter: Aswathy Chellammal Sreekumar Assignee: Eugene Koifman An alter table query that drops a partition removes the partition metadata but fails to remove the data from HDFS:
{noformat}
hive> create table table_1(name string, age int, gpa double) partitioned by (b string) stored as textfile;
OK
Time taken: 0.732 seconds
hive> alter table table_1 add partition (b='2010-10-10');
OK
Time taken: 0.496 seconds
hive> show partitions table_1;
OK
b=2010-10-10
Time taken: 0.781 seconds, Fetched: 1 row(s)
hive> alter table table_1 drop partition (b='2010-10-10');
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Got exception: java.io.IOException Failed to move to trash: hdfs://ip-address:8020/warehouse-dir/table_1/b=2010-10-10
hive> show partitions table_1;
OK
Time taken: 0.622 seconds
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
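Since the reported failure is the move to trash (files inside an HDFS encryption zone cannot be moved to a trash directory outside the zone), one hedged mitigation is to skip the trash entirely. Both forms below are assumptions: they require a Hive version that supports them, and neither is confirmed by this report:

```sql
-- Hypothetical mitigation: drop the partition with PURGE so the data
-- is deleted directly instead of being moved to the trash.
alter table table_1 drop partition (b='2010-10-10') purge;

-- Or mark the table so all drops bypass the trash.
alter table table_1 set tblproperties ('auto.purge'='true');
```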
[jira] [Created] (HIVE-10828) Insert...values for fewer number of columns fail
Aswathy Chellammal Sreekumar created HIVE-10828: --- Summary: Insert...values for fewer number of columns fail Key: HIVE-10828 URL: https://issues.apache.org/jira/browse/HIVE-10828 Project: Hive Issue Type: Bug Components: Hive Affects Versions: 1.2.0 Reporter: Aswathy Chellammal Sreekumar Assignee: Eugene Koifman Schema-on-insert queries that list fewer columns than the target table fail with the error message below:
{noformat}
ERROR ql.Driver (SessionState.java:printError(957)) - FAILED: NullPointerException null
java.lang.NullPointerException
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genReduceSinkPlan(SemanticAnalyzer.java:7277)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBucketingSortingDest(SemanticAnalyzer.java:6120)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFileSinkPlan(SemanticAnalyzer.java:6291)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8992)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8883)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9728)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9621)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10094)
	at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:324)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10105)
	at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:208)
	at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1122)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1170)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
	at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:409)
	at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:425)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:714)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
{noformat}
Steps to reproduce (the insert originally referenced {{table_1}}; corrected to match the created table name):
{noformat}
drop table if exists table1;
create table table1 (a int, b string, c string) partitioned by (bkt int) clustered by (a) into 2 buckets stored as orc tblproperties ('transactional'='true');
insert into table1 partition (bkt) (b, a, bkt) values ('part one', 1, 1), ('part one', 2, 1), ('part two', 3, 2), ('part three', 4, 3);
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
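The reproduction names only three of the four target columns (b, a, bkt). As a hedged interim sketch, assuming the NPE is specific to the shortened column list on a bucketed transactional table and not to schema-on-insert in general, naming every column and supplying NULL for the omitted one may avoid the failure:

```sql
-- Hypothetical workaround, untested: list all columns of the bucketed
-- target and pass NULL explicitly for the column that was omitted (c).
insert into table1 partition (bkt) (a, b, c, bkt)
values (1, 'part one', null, 1),
       (2, 'part one', null, 1),
       (3, 'part two', null, 2),
       (4, 'part three', null, 3);
```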
[jira] [Created] (HIVE-10776) Schema on insert for acid tables throwing Nullpointer exception
Aswathy Chellammal Sreekumar created HIVE-10776: --- Summary: Schema on insert for acid tables throwing Nullpointer exception Key: HIVE-10776 URL: https://issues.apache.org/jira/browse/HIVE-10776 Project: Hive Issue Type: Bug Components: Hive Affects Versions: 1.2.0 Environment: Linux, Windows Reporter: Aswathy Chellammal Sreekumar Assignee: Eugene Koifman Fix For: 1.2.0 Hive schema-on-insert queries that use select * fail with the exception below:
{noformat}
2015-05-15 19:29:01,278 ERROR [main]: ql.Driver (SessionState.java:printError(957)) - FAILED: NullPointerException null
java.lang.NullPointerException
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genReduceSinkPlan(SemanticAnalyzer.java:7257)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBucketingSortingDest(SemanticAnalyzer.java:6100)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFileSinkPlan(SemanticAnalyzer.java:6271)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8972)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8863)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9708)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9601)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10037)
	at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:323)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10048)
	at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:207)
	at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1122)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1170)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
	at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:409)
	at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:425)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:714)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
{noformat}
Steps to reproduce:
{noformat}
set hive.support.concurrency=true;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
set hive.enforce.bucketing=true;
drop table if exists studenttab10k;
create table studenttab10k (age int, name varchar(50), gpa decimal(3,2));
insert into studenttab10k values (1,'foo', 1.1), (2,'bar', 2.3), (3,'baz', 3.1);
drop table if exists student_acid;
create table student_acid (age int, name varchar(50), gpa decimal(3,2), grade int) clustered by (age) into 2 buckets stored as orc tblproperties ('transactional'='true');
insert into student_acid(name,age,gpa) select * from studenttab10k;
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
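A possible interim workaround, assuming (unverified) that the NPE is triggered by expanding select * against the reordered insert column list: project the source columns explicitly so the select list lines up with the named target columns.

```sql
-- Hypothetical workaround: replace "select *" with an explicit
-- projection matching the insert column list (name, age, gpa).
insert into student_acid (name, age, gpa)
select name, age, gpa from studenttab10k;
```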
[jira] [Created] (HIVE-10409) Webhcat tests need to be updated, to accommodate HADOOP-10193
Aswathy Chellammal Sreekumar created HIVE-10409: --- Summary: Webhcat tests need to be updated, to accommodate HADOOP-10193 Key: HIVE-10409 URL: https://issues.apache.org/jira/browse/HIVE-10409 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 1.2.0 Reporter: Aswathy Chellammal Sreekumar Assignee: Aswathy Chellammal Sreekumar Priority: Minor Fix For: 1.2.0 Webhcat tests need to be updated to accommodate the URL change brought in by HADOOP-10193: add ?user.name=user-name to the templeton calls. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9272) Tests for utf-8 support
[ https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284644#comment-14284644 ] Aswathy Chellammal Sreekumar commented on HIVE-9272: Thanks [~ekoifman] and [~sushanth] for review comments and verification. Tests for utf-8 support --- Key: HIVE-9272 URL: https://issues.apache.org/jira/browse/HIVE-9272 Project: Hive Issue Type: Test Components: Tests, WebHCat Affects Versions: 0.14.0 Reporter: Aswathy Chellammal Sreekumar Assignee: Aswathy Chellammal Sreekumar Priority: Minor Fix For: 0.15.0 Attachments: HIVE-9272.1.patch, HIVE-9272.2.patch, HIVE-9272.3.patch, HIVE-9272.4.patch, HIVE-9272.patch Including some test cases for utf8 support in webhcat. The first four tests invoke hive, pig, mapred and streaming apis for testing the utf8 support for data processed, file names and job name. The last test case tests the filtering of job name with utf8 character -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9272) Tests for utf-8 support
[ https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aswathy Chellammal Sreekumar updated HIVE-9272: --- Attachment: HIVE-9272.3.patch Attaching the patch with encoded characters replaced with original characters in file name. Please review the same. Tests for utf-8 support --- Key: HIVE-9272 URL: https://issues.apache.org/jira/browse/HIVE-9272 Project: Hive Issue Type: Test Components: Tests, WebHCat Affects Versions: 0.14.0 Reporter: Aswathy Chellammal Sreekumar Assignee: Aswathy Chellammal Sreekumar Priority: Minor Attachments: HIVE-9272.1.patch, HIVE-9272.2.patch, HIVE-9272.3.patch, HIVE-9272.patch Including some test cases for utf8 support in webhcat. The first four tests invoke hive, pig, mapred and streaming apis for testing the utf8 support for data processed, file names and job name. The last test case tests the filtering of job name with utf8 character NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9272) Tests for utf-8 support
[ https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aswathy Chellammal Sreekumar updated HIVE-9272: --- Attachment: HIVE-9272.2.patch Tests for utf-8 support --- Key: HIVE-9272 URL: https://issues.apache.org/jira/browse/HIVE-9272 Project: Hive Issue Type: Test Components: Tests, WebHCat Reporter: Aswathy Chellammal Sreekumar Assignee: Aswathy Chellammal Sreekumar Priority: Minor Attachments: HIVE-9272.1.patch, HIVE-9272.2.patch, HIVE-9272.patch Including some test cases for utf8 support in webhcat. The first four tests invoke hive, pig, mapred and streaming apis for testing the utf8 support for data processed, file names and job name. The last test case tests the filtering of job name with utf8 character -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9272) Tests for utf-8 support
[ https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277353#comment-14277353 ] Aswathy Chellammal Sreekumar commented on HIVE-9272: Thanks for the comments Eugene and Sushanth. I am uploading a new patch with a comment added in deploy_e2e_artifacts.sh mentioning the purpose of the new jar file moved to HDFS. Please review. Tests for utf-8 support --- Key: HIVE-9272 URL: https://issues.apache.org/jira/browse/HIVE-9272 Project: Hive Issue Type: Test Components: Tests, WebHCat Reporter: Aswathy Chellammal Sreekumar Assignee: Aswathy Chellammal Sreekumar Priority: Minor Attachments: HIVE-9272.1.patch, HIVE-9272.2.patch, HIVE-9272.patch Including some test cases for utf8 support in webhcat. The first four tests invoke hive, pig, mapred and streaming apis for testing the utf8 support for data processed, file names and job name. The last test case tests the filtering of job name with utf8 character -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9272) Tests for utf-8 support
[ https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268764#comment-14268764 ] Aswathy Chellammal Sreekumar commented on HIVE-9272: Please find the updated patch for utf-8 tests for review. Tests for utf-8 support --- Key: HIVE-9272 URL: https://issues.apache.org/jira/browse/HIVE-9272 Project: Hive Issue Type: Test Components: Tests, WebHCat Reporter: Aswathy Chellammal Sreekumar Priority: Minor Attachments: HIVE-9272.patch Including some test cases for utf8 support in webhcat. The first four tests invoke hive, pig, mapred and streaming apis for testing the utf8 support for data processed, file names and job name. The last test case tests the filtering of job name with utf8 character -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9272) Tests for utf-8 support
[ https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aswathy Chellammal Sreekumar updated HIVE-9272: --- Attachment: HIVE-9272.1.patch Tests for utf-8 support --- Key: HIVE-9272 URL: https://issues.apache.org/jira/browse/HIVE-9272 Project: Hive Issue Type: Test Components: Tests, WebHCat Reporter: Aswathy Chellammal Sreekumar Priority: Minor Attachments: HIVE-9272.1.patch, HIVE-9272.patch Including some test cases for utf8 support in webhcat. The first four tests invoke hive, pig, mapred and streaming apis for testing the utf8 support for data processed, file names and job name. The last test case tests the filtering of job name with utf8 character -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9272) Tests for utf-8 support
Aswathy Chellammal Sreekumar created HIVE-9272: -- Summary: Tests for utf-8 support Key: HIVE-9272 URL: https://issues.apache.org/jira/browse/HIVE-9272 Project: Hive Issue Type: Test Components: Tests, WebHCat Reporter: Aswathy Chellammal Sreekumar Priority: Minor Including some test cases for utf8 support in webhcat. The first four tests invoke hive, pig, mapred and streaming apis for testing the utf8 support for data processed, file names and job name. The last test case tests the filtering of job name with utf8 character -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9272) Tests for utf-8 support
[ https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aswathy Chellammal Sreekumar updated HIVE-9272: --- Attachment: HIVE-9272.patch Please review this patch with the test cases for utf8 support Tests for utf-8 support --- Key: HIVE-9272 URL: https://issues.apache.org/jira/browse/HIVE-9272 Project: Hive Issue Type: Test Components: Tests, WebHCat Reporter: Aswathy Chellammal Sreekumar Priority: Minor Attachments: HIVE-9272.patch Including some test cases for utf8 support in webhcat. The first four tests invoke hive, pig, mapred and streaming apis for testing the utf8 support for data processed, file names and job name. The last test case tests the filtering of job name with utf8 character -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7948) Add an E2E test to verify fix for HIVE-7155
[ https://issues.apache.org/jira/browse/HIVE-7948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aswathy Chellammal Sreekumar updated HIVE-7948: --- Attachment: HIVE-7948.2.patch Add an E2E test to verify fix for HIVE-7155 Key: HIVE-7948 URL: https://issues.apache.org/jira/browse/HIVE-7948 Project: Hive Issue Type: Test Components: Tests, WebHCat Reporter: Aswathy Chellammal Sreekumar Assignee: Aswathy Chellammal Sreekumar Priority: Minor Attachments: HIVE-7948.1.patch, HIVE-7948.2.patch, HIVE-7948.patch E2E Test to verify webhcat property templeton.mapper.memory.mb correctly overrides mapreduce.map.memory.mb. The feature was added as part of HIVE-7155. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-7948) Add an E2E test to verify fix for HIVE-7155
[ https://issues.apache.org/jira/browse/HIVE-7948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219827#comment-14219827 ] Aswathy Chellammal Sreekumar commented on HIVE-7948: Included customized webhcat-site.xml that could be used for this test (webhcat-site.updateConfig.xml). Also included two script files that will update the webhcat configuration (by moving the custom webhcat-site.xml to the installation directory) before the test run and restore the webhcat configuration after the test run. Please review the updated patch. Add an E2E test to verify fix for HIVE-7155 Key: HIVE-7948 URL: https://issues.apache.org/jira/browse/HIVE-7948 Project: Hive Issue Type: Test Components: Tests, WebHCat Reporter: Aswathy Chellammal Sreekumar Assignee: Aswathy Chellammal Sreekumar Priority: Minor Attachments: HIVE-7948.1.patch, HIVE-7948.2.patch, HIVE-7948.patch E2E Test to verify webhcat property templeton.mapper.memory.mb correctly overrides mapreduce.map.memory.mb. The feature was added as part of HIVE-7155. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-7948) Add an E2E test to verify fix for HIVE-7155
[ https://issues.apache.org/jira/browse/HIVE-7948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14217256#comment-14217256 ] Aswathy Chellammal Sreekumar commented on HIVE-7948: I have automated the test data download step in deploy_e2e_artifacts and updated the deployers/conf/webhcat-site.xml with the templeton.mapper.memory.mb property along with the value expected for this test run. Please review the patch. Add an E2E test to verify fix for HIVE-7155 Key: HIVE-7948 URL: https://issues.apache.org/jira/browse/HIVE-7948 Project: Hive Issue Type: Test Components: Tests, WebHCat Reporter: Aswathy Chellammal Sreekumar Assignee: Aswathy Chellammal Sreekumar Priority: Minor Attachments: HIVE-7948.patch E2E Test to verify webhcat property templeton.mapper.memory.mb correctly overrides mapreduce.map.memory.mb. The feature was added as part of HIVE-7155. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7948) Add an E2E test to verify fix for HIVE-7155
[ https://issues.apache.org/jira/browse/HIVE-7948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aswathy Chellammal Sreekumar updated HIVE-7948: --- Attachment: HIVE-7948.1.patch Add an E2E test to verify fix for HIVE-7155 Key: HIVE-7948 URL: https://issues.apache.org/jira/browse/HIVE-7948 Project: Hive Issue Type: Test Components: Tests, WebHCat Reporter: Aswathy Chellammal Sreekumar Assignee: Aswathy Chellammal Sreekumar Priority: Minor Attachments: HIVE-7948.1.patch, HIVE-7948.patch E2E Test to verify webhcat property templeton.mapper.memory.mb correctly overrides mapreduce.map.memory.mb. The feature was added as part of HIVE-7155. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-8360) Add cross cluster support for webhcat E2E tests
Aswathy Chellammal Sreekumar created HIVE-8360: -- Summary: Add cross cluster support for webhcat E2E tests Key: HIVE-8360 URL: https://issues.apache.org/jira/browse/HIVE-8360 Project: Hive Issue Type: Test Components: Tests, WebHCat Environment: Secure cluster Reporter: Aswathy Chellammal Sreekumar In the current WebHCat E2E test setup, cross-domain secure cluster runs fail because the realm name for user principals is not included in the kinit command. This patch appends the realm name to the user principal, thereby resulting in a successful kinit. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8360) Add cross cluster support for webhcat E2E tests
[ https://issues.apache.org/jira/browse/HIVE-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aswathy Chellammal Sreekumar updated HIVE-8360: --- Attachment: AD-MIT.patch Including the patch that implements cross-domain support in a secure cluster for E2E tests. Please review the same. Add cross cluster support for webhcat E2E tests --- Key: HIVE-8360 URL: https://issues.apache.org/jira/browse/HIVE-8360 Project: Hive Issue Type: Test Components: Tests, WebHCat Environment: Secure cluster Reporter: Aswathy Chellammal Sreekumar Attachments: AD-MIT.patch In the current WebHCat E2E test setup, cross-domain secure cluster runs fail because the realm name for user principals is not included in the kinit command. This patch appends the realm name to the user principal, thereby resulting in a successful kinit. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-7948) Add an E2E test to verify fix for HIVE-7155
Aswathy Chellammal Sreekumar created HIVE-7948: -- Summary: Add an E2E test to verify fix for HIVE-7155 Key: HIVE-7948 URL: https://issues.apache.org/jira/browse/HIVE-7948 Project: Hive Issue Type: Test Components: Tests, WebHCat Reporter: Aswathy Chellammal Sreekumar Priority: Minor E2E Test to verify webhcat property templeton.mapper.memory.mb correctly overrides mapreduce.map.memory.mb. The feature was added as part of HIVE-7155. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7948) Add an E2E test to verify fix for HIVE-7155
[ https://issues.apache.org/jira/browse/HIVE-7948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aswathy Chellammal Sreekumar updated HIVE-7948: --- Attachment: HIVE-7948.patch Attaching a patch for review with the test case included. Add an E2E test to verify fix for HIVE-7155 Key: HIVE-7948 URL: https://issues.apache.org/jira/browse/HIVE-7948 Project: Hive Issue Type: Test Components: Tests, WebHCat Reporter: Aswathy Chellammal Sreekumar Priority: Minor Attachments: HIVE-7948.patch E2E Test to verify webhcat property templeton.mapper.memory.mb correctly overrides mapreduce.map.memory.mb. The feature was added as part of HIVE-7155. -- This message was sent by Atlassian JIRA (v6.3.4#6332)