hive git commit: HIVE-18198: TablePropertyEnrichmentOptimizer.java is missing the Apache license header (Deepesh Khandelwal via Gunther Hagleitner)

2017-12-01 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master 4218629de -> af6f80ab5


HIVE-18198: TablePropertyEnrichmentOptimizer.java is missing the Apache license 
header (Deepesh Khandelwal via Gunther Hagleitner)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/af6f80ab
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/af6f80ab
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/af6f80ab

Branch: refs/heads/master
Commit: af6f80ab541856ba6ceb7771f425f9168516812c
Parents: 4218629
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Fri Dec 1 13:51:58 2017 -0800
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Fri Dec 1 13:51:58 2017 -0800

--
 .../TablePropertyEnrichmentOptimizer.java | 18 ++
 1 file changed, 18 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/af6f80ab/ql/src/java/org/apache/hadoop/hive/ql/optimizer/TablePropertyEnrichmentOptimizer.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/TablePropertyEnrichmentOptimizer.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/TablePropertyEnrichmentOptimizer.java
index 98acb0d..5824490 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/TablePropertyEnrichmentOptimizer.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/TablePropertyEnrichmentOptimizer.java
@@ -1,3 +1,21 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.hadoop.hive.ql.optimizer;
 
 import com.google.common.collect.Lists;



hive git commit: HIVE-18195: Hive schema broken on postgres (Deepesh Khandelwal, reviewed by Sergey Shelukhin)

2017-12-01 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master 1b4baf474 -> 4218629de


HIVE-18195: Hive schema broken on postgres (Deepesh Khandelwal, reviewed by 
Sergey Shelukhin)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/4218629d
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/4218629d
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/4218629d

Branch: refs/heads/master
Commit: 4218629de715df43a1778de03f85e41bc682b1a8
Parents: 1b4baf4
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Fri Dec 1 13:42:02 2017 -0800
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Fri Dec 1 13:42:02 2017 -0800

--
 metastore/scripts/upgrade/postgres/045-HIVE-17566.postgres.sql| 2 +-
 metastore/scripts/upgrade/postgres/hive-schema-3.0.0.postgres.sql | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/4218629d/metastore/scripts/upgrade/postgres/045-HIVE-17566.postgres.sql
--
diff --git a/metastore/scripts/upgrade/postgres/045-HIVE-17566.postgres.sql 
b/metastore/scripts/upgrade/postgres/045-HIVE-17566.postgres.sql
index bd588c4..358247b 100644
--- a/metastore/scripts/upgrade/postgres/045-HIVE-17566.postgres.sql
+++ b/metastore/scripts/upgrade/postgres/045-HIVE-17566.postgres.sql
@@ -17,7 +17,7 @@ CREATE TABLE "WM_POOL" (
 "POOL_ID" bigint NOT NULL,
 "RP_ID" bigint NOT NULL,
 "PATH" character varying(1024) NOT NULL,
-"ALLOC_FRACTION" DOUBLE,
+"ALLOC_FRACTION" double precision,
 "QUERY_PARALLELISM" integer,
 "SCHEDULING_POLICY" character varying(1024)
 );

http://git-wip-us.apache.org/repos/asf/hive/blob/4218629d/metastore/scripts/upgrade/postgres/hive-schema-3.0.0.postgres.sql
--
diff --git a/metastore/scripts/upgrade/postgres/hive-schema-3.0.0.postgres.sql 
b/metastore/scripts/upgrade/postgres/hive-schema-3.0.0.postgres.sql
index 931d3e6..065974f 100644
--- a/metastore/scripts/upgrade/postgres/hive-schema-3.0.0.postgres.sql
+++ b/metastore/scripts/upgrade/postgres/hive-schema-3.0.0.postgres.sql
@@ -631,7 +631,7 @@ CREATE TABLE "WM_POOL" (
 "POOL_ID" bigint NOT NULL,
 "RP_ID" bigint NOT NULL,
 "PATH" character varying(1024) NOT NULL,
-"ALLOC_FRACTION" DOUBLE,
+"ALLOC_FRACTION" double precision,
 "QUERY_PARALLELISM" integer,
 "SCHEDULING_POLICY" character varying(1024)
 );
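
The root cause is dialect portability: DOUBLE is a spelling the other metastore scripts' dialects accept (e.g. MySQL and Derby), but PostgreSQL only recognizes the SQL-standard two-word form. A minimal sketch of the failure and the fix, using a throwaway table name:

-- Fails on PostgreSQL: bare DOUBLE is not a recognized type name
CREATE TABLE "WM_POOL_TEST" ("ALLOC_FRACTION" DOUBLE);

-- Works: SQL-standard spelling, as used in the patched scripts
CREATE TABLE "WM_POOL_TEST" ("ALLOC_FRACTION" double precision);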



hive git commit: HIVE-14731 (addendum): Use Tez cartesian product edge in Hive (unpartitioned case only) (Zhiyuan Yang via Gunther Hagleitner)

2017-10-25 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master 17f05f4f1 -> 66c522676


HIVE-14731 (addendum): Use Tez cartesian product edge in Hive (unpartitioned 
case only) (Zhiyuan Yang via Gunther Hagleitner)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/66c52267
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/66c52267
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/66c52267

Branch: refs/heads/master
Commit: 66c5226761bce17ef0b07778630949bcdf1feaf9
Parents: 17f05f4
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Wed Oct 25 15:35:18 2017 -0700
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Wed Oct 25 15:35:18 2017 -0700

--
 ql/src/test/results/clientpositive/spark/subquery_multi.q.out | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/66c52267/ql/src/test/results/clientpositive/spark/subquery_multi.q.out
--
diff --git a/ql/src/test/results/clientpositive/spark/subquery_multi.q.out 
b/ql/src/test/results/clientpositive/spark/subquery_multi.q.out
index 8a2b9b3..e90252e 100644
--- a/ql/src/test/results/clientpositive/spark/subquery_multi.q.out
+++ b/ql/src/test/results/clientpositive/spark/subquery_multi.q.out
@@ -234,8 +234,8 @@ POSTHOOK: Input: default@part_null
 17273	almond antique forest lavender goldenrod	Manufacturer#3	Brand#35	PROMO ANODIZED TIN	14	JUMBO CASE	1190.27	along the
 45261	almond aquamarine floral ivory bisque	Manufacturer#4	Brand#42	SMALL PLATED STEEL	27	WRAP CASE	1206.26	careful
 48427	almond antique violet mint lemon	Manufacturer#4	Brand#42	PROMO POLISHED STEEL	39	SM CASE	1375.42	hely ironic i
-78487	NULL	Manufacturer#6	Brand#52	LARGE BRUSHED BRASS	23	MED BAG	1464.48	hely blith
 78486	almond azure blanched chiffon midnight	Manufacturer#5	Brand#52	LARGE BRUSHED BRASS	23	MED BAG	1464.48	hely blith
+78487	NULL	Manufacturer#6	Brand#52	LARGE BRUSHED BRASS	23	MED BAG	1464.48	hely blith
 192697	almond antique blue firebrick mint	Manufacturer#5	Brand#52	MEDIUM BURNISHED TIN	31	LG DRUM	1789.69	ickly ir
 Warning: Shuffle Join JOIN[27][tables = [$hdt$_0, $hdt$_1, $hdt$_2]] in Work 
'Reducer 3' is a cross product
 PREHOOK: query: explain select * from part_null where p_name IN (select p_name 
from part_null) AND p_brand NOT IN (select p_name from part_null)



[5/7] hive git commit: HIVE-14731: Use Tez cartesian product edge in Hive (unpartitioned case only) (Zhiyuan Yang via Gunther Hagleitner)

2017-10-24 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/cfbe6125/ql/src/test/results/clientpositive/llap/cross_prod_1.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/cross_prod_1.q.out 
b/ql/src/test/results/clientpositive/llap/cross_prod_1.q.out
new file mode 100644
index 0000000..fd03fe5
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/cross_prod_1.q.out
@@ -0,0 +1,2502 @@
+PREHOOK: query: create table X as
+select distinct * from src order by key limit 10
+PREHOOK: type: CREATETABLE_AS_SELECT
+PREHOOK: Input: default@src
+PREHOOK: Output: database:default
+PREHOOK: Output: default@X
+POSTHOOK: query: create table X as
+select distinct * from src order by key limit 10
+POSTHOOK: type: CREATETABLE_AS_SELECT
+POSTHOOK: Input: default@src
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@X
+POSTHOOK: Lineage: x.key SIMPLE [(src)src.FieldSchema(name:key, type:string, 
comment:default), ]
+POSTHOOK: Lineage: x.value SIMPLE [(src)src.FieldSchema(name:value, 
type:string, comment:default), ]
+Warning: Shuffle Join MERGEJOIN[11][tables = [$hdt$_0, $hdt$_1]] in Stage 
'Reducer 2' is a cross product
+PREHOOK: query: explain select * from X as A, X as B order by A.key, B.key
+PREHOOK: type: QUERY
+POSTHOOK: query: explain select * from X as A, X as B order by A.key, B.key
+POSTHOOK: type: QUERY
+STAGE DEPENDENCIES:
+  Stage-1 is a root stage
+  Stage-0 depends on stages: Stage-1
+
+STAGE PLANS:
+  Stage: Stage-1
+Tez
+ A masked pattern was here 
+  Edges:
+Reducer 2 <- Map 1 (XPROD_EDGE), Map 4 (XPROD_EDGE)
+Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
+ A masked pattern was here 
+  Vertices:
+Map 1 
+Map Operator Tree:
+TableScan
+  alias: a
+  Statistics: Num rows: 10 Data size: 3680 Basic stats: 
COMPLETE Column stats: NONE
+  Select Operator
+expressions: key (type: string), value (type: string)
+outputColumnNames: _col0, _col1
+Statistics: Num rows: 10 Data size: 3680 Basic stats: 
COMPLETE Column stats: NONE
+Reduce Output Operator
+  sort order: 
+  Statistics: Num rows: 10 Data size: 3680 Basic stats: 
COMPLETE Column stats: NONE
+  value expressions: _col0 (type: string), _col1 (type: 
string)
+Execution mode: llap
+LLAP IO: no inputs
+Map 4 
+Map Operator Tree:
+TableScan
+  alias: b
+  Statistics: Num rows: 10 Data size: 3680 Basic stats: 
COMPLETE Column stats: NONE
+  Select Operator
+expressions: key (type: string), value (type: string)
+outputColumnNames: _col0, _col1
+Statistics: Num rows: 10 Data size: 3680 Basic stats: 
COMPLETE Column stats: NONE
+Reduce Output Operator
+  sort order: 
+  Statistics: Num rows: 10 Data size: 3680 Basic stats: 
COMPLETE Column stats: NONE
+  value expressions: _col0 (type: string), _col1 (type: 
string)
+Execution mode: llap
+LLAP IO: no inputs
+Reducer 2 
+Execution mode: llap
+Reduce Operator Tree:
+  Merge Join Operator
+condition map:
+ Inner Join 0 to 1
+keys:
+  0 
+  1 
+outputColumnNames: _col0, _col1, _col2, _col3
+Statistics: Num rows: 100 Data size: 73700 Basic stats: 
COMPLETE Column stats: NONE
+Reduce Output Operator
+  key expressions: _col0 (type: string), _col2 (type: string)
+  sort order: ++
+  Statistics: Num rows: 100 Data size: 73700 Basic stats: 
COMPLETE Column stats: NONE
+  value expressions: _col1 (type: string), _col3 (type: string)
+Reducer 3 
+Execution mode: llap
+Reduce Operator Tree:
+  Select Operator
+expressions: KEY.reducesinkkey0 (type: string), VALUE._col0 
(type: string), KEY.reducesinkkey1 (type: string), VALUE._col1 (type: string)
+outputColumnNames: _col0, _col1, _col2, _col3
+Statistics: Num rows: 100 Data size: 73700 Basic stats: 
COMPLETE Column stats: NONE
+File Output Operator
+  compressed: false
+  Statistics: Num rows: 100 Data size: 73700 Basic stats: 
COMPLETE Column stats: NONE
+  table:
+  input format: 
org.apache.hadoop.mapred.SequenceFileInputFormat
+  output format: 
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
+  serde: 

[3/7] hive git commit: HIVE-14731: Use Tez cartesian product edge in Hive (unpartitioned case only) (Zhiyuan Yang via Gunther Hagleitner)

2017-10-24 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/cfbe6125/ql/src/test/results/clientpositive/llap/subquery_exists.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/subquery_exists.q.out 
b/ql/src/test/results/clientpositive/llap/subquery_exists.q.out
index 53bbad2..e206f08 100644
--- a/ql/src/test/results/clientpositive/llap/subquery_exists.q.out
+++ b/ql/src/test/results/clientpositive/llap/subquery_exists.q.out
@@ -326,7 +326,7 @@ STAGE PLANS:
 Tez
  A masked pattern was here 
   Edges:
-Reducer 2 <- Map 1 (CUSTOM_SIMPLE_EDGE), Reducer 4 (CUSTOM_SIMPLE_EDGE)
+Reducer 2 <- Map 1 (XPROD_EDGE), Reducer 4 (XPROD_EDGE)
 Reducer 4 <- Map 3 (SIMPLE_EDGE)
  A masked pattern was here 
   Vertices:
@@ -962,7 +962,7 @@ STAGE PLANS:
 Tez
  A masked pattern was here 
   Edges:
-Reducer 2 <- Map 1 (CUSTOM_SIMPLE_EDGE), Reducer 4 (CUSTOM_SIMPLE_EDGE)
+Reducer 2 <- Map 1 (XPROD_EDGE), Reducer 4 (XPROD_EDGE)
 Reducer 4 <- Map 3 (CUSTOM_SIMPLE_EDGE)
  A masked pattern was here 
   Vertices:
@@ -1289,7 +1289,7 @@ STAGE PLANS:
  A masked pattern was here 
   Edges:
 Reducer 2 <- Map 1 (SIMPLE_EDGE), Reducer 4 (SIMPLE_EDGE)
-Reducer 4 <- Map 3 (CUSTOM_SIMPLE_EDGE), Reducer 6 (CUSTOM_SIMPLE_EDGE)
+Reducer 4 <- Map 3 (XPROD_EDGE), Reducer 6 (XPROD_EDGE)
 Reducer 6 <- Map 5 (SIMPLE_EDGE)
  A masked pattern was here 
   Vertices:

http://git-wip-us.apache.org/repos/asf/hive/blob/cfbe6125/ql/src/test/results/clientpositive/llap/subquery_in.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/subquery_in.q.out 
b/ql/src/test/results/clientpositive/llap/subquery_in.q.out
index 780bda9..af42131 100644
--- a/ql/src/test/results/clientpositive/llap/subquery_in.q.out
+++ b/ql/src/test/results/clientpositive/llap/subquery_in.q.out
@@ -4274,7 +4274,7 @@ STAGE PLANS:
  A masked pattern was here 
   Edges:
 Reducer 2 <- Map 1 (SIMPLE_EDGE), Reducer 5 (SIMPLE_EDGE)
-Reducer 4 <- Map 3 (CUSTOM_SIMPLE_EDGE), Reducer 7 (CUSTOM_SIMPLE_EDGE)
+Reducer 4 <- Map 3 (XPROD_EDGE), Reducer 7 (XPROD_EDGE)
 Reducer 5 <- Reducer 4 (SIMPLE_EDGE)
 Reducer 7 <- Map 6 (SIMPLE_EDGE)
  A masked pattern was here 

http://git-wip-us.apache.org/repos/asf/hive/blob/cfbe6125/ql/src/test/results/clientpositive/llap/subquery_multi.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/subquery_multi.q.out 
b/ql/src/test/results/clientpositive/llap/subquery_multi.q.out
index a98a011..96fe17a 100644
--- a/ql/src/test/results/clientpositive/llap/subquery_multi.q.out
+++ b/ql/src/test/results/clientpositive/llap/subquery_multi.q.out
@@ -262,7 +262,7 @@ STAGE PLANS:
  A masked pattern was here 
   Edges:
 Reducer 2 <- Map 1 (SIMPLE_EDGE), Reducer 6 (ONE_TO_ONE_EDGE)
-Reducer 3 <- Reducer 2 (CUSTOM_SIMPLE_EDGE), Reducer 7 
(CUSTOM_SIMPLE_EDGE)
+Reducer 3 <- Reducer 2 (XPROD_EDGE), Reducer 7 (XPROD_EDGE)
 Reducer 4 <- Reducer 3 (SIMPLE_EDGE), Reducer 8 (ONE_TO_ONE_EDGE)
 Reducer 6 <- Map 5 (SIMPLE_EDGE)
 Reducer 7 <- Map 5 (CUSTOM_SIMPLE_EDGE)
@@ -463,7 +463,7 @@ STAGE PLANS:
  A masked pattern was here 
   Edges:
 Reducer 2 <- Map 1 (SIMPLE_EDGE), Reducer 6 (ONE_TO_ONE_EDGE)
-Reducer 3 <- Reducer 2 (CUSTOM_SIMPLE_EDGE), Reducer 8 
(CUSTOM_SIMPLE_EDGE)
+Reducer 3 <- Reducer 2 (XPROD_EDGE), Reducer 8 (XPROD_EDGE)
 Reducer 4 <- Reducer 3 (SIMPLE_EDGE), Reducer 9 (ONE_TO_ONE_EDGE)
 Reducer 6 <- Map 5 (SIMPLE_EDGE)
 Reducer 8 <- Map 7 (CUSTOM_SIMPLE_EDGE)
@@ -647,41 +647,41 @@ STAGE PLANS:
   Processor Tree:
 ListSink
 
-Warning: Shuffle Join MERGEJOIN[41][tables = [$hdt$_0, $hdt$_1, $hdt$_2]] in 
Stage 'Reducer 3' is a cross product
-PREHOOK: query: select * from part_null where p_name IN (select p_name from 
part_null) AND p_brand NOT IN (select p_type from part_null)
+Warning: Shuffle Join MERGEJOIN[43][tables = [$hdt$_0, $hdt$_1, $hdt$_2]] in 
Stage 'Reducer 3' is a cross product
+PREHOOK: query: select * from part_null where p_name IN (select p_name from 
part_null) AND p_brand NOT IN (select p_type from part_null) order by 
part_null.p_partkey
 PREHOOK: type: QUERY
 PREHOOK: Input: default@part_null
  A masked pattern was here 
-POSTHOOK: query: select * from part_null where p_name IN (select p_name from 
part_null) AND p_brand NOT IN (select p_type from part_null)
+POSTHOOK: query: select * from part_null where p_name IN (select p_name from 
part_null) AND p_brand NOT IN (select p_type from part_null) order by 
part_null.p_partkey
 POSTHOOK: type: QUERY
 POSTHOOK: Input: default@part_null
 

[1/7] hive git commit: HIVE-14731: Use Tez cartesian product edge in Hive (unpartitioned case only) (Zhiyuan Yang via Gunther Hagleitner)

2017-10-24 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master a284df1f8 -> cfbe61257


http://git-wip-us.apache.org/repos/asf/hive/blob/cfbe6125/ql/src/test/results/clientpositive/tez/hybridgrace_hashjoin_1.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/tez/hybridgrace_hashjoin_1.q.out 
b/ql/src/test/results/clientpositive/tez/hybridgrace_hashjoin_1.q.out
index 4dfcc33..a709920 100644
--- a/ql/src/test/results/clientpositive/tez/hybridgrace_hashjoin_1.q.out
+++ b/ql/src/test/results/clientpositive/tez/hybridgrace_hashjoin_1.q.out
@@ -1215,7 +1215,7 @@ POSTHOOK: Lineage: decimal_mapjoin.cdecimal1 EXPRESSION 
[(alltypesorc)alltypesor
 POSTHOOK: Lineage: decimal_mapjoin.cdecimal2 EXPRESSION 
[(alltypesorc)alltypesorc.FieldSchema(name:cdouble, type:double, comment:null), 
]
 POSTHOOK: Lineage: decimal_mapjoin.cdouble SIMPLE 
[(alltypesorc)alltypesorc.FieldSchema(name:cdouble, type:double, comment:null), 
]
 POSTHOOK: Lineage: decimal_mapjoin.cint SIMPLE 
[(alltypesorc)alltypesorc.FieldSchema(name:cint, type:int, comment:null), ]
-Warning: Map Join MAPJOIN[13][bigTable=?] in task 'Map 1' is a cross product
+Warning: Shuffle Join MERGEJOIN[13][tables = [$hdt$_0, $hdt$_1]] in Stage 
'Reducer 2' is a cross product
 PREHOOK: query: EXPLAIN SELECT l.cint, r.cint, l.cdecimal1, r.cdecimal2
   FROM decimal_mapjoin l
   JOIN decimal_mapjoin r ON l.cint = r.cint
@@ -1235,7 +1235,7 @@ STAGE PLANS:
 Tez
  A masked pattern was here 
   Edges:
-Map 1 <- Map 2 (BROADCAST_EDGE)
+Reducer 2 <- Map 1 (XPROD_EDGE), Map 3 (XPROD_EDGE)
  A masked pattern was here 
   Vertices:
 Map 1 
@@ -1250,29 +1250,12 @@ STAGE PLANS:
   expressions: cdecimal1 (type: decimal(20,10))
   outputColumnNames: _col0
   Statistics: Num rows: 5 Data size: 551 Basic stats: 
COMPLETE Column stats: NONE
-  Map Join Operator
-condition map:
- Inner Join 0 to 1
-keys:
-  0 
-  1 
-outputColumnNames: _col0, _col2
-input vertices:
-  1 Map 2
-Statistics: Num rows: 25 Data size: 5535 Basic stats: 
COMPLETE Column stats: NONE
-Select Operator
-  expressions: 6981 (type: int), 6981 (type: int), 
_col0 (type: decimal(20,10)), _col2 (type: decimal(23,14))
-  outputColumnNames: _col0, _col1, _col2, _col3
-  Statistics: Num rows: 25 Data size: 5535 Basic 
stats: COMPLETE Column stats: NONE
-  File Output Operator
-compressed: false
-Statistics: Num rows: 25 Data size: 5535 Basic 
stats: COMPLETE Column stats: NONE
-table:
-input format: 
org.apache.hadoop.mapred.SequenceFileInputFormat
-output format: 
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
-serde: 
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+  Reduce Output Operator
+sort order: 
+Statistics: Num rows: 5 Data size: 551 Basic stats: 
COMPLETE Column stats: NONE
+value expressions: _col0 (type: decimal(20,10))
 Execution mode: vectorized
-Map 2 
+Map 3 
 Map Operator Tree:
 TableScan
   alias: r
@@ -1289,6 +1272,27 @@ STAGE PLANS:
 Statistics: Num rows: 5 Data size: 551 Basic stats: 
COMPLETE Column stats: NONE
 value expressions: _col0 (type: decimal(23,14))
 Execution mode: vectorized
+Reducer 2 
+Reduce Operator Tree:
+  Merge Join Operator
+condition map:
+ Inner Join 0 to 1
+keys:
+  0 
+  1 
+outputColumnNames: _col0, _col2
+Statistics: Num rows: 25 Data size: 5535 Basic stats: COMPLETE 
Column stats: NONE
+Select Operator
+  expressions: 6981 (type: int), 6981 (type: int), _col0 
(type: decimal(20,10)), _col2 (type: decimal(23,14))
+  outputColumnNames: _col0, _col1, _col2, _col3
+  Statistics: Num rows: 25 Data size: 5535 Basic stats: 
COMPLETE Column stats: NONE
+  File Output Operator
+compressed: false
+Statistics: Num rows: 25 Data size: 5535 Basic stats: 
COMPLETE Column stats: NONE
+table:
+input format: 

[7/7] hive git commit: HIVE-14731: Use Tez cartesian product edge in Hive (unpartitioned case only) (Zhiyuan Yang via Gunther Hagleitner)

2017-10-24 Thread gunther
HIVE-14731: Use Tez cartesian product edge in Hive (unpartitioned case only) 
(Zhiyuan Yang via Gunther Hagleitner)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/cfbe6125
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/cfbe6125
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/cfbe6125

Branch: refs/heads/master
Commit: cfbe6125725223657dff1e2c9bc3131a5193ae51
Parents: a284df1
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Tue Oct 24 13:06:09 2017 -0700
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Tue Oct 24 13:06:09 2017 -0700

--
 .../hadoop/hive/common/jsonexplain/Vertex.java  |2 +-
 .../common/jsonexplain/tez/TezJsonParser.java   |2 +
 .../org/apache/hadoop/hive/conf/HiveConf.java   |2 +
 data/conf/llap/hive-site.xml|5 +
 data/conf/tez/hive-site.xml |5 +
 .../test/resources/testconfiguration.properties |6 +
 .../hadoop/hive/ql/exec/tez/DagUtils.java   |   69 +-
 .../apache/hadoop/hive/ql/exec/tez/TezTask.java |5 +-
 .../hive/ql/optimizer/ConvertJoinMapJoin.java   |   74 +-
 .../optimizer/physical/CrossProductCheck.java   |  368 ---
 .../optimizer/physical/CrossProductHandler.java |  382 +++
 .../optimizer/physical/PhysicalOptimizer.java   |2 +-
 .../physical/SparkCrossProductCheck.java|   12 +-
 .../hadoop/hive/ql/parse/TezCompiler.java   |4 +-
 .../hadoop/hive/ql/plan/TezEdgeProperty.java|4 +-
 .../hadoop/hive/ql/exec/tez/TestTezTask.java|4 +-
 .../test/queries/clientpositive/cross_prod_1.q  |   34 +
 .../test/queries/clientpositive/cross_prod_3.q  |   13 +
 .../test/queries/clientpositive/cross_prod_4.q  |   10 +
 .../dynamic_partition_pruning_2.q   |2 +-
 .../clientpositive/hybridgrace_hashjoin_1.q |1 +
 .../queries/clientpositive/subquery_multi.q |4 +-
 .../queries/clientpositive/subquery_notin.q |4 +-
 .../queries/clientpositive/subquery_select.q|4 +-
 .../clientpositive/llap/auto_join0.q.out|   56 +-
 .../clientpositive/llap/auto_join_filters.q.out |4 +-
 .../clientpositive/llap/auto_join_nulls.q.out   |2 +-
 .../llap/auto_sortmerge_join_12.q.out   |   64 +-
 .../clientpositive/llap/cross_join.q.out|   94 +-
 .../clientpositive/llap/cross_prod_1.q.out  | 2502 ++
 .../clientpositive/llap/cross_prod_3.q.out  |  133 +
 .../clientpositive/llap/cross_prod_4.q.out  |  195 ++
 .../llap/cross_product_check_1.q.out|   12 +-
 .../llap/cross_product_check_2.q.out|  305 ++-
 .../results/clientpositive/llap/cte_5.q.out |   10 +-
 .../results/clientpositive/llap/cte_mat_1.q.out |   10 +-
 .../results/clientpositive/llap/cte_mat_2.q.out |   10 +-
 .../llap/dynamic_partition_pruning.q.out|   81 +-
 .../llap/dynamic_partition_pruning_2.q.out  |   52 +-
 .../llap/dynamic_semijoin_reduction_sw.q.out|2 +-
 .../clientpositive/llap/explainuser_1.q.out |   30 +-
 .../llap/hybridgrace_hashjoin_1.q.out   |  166 +-
 .../clientpositive/llap/jdbc_handler.q.out  |2 +-
 .../results/clientpositive/llap/join0.q.out |2 +-
 .../clientpositive/llap/leftsemijoin.q.out  |2 +-
 .../results/clientpositive/llap/mapjoin2.q.out  |2 +-
 .../clientpositive/llap/mapjoin_hint.q.out  |   64 +-
 .../clientpositive/llap/subquery_exists.q.out   |6 +-
 .../clientpositive/llap/subquery_in.q.out   |2 +-
 .../clientpositive/llap/subquery_multi.q.out|  106 +-
 .../clientpositive/llap/subquery_notin.q.out|  107 +-
 .../clientpositive/llap/subquery_null_agg.q.out |2 +-
 .../clientpositive/llap/subquery_scalar.q.out   |   48 +-
 .../clientpositive/llap/subquery_select.q.out   |  103 +-
 .../clientpositive/llap/tez_self_join.q.out |2 +-
 .../llap/vector_between_columns.q.out   |  155 +-
 .../llap/vector_complex_all.q.out   |   92 +-
 .../llap/vector_groupby_mapjoin.q.out   |  113 +-
 .../llap/vector_include_no_sel.q.out|   99 +-
 .../llap/vector_join_filters.q.out  |2 +-
 .../clientpositive/llap/vector_join_nulls.q.out |2 +-
 .../vectorized_dynamic_partition_pruning.q.out  |   97 +-
 .../llap/vectorized_multi_output_select.q.out   |   58 +-
 .../clientpositive/spark/subquery_multi.q.out   |   80 +-
 .../clientpositive/spark/subquery_notin.q.out   |  106 +-
 .../clientpositive/spark/subquery_select.q.out  |   84 +-
 .../tez/hybridgrace_hashjoin_1.q.out|  164 +-
 67 files changed, 4670 insertions(+), 1576 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/cfbe6125/common/src/java/org/apache/hadoop/hive/common/jsonexplain
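
For a concrete picture of the change, here is a minimal session sketch built from the new cross_prod_1.q test; the property name is an assumption inferred from the HiveConf.java entry in the diffstat above:

-- Assumed config switch added by this patch (see HiveConf.java above):
set hive.tez.cartesian-product.enabled=true;

-- Unpartitioned cross product, as in cross_prod_1.q:
explain select * from X as A, X as B order by A.key, B.key;

-- Plan excerpt (cross_prod_1.q.out): the join inputs now connect through the
-- Tez cartesian product edge rather than CUSTOM_SIMPLE_EDGE:
--   Reducer 2 <- Map 1 (XPROD_EDGE), Map 4 (XPROD_EDGE)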

[2/7] hive git commit: HIVE-14731: Use Tez cartesian product edge in Hive (unpartitioned case only) (Zhiyuan Yang via Gunther Hagleitner)

2017-10-24 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/cfbe6125/ql/src/test/results/clientpositive/llap/vector_groupby_mapjoin.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/vector_groupby_mapjoin.q.out 
b/ql/src/test/results/clientpositive/llap/vector_groupby_mapjoin.q.out
index e43b4d1..e644f14 100644
--- a/ql/src/test/results/clientpositive/llap/vector_groupby_mapjoin.q.out
+++ b/ql/src/test/results/clientpositive/llap/vector_groupby_mapjoin.q.out
@@ -1,4 +1,4 @@
-Warning: Map Join MAPJOIN[27][bigTable=?] in task 'Map 1' is a cross product
+Warning: Shuffle Join MERGEJOIN[27][tables = [$hdt$_0, $hdt$_1]] in Stage 
'Reducer 2' is a cross product
 PREHOOK: query: explain vectorization expression
 select *
 from src
@@ -26,10 +26,10 @@ STAGE PLANS:
 Tez
  A masked pattern was here 
   Edges:
-Map 1 <- Reducer 4 (BROADCAST_EDGE), Reducer 5 (BROADCAST_EDGE)
-Reducer 2 <- Map 1 (SIMPLE_EDGE)
-Reducer 4 <- Map 3 (CUSTOM_SIMPLE_EDGE)
-Reducer 5 <- Map 3 (SIMPLE_EDGE)
+Reducer 2 <- Map 1 (XPROD_EDGE), Reducer 5 (XPROD_EDGE), Reducer 6 
(BROADCAST_EDGE)
+Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
+Reducer 5 <- Map 4 (CUSTOM_SIMPLE_EDGE)
+Reducer 6 <- Map 4 (SIMPLE_EDGE)
  A masked pattern was here 
   Vertices:
 Map 1 
@@ -48,58 +48,14 @@ STAGE PLANS:
 native: true
 projectedOutputColumns: [0, 1]
 Statistics: Num rows: 500 Data size: 89000 Basic stats: 
COMPLETE Column stats: COMPLETE
-Map Join Operator
-  condition map:
-   Inner Join 0 to 1
-  keys:
-0 
-1 
-  Map Join Vectorization:
-  className: VectorMapJoinInnerMultiKeyOperator
+Reduce Output Operator
+  sort order: 
+  Reduce Sink Vectorization:
+  className: VectorReduceSinkEmptyKeyOperator
   native: true
-  nativeConditionsMet: 
hive.mapjoin.optimized.hashtable IS true, 
hive.vectorized.execution.mapjoin.native.enabled IS true, hive.execution.engine 
tez IN [tez, spark] IS true, One MapJoin Condition IS true, No nullsafe IS 
true, Small table vectorizes IS true, Optimized Table and Supports Key Types IS 
true
-  outputColumnNames: _col0, _col1, _col2, _col3
-  input vertices:
-1 Reducer 4
-  Statistics: Num rows: 500 Data size: 97000 Basic stats: 
COMPLETE Column stats: COMPLETE
-  Map Join Operator
-condition map:
- Left Outer Join 0 to 1
-keys:
-  0 _col0 (type: string)
-  1 _col0 (type: string)
-Map Join Vectorization:
-className: VectorMapJoinOuterStringOperator
-native: true
-nativeConditionsMet: 
hive.mapjoin.optimized.hashtable IS true, 
hive.vectorized.execution.mapjoin.native.enabled IS true, hive.execution.engine 
tez IN [tez, spark] IS true, One MapJoin Condition IS true, No nullsafe IS 
true, Small table vectorizes IS true, Optimized Table and Supports Key Types IS 
true
-outputColumnNames: _col0, _col1, _col2, _col3, _col5
-input vertices:
-  1 Reducer 5
-Statistics: Num rows: 500 Data size: 98620 Basic 
stats: COMPLETE Column stats: COMPLETE
-Filter Operator
-  Filter Vectorization:
-  className: VectorFilterOperator
-  native: true
-  predicateExpression: FilterExprOrExpr(children: 
FilterLongColEqualLongScalar(col 2, val 0) -> boolean, 
FilterExprAndExpr(children: SelectColumnIsNull(col 4) -> boolean, 
SelectColumnIsNotNull(col 0) -> boolean, 
FilterLongColGreaterEqualLongColumn(col 3, col 2) -> boolean) -> boolean) -> 
boolean
-  predicate: ((_col2 = 0) or (_col5 is null and _col0 
is not null and (_col3 >= _col2))) (type: boolean)
-  Statistics: Num rows: 500 Data size: 98620 Basic 
stats: COMPLETE Column stats: COMPLETE
-  Select Operator
-expressions: _col0 (type: string), _col1 (type: 
string)
-outputColumnNames: _col0, _col1
-Select Vectorization:
-className: VectorSelectOperator
-native: true
-   

[4/7] hive git commit: HIVE-14731: Use Tez cartesian product edge in Hive (unpartitioned case only) (Zhiyuan Yang via Gunther Hagleitner)

2017-10-24 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/cfbe6125/ql/src/test/results/clientpositive/llap/cross_prod_3.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/cross_prod_3.q.out 
b/ql/src/test/results/clientpositive/llap/cross_prod_3.q.out
new file mode 100644
index 0000000..94fe942
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/cross_prod_3.q.out
@@ -0,0 +1,133 @@
+PREHOOK: query: create table X (key string, value string) clustered by (key) 
into 2 buckets
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@X
+POSTHOOK: query: create table X (key string, value string) clustered by (key) 
into 2 buckets
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@X
+PREHOOK: query: insert overwrite table X select distinct * from src order by 
key limit 10
+PREHOOK: type: QUERY
+PREHOOK: Input: default@src
+PREHOOK: Output: default@x
+POSTHOOK: query: insert overwrite table X select distinct * from src order by 
key limit 10
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@src
+POSTHOOK: Output: default@x
+POSTHOOK: Lineage: x.key SIMPLE [(src)src.FieldSchema(name:key, type:string, 
comment:default), ]
+POSTHOOK: Lineage: x.value SIMPLE [(src)src.FieldSchema(name:value, 
type:string, comment:default), ]
+PREHOOK: query: create table Y as
+select * from src order by key limit 1
+PREHOOK: type: CREATETABLE_AS_SELECT
+PREHOOK: Input: default@src
+PREHOOK: Output: database:default
+PREHOOK: Output: default@Y
+POSTHOOK: query: create table Y as
+select * from src order by key limit 1
+POSTHOOK: type: CREATETABLE_AS_SELECT
+POSTHOOK: Input: default@src
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@Y
+POSTHOOK: Lineage: y.key SIMPLE [(src)src.FieldSchema(name:key, type:string, 
comment:default), ]
+POSTHOOK: Lineage: y.value SIMPLE [(src)src.FieldSchema(name:value, 
type:string, comment:default), ]
+PREHOOK: query: explain select * from Y, (select * from X as A join X as B on 
A.key=B.key) as C where Y.key=C.key
+PREHOOK: type: QUERY
+POSTHOOK: query: explain select * from Y, (select * from X as A join X as B on 
A.key=B.key) as C where Y.key=C.key
+POSTHOOK: type: QUERY
+STAGE DEPENDENCIES:
+  Stage-1 is a root stage
+  Stage-0 depends on stages: Stage-1
+
+STAGE PLANS:
+  Stage: Stage-1
+Tez
+ A masked pattern was here 
+  Edges:
+Map 1 <- Map 2 (CUSTOM_EDGE), Map 3 (CUSTOM_EDGE)
+ A masked pattern was here 
+  Vertices:
+Map 1 
+Map Operator Tree:
+TableScan
+  alias: a
+  Statistics: Num rows: 10 Data size: 3680 Basic stats: 
COMPLETE Column stats: NONE
+  Filter Operator
+predicate: key is not null (type: boolean)
+Statistics: Num rows: 10 Data size: 3680 Basic stats: 
COMPLETE Column stats: NONE
+Select Operator
+  expressions: key (type: string), value (type: string)
+  outputColumnNames: _col0, _col1
+  Statistics: Num rows: 10 Data size: 3680 Basic stats: 
COMPLETE Column stats: NONE
+  Map Join Operator
+condition map:
+ Inner Join 0 to 1
+ Inner Join 0 to 2
+keys:
+  0 _col0 (type: string)
+  1 _col0 (type: string)
+  2 _col0 (type: string)
+outputColumnNames: _col0, _col1, _col2, _col3, _col4, 
_col5
+input vertices:
+  1 Map 2
+  2 Map 3
+Statistics: Num rows: 22 Data size: 8096 Basic stats: 
COMPLETE Column stats: NONE
+Select Operator
+  expressions: _col2 (type: string), _col3 (type: 
string), _col0 (type: string), _col1 (type: string), _col4 (type: string), 
_col5 (type: string)
+  outputColumnNames: _col0, _col1, _col2, _col3, 
_col4, _col5
+  Statistics: Num rows: 22 Data size: 8096 Basic 
stats: COMPLETE Column stats: NONE
+  File Output Operator
+compressed: false
+Statistics: Num rows: 22 Data size: 8096 Basic 
stats: COMPLETE Column stats: NONE
+table:
+input format: 
org.apache.hadoop.mapred.SequenceFileInputFormat
+output format: 
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
+serde: 
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+Execution mode: llap
+LLAP IO: no inputs
+Map 2 
+Map Operator Tree:
+  

hive git commit: HIVE-17228: Bump tez version to 0.9.0 (Zhiyuan Yang via Gunther Hagleitner)

2017-08-08 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master f067df6f5 -> 844ec3431


HIVE-17228: Bump tez version to 0.9.0 (Zhiyuan Yang via Gunther Hagleitner)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/844ec343
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/844ec343
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/844ec343

Branch: refs/heads/master
Commit: 844ec34317b566f226df38c0d9efa9cb55894d93
Parents: f067df6
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Tue Aug 8 11:00:43 2017 -0700
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Tue Aug 8 11:00:43 2017 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/844ec343/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 9c2967f..40699bc 100644
--- a/pom.xml
+++ b/pom.xml
@@ -192,7 +192,7 @@
 1.7.10
 4.0.4
 3.0.0-SNAPSHOT
-0.8.4
+0.9.0
 0.92.0-incubating
 2.2.0
 2.0.0



hive git commit: HIVE-16942: INFORMATION_SCHEMA: schematool for setting it up is not idempotent (Gunther Hagleitner, reviewed by Thejas Nair)

2017-06-27 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master 6fd0d1a48 -> 22494f8bd


HIVE-16942: INFORMATION_SCHEMA: schematool for setting it up is not idempotent 
(Gunther Hagleitner, reviewed by Thejas Nair)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/22494f8b
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/22494f8b
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/22494f8b

Branch: refs/heads/master
Commit: 22494f8bdc99af632cfbd4763b88174876936be3
Parents: 6fd0d1a
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Tue Jun 27 10:18:22 2017 -0700
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Tue Jun 27 10:18:22 2017 -0700

--
 metastore/scripts/upgrade/hive/hive-schema-3.0.0.hive.sql | 2 ++
 ql/src/test/results/clientpositive/llap/sysdb.q.out   | 8 
 2 files changed, 10 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/22494f8b/metastore/scripts/upgrade/hive/hive-schema-3.0.0.hive.sql
--
diff --git a/metastore/scripts/upgrade/hive/hive-schema-3.0.0.hive.sql 
b/metastore/scripts/upgrade/hive/hive-schema-3.0.0.hive.sql
index 218ac04..2db7e7d 100644
--- a/metastore/scripts/upgrade/hive/hive-schema-3.0.0.hive.sql
+++ b/metastore/scripts/upgrade/hive/hive-schema-3.0.0.hive.sql
@@ -1,5 +1,6 @@
 -- HIVE system db
 
+DROP DATABASE IF EXISTS SYS;
 CREATE DATABASE SYS;
 
 USE SYS;
@@ -946,6 +947,7 @@ SELECT
   max(CASE `PARAM_KEY` WHEN 'transient_lastDdlTime' THEN `PARAM_VALUE` END) AS 
TRANSIENT_LAST_DDL_TIME
 FROM `PARTITION_PARAMS` GROUP BY `PART_ID`;
 
+DROP DATABASE IF EXISTS INFORMATION_SCHEMA;
 CREATE DATABASE INFORMATION_SCHEMA;
 
 USE INFORMATION_SCHEMA;

http://git-wip-us.apache.org/repos/asf/hive/blob/22494f8b/ql/src/test/results/clientpositive/llap/sysdb.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/sysdb.q.out 
b/ql/src/test/results/clientpositive/llap/sysdb.q.out
index 7eba2d8..fbbf8d9 100644
--- a/ql/src/test/results/clientpositive/llap/sysdb.q.out
+++ b/ql/src/test/results/clientpositive/llap/sysdb.q.out
@@ -130,6 +130,10 @@ default	srcpart	hive_test_user	USER	DELETE	true	-1	hive_test_user
 default	srcpart	hive_test_user	USER	INSERT	true	-1	hive_test_user
 default	srcpart	hive_test_user	USER	SELECT	true	-1	hive_test_user
 default	srcpart	hive_test_user	USER	UPDATE	true	-1	hive_test_user
+PREHOOK: query: DROP DATABASE IF EXISTS SYS
+PREHOOK: type: DROPDATABASE
+POSTHOOK: query: DROP DATABASE IF EXISTS SYS
+POSTHOOK: type: DROPDATABASE
 PREHOOK: query: CREATE DATABASE SYS
 PREHOOK: type: CREATEDATABASE
 PREHOOK: Output: database:SYS
@@ -2183,6 +2187,10 @@ POSTHOOK: Lineage: PARTITION_STATS_VIEW.part_id SIMPLE 
[(partition_params)partit
 POSTHOOK: Lineage: PARTITION_STATS_VIEW.raw_data_size EXPRESSION 
[(partition_params)partition_params.FieldSchema(name:param_key, type:string, 
comment:from deserializer), 
(partition_params)partition_params.FieldSchema(name:param_value, type:string, 
comment:from deserializer), ]
 POSTHOOK: Lineage: PARTITION_STATS_VIEW.total_size EXPRESSION 
[(partition_params)partition_params.FieldSchema(name:param_key, type:string, 
comment:from deserializer), 
(partition_params)partition_params.FieldSchema(name:param_value, type:string, 
comment:from deserializer), ]
 POSTHOOK: Lineage: PARTITION_STATS_VIEW.transient_last_ddl_time EXPRESSION 
[(partition_params)partition_params.FieldSchema(name:param_key, type:string, 
comment:from deserializer), 
(partition_params)partition_params.FieldSchema(name:param_value, type:string, 
comment:from deserializer), ]
+PREHOOK: query: DROP DATABASE IF EXISTS INFORMATION_SCHEMA
+PREHOOK: type: DROPDATABASE
+POSTHOOK: query: DROP DATABASE IF EXISTS INFORMATION_SCHEMA
+POSTHOOK: type: DROPDATABASE
 PREHOOK: query: CREATE DATABASE INFORMATION_SCHEMA
 PREHOOK: type: CREATEDATABASE
 PREHOOK: Output: database:INFORMATION_SCHEMA
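
The fix is the usual guard-with-DROP idempotency pattern: re-running schematool now replays the script instead of aborting once the databases exist. In sketch form:

-- Before the patch, a second run aborted at CREATE DATABASE SYS because the
-- database already existed. Now each CREATE is preceded by a guard:
DROP DATABASE IF EXISTS SYS;                 -- no-op on first run, reset on re-run
CREATE DATABASE SYS;
DROP DATABASE IF EXISTS INFORMATION_SCHEMA;
CREATE DATABASE INFORMATION_SCHEMA;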



hive git commit: HIVE-16938: INFORMATION_SCHEMA usability: difficult to access # of table records (Gunther Hagleitner, reviewed by Thejas Nair)

2017-06-26 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master 309cb0d43 -> 055f6a0de


HIVE-16938: INFORMATION_SCHEMA usability: difficult to access # of table 
records (Gunther Hagleitner, reviewed by Thejas Nair)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/055f6a0d
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/055f6a0d
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/055f6a0d

Branch: refs/heads/master
Commit: 055f6a0dee7ff0485773ad97f6bc11b62fd6f386
Parents: 309cb0d
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Fri Jun 23 21:50:55 2017 -0700
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Mon Jun 26 09:28:27 2017 -0700

--
 .../upgrade/hive/hive-schema-3.0.0.hive.sql |  22 ++
 ql/src/test/queries/clientpositive/sysdb.q  |  16 +-
 .../results/clientpositive/llap/sysdb.q.out | 292 +--
 3 files changed, 230 insertions(+), 100 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/055f6a0d/metastore/scripts/upgrade/hive/hive-schema-3.0.0.hive.sql
--
diff --git a/metastore/scripts/upgrade/hive/hive-schema-3.0.0.hive.sql 
b/metastore/scripts/upgrade/hive/hive-schema-3.0.0.hive.sql
index 70559cb..218ac04 100644
--- a/metastore/scripts/upgrade/hive/hive-schema-3.0.0.hive.sql
+++ b/metastore/scripts/upgrade/hive/hive-schema-3.0.0.hive.sql
@@ -924,6 +924,28 @@ FROM
   \"KEY_CONSTRAINTS\""
 );
 
+CREATE VIEW `TABLE_STATS_VIEW` AS
+SELECT
+  `TBL_ID`,
+  max(CASE `PARAM_KEY` WHEN 'COLUMN_STATS_ACCURATE' THEN `PARAM_VALUE` END) AS 
COLUMN_STATS_ACCURATE,
+  max(CASE `PARAM_KEY` WHEN 'numFiles' THEN `PARAM_VALUE` END) AS NUM_FILES,
+  max(CASE `PARAM_KEY` WHEN 'numRows' THEN `PARAM_VALUE` END) AS NUM_ROWS,
+  max(CASE `PARAM_KEY` WHEN 'rawDataSize' THEN `PARAM_VALUE` END) AS 
RAW_DATA_SIZE,
+  max(CASE `PARAM_KEY` WHEN 'totalSize' THEN `PARAM_VALUE` END) AS TOTAL_SIZE,
+  max(CASE `PARAM_KEY` WHEN 'transient_lastDdlTime' THEN `PARAM_VALUE` END) AS 
TRANSIENT_LAST_DDL_TIME
+FROM `TABLE_PARAMS` GROUP BY `TBL_ID`;
+
+CREATE VIEW `PARTITION_STATS_VIEW` AS
+SELECT
+  `PART_ID`,
+  max(CASE `PARAM_KEY` WHEN 'COLUMN_STATS_ACCURATE' THEN `PARAM_VALUE` END) AS 
COLUMN_STATS_ACCURATE,
+  max(CASE `PARAM_KEY` WHEN 'numFiles' THEN `PARAM_VALUE` END) AS NUM_FILES,
+  max(CASE `PARAM_KEY` WHEN 'numRows' THEN `PARAM_VALUE` END) AS NUM_ROWS,
+  max(CASE `PARAM_KEY` WHEN 'rawDataSize' THEN `PARAM_VALUE` END) AS 
RAW_DATA_SIZE,
+  max(CASE `PARAM_KEY` WHEN 'totalSize' THEN `PARAM_VALUE` END) AS TOTAL_SIZE,
+  max(CASE `PARAM_KEY` WHEN 'transient_lastDdlTime' THEN `PARAM_VALUE` END) AS 
TRANSIENT_LAST_DDL_TIME
+FROM `PARTITION_PARAMS` GROUP BY `PART_ID`;
+
 CREATE DATABASE INFORMATION_SCHEMA;
 
 USE INFORMATION_SCHEMA;
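
Both views pivot the key/value *_PARAMS tables with max(CASE ...), producing one row of named stats columns per TBL_ID or PART_ID. A hypothetical usage sketch (the join to the SYS copy of the metastore TBLS table is an assumption; TABLE_STATS_VIEW itself is defined above):

-- Hypothetical query: record counts and sizes per table via the new view
SELECT t.`TBL_NAME`, s.`NUM_ROWS`, s.`TOTAL_SIZE`
FROM SYS.`TBLS` t
JOIN SYS.`TABLE_STATS_VIEW` s ON t.`TBL_ID` = s.`TBL_ID`
ORDER BY t.`TBL_NAME`;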

http://git-wip-us.apache.org/repos/asf/hive/blob/055f6a0d/ql/src/test/queries/clientpositive/sysdb.q
--
diff --git a/ql/src/test/queries/clientpositive/sysdb.q 
b/ql/src/test/queries/clientpositive/sysdb.q
index 9c9d6be..36d80e2 100644
--- a/ql/src/test/queries/clientpositive/sysdb.q
+++ b/ql/src/test/queries/clientpositive/sysdb.q
@@ -102,6 +102,16 @@ select func_name, func_type from funcs order by func_name, 
func_type limit 5;
 
 select constraint_name from key_constraints order by constraint_name limit 5;
 
+select COLUMN_STATS_ACCURATE, NUM_FILES, NUM_ROWS, RAW_DATA_SIZE, TOTAL_SIZE 
FROM TABLE_STATS_VIEW where COLUMN_STATS_ACCURATE is not null order by 
NUM_FILES, NUM_ROWS, RAW_DATA_SIZE limit 5;
+
+select COLUMN_STATS_ACCURATE, NUM_FILES, NUM_ROWS, RAW_DATA_SIZE, TOTAL_SIZE 
FROM PARTITION_STATS_VIEW where COLUMN_STATS_ACCURATE is not null order by 
NUM_FILES, NUM_ROWS, RAW_DATA_SIZE limit 5;
+
+describe sys.tab_col_stats;
+
+explain select max(num_distincts) from sys.tab_col_stats;
+
+select max(num_distincts) from sys.tab_col_stats;
+
 use INFORMATION_SCHEMA;
 
 select count(*) from SCHEMATA;
@@ -115,9 +125,3 @@ select * from COLUMNS where TABLE_NAME = 'alltypesorc' or 
TABLE_NAME = 'moretype
 select * from COLUMN_PRIVILEGES order by GRANTOR, GRANTEE, TABLE_SCHEMA, 
TABLE_NAME, COLUMN_NAME limit 10;
 
 select TABLE_SCHEMA, TABLE_NAME from views order by TABLE_SCHEMA, TABLE_NAME;
-
-describe sys.tab_col_stats;
-
-explain select max(num_distincts) from sys.tab_col_stats;
-
-select max(num_distincts) from sys.tab_col_stats;

http://git-wip-us.apache.org/repos/asf/hive/blob/055f6a0d/ql/src/test/results/clientpositive/llap/sysdb.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/sysdb.q.out 
b/ql/src/test/results/clientpositive/llap/sysdb.q.out
index f360c65..7eba2d8 100644
--- a/ql/src/te

hive git commit: HIVE-16937: INFORMATION_SCHEMA usability: everything is currently a string (Gunther Hagleitner, reviewed by Jason Dere)

2017-06-23 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master bc510f63d -> 287113edb


HIVE-16937: INFORMATION_SCHEMA usability: everything is currently a string 
(Gunther Hagleitner, reviewed by Jason Dere)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/287113ed
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/287113ed
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/287113ed

Branch: refs/heads/master
Commit: 287113edb62bcf091651ba150dfe51ad76e12fde
Parents: bc510f6
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Fri Jun 23 21:47:34 2017 -0700
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Fri Jun 23 21:47:34 2017 -0700

--
 .../hive/storage/jdbc/JdbcRecordReader.java |   7 +-
 .../org/apache/hive/storage/jdbc/JdbcSerDe.java |  18 +--
 .../storage/jdbc/dao/JdbcRecordIterator.java|   8 +-
 .../dao/GenericJdbcDatabaseAccessorTest.java|   8 +-
 .../test/queries/clientpositive/jdbc_handler.q  |   7 +-
 ql/src/test/queries/clientpositive/sysdb.q  |   8 +-
 .../clientpositive/llap/jdbc_handler.q.out  |  16 +--
 .../results/clientpositive/llap/sysdb.q.out | 116 +--
 8 files changed, 144 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/287113ed/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcRecordReader.java
--
diff --git 
a/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcRecordReader.java 
b/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcRecordReader.java
index 8321a66..88b2f0a 100644
--- 
a/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcRecordReader.java
+++ 
b/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcRecordReader.java
@@ -17,6 +17,7 @@ package org.apache.hive.storage.jdbc;
 import org.apache.hadoop.io.LongWritable;
 import org.apache.hadoop.io.NullWritable;
 import org.apache.hadoop.io.MapWritable;
+import org.apache.hadoop.io.ObjectWritable;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.mapred.JobConf;
 import org.apache.hadoop.mapred.RecordReader;
@@ -61,11 +62,11 @@ public class JdbcRecordReader implements 
RecordReader<LongWritable, MapWritable>
 LOGGER.debug("JdbcRecordReader has more records to read.");
 key.set(pos);
 pos++;
-Map<String, String> record = iterator.next();
+Map<String, Object> record = iterator.next();
 if ((record != null) && (!record.isEmpty())) {
-  for (Entry<String, String> entry : record.entrySet()) {
+  for (Entry<String, Object> entry : record.entrySet()) {
 value.put(new Text(entry.getKey()),
-entry.getValue() == null ? NullWritable.get() : new 
Text(entry.getValue()));
+entry.getValue() == null ? NullWritable.get() : new 
ObjectWritable(entry.getValue()));
   }
   return true;
 }

http://git-wip-us.apache.org/repos/asf/hive/blob/287113ed/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcSerDe.java
--
diff --git 
a/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcSerDe.java 
b/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcSerDe.java
index e785e9c..3764c8c 100644
--- a/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcSerDe.java
+++ b/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcSerDe.java
@@ -23,8 +23,11 @@ import 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
 import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
 import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
 import 
org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
+import org.apache.hadoop.hive.serde2.typeinfo.PrimitiveTypeInfo;
+import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;
 import org.apache.hadoop.io.NullWritable;
 import org.apache.hadoop.io.MapWritable;
+import org.apache.hadoop.io.ObjectWritable;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.io.Writable;
 import org.slf4j.Logger;
@@ -48,7 +51,7 @@ public class JdbcSerDe extends AbstractSerDe {
   private int numColumns;
   private String[] hiveColumnTypeArray;
  private List<String> columnNames;
-  private List<String> row;
+  private List<Object> row;
 
 
   /*
@@ -83,13 +86,15 @@ public class JdbcSerDe extends AbstractSerDe {
 
    List<ObjectInspector> fieldInspectors = new ArrayList<ObjectInspector>(numColumns);
    for (int i = 0; i < numColumns; i++) {
-      fieldInspectors.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
+  PrimitiveTypeInf
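
The user-visible effect, as a sketch rather than part of this diff: with values carried as typed objects in ObjectWritable, numeric columns of JDBC-backed metastore tables aggregate numerically instead of lexicographically:

-- With string-typed columns, MAX() ranks "9" above "10" (string comparison);
-- with the typed SerDe the true numeric maximum comes back:
select max(num_distincts) from sys.tab_col_stats;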

hive git commit: HIVE-1010 Addendum: Commit file missing from original commit (Gunther Hagleitner)

2017-05-15 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master 7d4554dd1 -> 0ce98b3a7


HIVE-1010 Addendum: Commit file missing from original commit (Gunther 
Hagleitner)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/0ce98b3a
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/0ce98b3a
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/0ce98b3a

Branch: refs/heads/master
Commit: 0ce98b3a7527f72216e9e41f7e610b44ee524758
Parents: 7d4554d
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Mon May 15 14:57:30 2017 -0700
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Mon May 15 14:57:30 2017 -0700

--
 metastore/scripts/upgrade/hive/upgrade.order.hive | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/0ce98b3a/metastore/scripts/upgrade/hive/upgrade.order.hive
--
diff --git a/metastore/scripts/upgrade/hive/upgrade.order.hive 
b/metastore/scripts/upgrade/hive/upgrade.order.hive
new file mode 100644
index 0000000..e69de29



[1/3] hive git commit: HIVE-1010: Implement INFORMATION_SCHEMA in Hive (Gunther Hagleitner, reviewed by Thejas Nair)

2017-05-15 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master 72604208e -> 77f44b66d


http://git-wip-us.apache.org/repos/asf/hive/blob/77f44b66/ql/src/test/results/clientpositive/llap/sysdb.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/sysdb.q.out 
b/ql/src/test/results/clientpositive/llap/sysdb.q.out
new file mode 100644
index 0000000..0ddc373
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/sysdb.q.out
@@ -0,0 +1,3447 @@
+PREHOOK: query: create table src_buck (key int, value string) clustered 
by(value) into 2 buckets
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@src_buck
+POSTHOOK: query: create table src_buck (key int, value string) clustered 
by(value) into 2 buckets
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@src_buck
+PREHOOK: query: create table src_skew (key int) skewed by (key) on (1,2,3)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@src_skew
+POSTHOOK: query: create table src_skew (key int) skewed by (key) on (1,2,3)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@src_skew
+PREHOOK: query: CREATE TABLE scr_txn (key int, value string)
+CLUSTERED BY (key) INTO 2 BUCKETS STORED AS ORC
+TBLPROPERTIES (
+  "transactional"="true",
+  "compactor.mapreduce.map.memory.mb"="2048",
+  "compactorthreshold.hive.compactor.delta.num.threshold"="4",
+  "compactorthreshold.hive.compactor.delta.pct.threshold"="0.5")
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@scr_txn
+POSTHOOK: query: CREATE TABLE scr_txn (key int, value string)
+CLUSTERED BY (key) INTO 2 BUCKETS STORED AS ORC
+TBLPROPERTIES (
+  "transactional"="true",
+  "compactor.mapreduce.map.memory.mb"="2048",
+  "compactorthreshold.hive.compactor.delta.num.threshold"="4",
+  "compactorthreshold.hive.compactor.delta.pct.threshold"="0.5")
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@scr_txn
+PREHOOK: query: CREATE TEMPORARY TABLE src_tmp (key int, value string)
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@src_tmp
+POSTHOOK: query: CREATE TEMPORARY TABLE src_tmp (key int, value string)
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@src_tmp
+PREHOOK: query: CREATE TABLE moretypes (a decimal(10,2), b tinyint, c 
smallint, d int, e bigint, f varchar(10), g char(3))
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@moretypes
+POSTHOOK: query: CREATE TABLE moretypes (a decimal(10,2), b tinyint, c 
smallint, d int, e bigint, f varchar(10), g char(3))
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@moretypes
+PREHOOK: query: show grant user hive_test_user
+PREHOOK: type: SHOW_GRANT
+POSTHOOK: query: show grant user hive_test_user
+POSTHOOK: type: SHOW_GRANT
+default	alltypesorc	hive_test_user	USER	DELETE	true	-1	hive_test_user
+default	alltypesorc	hive_test_user	USER	INSERT	true	-1	hive_test_user
+default	alltypesorc	hive_test_user	USER	SELECT	true	-1	hive_test_user
+default	alltypesorc	hive_test_user	USER	UPDATE	true	-1	hive_test_user
+default	cbo_t1	hive_test_user	USER	DELETE	true	-1	hive_test_user
+default	cbo_t1	hive_test_user	USER	INSERT	true	-1	hive_test_user
+default	cbo_t1	hive_test_user	USER	SELECT	true	-1	hive_test_user
+default	cbo_t1	hive_test_user	USER	UPDATE	true	-1	hive_test_user
+default	cbo_t2	hive_test_user	USER	DELETE	true	-1	hive_test_user
+default	cbo_t2	hive_test_user	USER	INSERT	true	-1	hive_test_user
+default	cbo_t2	hive_test_user	USER	SELECT	true	-1	hive_test_user
+default	cbo_t2	hive_test_user	USER	UPDATE	true	-1	hive_test_user
+default	cbo_t3	hive_test_user	USER	DELETE	true	-1	hive_test_user
+default	cbo_t3	hive_test_user	USER	INSERT	true	-1	hive_test_user
+default	cbo_t3	hive_test_user	USER	SELECT	true	-1	hive_test_user
+default	cbo_t3	hive_test_user	USER	UPDATE	true	-1	hive_test_user
+default	lineitem	hive_test_user	USER	DELETE	true	-1	hive_test_user
+default	lineitem	hive_test_user	USER

[2/3] hive git commit: HIVE-1010: Implement INFORMATION_SCHEMA in Hive (Gunther Hagleitner, reviewed by Thejas Nair)

2017-05-15 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/77f44b66/ql/src/java/org/apache/hadoop/hive/ql/index/HiveIndexedInputFormat.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/index/HiveIndexedInputFormat.java 
b/ql/src/java/org/apache/hadoop/hive/ql/index/HiveIndexedInputFormat.java
index 0e6ec84..a02baf9 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/index/HiveIndexedInputFormat.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/index/HiveIndexedInputFormat.java
@@ -81,7 +81,12 @@ public class HiveIndexedInputFormat extends HiveInputFormat {
   // class
   Class inputFormatClass = part.getInputFileFormatClass();
   InputFormat inputFormat = getInputFormatFromCache(inputFormatClass, job);
-  Utilities.copyTableJobPropertiesToConf(part.getTableDesc(), newjob);
+
+  try {
+Utilities.copyTableJobPropertiesToConf(part.getTableDesc(), newjob);
+  } catch (HiveException e) {
+throw new IOException(e);
+  }
 
   FileInputFormat.setInputPaths(newjob, dir);
   newjob.setInputFormat(inputFormat.getClass());

http://git-wip-us.apache.org/repos/asf/hive/blob/77f44b66/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java 
b/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
index 010b88c..21394c6 100755
--- a/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
@@ -357,9 +357,13 @@ public class HiveInputFormat
   LOG.debug("Found spec for " + hsplit.getPath() + " " + part + " from " + 
pathToPartitionInfo);
 }
 
-if ((part != null) && (part.getTableDesc() != null)) {
-  Utilities.copyTableJobPropertiesToConf(part.getTableDesc(), job);
-  nonNative = part.getTableDesc().isNonNative();
+try {
+  if ((part != null) && (part.getTableDesc() != null)) {
+Utilities.copyTableJobPropertiesToConf(part.getTableDesc(), job);
+nonNative = part.getTableDesc().isNonNative();
+  }
+} catch (HiveException e) {
+  throw new IOException(e);
 }
 
 Path splitPath = hsplit.getPath();
@@ -419,7 +423,11 @@ public class HiveInputFormat
   InputFormat inputFormat, Class<? extends InputFormat> inputFormatClass, int splits,
   TableDesc table, List<InputSplit> result) throws IOException {
 
-Utilities.copyTablePropertiesToConf(table, conf);
+try {
+  Utilities.copyTablePropertiesToConf(table, conf);
+} catch (HiveException e) {
+  throw new IOException(e);
+}
 
 if (tableScan != null) {
   pushFilters(conf, tableScan);

http://git-wip-us.apache.org/repos/asf/hive/blob/77f44b66/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ProjectionPusher.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ProjectionPusher.java 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ProjectionPusher.java
index 68407f5..42f9b66 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ProjectionPusher.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/ProjectionPusher.java
@@ -183,9 +183,14 @@ public class ProjectionPusher {
 final JobConf cloneJobConf = new JobConf(jobConf);
 final PartitionDesc part = pathToPartitionInfo.get(path);
 
-if ((part != null) && (part.getTableDesc() != null)) {
-  Utilities.copyTableJobPropertiesToConf(part.getTableDesc(), 
cloneJobConf);
+try {
+  if ((part != null) && (part.getTableDesc() != null)) {
+Utilities.copyTableJobPropertiesToConf(part.getTableDesc(), 
cloneJobConf);
+  }
+} catch (Exception e) {
+  throw new IOException(e);
 }
+
 pushProjectionsAndFilters(cloneJobConf, path.toString(), 
path.toUri().getPath());
 return cloneJobConf;
   }

http://git-wip-us.apache.org/repos/asf/hive/blob/77f44b66/ql/src/java/org/apache/hadoop/hive/ql/metadata/DefaultStorageHandler.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/metadata/DefaultStorageHandler.java 
b/ql/src/java/org/apache/hadoop/hive/ql/metadata/DefaultStorageHandler.java
index 82b78b8..e87a96d 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/metadata/DefaultStorageHandler.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/metadata/DefaultStorageHandler.java
@@ -93,6 +93,11 @@ public class DefaultStorageHandler implements 
HiveStorageHandler {
   }
 
   @Override
+  public void configureInputJobCredentials(TableDesc tableDesc, Map<String, String> secrets) {
+//do nothing by default
+  }
+
+  @Override
   public Configuration getConf() {
 return conf;
   }

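The default handler deliberately publishes nothing through the new hook. A hedged sketch of what a non-default handler might do with it, using a stub in place of TableDesc; the property key here is hypothetical, and the restored generics in the signature (Map<String, String>) are an assumption recovered from context:

    import java.util.HashMap;
    import java.util.Map;

    public class CredentialHookSketch {

      // minimal stand-in for org.apache.hadoop.hive.ql.plan.TableDesc
      static class TableDescStub {
        String getProperty(String key) { return "secret-from-table-properties"; }
      }

      // what a non-default storage handler might publish through the hook
      public void configureInputJobCredentials(TableDescStub tableDesc, Map<String, String> secrets) {
        secrets.put("hypothetical.jdbc.password", tableDesc.getProperty("password"));
      }

      public static void main(String[] args) {
        Map<String, String> secrets = new HashMap<>();
        new CredentialHookSketch().configureInputJobCredentials(new TableDescStub(), secrets);
        System.out.println(secrets); // {hypothetical.jdbc.password=secret-from-table-properties}
      }
    }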
http://git-wip-us.apache.org/repos/asf/hive/blob/77f44b66/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStorageHandler.java

[2/5] hive git commit: HIVE-16423: Add hint to enforce semi join optimization (Deepak Jaiswal, reviewed by Jason Dere)

2017-04-20 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/9d5d737d/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java.orig
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java.orig 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java.orig
new file mode 100644
index 000..b5a5645
--- /dev/null
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java.orig
@@ -0,0 +1,13508 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.parse;
+
+import static org.apache.hadoop.hive.conf.HiveConf.ConfVars.HIVESTATSDBCLASS;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.Serializable;
+import java.security.AccessControlException;
+import java.util.ArrayDeque;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Deque;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Queue;
+import java.util.Set;
+import java.util.TreeSet;
+import java.util.UUID;
+import java.util.regex.Pattern;
+import java.util.regex.PatternSyntaxException;
+
+import org.antlr.runtime.ClassicToken;
+import org.antlr.runtime.CommonToken;
+import org.antlr.runtime.Token;
+import org.antlr.runtime.tree.Tree;
+import org.antlr.runtime.tree.TreeVisitor;
+import org.antlr.runtime.tree.TreeVisitorAction;
+import org.antlr.runtime.tree.TreeWizard;
+import org.antlr.runtime.tree.TreeWizard.ContextVisitor;
+import org.apache.calcite.rel.RelNode;
+import org.apache.commons.lang.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.hive.common.FileUtils;
+import org.apache.hadoop.hive.common.ObjectPair;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.common.StatsSetupConst.StatDB;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.conf.HiveConf.StrictChecks;
+import org.apache.hadoop.hive.metastore.MetaStoreUtils;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.Warehouse;
+import org.apache.hadoop.hive.metastore.api.Database;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.api.Order;
+import org.apache.hadoop.hive.metastore.api.SQLForeignKey;
+import org.apache.hadoop.hive.metastore.api.SQLPrimaryKey;
+import org.apache.hadoop.hive.ql.CompilationOpContext;
+import org.apache.hadoop.hive.ql.Context;
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.QueryProperties;
+import org.apache.hadoop.hive.ql.QueryState;
+import org.apache.hadoop.hive.ql.exec.AbstractMapJoinOperator;
+import org.apache.hadoop.hive.ql.exec.ArchiveUtils;
+import org.apache.hadoop.hive.ql.exec.ColumnInfo;
+import org.apache.hadoop.hive.ql.exec.ExprNodeEvaluatorFactory;
+import org.apache.hadoop.hive.ql.exec.FetchTask;
+import org.apache.hadoop.hive.ql.exec.FileSinkOperator;
+import org.apache.hadoop.hive.ql.exec.FilterOperator;
+import org.apache.hadoop.hive.ql.exec.FunctionInfo;
+import org.apache.hadoop.hive.ql.exec.FunctionRegistry;
+import org.apache.hadoop.hive.ql.exec.GroupByOperator;
+import org.apache.hadoop.hive.ql.exec.JoinOperator;
+import org.apache.hadoop.hive.ql.exec.Operator;
+import org.apache.hadoop.hive.ql.exec.OperatorFactory;
+import org.apache.hadoop.hive.ql.exec.RecordReader;
+import org.apache.hadoop.hive.ql.exec.RecordWriter;
+import org.apache.hadoop.hive.ql.exec.ReduceSinkOperator;
+import org.apache.hadoop.hive.ql.exec.RowSchema;
+import org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator;
+import org.apache.hadoop.hive.ql.exec.SelectOperator;
+import org.apache.hadoop.hive.ql.exec.TableScanOperator;

[5/5] hive git commit: HIVE-16423: Add hint to enforce semi join optimization (Deepak Jaiswal, reviewed by Jason Dere)

2017-04-20 Thread gunther
HIVE-16423: Add hint to enforce semi join optimization (Deepak Jaiswal, 
reviewed by Jason Dere)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/9d5d737d
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/9d5d737d
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/9d5d737d

Branch: refs/heads/master
Commit: 9d5d737db4f715a880f0d544d548a5ce370f602b
Parents: fa24d4b
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Thu Apr 20 10:07:52 2017 -0700
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Thu Apr 20 10:07:52 2017 -0700

--
 .../org/apache/hadoop/hive/conf/HiveConf.java   | 2 +
 .../test/resources/testconfiguration.properties | 1 +
 .../hive/ql/optimizer/ConvertJoinMapJoin.java   | 4 +-
 .../DynamicPartitionPruningOptimization.java|   102 +-
 .../calcite/translator/HiveOpConverter.java |24 +-
 .../hadoop/hive/ql/parse/CalcitePlanner.java|35 +
 .../hive/ql/parse/CalcitePlanner.java.orig  |  4188 +
 .../hadoop/hive/ql/parse/GenTezUtils.java   |25 +-
 .../apache/hadoop/hive/ql/parse/HintParser.g| 3 +
 .../hadoop/hive/ql/parse/ParseContext.java  |25 +-
 .../apache/hadoop/hive/ql/parse/QBJoinTree.java |16 +
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java  |63 +
 .../hive/ql/parse/SemanticAnalyzer.java.orig| 13508 +
 .../hive/ql/parse/SemiJoinBranchInfo.java   |45 +
 .../hadoop/hive/ql/parse/SemiJoinHint.java  |43 +
 .../hadoop/hive/ql/parse/TaskCompiler.java  | 2 +-
 .../hadoop/hive/ql/parse/TezCompiler.java   |   137 +-
 .../hive/ql/plan/ExprNodeDynamicListDesc.java   |15 +-
 .../apache/hadoop/hive/ql/plan/JoinDesc.java|18 +
 .../hive/ql/ppd/SyntheticJoinPredicate.java | 6 +-
 .../ql/udf/generic/GenericUDAFBloomFilter.java  |13 +
 .../test/queries/clientpositive/semijoin_hint.q |54 +
 .../clientpositive/llap/semijoin_hint.q.out |   899 ++
 23 files changed, 19107 insertions(+), 121 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/9d5d737d/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
--
diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index 420d35e..b10b08e 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -2892,6 +2892,8 @@ public class HiveConf extends Configuration {
 "Big table for runtime filteting should be of atleast this size"),
 
TEZ_DYNAMIC_SEMIJOIN_REDUCTION_THRESHOLD("hive.tez.dynamic.semijoin.reduction.threshold",
 (float) 0.50,
 "Only perform semijoin optimization if the estimated benefit at or 
above this fraction of the target table"),
+
TEZ_DYNAMIC_SEMIJOIN_REDUCTION_HINT_ONLY("hive.tez.dynamic.semijoin.reduction.hint.only",
 false,
+"When true, only enforce semijoin when a hint is provided"),
 TEZ_SMB_NUMBER_WAVES(
 "hive.tez.smb.number.waves",
 (float) 0.5,

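The new semijoin_hint.q test in this commit exercises the flag together with the HintParser.g change later in this thread. A hedged usage sketch; the semi(alias, column, entries) argument order is inferred from the SemiJoinHint constructor added in this patch and should be treated as an assumption, and the tables are borrowed from the dynamic_semijoin_reduction tests elsewhere in this archive:

    set hive.tez.dynamic.semijoin.reduction=true;
    set hive.tez.dynamic.semijoin.reduction.hint.only=true;
    -- with hint.only=true, only joins carrying the hint grow a semijoin branch
    select /*+ semi(s, key1, 5000) */ count(*)
    from srcpart_date d join srcpart_small s on (d.key = s.key1);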
http://git-wip-us.apache.org/repos/asf/hive/blob/9d5d737d/itests/src/test/resources/testconfiguration.properties
--
diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index ed5ce9d..116d0eb 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -572,6 +572,7 @@ minillaplocal.query.files=acid_globallimit.q,\
   schema_evol_text_vecrow_table.q,\
   selectDistinctStar.q,\
   semijoin.q,\
+  semijoin_hint.q,\
   smb_cache.q,\
   special_character_in_tabnames_1.q,\
   sqlmerge.q,\

http://git-wip-us.apache.org/repos/asf/hive/blob/9d5d737d/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java
index db6b05b..637bc54 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java
@@ -794,7 +794,7 @@ public class ConvertJoinMapJoin implements NodeProcessor {
   // The semijoin branch can potentially create a task level cycle
   // with the hashjoin except when it is dynamically partitioned hash
   // join which takes place in a separate task.
-  if (context.parseContext.getRsOpToTsOpMap().siz

[3/5] hive git commit: HIVE-16423: Add hint to enforce semi join optimization (Deepak Jaiswal, reviewed by Jason Dere)

2017-04-20 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/9d5d737d/ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java
index d58f447..83e89af 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java
@@ -266,11 +266,14 @@ public class GenTezUtils {
 }
   }
   // This TableScanOperator could be part of semijoin optimization.
-      Map<ReduceSinkOperator, TableScanOperator> rsOpToTsOpMap =
-          context.parseContext.getRsOpToTsOpMap();
-      for (ReduceSinkOperator rs : rsOpToTsOpMap.keySet()) {
-        if (rsOpToTsOpMap.get(rs) == orig) {
-          rsOpToTsOpMap.put(rs, (TableScanOperator) newRoot);
+      Map<ReduceSinkOperator, SemiJoinBranchInfo> rsToSemiJoinBranchInfo =
+          context.parseContext.getRsToSemiJoinBranchInfo();
+      for (ReduceSinkOperator rs : rsToSemiJoinBranchInfo.keySet()) {
+        SemiJoinBranchInfo sjInfo = rsToSemiJoinBranchInfo.get(rs);
+        if (sjInfo.getTsOp() == orig) {
+          SemiJoinBranchInfo newSJInfo = new SemiJoinBranchInfo(
+              (TableScanOperator) newRoot, sjInfo.getIsHint());
+          rsToSemiJoinBranchInfo.put(rs, newSJInfo);
 }
   }
 }
@@ -516,19 +519,18 @@ public class GenTezUtils {
 return EdgeType.SIMPLE_EDGE;
   }
 
-  public static void processDynamicMinMaxPushDownOperator(
+  public static void processDynamicSemiJoinPushDownOperator(
   GenTezProcContext procCtx, RuntimeValuesInfo runtimeValuesInfo,
   ReduceSinkOperator rs)
   throws SemanticException {
-    TableScanOperator ts = procCtx.parseContext.getRsOpToTsOpMap().get(rs);
+    SemiJoinBranchInfo sjInfo = procCtx.parseContext.getRsToSemiJoinBranchInfo().get(rs);
 
     List<BaseWork> rsWorkList = procCtx.childToWorkMap.get(rs);
-    if (ts == null || rsWorkList == null) {
+    if (sjInfo == null || rsWorkList == null) {
   // This happens when the ReduceSink's edge has been removed by cycle
   // detection logic. Nothing to do here.
   return;
 }
-LOG.debug("ResduceSink " + rs + " to TableScan " + ts);
 
 if (rsWorkList.size() != 1) {
   StringBuilder sb = new StringBuilder();
@@ -541,6 +543,9 @@ public class GenTezUtils {
   throw new SemanticException(rs + " belongs to multiple BaseWorks: " + 
sb.toString());
 }
 
+    TableScanOperator ts = sjInfo.getTsOp();
+    LOG.debug("ReduceSink " + rs + " to TableScan " + ts);
+
 BaseWork parentWork = rsWorkList.get(0);
 BaseWork childWork = procCtx.rootToWorkMap.get(ts);
 
@@ -611,7 +616,7 @@ public class GenTezUtils {
 skip = true;
   }
 }
-context.getRsOpToTsOpMap().remove(rs);
+context.getRsToSemiJoinBranchInfo().remove(rs);
   }
 
   private static class DynamicValuePredicateContext implements 
NodeProcessorCtx {

http://git-wip-us.apache.org/repos/asf/hive/blob/9d5d737d/ql/src/java/org/apache/hadoop/hive/ql/parse/HintParser.g
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/HintParser.g 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/HintParser.g
index 8e70a46..e110fb3 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/HintParser.g
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/HintParser.g
@@ -31,6 +31,7 @@ tokens {
   TOK_MAPJOIN;
   TOK_STREAMTABLE;
   TOK_HINTARGLIST;
+  TOK_LEFTSEMIJOIN;
 }
 
 @header {
@@ -69,6 +70,7 @@ hintItem
 hintName
 :
 KW_MAPJOIN -> TOK_MAPJOIN
+| KW_SEMI -> TOK_LEFTSEMIJOIN
 | KW_STREAMTABLE -> TOK_STREAMTABLE
 ;
 
@@ -80,4 +82,5 @@ hintArgs
 hintArgName
 :
 Identifier
+| Number
 ;

http://git-wip-us.apache.org/repos/asf/hive/blob/9d5d737d/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseContext.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseContext.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseContext.java
index 3f9f76c..9a69f90 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseContext.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseContext.java
@@ -33,17 +33,7 @@ import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
 import org.apache.hadoop.hive.ql.Context;
 import org.apache.hadoop.hive.ql.QueryProperties;
 import org.apache.hadoop.hive.ql.QueryState;
-import org.apache.hadoop.hive.ql.exec.AbstractMapJoinOperator;
-import org.apache.hadoop.hive.ql.exec.FetchTask;
-import org.apache.hadoop.hive.ql.exec.JoinOperator;
-import org.apache.hadoop.hive.ql.exec.ListSinkOperator;
-import org.apache.hadoop.hive.ql.exec.MapJoinOperator;
-import 

[1/5] hive git commit: HIVE-16423: Add hint to enforce semi join optimization (Deepak Jaiswal, reviewed by Jason Dere)

2017-04-20 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master fa24d4b9b -> 9d5d737db


http://git-wip-us.apache.org/repos/asf/hive/blob/9d5d737d/ql/src/java/org/apache/hadoop/hive/ql/parse/SemiJoinBranchInfo.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemiJoinBranchInfo.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemiJoinBranchInfo.java
new file mode 100644
index 000..5d7b9e5
--- /dev/null
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemiJoinBranchInfo.java
@@ -0,0 +1,45 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.parse;
+
+
+import org.apache.hadoop.hive.ql.exec.TableScanOperator;
+
+public class SemiJoinBranchInfo {
+  private TableScanOperator ts;
+  private boolean isHint;
+
+  public SemiJoinBranchInfo(TableScanOperator ts) {
+this.ts = ts;
+isHint = false;
+  }
+
+  public SemiJoinBranchInfo(TableScanOperator ts, boolean isHint) {
+this.ts = ts;
+this.isHint = isHint;
+  }
+
+  public TableScanOperator getTsOp() {
+return ts;
+  }
+
+  public boolean getIsHint() {
+return isHint;
+  }
+}

http://git-wip-us.apache.org/repos/asf/hive/blob/9d5d737d/ql/src/java/org/apache/hadoop/hive/ql/parse/SemiJoinHint.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemiJoinHint.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemiJoinHint.java
new file mode 100644
index 000..1f24e23
--- /dev/null
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemiJoinHint.java
@@ -0,0 +1,43 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.parse;
+
+public class SemiJoinHint {
+  private String tabAlias;
+  private String colName;
+  private Integer numEntries;
+
+  public SemiJoinHint(String tabAlias, String colName, Integer numEntries) {
+this.tabAlias = tabAlias;
+this.colName = colName;
+this.numEntries = numEntries;
+  }
+
+  public String getTabAlias() {
+return tabAlias;
+  }
+
+  public String getColName() {
+return colName;
+  }
+
+  public Integer getNumEntries() {
+return numEntries != null ? numEntries : -1;
+  }
+}

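An illustrative harness for the class above (not part of the patch): it shows the hint triple as parsed and the -1 sentinel that getNumEntries() returns when the optional entries argument is omitted. It assumes compilation in the same org.apache.hadoop.hive.ql.parse package as SemiJoinHint:

    package org.apache.hadoop.hive.ql.parse;

    public class SemiJoinHintDemo {
      public static void main(String[] args) {
        // hint written as semi(s, key1, 5000): alias, column, expected entries
        SemiJoinHint full = new SemiJoinHint("s", "key1", 5000);
        // entries argument omitted: getNumEntries() degrades to the -1 sentinel
        SemiJoinHint bare = new SemiJoinHint("s", "key1", null);
        System.out.println(full.getNumEntries()); // 5000
        System.out.println(bare.getNumEntries()); // -1
      }
    }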
http://git-wip-us.apache.org/repos/asf/hive/blob/9d5d737d/ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java
index 7caeb78..96525b4 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java
@@ -531,7 +531,7 @@ public abstract class TaskCompiler {
 clone.setLineageInfo(pCtx.getLineageInfo());
 clone.setMapJoinOps(pCtx.getMapJoinOps());
 clone.setRsToRuntimeValuesInfoMap(pCtx.getRsToRuntimeValuesInfoMap());
-clone.setRsOpToTsOpMap(pCtx.getRsOpToTsOpMap());
+clone.setRsToSemiJoinBranchInfo(pCtx.getRsToSemiJoinBranchInfo());
 
 return clone;
   }

http://git-wip-us.apache.org/repos/asf/hive/blob/9d5d737d/ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java 

[4/5] hive git commit: HIVE-16423: Add hint to enforce semi join optimization (Deepak Jaiswal, reviewed by Jason Dere)

2017-04-20 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/9d5d737d/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java.orig
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java.orig 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java.orig
new file mode 100644
index 000..c97b3e7
--- /dev/null
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java.orig
@@ -0,0 +1,4188 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.parse;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.UndeclaredThrowableException;
+import java.math.BigDecimal;
+import java.util.AbstractMap.SimpleEntry;
+import java.util.ArrayDeque;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.BitSet;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Deque;
+import java.util.EnumSet;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.antlr.runtime.ClassicToken;
+import org.antlr.runtime.CommonToken;
+import org.antlr.runtime.tree.TreeVisitor;
+import org.antlr.runtime.tree.TreeVisitorAction;
+import org.apache.calcite.adapter.druid.DruidQuery;
+import org.apache.calcite.adapter.druid.DruidRules;
+import org.apache.calcite.adapter.druid.DruidSchema;
+import org.apache.calcite.adapter.druid.DruidTable;
+import org.apache.calcite.adapter.druid.LocalInterval;
+import org.apache.calcite.config.CalciteConnectionConfigImpl;
+import org.apache.calcite.config.CalciteConnectionProperty;
+import org.apache.calcite.plan.RelOptCluster;
+import org.apache.calcite.plan.RelOptMaterialization;
+import org.apache.calcite.plan.RelOptPlanner;
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptSchema;
+import org.apache.calcite.plan.RelOptUtil;
+import org.apache.calcite.plan.RelTraitSet;
+import org.apache.calcite.plan.hep.HepMatchOrder;
+import org.apache.calcite.plan.hep.HepPlanner;
+import org.apache.calcite.plan.hep.HepProgram;
+import org.apache.calcite.plan.hep.HepProgramBuilder;
+import org.apache.calcite.rel.RelCollation;
+import org.apache.calcite.rel.RelCollationImpl;
+import org.apache.calcite.rel.RelCollations;
+import org.apache.calcite.rel.RelFieldCollation;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.Aggregate;
+import org.apache.calcite.rel.core.AggregateCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.core.JoinRelType;
+import org.apache.calcite.rel.core.SetOp;
+import org.apache.calcite.rel.core.TableScan;
+import org.apache.calcite.rel.metadata.CachingRelMetadataProvider;
+import org.apache.calcite.rel.metadata.ChainedRelMetadataProvider;
+import org.apache.calcite.rel.metadata.DefaultRelMetadataProvider;
+import org.apache.calcite.rel.metadata.JaninoRelMetadataProvider;
+import org.apache.calcite.rel.metadata.RelMetadataProvider;
+import org.apache.calcite.rel.metadata.RelMetadataQuery;
+import org.apache.calcite.rel.rules.FilterMergeRule;
+import org.apache.calcite.rel.rules.JoinToMultiJoinRule;
+import org.apache.calcite.rel.rules.LoptOptimizeJoinRule;
+import org.apache.calcite.rel.rules.ProjectMergeRule;
+import org.apache.calcite.rel.rules.ProjectRemoveRule;
+import org.apache.calcite.rel.rules.SemiJoinFilterTransposeRule;
+import org.apache.calcite.rel.rules.SemiJoinJoinTransposeRule;
+import org.apache.calcite.rel.rules.SemiJoinProjectTransposeRule;
+import org.apache.calcite.rel.rules.UnionMergeRule;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.rel.type.RelDataTypeField;
+import org.apache.calcite.rel.type.RelDataTypeImpl;
+import org.apache.calcite.rex.RexBuilder;
+import 

[2/3] hive git commit: HIVE-16132: DataSize stats don't seem correct in semijoin opt branch (Deepak Jaiswal via Gunther Hagleitner)

2017-03-13 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/be47d9e3/ql/src/test/results/clientpositive/llap/dynamic_semijoin_reduction.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/dynamic_semijoin_reduction.q.out 
b/ql/src/test/results/clientpositive/llap/dynamic_semijoin_reduction.q.out
index 012db41..d32cb5c 100644
--- a/ql/src/test/results/clientpositive/llap/dynamic_semijoin_reduction.q.out
+++ b/ql/src/test/results/clientpositive/llap/dynamic_semijoin_reduction.q.out
@@ -102,6 +102,52 @@ POSTHOOK: Input: default@srcpart@ds=2008-04-09/hr=12
 POSTHOOK: Output: default@srcpart_small@ds=2008-04-09
 POSTHOOK: Lineage: srcpart_small PARTITION(ds=2008-04-09).key1 SIMPLE 
[(srcpart)srcpart.FieldSchema(name:key, type:string, comment:default), ]
 POSTHOOK: Lineage: srcpart_small PARTITION(ds=2008-04-09).value1 SIMPLE 
[(srcpart)srcpart.FieldSchema(name:value, type:string, comment:default), ]
+PREHOOK: query: analyze table alltypesorc_int compute statistics for columns
+PREHOOK: type: QUERY
+PREHOOK: Input: default@alltypesorc_int
+PREHOOK: Output: default@alltypesorc_int
+ A masked pattern was here 
+POSTHOOK: query: analyze table alltypesorc_int compute statistics for columns
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@alltypesorc_int
+POSTHOOK: Output: default@alltypesorc_int
+ A masked pattern was here 
+PREHOOK: query: analyze table srcpart_date compute statistics for columns
+PREHOOK: type: QUERY
+PREHOOK: Input: default@srcpart_date
+PREHOOK: Input: default@srcpart_date@ds=2008-04-08
+PREHOOK: Input: default@srcpart_date@ds=2008-04-09
+PREHOOK: Output: default@srcpart_date
+PREHOOK: Output: default@srcpart_date@ds=2008-04-08
+PREHOOK: Output: default@srcpart_date@ds=2008-04-09
+ A masked pattern was here 
+POSTHOOK: query: analyze table srcpart_date compute statistics for columns
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@srcpart_date
+POSTHOOK: Input: default@srcpart_date@ds=2008-04-08
+POSTHOOK: Input: default@srcpart_date@ds=2008-04-09
+POSTHOOK: Output: default@srcpart_date
+POSTHOOK: Output: default@srcpart_date@ds=2008-04-08
+POSTHOOK: Output: default@srcpart_date@ds=2008-04-09
+ A masked pattern was here 
+PREHOOK: query: analyze table srcpart_small compute statistics for columns
+PREHOOK: type: QUERY
+PREHOOK: Input: default@srcpart_small
+PREHOOK: Input: default@srcpart_small@ds=2008-04-08
+PREHOOK: Input: default@srcpart_small@ds=2008-04-09
+PREHOOK: Output: default@srcpart_small
+PREHOOK: Output: default@srcpart_small@ds=2008-04-08
+PREHOOK: Output: default@srcpart_small@ds=2008-04-09
+ A masked pattern was here 
+POSTHOOK: query: analyze table srcpart_small compute statistics for columns
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@srcpart_small
+POSTHOOK: Input: default@srcpart_small@ds=2008-04-08
+POSTHOOK: Input: default@srcpart_small@ds=2008-04-09
+POSTHOOK: Output: default@srcpart_small
+POSTHOOK: Output: default@srcpart_small@ds=2008-04-08
+POSTHOOK: Output: default@srcpart_small@ds=2008-04-09
+ A masked pattern was here 
 PREHOOK: query: EXPLAIN select count(*) from srcpart_date join srcpart_small 
on (srcpart_date.key = srcpart_small.key1)
 PREHOOK: type: QUERY
 POSTHOOK: query: EXPLAIN select count(*) from srcpart_date join srcpart_small 
on (srcpart_date.key = srcpart_small.key1)
@@ -124,19 +170,19 @@ STAGE PLANS:
 TableScan
   alias: srcpart_date
   filterExpr: key is not null (type: boolean)
-  Statistics: Num rows: 2000 Data size: 368000 Basic stats: 
COMPLETE Column stats: NONE
+  Statistics: Num rows: 2000 Data size: 174000 Basic stats: 
COMPLETE Column stats: COMPLETE
   Filter Operator
 predicate: key is not null (type: boolean)
-Statistics: Num rows: 2000 Data size: 368000 Basic stats: 
COMPLETE Column stats: NONE
+Statistics: Num rows: 2000 Data size: 174000 Basic stats: 
COMPLETE Column stats: COMPLETE
 Select Operator
   expressions: key (type: string)
   outputColumnNames: _col0
-  Statistics: Num rows: 2000 Data size: 368000 Basic 
stats: COMPLETE Column stats: NONE
+  Statistics: Num rows: 2000 Data size: 174000 Basic 
stats: COMPLETE Column stats: COMPLETE
   Reduce Output Operator
 key expressions: _col0 (type: string)
 sort order: +
 Map-reduce partition columns: _col0 (type: string)
-Statistics: Num rows: 2000 Data size: 368000 Basic 
stats: COMPLETE Column stats: NONE
+Statistics: Num rows: 2000 Data size: 174000 Basic 
stats: COMPLETE Column stats: COMPLETE
 Execution mode: llap
 LLAP IO: all inputs
 Map 4 

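A note on the plan deltas above: the row count is unchanged (2000) while Data size drops from 368000 to 174000 once Column stats flip from NONE to COMPLETE. A hedged reading of the arithmetic: 174000 / 2000 = 87 bytes per row, i.e. the estimate now uses the measured average width of the single projected string column instead of the raw-size heuristic behind the old figure of 184 bytes per row.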
[3/3] hive git commit: HIVE-16132: DataSize stats don't seem correct in semijoin opt branch (Deepak Jaiswal via Gunther Hagleitner)

2017-03-13 Thread gunther
HIVE-16132: DataSize stats don't seem correct in semijoin opt branch (Deepak 
Jaiswal via Gunther Hagleitner)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/be47d9e3
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/be47d9e3
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/be47d9e3

Branch: refs/heads/master
Commit: be47d9e3fae437c7644e47679119d20b86f8a332
Parents: c76ce91
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Mon Mar 13 11:14:56 2017 -0700
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Mon Mar 13 11:15:22 2017 -0700

--
 .../DynamicPartitionPruningOptimization.java|  57 +-
 .../clientpositive/dynamic_semijoin_reduction.q |  14 +-
 .../llap/dynamic_semijoin_reduction.q.out   | 870 +++
 .../results/clientpositive/llap/mergejoin.q.out |   2 +-
 4 files changed, 584 insertions(+), 359 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/be47d9e3/ql/src/java/org/apache/hadoop/hive/ql/optimizer/DynamicPartitionPruningOptimization.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/DynamicPartitionPruningOptimization.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/DynamicPartitionPruningOptimization.java
index e6f21e9..b6db6aa 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/DynamicPartitionPruningOptimization.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/DynamicPartitionPruningOptimization.java
@@ -394,19 +394,18 @@ public class DynamicPartitionPruningOptimization 
implements NodeProcessor {
 // we need the expr that generated the key of the reduce sink
 ExprNodeDesc key = 
ctx.generator.getConf().getKeyCols().get(ctx.desc.getKeyIndex());
 
-if (parentOfRS instanceof SelectOperator) {
-      // Make sure the semijoin branch is not on a partition column.
-  String internalColName = null;
-  ExprNodeDesc exprNodeDesc = key;
-  // Find the ExprNodeColumnDesc
-  while (!(exprNodeDesc instanceof ExprNodeColumnDesc) &&
-  (exprNodeDesc.getChildren() != null)) {
-exprNodeDesc = exprNodeDesc.getChildren().get(0);
-  }
-
-  if (exprNodeDesc instanceof ExprNodeColumnDesc) {
-internalColName = ((ExprNodeColumnDesc) exprNodeDesc).getColumn();
+String internalColName = null;
+ExprNodeDesc exprNodeDesc = key;
+// Find the ExprNodeColumnDesc
+while (!(exprNodeDesc instanceof ExprNodeColumnDesc) &&
+(exprNodeDesc.getChildren() != null)) {
+  exprNodeDesc = exprNodeDesc.getChildren().get(0);
+}
 
+if (exprNodeDesc instanceof ExprNodeColumnDesc) {
+  internalColName = ((ExprNodeColumnDesc) exprNodeDesc).getColumn();
+  if (parentOfRS instanceof SelectOperator) {
+        // Make sure the semijoin branch is not on a partition column.
 ExprNodeColumnDesc colExpr = ((ExprNodeColumnDesc) (parentOfRS.
 getColumnExprMap().get(internalColName)));
 String colName = ExprNodeDescUtils.extractColName(colExpr);
@@ -423,12 +422,13 @@ public class DynamicPartitionPruningOptimization 
implements NodeProcessor {
   // The column is partition column, skip the optimization.
   return false;
 }
-  } else {
-// No column found!
-// Bail out
-return false;
   }
+} else {
+  // No column found!
+  // Bail out
+  return false;
 }
+
     List<ExprNodeDesc> keyExprs = new ArrayList<ExprNodeDesc>();
 keyExprs.add(key);
 
@@ -438,9 +438,32 @@ public class DynamicPartitionPruningOptimization 
implements NodeProcessor {
 
 // project the relevant key column
 SelectDesc select = new SelectDesc(keyExprs, outputNames);
+
+    // Create the new RowSchema for the projected column
+    ColumnInfo columnInfo = parentOfRS.getSchema().getColumnInfo(internalColName);
+    ArrayList<ColumnInfo> signature = new ArrayList<ColumnInfo>();
+    signature.add(columnInfo);
+    RowSchema rowSchema = new RowSchema(signature);
+
+// Create the column expr map
+Map<String, ExprNodeDesc> colExprMap = new HashMap<String, ExprNodeDesc>();
+ExprNodeDesc exprNode = null;
+if ( parentOfRS.getColumnExprMap() != null) {
+  exprNode = parentOfRS.getColumnExprMap().get(internalColName).clone();
+} else {
+  exprNode = new ExprNodeColumnDesc(columnInfo);
+}
+
+if (exprNode instanceof ExprNodeColumnDesc) {
+  ExprNodeColumnDesc encd = (ExprNodeColumnDesc) exprNode;
+  encd.setColumn(internalColName);
+}
+colExprMap.put(internalColName, exprNode);
+
+// Create the Select Operator
 SelectOperator selectOp =
 (SelectOperator) OperatorFactory.getAndMakeChild(select,
-new RowSc

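The diff above is cut off by the archive mid-statement, but the pattern it relies on is visible earlier in the hunk: descend through an expression's first child until a column reference is found, bailing out if none exists. A hedged self-contained sketch with stand-in node types (the real classes live in org.apache.hadoop.hive.ql.plan):

    import java.util.Arrays;
    import java.util.List;

    class ExprNode {
      List<ExprNode> children;
      ExprNode(ExprNode... kids) { children = kids.length == 0 ? null : Arrays.asList(kids); }
    }

    class ColumnNode extends ExprNode {
      final String column;
      ColumnNode(String column) { this.column = column; }
    }

    public class FindColumnDemo {
      // walk the first-child chain until a column reference (or a leaf) is hit
      static String findColumn(ExprNode expr) {
        while (!(expr instanceof ColumnNode) && expr.children != null) {
          expr = expr.children.get(0);
        }
        return (expr instanceof ColumnNode) ? ((ColumnNode) expr).column : null; // null => bail out
      }

      public static void main(String[] args) {
        // e.g. a cast wrapping the key column
        ExprNode key = new ExprNode(new ColumnNode("_col3"));
        System.out.println(findColumn(key)); // _col3
      }
    }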
[1/3] hive git commit: HIVE-16132: DataSize stats don't seem correct in semijoin opt branch (Deepak Jaiswal via Gunther Hagleitner)

2017-03-13 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master c76ce912b -> be47d9e3f


http://git-wip-us.apache.org/repos/asf/hive/blob/be47d9e3/ql/src/test/results/clientpositive/llap/mergejoin.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/mergejoin.q.out 
b/ql/src/test/results/clientpositive/llap/mergejoin.q.out
index 2dcfd6b..ae99e66 100644
--- a/ql/src/test/results/clientpositive/llap/mergejoin.q.out
+++ b/ql/src/test/results/clientpositive/llap/mergejoin.q.out
@@ -61,7 +61,7 @@ STAGE PLANS:
   Select Operator
 expressions: _col0 (type: string)
 outputColumnNames: _col0
-Statistics: Num rows: 25 Data size: 4375 Basic stats: 
COMPLETE Column stats: COMPLETE
+Statistics: Num rows: 25 Data size: 2150 Basic stats: 
COMPLETE Column stats: COMPLETE
 Group By Operator
   aggregations: min(_col0), max(_col0), 
bloom_filter(_col0, expectedEntries=14)
   mode: hash



[2/2] hive git commit: HIVE-1555: JDBC Storage Handler (Gunther Hagleitner, reviewed by Jason Dere)

2017-02-28 Thread gunther
HIVE-1555: JDBC Storage Handler (Gunther Hagleitner, reviewed by Jason Dere)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/12b27a35
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/12b27a35
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/12b27a35

Branch: refs/heads/master
Commit: 12b27a38499f6422e49742bd5cad71416fb2
Parents: a9de1cd
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Tue Feb 28 23:29:32 2017 -0800
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Tue Feb 28 23:55:05 2017 -0800

--
 itests/qtest/pom.xml|   7 +
 .../test/resources/testconfiguration.properties |   1 +
 jdbc-handler/pom.xml| 127 
 .../hive/storage/jdbc/JdbcInputFormat.java  | 108 +++
 .../hive/storage/jdbc/JdbcInputSplit.java   | 100 ++
 .../hive/storage/jdbc/JdbcOutputFormat.java |  68 +
 .../hive/storage/jdbc/JdbcRecordReader.java | 133 
 .../org/apache/hive/storage/jdbc/JdbcSerDe.java | 164 ++
 .../hive/storage/jdbc/JdbcStorageHandler.java   | 106 +++
 .../storage/jdbc/QueryConditionBuilder.java | 186 
 .../storage/jdbc/conf/CustomConfigManager.java  |  23 ++
 .../jdbc/conf/CustomConfigManagerFactory.java   |  50 +++
 .../hive/storage/jdbc/conf/DatabaseType.java|  21 ++
 .../storage/jdbc/conf/JdbcStorageConfig.java|  49 +++
 .../jdbc/conf/JdbcStorageConfigManager.java |  97 ++
 .../hive/storage/jdbc/dao/DatabaseAccessor.java |  34 +++
 .../jdbc/dao/DatabaseAccessorFactory.java   |  53 
 .../jdbc/dao/GenericJdbcDatabaseAccessor.java   | 253 
 .../storage/jdbc/dao/JdbcRecordIterator.java| 104 +++
 .../storage/jdbc/dao/MySqlDatabaseAccessor.java |  39 +++
 .../HiveJdbcDatabaseAccessException.java|  41 +++
 .../exception/HiveJdbcStorageException.java |  40 +++
 .../src/test/java/org/apache/TestSuite.java |  29 ++
 .../config/JdbcStorageConfigManagerTest.java|  87 ++
 .../hive/storage/jdbc/JdbcInputFormatTest.java  |  81 +
 .../storage/jdbc/QueryConditionBuilderTest.java | 151 +
 .../dao/GenericJdbcDatabaseAccessorTest.java| 206 +
 jdbc-handler/src/test/resources/condition1.xml  |  48 +++
 jdbc-handler/src/test/resources/condition2.xml  | 101 +++
 jdbc-handler/src/test/resources/test_script.sql |  21 ++
 packaging/pom.xml   |   5 +
 packaging/src/main/assembly/src.xml |   1 +
 pom.xml |   3 +
 .../test/queries/clientpositive/jdbc_handler.q  |  58 
 .../clientpositive/llap/jdbc_handler.q.out  | 303 +++
 35 files changed, 2898 insertions(+)
--

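For orientation, a hedged sketch of what the new handler enables at the DDL level. The handler class path comes from the file listing above; the hive.sql.* property keys, the MYSQL database-type value, and the table definition are assumptions inferred from JdbcStorageConfig and MySqlDatabaseAccessor in this commit, not confirmed syntax:

    CREATE EXTERNAL TABLE strategy_jdbc (strategy_id int, name string)
    STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
    TBLPROPERTIES (
      "hive.sql.database.type" = "MYSQL",
      "hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver",
      "hive.sql.jdbc.url" = "jdbc:mysql://dbhost/test",
      "hive.sql.query" = "select strategy_id, name from test_strategy"
    );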

http://git-wip-us.apache.org/repos/asf/hive/blob/12b27a35/itests/qtest/pom.xml
--
diff --git a/itests/qtest/pom.xml b/itests/qtest/pom.xml
index 1b49e88..1c3b601 100644
--- a/itests/qtest/pom.xml
+++ b/itests/qtest/pom.xml
@@ -119,6 +119,13 @@
       <classifier>tests</classifier>
       <scope>test</scope>
     </dependency>
+    <dependency>
+      <groupId>org.apache.hive</groupId>
+      <artifactId>hive-jdbc-handler</artifactId>
+      <version>${project.version}</version>
+      <scope>test</scope>
+    </dependency>
+
 
 
 

http://git-wip-us.apache.org/repos/asf/hive/blob/12b27a35/itests/src/test/resources/testconfiguration.properties
--
diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index 778b614..807b124 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -499,6 +499,7 @@ minillaplocal.query.files=acid_globallimit.q,\
   input16_cc.q,\
   insert_dir_distcp.q,\
   insert_into_with_schema.q,\
+  jdbc_handler.q,\
   join1.q,\
   join_acid_non_acid.q,\
   join_filters.q,\

http://git-wip-us.apache.org/repos/asf/hive/blob/12b27a35/jdbc-handler/pom.xml
--
diff --git a/jdbc-handler/pom.xml b/jdbc-handler/pom.xml
new file mode 100644
index 000..364886a
--- /dev/null
+++ b/jdbc-handler/pom.xml
@@ -0,0 +1,127 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.hive</groupId>
+    <artifactId>hive</artifactId>
+    <version>2.2.0-SNAPSHOT</version>
+    <relativePath>../pom.xml</relativePath>
+  </parent>
+
+  <artifactId>hive-jdbc-handler</artifactId>
+  <packaging>jar</packaging>
+  <name>Hive JDBC Handler</name>
+
+  <properties>
+    <hive.path.to.root>..</hive.path.to.root>
+  </properties>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.apache.hive</groupId>
+      <artifactId>hive-common</artifactId>
+      <version>${project.version}</version>
+      <exclusions>
+        <exclusion>
+          <groupId>org.eclipse.jetty.aggregate</groupId>
+          <artifactId>jetty-all</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apac

[1/2] hive git commit: HIVE-1555: JDBC Storage Handler (Gunther Hagleitner, reviewed by Jason Dere)

2017-02-28 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master a9de1cdbb -> 12b27a355


http://git-wip-us.apache.org/repos/asf/hive/blob/12b27a35/jdbc-handler/src/test/java/org/apache/hive/storage/jdbc/dao/GenericJdbcDatabaseAccessorTest.java
--
diff --git 
a/jdbc-handler/src/test/java/org/apache/hive/storage/jdbc/dao/GenericJdbcDatabaseAccessorTest.java
 
b/jdbc-handler/src/test/java/org/apache/hive/storage/jdbc/dao/GenericJdbcDatabaseAccessorTest.java
new file mode 100644
index 000..5fd600b
--- /dev/null
+++ 
b/jdbc-handler/src/test/java/org/apache/hive/storage/jdbc/dao/GenericJdbcDatabaseAccessorTest.java
@@ -0,0 +1,206 @@
+/*
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hive.storage.jdbc.dao;
+
+import static org.hamcrest.Matchers.equalTo;
+import static org.hamcrest.Matchers.equalToIgnoringCase;
+import static org.hamcrest.Matchers.is;
+import static org.hamcrest.Matchers.notNullValue;
+import static org.junit.Assert.assertThat;
+
+import org.apache.hadoop.conf.Configuration;
+import org.junit.Test;
+
+import org.apache.hive.storage.jdbc.conf.JdbcStorageConfig;
+import org.apache.hive.storage.jdbc.exception.HiveJdbcDatabaseAccessException;
+
+import java.util.List;
+import java.util.Map;
+
+public class GenericJdbcDatabaseAccessorTest {
+
+  @Test
+  public void testGetColumnNames_starQuery() throws HiveJdbcDatabaseAccessException {
+    Configuration conf = buildConfiguration();
+    DatabaseAccessor accessor = DatabaseAccessorFactory.getAccessor(conf);
+    List<String> columnNames = accessor.getColumnNames(conf);
+
+assertThat(columnNames, is(notNullValue()));
+assertThat(columnNames.size(), is(equalTo(7)));
+assertThat(columnNames.get(0), is(equalToIgnoringCase("strategy_id")));
+  }
+
+
+  @Test
+  public void testGetColumnNames_fieldListQuery() throws HiveJdbcDatabaseAccessException {
+    Configuration conf = buildConfiguration();
+    conf.set(JdbcStorageConfig.QUERY.getPropertyName(), "select name,referrer from test_strategy");
+    DatabaseAccessor accessor = DatabaseAccessorFactory.getAccessor(conf);
+    List<String> columnNames = accessor.getColumnNames(conf);
+
+assertThat(columnNames, is(notNullValue()));
+assertThat(columnNames.size(), is(equalTo(2)));
+assertThat(columnNames.get(0), is(equalToIgnoringCase("name")));
+  }
+
+
+  @Test(expected = HiveJdbcDatabaseAccessException.class)
+  public void testGetColumnNames_invalidQuery() throws HiveJdbcDatabaseAccessException {
+    Configuration conf = buildConfiguration();
+    conf.set(JdbcStorageConfig.QUERY.getPropertyName(), "select * from invalid_strategy");
+    DatabaseAccessor accessor = DatabaseAccessorFactory.getAccessor(conf);
+    @SuppressWarnings("unused")
+    List<String> columnNames = accessor.getColumnNames(conf);
+  }
+
+
+  @Test
+  public void testGetTotalNumberOfRecords() throws HiveJdbcDatabaseAccessException {
+Configuration conf = buildConfiguration();
+DatabaseAccessor accessor = DatabaseAccessorFactory.getAccessor(conf);
+int numRecords = accessor.getTotalNumberOfRecords(conf);
+
+assertThat(numRecords, is(equalTo(5)));
+  }
+
+
+  @Test
+  public void testGetTotalNumberOfRecords_whereClause() throws HiveJdbcDatabaseAccessException {
+    Configuration conf = buildConfiguration();
+    conf.set(JdbcStorageConfig.QUERY.getPropertyName(), "select * from test_strategy where strategy_id = '5'");
+DatabaseAccessor accessor = DatabaseAccessorFactory.getAccessor(conf);
+int numRecords = accessor.getTotalNumberOfRecords(conf);
+
+assertThat(numRecords, is(equalTo(1)));
+  }
+
+
+  @Test
+  public void testGetTotalNumberOfRecords_noRecords() throws HiveJdbcDatabaseAccessException {
+    Configuration conf = buildConfiguration();
+    conf.set(JdbcStorageConfig.QUERY.getPropertyName(), "select * from test_strategy where strategy_id = '25'");
+DatabaseAccessor accessor = DatabaseAccessorFactory.getAccessor(conf);
+int numRecords = accessor.getTotalNumberOfRecords(conf);
+
+assertThat(numRecords, is(equalTo(0)));
+  }
+
+
+  @Test(expected = HiveJdbcDatabaseAccessException.class)
+  public void testGetTotalNumberOfRecords_invalidQuery() throws HiveJdbcDatabaseAccessException {
+    Configuration conf = buildConfiguration();
+    conf.set(JdbcStorageConfig.QUERY.getPropertyName(), "select * from strategyx where strategy_id = '5'");
+DatabaseAccessor accessor = 

[5/7] hive git commit: HIVE-15873: Remove Windows-specific code (Gunther Hagleitner, reviewed by Ashutosh Chauhan)

2017-02-11 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/38ad7792/hcatalog/webhcat/java-client/src/test/java/org/apache/hive/hcatalog/api/TestHCatClient.java
--
diff --git 
a/hcatalog/webhcat/java-client/src/test/java/org/apache/hive/hcatalog/api/TestHCatClient.java
 
b/hcatalog/webhcat/java-client/src/test/java/org/apache/hive/hcatalog/api/TestHCatClient.java
index 48ee7cf..b9cb067 100644
--- 
a/hcatalog/webhcat/java-client/src/test/java/org/apache/hive/hcatalog/api/TestHCatClient.java
+++ 
b/hcatalog/webhcat/java-client/src/test/java/org/apache/hive/hcatalog/api/TestHCatClient.java
@@ -40,7 +40,6 @@ import org.apache.hadoop.hive.metastore.Warehouse;
 import org.apache.hadoop.hive.metastore.api.MetaException;
 import org.apache.hadoop.hive.metastore.api.NotificationEvent;
 import org.apache.hadoop.hive.metastore.api.PartitionEventType;
-import org.apache.hadoop.hive.ql.WindowsPathUtil;
 import org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat;
 import org.apache.hadoop.hive.ql.io.RCFileInputFormat;
 import org.apache.hadoop.hive.ql.io.RCFileOutputFormat;
@@ -109,9 +108,6 @@ public class TestHCatClient {
   useExternalMS = true;
   return;
 }
-if (Shell.WINDOWS) {
-  WindowsPathUtil.convertPathsFromWindowsToHdfs(hcatConf);
-}
 
 System.setProperty(HiveConf.ConfVars.METASTORE_EVENT_LISTENERS.varname,
 DbNotificationListener.class.getName()); // turn on db notification 
listener on metastore
@@ -136,9 +132,6 @@ public class TestHCatClient {
   }
 
   public static String fixPath(String path) {
-if(!Shell.WINDOWS) {
-  return path;
-}
     String expectedDir = path.replaceAll("\\\\", "/");
 if (!expectedDir.startsWith("/")) {
   expectedDir = "/" + expectedDir;

http://git-wip-us.apache.org/repos/asf/hive/blob/38ad7792/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/ExecServiceImpl.java
--
diff --git 
a/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/ExecServiceImpl.java
 
b/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/ExecServiceImpl.java
index e868102..54b8419 100644
--- 
a/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/ExecServiceImpl.java
+++ 
b/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/ExecServiceImpl.java
@@ -175,53 +175,7 @@ public class ExecServiceImpl implements ExecService {
 LOG.info("Running: " + cmd);
 ExecBean res = new ExecBean();
 
-if(Shell.WINDOWS){
-  //The default executor is sometimes causing failure on windows. hcat
-  // command sometimes returns non zero exit status with it. It seems
-  // to hit some race conditions on windows.
-  env = execEnv(env);
-  String[] envVals = new String[env.size()];
-  int i=0;
-      for (Entry<String, String> kv : env.entrySet()) {
-envVals[i++] = kv.getKey() + "=" + kv.getValue();
-LOG.info("Setting " +  kv.getKey() + "=" + kv.getValue());
-  }
-
-  Process proc;
-  synchronized (WindowsProcessLaunchLock) {
-// To workaround the race condition issue with child processes
-// inheriting unintended handles during process launch that can
-// lead to hangs on reading output and error streams, we
-// serialize process creation. More info available at:
-// http://support.microsoft.com/kb/315939
-proc = Runtime.getRuntime().exec(cmd.toStrings(), envVals);
-  }
-
-  //consume stderr
-  StreamOutputWriter errorGobbler = new
-StreamOutputWriter(proc.getErrorStream(), "ERROR", errStream);
-
-  //consume stdout
-  StreamOutputWriter outputGobbler = new
-StreamOutputWriter(proc.getInputStream(), "OUTPUT", outStream);
-
-  //start collecting input streams
-  errorGobbler.start();
-  outputGobbler.start();
-  //execute
-  try{
-res.exitcode = proc.waitFor();
-  } catch (InterruptedException e) {
-throw new IOException(e);
-  } finally {
-//flush
-errorGobbler.out.flush();
-outputGobbler.out.flush();
-  }
-}
-else {
-  res.exitcode = executor.execute(cmd, execEnv(env));
-}
+res.exitcode = executor.execute(cmd, execEnv(env));
 
 String enc = appConf.get(AppConfig.EXEC_ENCODING_NAME);
 res.stdout = outStream.toString(enc);

http://git-wip-us.apache.org/repos/asf/hive/blob/38ad7792/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/HiveDelegator.java
--
diff --git 
a/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/HiveDelegator.java
 
b/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/HiveDelegator.java
index 0ea964f..f0296cb 100644
--- 

[3/7] hive git commit: HIVE-15873: Remove Windows-specific code (Gunther Hagleitner, reviewed by Ashutosh Chauhan)

2017-02-11 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/38ad7792/ql/src/test/results/clientpositive/partition_timestamp2_win.q.out
--
diff --git a/ql/src/test/results/clientpositive/partition_timestamp2_win.q.out 
b/ql/src/test/results/clientpositive/partition_timestamp2_win.q.out
deleted file mode 100755
index f39db1f..000
--- a/ql/src/test/results/clientpositive/partition_timestamp2_win.q.out
+++ /dev/null
@@ -1,399 +0,0 @@
-PREHOOK: query: -- Windows-specific due to space character being escaped in 
Hive paths on Windows.
--- INCLUDE_OS_WINDOWS
-drop table partition_timestamp2_1
-PREHOOK: type: DROPTABLE
-POSTHOOK: query: -- Windows-specific due to space character being escaped in 
Hive paths on Windows.
--- INCLUDE_OS_WINDOWS
-drop table partition_timestamp2_1
-POSTHOOK: type: DROPTABLE
-PREHOOK: query: create table partition_timestamp2_1 (key string, value string) 
partitioned by (dt timestamp, region int)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@partition_timestamp2_1
-POSTHOOK: query: create table partition_timestamp2_1 (key string, value 
string) partitioned by (dt timestamp, region int)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@partition_timestamp2_1
-PREHOOK: query: -- test timestamp literal syntax
-from (select * from src tablesample (1 rows)) x
-insert overwrite table partition_timestamp2_1 partition(dt=timestamp 
'2000-01-01 00:00:00', region=1) select *
-insert overwrite table partition_timestamp2_1 partition(dt=timestamp 
'2000-01-01 01:00:00', region=1) select *
-insert overwrite table partition_timestamp2_1 partition(dt=timestamp 
'1999-01-01 00:00:00', region=2) select *
-insert overwrite table partition_timestamp2_1 partition(dt=timestamp 
'1999-01-01 01:00:00', region=2) select *
-PREHOOK: type: QUERY
-PREHOOK: Input: default@src
-PREHOOK: Output: 
default@partition_timestamp2_1@dt=1999-01-01%2000%3A00%3A00.0/region=2
-PREHOOK: Output: 
default@partition_timestamp2_1@dt=1999-01-01%2001%3A00%3A00.0/region=2
-PREHOOK: Output: 
default@partition_timestamp2_1@dt=2000-01-01%2000%3A00%3A00.0/region=1
-PREHOOK: Output: 
default@partition_timestamp2_1@dt=2000-01-01%2001%3A00%3A00.0/region=1
-POSTHOOK: query: -- test timestamp literal syntax
-from (select * from src tablesample (1 rows)) x
-insert overwrite table partition_timestamp2_1 partition(dt=timestamp 
'2000-01-01 00:00:00', region=1) select *
-insert overwrite table partition_timestamp2_1 partition(dt=timestamp 
'2000-01-01 01:00:00', region=1) select *
-insert overwrite table partition_timestamp2_1 partition(dt=timestamp 
'1999-01-01 00:00:00', region=2) select *
-insert overwrite table partition_timestamp2_1 partition(dt=timestamp 
'1999-01-01 01:00:00', region=2) select *
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@src
-POSTHOOK: Output: 
default@partition_timestamp2_1@dt=1999-01-01%2000%3A00%3A00.0/region=2
-POSTHOOK: Output: 
default@partition_timestamp2_1@dt=1999-01-01%2001%3A00%3A00.0/region=2
-POSTHOOK: Output: 
default@partition_timestamp2_1@dt=2000-01-01%2000%3A00%3A00.0/region=1
-POSTHOOK: Output: 
default@partition_timestamp2_1@dt=2000-01-01%2001%3A00%3A00.0/region=1
-POSTHOOK: Lineage: partition_timestamp2_1 PARTITION(dt=1999-01-01 
00:00:00.0,region=2).key SIMPLE [(src)src.FieldSchema(name:key, type:string, 
comment:default), ]
-POSTHOOK: Lineage: partition_timestamp2_1 PARTITION(dt=1999-01-01 
00:00:00.0,region=2).value SIMPLE [(src)src.FieldSchema(name:value, 
type:string, comment:default), ]
-POSTHOOK: Lineage: partition_timestamp2_1 PARTITION(dt=1999-01-01 
01:00:00.0,region=2).key SIMPLE [(src)src.FieldSchema(name:key, type:string, 
comment:default), ]
-POSTHOOK: Lineage: partition_timestamp2_1 PARTITION(dt=1999-01-01 
01:00:00.0,region=2).value SIMPLE [(src)src.FieldSchema(name:value, 
type:string, comment:default), ]
-POSTHOOK: Lineage: partition_timestamp2_1 PARTITION(dt=2000-01-01 
00:00:00.0,region=1).key SIMPLE [(src)src.FieldSchema(name:key, type:string, 
comment:default), ]
-POSTHOOK: Lineage: partition_timestamp2_1 PARTITION(dt=2000-01-01 
00:00:00.0,region=1).value SIMPLE [(src)src.FieldSchema(name:value, 
type:string, comment:default), ]
-POSTHOOK: Lineage: partition_timestamp2_1 PARTITION(dt=2000-01-01 
01:00:00.0,region=1).key SIMPLE [(src)src.FieldSchema(name:key, type:string, 
comment:default), ]
-POSTHOOK: Lineage: partition_timestamp2_1 PARTITION(dt=2000-01-01 
01:00:00.0,region=1).value SIMPLE [(src)src.FieldSchema(name:value, 
type:string, comment:default), ]
-PREHOOK: query: select distinct dt from partition_timestamp2_1
-PREHOOK: type: QUERY
-PREHOOK: Input: default@partition_timestamp2_1
-PREHOOK: Input: 
default@partition_timestamp2_1@dt=1999-01-01%2000%3A00%3A00.0/region=2
-PREHOOK: Input: 
default@partition_timestamp2_1@dt=1999-01-01%2001%3A00%3A00.0/region=2
-PREHOOK: Input: 

[4/7] hive git commit: HIVE-15873: Remove Windows-specific code (Gunther Hagleitner, reviewed by Ashutosh Chauhan)

2017-02-11 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/38ad7792/ql/src/test/results/clientpositive/avro_timestamp_win.q.java1.8.out
--
diff --git 
a/ql/src/test/results/clientpositive/avro_timestamp_win.q.java1.8.out 
b/ql/src/test/results/clientpositive/avro_timestamp_win.q.java1.8.out
deleted file mode 100755
index 087d571..000
--- a/ql/src/test/results/clientpositive/avro_timestamp_win.q.java1.8.out
+++ /dev/null
@@ -1,134 +0,0 @@
-PREHOOK: query: -- Windows-specific test due to space character being escaped 
in Hive paths on Windows.
--- INCLUDE_OS_WINDOWS
--- JAVA_VERSION_SPECIFIC_OUTPUT
-
-DROP TABLE avro_timestamp_staging
-PREHOOK: type: DROPTABLE
-POSTHOOK: query: -- Windows-specific test due to space character being escaped 
in Hive paths on Windows.
--- INCLUDE_OS_WINDOWS
--- JAVA_VERSION_SPECIFIC_OUTPUT
-
-DROP TABLE avro_timestamp_staging
-POSTHOOK: type: DROPTABLE
-PREHOOK: query: DROP TABLE avro_timestamp
-PREHOOK: type: DROPTABLE
-POSTHOOK: query: DROP TABLE avro_timestamp
-POSTHOOK: type: DROPTABLE
-PREHOOK: query: DROP TABLE avro_timestamp_casts
-PREHOOK: type: DROPTABLE
-POSTHOOK: query: DROP TABLE avro_timestamp_casts
-POSTHOOK: type: DROPTABLE
-PREHOOK: query: CREATE TABLE avro_timestamp_staging (d timestamp, m1 map<string,timestamp>, l1 array<timestamp>)
-   ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
-   COLLECTION ITEMS TERMINATED BY ',' MAP KEYS TERMINATED BY ':'
-   STORED AS TEXTFILE
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@avro_timestamp_staging
-POSTHOOK: query: CREATE TABLE avro_timestamp_staging (d timestamp, m1 map<string,timestamp>, l1 array<timestamp>)
-   ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
-   COLLECTION ITEMS TERMINATED BY ',' MAP KEYS TERMINATED BY ':'
-   STORED AS TEXTFILE
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@avro_timestamp_staging
-PREHOOK: query: LOAD DATA LOCAL INPATH '../../data/files/avro_timestamp.txt' 
OVERWRITE INTO TABLE avro_timestamp_staging
-PREHOOK: type: LOAD
- A masked pattern was here 
-PREHOOK: Output: default@avro_timestamp_staging
-POSTHOOK: query: LOAD DATA LOCAL INPATH '../../data/files/avro_timestamp.txt' 
OVERWRITE INTO TABLE avro_timestamp_staging
-POSTHOOK: type: LOAD
- A masked pattern was here 
-POSTHOOK: Output: default@avro_timestamp_staging
-PREHOOK: query: CREATE TABLE avro_timestamp (d timestamp, m1 map<string,timestamp>, l1 array<timestamp>)
-  PARTITIONED BY (p1 int, p2 timestamp)
-  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
-  COLLECTION ITEMS TERMINATED BY ',' MAP KEYS TERMINATED BY ':'
-  STORED AS AVRO
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@avro_timestamp
-POSTHOOK: query: CREATE TABLE avro_timestamp (d timestamp, m1 map<string,timestamp>, l1 array<timestamp>)
-  PARTITIONED BY (p1 int, p2 timestamp)
-  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
-  COLLECTION ITEMS TERMINATED BY ',' MAP KEYS TERMINATED BY ':'
-  STORED AS AVRO
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@avro_timestamp
-PREHOOK: query: INSERT OVERWRITE TABLE avro_timestamp PARTITION(p1=2, 
p2='2014-09-26 07:08:09.123') SELECT * FROM avro_timestamp_staging
-PREHOOK: type: QUERY
-PREHOOK: Input: default@avro_timestamp_staging
-PREHOOK: Output: default@avro_timestamp@p1=2/p2=2014-09-26%2007%3A08%3A09.123
-POSTHOOK: query: INSERT OVERWRITE TABLE avro_timestamp PARTITION(p1=2, 
p2='2014-09-26 07:08:09.123') SELECT * FROM avro_timestamp_staging
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@avro_timestamp_staging
-POSTHOOK: Output: default@avro_timestamp@p1=2/p2=2014-09-26%2007%3A08%3A09.123
-POSTHOOK: Lineage: avro_timestamp PARTITION(p1=2,p2=2014-09-26 07:08:09.123).d SIMPLE [(avro_timestamp_staging)avro_timestamp_staging.FieldSchema(name:d, type:timestamp, comment:null), ]
-POSTHOOK: Lineage: avro_timestamp PARTITION(p1=2,p2=2014-09-26 07:08:09.123).l1 SIMPLE [(avro_timestamp_staging)avro_timestamp_staging.FieldSchema(name:l1, type:array<timestamp>, comment:null), ]
-POSTHOOK: Lineage: avro_timestamp PARTITION(p1=2,p2=2014-09-26 07:08:09.123).m1 SIMPLE [(avro_timestamp_staging)avro_timestamp_staging.FieldSchema(name:m1, type:map<string,timestamp>, comment:null), ]
-PREHOOK: query: SELECT * FROM avro_timestamp
-PREHOOK: type: QUERY
-PREHOOK: Input: default@avro_timestamp
-PREHOOK: Input: default@avro_timestamp@p1=2/p2=2014-09-26%2007%3A08%3A09.123
-#### A masked pattern was here ####
-POSTHOOK: query: SELECT * FROM avro_timestamp
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@avro_timestamp
-POSTHOOK: Input: default@avro_timestamp@p1=2/p2=2014-09-26%2007%3A08%3A09.123
-#### A masked pattern was here ####
-2012-02-21 07:08:09.123	{"bar":"1998-05-07 07:08:09.123","foo":"1980-12-16 07:08:09.123"}	["2011-09-04 07:08:09.123","2011-09-05 07:08:09.123"]	2	2014-09-26 07:08:09.123
-2014-02-11 
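
The escaped partition paths above (p2=2014-09-26%2007%3A08%3A09.123) are the reason this test was Windows-specific: the space and colons in the partition value are percent-encoded when they become path components. A minimal, self-contained Java sketch of that encoding style follows; the escape character set chosen here is an assumption for illustration, not Hive's exact escape list.

// Illustrative sketch of percent-encoding partition values the way the
// escaped paths above suggest (space -> %20, ':' -> %3A). The SPECIALS
// set is an assumption, not Hive's exact escape list.
public class PartitionPathEscaper {
  private static final String SPECIALS = " :/\\%\"'#";

  static String escape(String value) {
    StringBuilder sb = new StringBuilder();
    for (char c : value.toCharArray()) {
      if (SPECIALS.indexOf(c) >= 0) {
        sb.append('%').append(String.format("%02X", (int) c));
      } else {
        sb.append(c);
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // Prints p2=2014-09-26%2007%3A08%3A09.123, matching the path above.
    System.out.println("p2=" + escape("2014-09-26 07:08:09.123"));
  }
}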

[7/7] hive git commit: HIVE-15873: Remove Windows-specific code (Gunther Hagleitner, reviewed by Ashutosh Chauhan)

2017-02-11 Thread gunther
HIVE-15873: Remove Windows-specific code (Gunther Hagleitner, reviewed by 
Ashutosh Chauhan)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/38ad7792
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/38ad7792
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/38ad7792

Branch: refs/heads/master
Commit: 38ad77929980dc155dcc4a5d009a9a855eb5b017
Parents: eb1da30
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Thu Feb 9 17:49:50 2017 -0800
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Sat Feb 11 12:52:46 2017 -0800

--
 bin/beeline.cmd |   66 -
 bin/derbyserver.cmd |   60 -
 bin/ext/cleardanglingscratchdir.cmd |   34 -
 bin/ext/cli.cmd |   58 -
 bin/ext/debug.cmd   |  110 -
 bin/ext/hbaseimport.cmd |   35 -
 bin/ext/help.cmd|   30 -
 bin/ext/hiveserver2.cmd |  139 --
 bin/ext/jar.cmd |   43 -
 bin/ext/lineage.cmd |   30 -
 bin/ext/metastore.cmd   |   47 -
 bin/ext/orcfiledump.cmd |   35 -
 bin/ext/rcfilecat.cmd   |   34 -
 bin/ext/schemaTool.cmd  |   33 -
 bin/ext/util/execHiveCmd.cmd|   24 -
 bin/hive-config.cmd |   61 -
 bin/hive.cmd|  383 
 bin/hplsql.cmd  |   58 -
 .../hadoop/hive/cli/TestCliDriverMethods.java   |   28 -
 .../apache/hadoop/hive/common/FileUtils.java|9 -
 .../org/apache/hadoop/hive/conf/HiveConf.java   |4 +-
 .../java/org/apache/hive/http/HttpServer.java   |2 +-
 .../apache/hadoop/hive/conf/TestHiveConf.java   |5 -
 .../hadoop/hive/contrib/mr/TestGenericMR.java   |   13 +-
 hcatalog/bin/templeton.cmd  |   90 -
 .../hive/hcatalog/mapreduce/HCatBaseTest.java   |4 -
 .../mapreduce/TestHCatPartitionPublish.java |   11 +-
 .../pig/TestHCatLoaderComplexSchema.java|5 -
 .../hcatalog/pig/TestHCatLoaderEncryption.java  |   11 +-
 .../e2e/templeton/drivers/TestDriverCurl.pm | 1984 +
 .../hive/hcatalog/api/TestHCatClient.java   |7 -
 .../hcatalog/templeton/ExecServiceImpl.java |   48 +-
 .../hive/hcatalog/templeton/HiveDelegator.java  |9 +-
 .../hive/hcatalog/templeton/JarDelegator.java   |9 +-
 .../hive/hcatalog/templeton/PigDelegator.java   |5 +-
 .../hive/hcatalog/templeton/SqoopDelegator.java |7 +-
 .../hcatalog/templeton/StreamingDelegator.java  |4 +-
 .../hcatalog/templeton/tool/LaunchMapper.java   |9 -
 .../hcatalog/templeton/tool/TempletonUtils.java |   42 -
 .../org/apache/hive/jdbc/miniHS2/MiniHS2.java   |3 +-
 .../hive/ql/TestReplicationScenarios.java   |3 -
 .../security/StorageBasedMetastoreTestBase.java |4 -
 .../ql/session/TestClearDanglingScratchDir.java |4 -
 .../server/TestHS2ClearDanglingScratchDir.java  |4 -
 .../org/apache/hadoop/hive/ql/QTestUtil.java|   55 +-
 .../llap/shufflehandler/ShuffleHandler.java |5 +-
 .../hive/llap/daemon/MiniLlapCluster.java   |   36 +-
 .../hadoop/hive/ql/exec/ScriptOperator.java |   17 -
 .../apache/hadoop/hive/ql/exec/Utilities.java   |   16 -
 .../apache/hadoop/hive/ql/util/DosToUnix.java   |  107 -
 .../hadoop/hive/ql/util/ResourceDownloader.java |   12 +-
 .../apache/hadoop/hive/ql/WindowsPathUtil.java  |   57 -
 .../hadoop/hive/ql/exec/TestExecDriver.java |8 +-
 .../ql/metadata/TestHiveMetaStoreChecker.java   |4 -
 .../hadoop/hive/ql/session/TestAddResource.java |8 +-
 .../hadoop/hive/ql/util/TestDosToUnix.java  |   77 -
 .../queries/clientpositive/avro_timestamp.q |2 -
 .../queries/clientpositive/avro_timestamp_win.q |   28 -
 ql/src/test/queries/clientpositive/combine2.q   |3 -
 .../queries/clientpositive/combine2_hadoop20.q  |3 -
 .../test/queries/clientpositive/combine2_win.q  |   41 -
 ql/src/test/queries/clientpositive/escape1.q|3 -
 ql/src/test/queries/clientpositive/escape2.q|3 -
 .../test/queries/clientpositive/input_part10.q  |3 -
 .../queries/clientpositive/input_part10_win.q   |   23 -
 .../queries/clientpositive/load_dyn_part14.q|3 -
 .../clientpositive/load_dyn_part14_win.q|   38 -
 .../clientpositive/partition_timestamp.q|2 -
 .../clientpositive/partition_timestamp2.q   |2 -
 .../clientpositive/partition_timestamp2_win.q   |   58 -
 .../clientpositive/partition_timestamp_win.q|   59 -
 .../test/queries/clientpositive/scriptfile1.q   |2 -
 .../queries/clientpositive/scriptfile1_win.q|   16 -
 .../queries/cl

[1/7] hive git commit: HIVE-15873: Remove Windows-specific code (Gunther Hagleitner, reviewed by Ashutosh Chauhan)

2017-02-11 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master eb1da3087 -> 38ad77929


http://git-wip-us.apache.org/repos/asf/hive/blob/38ad7792/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java
--
diff --git 
a/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java 
b/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java
index 844baf7..ebec165 100644
--- 
a/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java
+++ 
b/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java
@@ -107,7 +107,7 @@ public class ThriftHttpCLIService extends ThriftCLIService {
   }
   connector.setPort(portNum);
   // Linux:yes, Windows:no
-  connector.setReuseAddress(!Shell.WINDOWS);
+  connector.setReuseAddress(true);
   int maxIdleTime = (int) 
hiveConf.getTimeVar(ConfVars.HIVE_SERVER2_THRIFT_HTTP_MAX_IDLE_TIME,
   TimeUnit.MILLISECONDS);
   connector.setMaxIdleTime(maxIdleTime);
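
For context, setReuseAddress(true) maps to the SO_REUSEADDR socket option, which lets a restarted server rebind a port that is still in TIME_WAIT; with Windows support gone it can be enabled unconditionally. A minimal plain-JDK sketch of the same option (java.net rather than the Jetty connector in the diff above; the port is illustrative):

import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Minimal sketch of what setReuseAddress(true) buys at the JDK level:
// the listener can rebind to a port still in TIME_WAIT after a restart.
public class ReuseAddressDemo {
  public static void main(String[] args) throws Exception {
    ServerSocket server = new ServerSocket();
    server.setReuseAddress(true);        // must be set before bind()
    server.bind(new InetSocketAddress(10001));
    System.out.println("Bound with SO_REUSEADDR=" + server.getReuseAddress());
    server.close();
  }
}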

http://git-wip-us.apache.org/repos/asf/hive/blob/38ad7792/shims/common/src/main/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java
--
diff --git 
a/shims/common/src/main/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java 
b/shims/common/src/main/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java
index 197b965..dc32190 100644
--- a/shims/common/src/main/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java
+++ b/shims/common/src/main/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java
@@ -59,14 +59,6 @@ public class ProxyLocalFileSystem extends FilterFileSystem {
 // from the supplied URI
 this.scheme = name.getScheme();
 String nameUriString = name.toString();
-if (Shell.WINDOWS) {
-  // Replace the encoded backward slash with forward slash
-  // Remove the windows drive letter
-  nameUriString =
-  nameUriString.replaceAll("%5C", "/").replaceFirst("/[c-zC-Z]:", "/")
-  .replaceFirst("^[c-zC-Z]:", "");
-  name = URI.create(nameUriString);
-}
 
 String authority = name.getAuthority() != null ? name.getAuthority() : "";
 String proxyUriString = scheme + "://" + authority + "/";
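
The deleted branch normalized Windows-style URIs before the proxy URI was built. A standalone sketch of those same three regex steps, runnable outside any FileSystem, using the exact expressions from the removed code:

// Standalone sketch of the Windows URI normalization the deleted branch
// above performed: decode %5C to '/', then strip a drive letter. Same
// regexes as the removed code, applied to a plain string.
public class WindowsUriNormalizer {
  static String normalize(String uriString) {
    return uriString
        .replaceAll("%5C", "/")              // encoded backslash -> slash
        .replaceFirst("/[c-zC-Z]:", "/")     // "/C:" after the authority
        .replaceFirst("^[c-zC-Z]:", "");     // leading drive letter
  }

  public static void main(String[] args) {
    // Prints file:/tmp/warehouse
    System.out.println(normalize("file:/C:/tmp/warehouse"));
  }
}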

http://git-wip-us.apache.org/repos/asf/hive/blob/38ad7792/testutils/hadoop.cmd
--
diff --git a/testutils/hadoop.cmd b/testutils/hadoop.cmd
deleted file mode 100644
index 1ff147c..000
--- a/testutils/hadoop.cmd
+++ /dev/null
@@ -1,252 +0,0 @@
-@echo off
-@rem Licensed to the Apache Software Foundation (ASF) under one or more
-@rem contributor license agreements.  See the NOTICE file distributed with
-@rem this work for additional information regarding copyright ownership.
-@rem The ASF licenses this file to You under the Apache License, Version 2.0
-@rem (the "License"); you may not use this file except in compliance with
-@rem the License.  You may obtain a copy of the License at
-@rem
-@rem http://www.apache.org/licenses/LICENSE-2.0
-@rem
-@rem Unless required by applicable law or agreed to in writing, software
-@rem distributed under the License is distributed on an "AS IS" BASIS,
-@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@rem See the License for the specific language governing permissions and
-@rem limitations under the License.
-
-
-@rem The Hadoop command script
-@rem
-@rem Environment Variables
-@rem
-@rem   JAVA_HOMEThe java implementation to use.  Overrides JAVA_HOME.
-@rem
-@rem   HADOOP_CLASSPATH Extra Java CLASSPATH entries.
-@rem
-@rem   HADOOP_HEAPSIZE  The maximum amount of heap to use, in MB.
-@remDefault is 1000.
-@rem
-@rem   HADOOP_OPTS  Extra Java runtime options.
-@rem
-@rem   HADOOP_NAMENODE_OPTS   These options are added to HADOOP_OPTS
-@rem   HADOOP_CLIENT_OPTS when the respective command is run.
-@rem   HADOOP_{COMMAND}_OPTS etc  HADOOP_JT_OPTS applies to JobTracker
-@rem  for e.g.  HADOOP_CLIENT_OPTS applies to
-@rem  more than one command (fs, dfs, fsck,
-@rem  dfsadmin etc)
-@rem
-@rem   HADOOP_CONF_DIR  Alternate conf dir. Default is ${HADOOP_HOME}/conf.
-@rem
-@rem   HADOOP_ROOT_LOGGER The root appender. Default is INFO,console
-@rem
-
-if not defined HADOOP_BIN_PATH ( 
-  set HADOOP_BIN_PATH=%~dp0
-)
-
-if "%HADOOP_BIN_PATH:~-1%" == "\" (
-  set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
-)
-call :updatepath %HADOOP_BIN_PATH%
-
-set BIN=%~dp0
-for %%i in (%BIN%.) do (
-  set BIN=%%~dpi
-)
-if "%BIN:~-1%" == "\" (
-  set BIN=%BIN:~0,-1%
-)
-
-
-@rem
-@rem setup java environment variables
-@rem
-
-if not defined JAVA_HOME (
-  echo Error: JAVA_HOME is not set.
-  goto :eof
-)
-
-if not exist %JAVA_HOME%\bin\java.exe (
-  echo Error: JAVA_HOME is incorrectly set.
-  goto 
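
For reference, the JAVA_HOME sanity checks this script performs translate almost mechanically to Java. A rough, hedged sketch; checking for java.exe mirrors the Windows script and is illustrative only:

import java.io.File;

// Rough Java equivalent of the JAVA_HOME checks the deleted hadoop.cmd
// performed before launching anything.
public class JavaHomeCheck {
  public static void main(String[] args) {
    String javaHome = System.getenv("JAVA_HOME");
    if (javaHome == null || javaHome.isEmpty()) {
      System.err.println("Error: JAVA_HOME is not set.");
      return;
    }
    File javaExe = new File(javaHome, "bin" + File.separator + "java.exe");
    if (!javaExe.exists()) {
      System.err.println("Error: JAVA_HOME is incorrectly set.");
    }
  }
}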

[6/7] hive git commit: HIVE-15873: Remove Windows-specific code (Gunther Hagleitner, reviewed by Ashutosh Chauhan)

2017-02-11 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/38ad7792/hcatalog/src/test/e2e/templeton/drivers/TestDriverCurl.pm
--
diff --git a/hcatalog/src/test/e2e/templeton/drivers/TestDriverCurl.pm 
b/hcatalog/src/test/e2e/templeton/drivers/TestDriverCurl.pm
index b965eec..ea718c3 100644
--- a/hcatalog/src/test/e2e/templeton/drivers/TestDriverCurl.pm
+++ b/hcatalog/src/test/e2e/templeton/drivers/TestDriverCurl.pm
@@ -1,4 +1,4 @@
-   

+
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -15,13 +15,13 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-   

+
 package TestDriverCurl;
 
 ###
 # Class: TestDriver
 # A base class for TestDrivers.
-# 
+#
 
 use TestDriverFactory;
 use TestReport;
@@ -59,25 +59,25 @@ my $dependStr = 'failed_dependency';
 # None.
 #
 sub printResults
-  {
+{
 my ($testStatuses, $log, $prefix) = @_;
 
 my ($pass, $fail, $abort, $depend, $skipped) = (0, 0, 0, 0, 0);
 
 foreach (keys(%$testStatuses)) {
-  ($testStatuses->{$_} eq $passedStr) && $pass++;
-  ($testStatuses->{$_} eq $failedStr) && $fail++;
-  ($testStatuses->{$_} eq $abortedStr) && $abort++;
-  ($testStatuses->{$_} eq $dependStr) && $depend++;
-  ($testStatuses->{$_} eq $skippedStr) && $skipped++;
+($testStatuses->{$_} eq $passedStr) && $pass++;
+($testStatuses->{$_} eq $failedStr) && $fail++;
+($testStatuses->{$_} eq $abortedStr) && $abort++;
+($testStatuses->{$_} eq $dependStr) && $depend++;
+($testStatuses->{$_} eq $skippedStr) && $skipped++;
 }
 
 my $msg = "$prefix, PASSED: $pass FAILED: $fail SKIPPED: $skipped ABORTED: 
$abort "
-  . "FAILED DEPENDENCY: $depend";
+. "FAILED DEPENDENCY: $depend";
 print $log "$msg\n";
 print "$msg\n";
- 
-  }
+
+}
 
 ##
 #  Sub: printGroupResultsXml
@@ -93,25 +93,25 @@ sub printResults
 # None.
 #
 sub printGroupResultsXml
-  {
+{
 my ( $report, $groupName, $testStatuses,  $totalDuration) = @_;
 $totalDuration=0 if  ( !$totalDuration );
 
 my ($pass, $fail, $abort, $depend) = (0, 0, 0, 0);
 
 foreach my $key (keys(%$testStatuses)) {
-  if ( $key =~ /^$groupName/ ) {
-($testStatuses->{$key} eq $passedStr) && $pass++;
-($testStatuses->{$key} eq $failedStr) && $fail++;
-($testStatuses->{$key} eq $abortedStr) && $abort++;
-($testStatuses->{$key} eq $dependStr) && $depend++;
-  }
+if ( $key =~ /^$groupName/ ) {
+($testStatuses->{$key} eq $passedStr) && $pass++;
+($testStatuses->{$key} eq $failedStr) && $fail++;
+($testStatuses->{$key} eq $abortedStr) && $abort++;
+($testStatuses->{$key} eq $dependStr) && $depend++;
+}
 }
 
 my $total= $pass + $fail + $abort;
 $report->totals( $groupName, $total, $fail, $abort, $totalDuration );
 
-  }
+}
 
 ##
 #  Sub: new
@@ -123,7 +123,7 @@ sub printGroupResultsXml
 # Returns:
 # None.
 sub new
-  {
+{
 my $proto = shift;
 my $class = ref($proto) || $proto;
 my $self = {};
@@ -133,7 +133,7 @@ sub new
 $self->{'wrong_execution_mode'} = "_xyz_wrong_execution_mode_zyx_";
 
 return $self;
-  }
+}
 
 ##
 #  Sub: globalSetup
@@ -150,7 +150,7 @@ sub new
 # None
 #
 sub globalSetup
-  {
+{
 my ($self, $globalHash, $log) = @_;
 my $subName = (caller(0))[3];
 
@@ -166,11 +166,11 @@ sub globalSetup
 # if "-ignore false" was provided on the command line,
 # it means do run tests even when marked as 'ignore'
 if (defined($globalHash->{'ignore'}) && $globalHash->{'ignore'} eq 
'false') {
-  $self->{'ignore'} = 'false';
+$self->{'ignore'} = 'false';
 }
 
 if (! defined $globalHash->{'localpathbase'}) {
-  $globalHash->{'localpathbase'} = '/tmp';
+$globalHash->{'localpathbase'} = '/tmp';
 }
 
 $globalHash->{'outpath'} = $globalHash->{'outpathbase'} . "/" . 
$globalHash->{'runid'} . "/";
@@ -197,19 +197,19 @@ sub globalSetup
 
 # add libexec location to the path
 if (defined($ENV{'PATH'})) {
-  $ENV{'PATH'} = $globalHash->{'scriptPath'} . ":" . $ENV{'PATH'};
+$ENV{'PATH'} = $globalHash->{'scriptPath'} . ":" . 
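
The printResults routine reindented above tallies test names by status string before printing totals. A compact Java rendering of that tallying; the status constants are assumptions mirroring the Perl driver's $passedStr, $failedStr, and friends:

import java.util.HashMap;
import java.util.Map;

// Compact Java sketch of the tallying printResults does above: bucket
// test names by status string, then report the per-status totals.
public class StatusTally {
  public static void main(String[] args) {
    Map<String, String> testStatuses = new HashMap<>();
    testStatuses.put("TestHcat_1", "passed");
    testStatuses.put("TestHcat_2", "failed");
    testStatuses.put("TestHcat_3", "skipped");

    Map<String, Integer> counts = new HashMap<>();
    for (String status : testStatuses.values()) {
      counts.merge(status, 1, Integer::sum);   // increment per status
    }
    System.out.printf("PASSED: %d FAILED: %d SKIPPED: %d ABORTED: %d%n",
        counts.getOrDefault("passed", 0), counts.getOrDefault("failed", 0),
        counts.getOrDefault("skipped", 0), counts.getOrDefault("aborted", 0));
  }
}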

[2/7] hive git commit: HIVE-15873: Remove Windows-specific code (Gunther Hagleitner, reviewed by Ashutosh Chauhan)

2017-02-11 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/38ad7792/ql/src/test/results/clientpositive/vector_partitioned_date_time_win.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/vector_partitioned_date_time_win.q.out 
b/ql/src/test/results/clientpositive/vector_partitioned_date_time_win.q.out
deleted file mode 100755
index 580e552..000
--- a/ql/src/test/results/clientpositive/vector_partitioned_date_time_win.q.out
+++ /dev/null
@@ -1,2036 +0,0 @@
-PREHOOK: query: -- Windows-specific test due to space character being escaped 
in Hive paths on Windows.
--- INCLUDE_OS_WINDOWS
-
--- Check if vectorization code is handling partitioning on DATE and the other 
data types.
-
-
-CREATE TABLE flights_tiny (
-  origin_city_name STRING,
-  dest_city_name STRING,
-  fl_date DATE,
-  arr_delay FLOAT,
-  fl_num INT
-)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@flights_tiny
-POSTHOOK: query: -- Windows-specific test due to space character being escaped 
in Hive paths on Windows.
--- INCLUDE_OS_WINDOWS
-
--- Check if vectorization code is handling partitioning on DATE and the other 
data types.
-
-
-CREATE TABLE flights_tiny (
-  origin_city_name STRING,
-  dest_city_name STRING,
-  fl_date DATE,
-  arr_delay FLOAT,
-  fl_num INT
-)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@flights_tiny
-PREHOOK: query: LOAD DATA LOCAL INPATH '../../data/files/flights_tiny.txt.1' 
OVERWRITE INTO TABLE flights_tiny
-PREHOOK: type: LOAD
-#### A masked pattern was here ####
-PREHOOK: Output: default@flights_tiny
-POSTHOOK: query: LOAD DATA LOCAL INPATH '../../data/files/flights_tiny.txt.1' 
OVERWRITE INTO TABLE flights_tiny
-POSTHOOK: type: LOAD
-#### A masked pattern was here ####
-POSTHOOK: Output: default@flights_tiny
-PREHOOK: query: CREATE TABLE flights_tiny_orc STORED AS ORC AS
-SELECT origin_city_name, dest_city_name, fl_date, to_utc_timestamp(fl_date, 
'America/Los_Angeles') as fl_time, arr_delay, fl_num
-FROM flights_tiny
-PREHOOK: type: CREATETABLE_AS_SELECT
-PREHOOK: Input: default@flights_tiny
-PREHOOK: Output: database:default
-PREHOOK: Output: default@flights_tiny_orc
-POSTHOOK: query: CREATE TABLE flights_tiny_orc STORED AS ORC AS
-SELECT origin_city_name, dest_city_name, fl_date, to_utc_timestamp(fl_date, 
'America/Los_Angeles') as fl_time, arr_delay, fl_num
-FROM flights_tiny
-POSTHOOK: type: CREATETABLE_AS_SELECT
-POSTHOOK: Input: default@flights_tiny
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@flights_tiny_orc
-PREHOOK: query: SELECT * FROM flights_tiny_orc
-PREHOOK: type: QUERY
-PREHOOK: Input: default@flights_tiny_orc
-#### A masked pattern was here ####
-POSTHOOK: query: SELECT * FROM flights_tiny_orc
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@flights_tiny_orc
-#### A masked pattern was here ####
-Baltimore	New York	2010-10-20	2010-10-20 07:00:00	-30.0	1064
-Baltimore	New York	2010-10-20	2010-10-20 07:00:00	23.0	1142
-Baltimore	New York	2010-10-20	2010-10-20 07:00:00	6.0	1599
-Chicago	New York	2010-10-20	2010-10-20 07:00:00	42.0	361
-Chicago	New York	2010-10-20	2010-10-20 07:00:00	24.0	897
-Chicago	New York	2010-10-20	2010-10-20 07:00:00	15.0	1531
-Chicago	New York	2010-10-20	2010-10-20 07:00:00	-6.0	1610
-Chicago	New York	2010-10-20	2010-10-20 07:00:00	-2.0	3198
-Baltimore	New York	2010-10-21	2010-10-21 07:00:00	17.0	1064
-Baltimore	New York	2010-10-21	2010-10-21 07:00:00	105.0	1142
-Baltimore	New York	2010-10-21	2010-10-21 07:00:00	28.0	1599
-Chicago	New York	2010-10-21	2010-10-21 07:00:00	142.0	361
-Chicago	New York	2010-10-21	2010-10-21 07:00:00	77.0	897
-Chicago	New York	2010-10-21	2010-10-21 07:00:00	53.0	1531
-Chicago	New York	2010-10-21	2010-10-21 07:00:00	-5.0	1610
-Chicago	New York	2010-10-21	2010-10-21 07:00:00	51.0	3198
-Baltimore	New York	2010-10-22	2010-10-22 07:00:00	-12.0	1064
-Baltimore	New York	2010-10-22	2010-10-22 07:00:00	54.0	1142
-Baltimore	New York	2010-10-22	2010-10-22 07:00:00	18.0	1599
-Chicago	New York	2010-10-22	2010-10-22 07:00:00	2.0	361
-Chicago	New York	2010-10-22	2010-10-22 07:00:00	24.0	897
-Chicago	New York	2010-10-22	2010-10-22 07:00:00	16.0	1531
-Chicago	New York	2010-10-22	2010-10-22 07:00:00	-6.0	1610
-Chicago	New York	2010-10-22	2010-10-22 07:00:00	-11.0	3198
-Baltimore 

[5/5] hive git commit: HIVE-15791: Remove unused ant files (Gunther Hagleitner, reviewed by Ashutosh Chauhan)

2017-02-09 Thread gunther
HIVE-15791: Remove unused ant files (Gunther Hagleitner, reviewed by Ashutosh 
Chauhan)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/1f1e91aa
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/1f1e91aa
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/1f1e91aa

Branch: refs/heads/master
Commit: 1f1e91aa02d726613a364678288caa8b252d8bd6
Parents: 2429bb2
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Tue Feb 7 17:07:38 2017 -0800
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Thu Feb 9 15:33:54 2017 -0800

--
 ant/pom.xml |   69 -
 .../hive/ant/DistinctElementsClassPath.java |   90 -
 .../apache/hadoop/hive/ant/GenVectorCode.java   | 3309 --
 .../hadoop/hive/ant/GenVectorTestCode.java  |  261 --
 .../apache/hadoop/hive/ant/GetVersionPref.java  |   94 -
 ant/src/org/apache/hadoop/hive/ant/antlib.xml   |   24 -
 itests/hive-blobstore/pom.xml   |6 -
 itests/qtest-accumulo/pom.xml   |6 -
 itests/qtest-spark/pom.xml  |6 -
 itests/qtest/pom.xml|6 -
 jdbc/pom.xml|2 +-
 pom.xml |7 +-
 ql/pom.xml  |4 +-
 vector-code-gen/pom.xml |   69 +
 .../apache/hadoop/hive/tools/GenVectorCode.java | 3309 ++
 .../hadoop/hive/tools/GenVectorTestCode.java|  261 ++
 16 files changed, 3643 insertions(+), 3880 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/1f1e91aa/ant/pom.xml
--
diff --git a/ant/pom.xml b/ant/pom.xml
deleted file mode 100644
index 6414ef6..000
--- a/ant/pom.xml
+++ /dev/null
@@ -1,69 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.hive</groupId>
-    <artifactId>hive</artifactId>
-    <version>2.2.0-SNAPSHOT</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-
-  <artifactId>hive-ant</artifactId>
-  <packaging>jar</packaging>
-  <name>Hive Ant Utilities</name>
-
-  <properties>
-    <hive.path.to.root>..</hive.path.to.root>
-  </properties>
-
-  <dependencies>
-    <dependency>
-      <groupId>commons-lang</groupId>
-      <artifactId>commons-lang</artifactId>
-      <version>${commons-lang.version}</version>
-    </dependency>
-    <dependency>
-      <groupId>com.google.guava</groupId>
-      <artifactId>guava</artifactId>
-      <version>${guava.version}</version>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.ant</groupId>
-      <artifactId>ant</artifactId>
-      <version>${ant.version}</version>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.velocity</groupId>
-      <artifactId>velocity</artifactId>
-      <version>${velocity.version}</version>
-      <exclusions>
-        <exclusion>
-          <groupId>commons-collections</groupId>
-          <artifactId>commons-collections</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-  </dependencies>
-
-  <build>
-    <sourceDirectory>${basedir}/src</sourceDirectory>
-  </build>
-</project>

http://git-wip-us.apache.org/repos/asf/hive/blob/1f1e91aa/ant/src/org/apache/hadoop/hive/ant/DistinctElementsClassPath.java
--
diff --git a/ant/src/org/apache/hadoop/hive/ant/DistinctElementsClassPath.java 
b/ant/src/org/apache/hadoop/hive/ant/DistinctElementsClassPath.java
deleted file mode 100644
index 233dc6e..000
--- a/ant/src/org/apache/hadoop/hive/ant/DistinctElementsClassPath.java
+++ /dev/null
@@ -1,90 +0,0 @@
-/*
- *  Licensed to the Apache Software Foundation (ASF) under one or more
- *  contributor license agreements.  See the NOTICE file distributed with
- *  this work for additional information regarding copyright ownership.
- *  The ASF licenses this file to You under the Apache License, Version 2.0
- *  (the "License"); you may not use this file except in compliance with
- *  the License.  You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- *
- */
-
-package org.apache.hadoop.hive.ant;
-
-
-import java.io.File;
-import java.util.ArrayList;
-import java.util.HashSet; 
-
-import org.apache.tools.ant.Project;
-import org.apache.tools.ant.types.Path;
-
-/**
- * This object represents a path as used by the CLASSPATH or PATH environment variable.
- * The string representation of this object returns the path with unique elements only,
- * reducing the chance of exceeding the character limit on Windows when the original
- * class path contains duplicate files (JARs).
- */
-public class DistinctElementsClassPath extends Path {
-  
-  /**
-   * Invoked by IntrospectionHelper for setXXX(Path p)
-   * attribute setters.
-   * @param p

[2/5] hive git commit: HIVE-15791: Remove unused ant files (Gunther Hagleitner, reviewed by Ashutosh Chauhan)

2017-02-09 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/1f1e91aa/vector-code-gen/src/org/apache/hadoop/hive/tools/GenVectorCode.java
--
diff --git 
a/vector-code-gen/src/org/apache/hadoop/hive/tools/GenVectorCode.java 
b/vector-code-gen/src/org/apache/hadoop/hive/tools/GenVectorCode.java
new file mode 100644
index 000..22b8752
--- /dev/null
+++ b/vector-code-gen/src/org/apache/hadoop/hive/tools/GenVectorCode.java
@@ -0,0 +1,3309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.tools;
+
+import java.io.BufferedReader;
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileReader;
+import java.io.FileWriter;
+import java.io.IOException;
+
+import org.apache.tools.ant.BuildException;
+import org.apache.tools.ant.Task;
+
+/**
+ * This class generates java classes from the templates.
+ */
+public class GenVectorCode extends Task {
+
+  private static String [][] templateExpansions =
+{
+
+  /**
+   * date is stored in a LongColumnVector as epochDays
+   * interval_year_month is stored in a LongColumnVector as epochMonths
+   *
+   * interval_day_time and timestamp are stored in a TimestampColumnVector 
(2 longs to hold
+   * very large number of nanoseconds)
+   *
+   * date – date --> type: interval_day_time
+   * timestamp – date --> type: interval_day_time
+   * date – timestamp --> type: interval_day_time
+   * timestamp – timestamp --> type: interval_day_time
+   *
+   * date +|- interval_day_time --> type: timestamp
+   * interval_day_time + date --> type: timestamp
+   *
+   * timestamp +|- interval_day_time --> type: timestamp
+   * interval_day_time +|- timestamp --> type: timestamp
+   *
+   * date +|- interval_year_month --> type: date
+   * interval_year_month + date --> type: date
+   *
+   * timestamp +|- interval_year_month --> type: timestamp
+   * interval_year_month + timestamp --> type: timestamp
+   *
+   * Adding/Subtracting months done with Calendar object
+   *
+   * Timestamp Compare with Long with long interpreted as seconds
+   * Timestamp Compare with Double with double interpreted as seconds with 
fractional nanoseconds
+   *
+   */
+
+  // The following datetime/interval arithmetic operations can be done 
using the vectorized values.
+  // Type interval_year_month (LongColumnVector storing months).
+  {"DTIColumnArithmeticDTIScalarNoConvert", "Add", "interval_year_month", 
"interval_year_month", "+"},
+  {"DTIScalarArithmeticDTIColumnNoConvert", "Add", "interval_year_month", 
"interval_year_month", "+"},
+  {"DTIColumnArithmeticDTIColumnNoConvert", "Add", "interval_year_month", 
"interval_year_month", "+"},
+
+  {"DTIColumnArithmeticDTIScalarNoConvert", "Subtract", 
"interval_year_month", "interval_year_month", "-"},
+  {"DTIScalarArithmeticDTIColumnNoConvert", "Subtract", 
"interval_year_month", "interval_year_month", "-"},
+  {"DTIColumnArithmeticDTIColumnNoConvert", "Subtract", 
"interval_year_month", "interval_year_month", "-"},
+
+  // Arithmetic on two type interval_day_time (TimestampColumnVector 
storing nanosecond interval
+  // in 2 longs) produces a interval_day_time.
+  {"TimestampArithmeticTimestamp", "Add", "interval_day_time", "Col", 
"interval_day_time", "Scalar"},
+  {"TimestampArithmeticTimestamp", "Add", "interval_day_time", "Scalar", 
"interval_day_time", "Column"},
+  {"TimestampArithmeticTimestamp", "Add", "interval_day_time", "Col", 
"interval_day_time", "Column"},
+
+  {"TimestampArithmeticTimestamp", "Subtract", "interval_day_time", "Col", 
"interval_day_time", "Scalar"},
+  {"TimestampArithmeticTimestamp", "Subtract", "interval_day_time", 
"Scalar", "interval_day_time", "Column"},
+  {"TimestampArithmeticTimestamp", "Subtract", "interval_day_time", "Col", 
"interval_day_time", "Column"},
+
+  // A type timestamp (TimestampColumnVector) plus/minus a type 
interval_day_time (TimestampColumnVector
+  // storing nanosecond interval in 2 longs) produces a timestamp.
+  
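
A hedged sketch of the scalar semantics behind one of these expansions, "date +|- interval_year_month --> date": the date is an epoch-day count, the interval an epoch-month count, and the month arithmetic goes through a Calendar, as the comment above states. Illustrative only; the generated classes operate on whole column vectors, not single values:

import java.util.Calendar;
import java.util.TimeZone;
import java.util.concurrent.TimeUnit;

// Scalar sketch of date (epoch days) plus interval_year_month (months):
// convert to millis, let Calendar handle month-end clamping, convert back.
public class DateIntervalMath {
  static long addIntervalYearMonth(long epochDays, int months) {
    Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
    cal.setTimeInMillis(TimeUnit.DAYS.toMillis(epochDays));
    cal.add(Calendar.MONTH, months);            // 2017-01-31 + 1 -> 2017-02-28
    return TimeUnit.MILLISECONDS.toDays(cal.getTimeInMillis());
  }

  public static void main(String[] args) {
    // 2017-01-31 (epoch day 17197) plus one month -> 2017-02-28 (17225).
    System.out.println(addIntervalYearMonth(17197L, 1));
  }
}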

[4/5] hive git commit: HIVE-15791: Remove unused ant files (Gunther Hagleitner, reviewed by Ashutosh Chauhan)

2017-02-09 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/1f1e91aa/ant/src/org/apache/hadoop/hive/ant/GenVectorCode.java
--
diff --git a/ant/src/org/apache/hadoop/hive/ant/GenVectorCode.java 
b/ant/src/org/apache/hadoop/hive/ant/GenVectorCode.java
deleted file mode 100644
index 133ef0a..000
--- a/ant/src/org/apache/hadoop/hive/ant/GenVectorCode.java
+++ /dev/null
@@ -1,3309 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hive.ant;
-
-import java.io.BufferedReader;
-import java.io.BufferedWriter;
-import java.io.File;
-import java.io.FileReader;
-import java.io.FileWriter;
-import java.io.IOException;
-
-import org.apache.tools.ant.BuildException;
-import org.apache.tools.ant.Task;
-
-/**
- * This class generates java classes from the templates.
- */
-public class GenVectorCode extends Task {
-
-  private static String [][] templateExpansions =
-{
-
-  /**
-   * date is stored in a LongColumnVector as epochDays
-   * interval_year_month is stored in a LongColumnVector as epochMonths
-   *
-   * interval_day_time and timestamp are stored in a TimestampColumnVector 
(2 longs to hold
-   * very large number of nanoseconds)
-   *
-   * date – date --> type: interval_day_time
-   * timestamp – date --> type: interval_day_time
-   * date – timestamp --> type: interval_day_time
-   * timestamp – timestamp --> type: interval_day_time
-   *
-   * date +|- interval_day_time --> type: timestamp
-   * interval_day_time + date --> type: timestamp
-   *
-   * timestamp +|- interval_day_time --> type: timestamp
-   * interval_day_time +|- timestamp --> type: timestamp
-   *
-   * date +|- interval_year_month --> type: date
-   * interval_year_month + date --> type: date
-   *
-   * timestamp +|- interval_year_month --> type: timestamp
-   * interval_year_month + timestamp --> type: timestamp
-   *
-   * Adding/Subtracting months done with Calendar object
-   *
-   * Timestamp Compare with Long with long interpreted as seconds
-   * Timestamp Compare with Double with double interpreted as seconds with 
fractional nanoseconds
-   *
-   */
-
-  // The following datetime/interval arithmetic operations can be done 
using the vectorized values.
-  // Type interval_year_month (LongColumnVector storing months).
-  {"DTIColumnArithmeticDTIScalarNoConvert", "Add", "interval_year_month", 
"interval_year_month", "+"},
-  {"DTIScalarArithmeticDTIColumnNoConvert", "Add", "interval_year_month", 
"interval_year_month", "+"},
-  {"DTIColumnArithmeticDTIColumnNoConvert", "Add", "interval_year_month", 
"interval_year_month", "+"},
-
-  {"DTIColumnArithmeticDTIScalarNoConvert", "Subtract", 
"interval_year_month", "interval_year_month", "-"},
-  {"DTIScalarArithmeticDTIColumnNoConvert", "Subtract", 
"interval_year_month", "interval_year_month", "-"},
-  {"DTIColumnArithmeticDTIColumnNoConvert", "Subtract", 
"interval_year_month", "interval_year_month", "-"},
-
-  // Arithmetic on two type interval_day_time (TimestampColumnVector 
storing nanosecond interval
-  // in 2 longs) produces a interval_day_time.
-  {"TimestampArithmeticTimestamp", "Add", "interval_day_time", "Col", 
"interval_day_time", "Scalar"},
-  {"TimestampArithmeticTimestamp", "Add", "interval_day_time", "Scalar", 
"interval_day_time", "Column"},
-  {"TimestampArithmeticTimestamp", "Add", "interval_day_time", "Col", 
"interval_day_time", "Column"},
-
-  {"TimestampArithmeticTimestamp", "Subtract", "interval_day_time", "Col", 
"interval_day_time", "Scalar"},
-  {"TimestampArithmeticTimestamp", "Subtract", "interval_day_time", 
"Scalar", "interval_day_time", "Column"},
-  {"TimestampArithmeticTimestamp", "Subtract", "interval_day_time", "Col", 
"interval_day_time", "Column"},
-
-  // A type timestamp (TimestampColumnVector) plus/minus a type 
interval_day_time (TimestampColumnVector
-  // storing nanosecond interval in 2 longs) produces a timestamp.
-  {"TimestampArithmeticTimestamp", "Add", "interval_day_time", 

[1/5] hive git commit: HIVE-15791: Remove unused ant files (Gunther Hagleitner, reviewed by Ashutosh Chauhan)

2017-02-09 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master 2429bb28a -> 1f1e91aa0


http://git-wip-us.apache.org/repos/asf/hive/blob/1f1e91aa/vector-code-gen/src/org/apache/hadoop/hive/tools/GenVectorTestCode.java
--
diff --git 
a/vector-code-gen/src/org/apache/hadoop/hive/tools/GenVectorTestCode.java 
b/vector-code-gen/src/org/apache/hadoop/hive/tools/GenVectorTestCode.java
new file mode 100644
index 000..bfa0091
--- /dev/null
+++ b/vector-code-gen/src/org/apache/hadoop/hive/tools/GenVectorTestCode.java
@@ -0,0 +1,261 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.tools;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.HashMap;
+
+/**
+ *
+ * GenVectorTestCode.
+ * This class is mutable and maintains a hashmap of TestSuiteClassName to test 
cases.
+ * The tests cases are added over the course of vectorized expressions class 
generation,
+ * with test classes being outputted at the end. For each column vector 
(inputs and/or outputs)
+ * a matrix of pairwise covering Booleans is used to generate test cases 
across nulls and
+ * repeating dimensions. Based on the input column vector(s) nulls and 
repeating states
+ * the states of the output column vector (if there is one) is validated, 
along with the null
+ * vector. For filter operations the selection vector is validated against the 
generated
+ * data. Each template corresponds to a class representing a test suite.
+ */
+public class GenVectorTestCode {
+
+  public enum TestSuiteClassName{
+TestColumnScalarOperationVectorExpressionEvaluation,
+TestColumnScalarFilterVectorExpressionEvaluation,
+TestColumnColumnOperationVectorExpressionEvaluation,
+TestColumnColumnFilterVectorExpressionEvaluation,
+  }
+
+  private final String testOutputDir;
+  private final String testTemplateDirectory;
+  private final HashMap<TestSuiteClassName, StringBuilder> testsuites;
+
+  public GenVectorTestCode(String testOutputDir, String testTemplateDirectory) 
{
+this.testOutputDir = testOutputDir;
+this.testTemplateDirectory = testTemplateDirectory;
+testsuites = new HashMap<TestSuiteClassName, StringBuilder>();
+
+for(TestSuiteClassName className : TestSuiteClassName.values()) {
+  testsuites.put(className,new StringBuilder());
+}
+
+  }
+
+  public void addColumnScalarOperationTestCases(boolean op1IsCol, String 
vectorExpClassName,
+  String inputColumnVectorType, String outputColumnVectorType, String 
scalarType)
+  throws IOException {
+
+TestSuiteClassName template =
+TestSuiteClassName.TestColumnScalarOperationVectorExpressionEvaluation;
+
+//Read the template into a string;
+String templateFile = 
GenVectorCode.joinPath(this.testTemplateDirectory,template.toString()+".txt");
+String templateString = 
removeTemplateComments(GenVectorCode.readFile(templateFile));
+
+for(Boolean[] testMatrix :new Boolean[][]{
+// Pairwise: InitOuputColHasNulls, InitOuputColIsRepeating, 
ColumnHasNulls, ColumnIsRepeating
+{false,   true,true,true},
+{false,   false,   false,   false},
+{true,false,   true,false},
+{true,true,false,   false},
+{true,false,   false,   true}}) {
+  String testCase = templateString;
+  testCase = testCase.replaceAll("",
+  "test"
+   + vectorExpClassName
+   + createNullRepeatingNameFragment("Out", testMatrix[0], 
testMatrix[1])
+   + createNullRepeatingNameFragment("Col", testMatrix[2], 
testMatrix[3]));
+  testCase = testCase.replaceAll("", 
vectorExpClassName);
+  testCase = testCase.replaceAll("", 
inputColumnVectorType);
+  testCase = testCase.replaceAll("", 
outputColumnVectorType);
+  testCase = testCase.replaceAll("", scalarType);
+  testCase = testCase.replaceAll("", 
GenVectorCode.getCamelCaseType(scalarType));
+  testCase = testCase.replaceAll("", 
testMatrix[0].toString());
+  testCase = testCase.replaceAll("", 
testMatrix[1].toString());
+  testCase = testCase.replaceAll("", 
testMatrix[2].toString());
+  testCase = 
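
The five-row Boolean matrix above is pairwise covering: every pair of the four dimensions takes all four true/false combinations somewhere in the five rows, versus the sixteen rows exhaustive coverage would need. A small self-checking Java sketch of that property:

// Verifies the pairwise-covering property of the test matrix above: for
// each pair of the four Boolean columns, all four (b1, b2) settings occur.
public class PairwiseMatrix {
  static final boolean[][] MATRIX = {
      {false, true,  true,  true},
      {false, false, false, false},
      {true,  false, true,  false},
      {true,  true,  false, false},
      {true,  false, false, true},
  };

  public static void main(String[] args) {
    for (int i = 0; i < 4; i++) {
      for (int j = i + 1; j < 4; j++) {
        boolean[] seen = new boolean[4];       // index = (b1?2:0) + (b2?1:0)
        for (boolean[] row : MATRIX) {
          seen[(row[i] ? 2 : 0) + (row[j] ? 1 : 0)] = true;
        }
        System.out.printf("cols %d,%d covered: %b %b %b %b%n",
            i, j, seen[0], seen[1], seen[2], seen[3]);
      }
    }
  }
}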

[3/5] hive git commit: HIVE-15791: Remove unused ant files (Gunther Hagleitner, reviewed by Ashutosh Chauhan)

2017-02-09 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/1f1e91aa/ant/src/org/apache/hadoop/hive/ant/GenVectorTestCode.java
--
diff --git a/ant/src/org/apache/hadoop/hive/ant/GenVectorTestCode.java 
b/ant/src/org/apache/hadoop/hive/ant/GenVectorTestCode.java
deleted file mode 100644
index 802cbb2..000
--- a/ant/src/org/apache/hadoop/hive/ant/GenVectorTestCode.java
+++ /dev/null
@@ -1,261 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hive.ant;
-
-import java.io.File;
-import java.io.IOException;
-import java.util.HashMap;
-
-/**
- *
- * GenVectorTestCode.
- * This class is mutable and maintains a hashmap of TestSuiteClassName to test 
cases.
- * The test cases are added over the course of vectorized expressions class 
generation,
- * with test classes being outputted at the end. For each column vector 
(inputs and/or outputs)
- * a matrix of pairwise covering Booleans is used to generate test cases 
across nulls and
- * repeating dimensions. Based on the input column vector(s) nulls and 
repeating states
- * the states of the output column vector (if there is one) is validated, 
along with the null
- * vector. For filter operations the selection vector is validated against the 
generated
- * data. Each template corresponds to a class representing a test suite.
- */
-public class GenVectorTestCode {
-
-  public enum TestSuiteClassName{
-TestColumnScalarOperationVectorExpressionEvaluation,
-TestColumnScalarFilterVectorExpressionEvaluation,
-TestColumnColumnOperationVectorExpressionEvaluation,
-TestColumnColumnFilterVectorExpressionEvaluation,
-  }
-
-  private final String testOutputDir;
-  private final String testTemplateDirectory;
-  private final HashMap<TestSuiteClassName, StringBuilder> testsuites;
-
-  public GenVectorTestCode(String testOutputDir, String testTemplateDirectory) 
{
-this.testOutputDir = testOutputDir;
-this.testTemplateDirectory = testTemplateDirectory;
-testsuites = new HashMap<TestSuiteClassName, StringBuilder>();
-
-for(TestSuiteClassName className : TestSuiteClassName.values()) {
-  testsuites.put(className,new StringBuilder());
-}
-
-  }
-
-  public void addColumnScalarOperationTestCases(boolean op1IsCol, String 
vectorExpClassName,
-  String inputColumnVectorType, String outputColumnVectorType, String 
scalarType)
-  throws IOException {
-
-TestSuiteClassName template =
-TestSuiteClassName.TestColumnScalarOperationVectorExpressionEvaluation;
-
-//Read the template into a string;
-String templateFile = 
GenVectorCode.joinPath(this.testTemplateDirectory,template.toString()+".txt");
-String templateString = 
removeTemplateComments(GenVectorCode.readFile(templateFile));
-
-for(Boolean[] testMatrix :new Boolean[][]{
-// Pairwise: InitOuputColHasNulls, InitOuputColIsRepeating, 
ColumnHasNulls, ColumnIsRepeating
-{false,   true,true,true},
-{false,   false,   false,   false},
-{true,false,   true,false},
-{true,true,false,   false},
-{true,false,   false,   true}}) {
-  String testCase = templateString;
-  testCase = testCase.replaceAll("",
-  "test"
-   + vectorExpClassName
-   + createNullRepeatingNameFragment("Out", testMatrix[0], 
testMatrix[1])
-   + createNullRepeatingNameFragment("Col", testMatrix[2], 
testMatrix[3]));
-  testCase = testCase.replaceAll("", 
vectorExpClassName);
-  testCase = testCase.replaceAll("", 
inputColumnVectorType);
-  testCase = testCase.replaceAll("", 
outputColumnVectorType);
-  testCase = testCase.replaceAll("", scalarType);
-  testCase = testCase.replaceAll("", 
GenVectorCode.getCamelCaseType(scalarType));
-  testCase = testCase.replaceAll("", 
testMatrix[0].toString());
-  testCase = testCase.replaceAll("", 
testMatrix[1].toString());
-  testCase = testCase.replaceAll("", 
testMatrix[2].toString());
-  testCase = testCase.replaceAll("", 
testMatrix[3].toString());
-
-  if(op1IsCol){
-testCase = testCase.replaceAll("","0, scalarValue");
-  }else{

hive git commit: HIVE-15808: Remove semijoin reduction branch if it is on bigtable along with hash join (Deepak Jaiswal, reviewed by Jason Dere)

2017-02-07 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master 3ed7dc2b8 -> f6cdbc879


HIVE-15808: Remove semijoin reduction branch if it is on bigtable along with 
hash join (Deepak Jaiswal, reviewed by Jason Dere)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/f6cdbc87
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/f6cdbc87
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/f6cdbc87

Branch: refs/heads/master
Commit: f6cdbc87955aa5cdb83f174a73db9a7d8071f78b
Parents: 3ed7dc2
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Tue Feb 7 11:11:09 2017 -0800
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Tue Feb 7 11:11:09 2017 -0800

--
 .../hive/ql/optimizer/ConvertJoinMapJoin.java   | 64 +++-
 .../hadoop/hive/ql/parse/GenTezUtils.java   |  8 +--
 2 files changed, 39 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/f6cdbc87/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java
index 0f9e86b..e3b293a 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java
@@ -775,51 +775,57 @@ public class ConvertJoinMapJoin implements NodeProcessor {
 return mapJoinOp;
   }
 
-  // Remove any semijoin branch associated with mapjoin's parent's operator
-  // pipeline which can cause a cycle after mapjoin optimization.
+  // Remove any semijoin branch associated with hashjoin's parent's operator
+  // pipeline which can cause a cycle after hashjoin optimization.
   private void removeCycleCreatingSemiJoinOps(MapJoinOperator mapjoinOp,
   Operator<?> parentSelectOpOfBigTable,
   ParseContext parseContext) throws SemanticException {
-boolean semiJoinCycle = false;
-ReduceSinkOperator rs = null;
-TableScanOperator ts = null;
+Map<ReduceSinkOperator, TableScanOperator> semiJoinMap =
+new HashMap<ReduceSinkOperator, TableScanOperator>();
for (Operator<?> op : parentSelectOpOfBigTable.getChildOperators()) {
   if (!(op instanceof SelectOperator)) {
 continue;
   }
 
-  while (op.getChildOperators().size() > 0 ) {
+  while (op.getChildOperators().size() > 0) {
 op = op.getChildOperators().get(0);
-if (!(op instanceof ReduceSinkOperator)) {
-  continue;
-}
-rs = (ReduceSinkOperator) op;
-ts = parseContext.getRsOpToTsOpMap().get(rs);
-if (ts == null) {
+  }
+
+  // If not ReduceSink Op, skip
+  if (!(op instanceof ReduceSinkOperator)) {
+continue;
+  }
+
+  ReduceSinkOperator rs = (ReduceSinkOperator) op;
+  TableScanOperator ts = parseContext.getRsOpToTsOpMap().get(rs);
+  if (ts == null) {
+// skip, no semijoin branch
+continue;
+  }
+
+  // Found a semijoin branch.
+  for (Operator<?> parent : mapjoinOp.getParentOperators()) {
+if (!(parent instanceof ReduceSinkOperator)) {
   continue;
 }
-for (Operator<?> parent : mapjoinOp.getParentOperators()) {
-  if (!(parent instanceof ReduceSinkOperator)) {
-continue;
-  }
 
-  Set<TableScanOperator> tsOps = OperatorUtils.findOperatorsUpstream(parent,
-  TableScanOperator.class);
-  for (TableScanOperator parentTS : tsOps) {
-// If the parent is same as the ts, then we have a cycle.
-if (ts == parentTS) {
-  semiJoinCycle = true;
-  break;
-}
+Set<TableScanOperator> tsOps = OperatorUtils.findOperatorsUpstream(parent,
+TableScanOperator.class);
+for (TableScanOperator parentTS : tsOps) {
+  // If the parent is same as the ts, then we have a cycle.
+  if (ts == parentTS) {
+semiJoinMap.put(rs, ts);
+break;
   }
 }
   }
 }
-
-// By design there can be atmost 1 such cycle.
-if (semiJoinCycle) {
-  GenTezUtils.removeBranch(rs);
-  GenTezUtils.removeSemiJoinOperator(parseContext, rs, ts);
+if (semiJoinMap.size() > 0) {
+  for (ReduceSinkOperator rs : semiJoinMap.keySet()) {
+GenTezUtils.removeBranch(rs);
+GenTezUtils.removeSemiJoinOperator(parseContext, rs,
+semiJoinMap.get(rs));
+  }
 }
   }
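
The fix above collects every cycle-creating semijoin branch into a map instead of stopping at the first one found. The underlying test is an upstream reachability scan from the hash join's parents back to the semijoin's table scan. A generic, hedged sketch of that scan; Node is a stand-in for Hive's Operator tree, not the real class:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Generic upstream reachability check: walk parent chains from a start
// node and report whether a given target is reachable, which is the
// per-branch cycle test the optimizer applies above.
public class UpstreamCycleCheck {
  static final class Node {
    final String name;
    final List<Node> parents = new ArrayList<>();
    Node(String name) { this.name = name; }
  }

  static boolean reachesUpstream(Node start, Node target) {
    Deque<Node> stack = new ArrayDeque<>();
    Set<Node> visited = new HashSet<>();
    stack.push(start);
    while (!stack.isEmpty()) {
      Node n = stack.pop();
      if (n == target) {
        return true;                 // found the table scan: cycle exists
      }
      if (visited.add(n)) {
        stack.addAll(n.parents);     // keep scanning upstream
      }
    }
    return false;
  }

  public static void main(String[] args) {
    Node ts = new Node("TS");
    Node rs = new Node("RS");
    rs.parents.add(ts);
    Node join = new Node("MAPJOIN");
    join.parents.add(rs);
    System.out.println(reachesUpstream(join, ts));   // true -> remove branch
  }
}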
 

http://git-wip-us.apache.org/repos/asf/hive/blob/f6cdbc87/ql/src/java/org/apache/h

[27/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part.q.out
deleted file mode 100644
index 71e7f9c..000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part.q.out
+++ /dev/null
@@ -1,3727 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
--- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the row SERDE methods.
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
--- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the row SERDE methods.
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@part_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Partition Information 
-# col_name data_type   comment 
-
-part   int 
-
-# Detailed Table Information
-Database:  default  
-#### A masked pattern was here ####
-Retention: 0
-#### A masked pattern was here ####
-Table Type:MANAGED_TABLE
-Table Parameters:   
-#### A masked pattern was here ####
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
-InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
-OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter 

[51/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
HIVE-15560: clean up out files that do not correspond to any q files (Gunther 
Hagleitner, reviewed by Sergey Shelukhin)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/8230b579
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/8230b579
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/8230b579

Branch: refs/heads/master
Commit: 8230b579370a21ed4f54e658f536258c96e7987a
Parents: b260654
Author: Gunther Hagleitner <gunt...@apache.org>
Authored: Fri Feb 3 14:00:39 2017 -0800
Committer: Gunther Hagleitner <gunt...@apache.org>
Committed: Mon Feb 6 12:55:03 2017 -0800

--
 .../alter_partition_invalidspec.q.out   |   31 -
 .../clientnegative/alter_partition_nodrop.q.out |   43 -
 .../alter_partition_nodrop_table.q.out  |   47 -
 .../alter_partition_offline.q.out   |   79 -
 .../ambiguous_col_patterned.q.out   |1 -
 .../database_already_exists.q.out   |   15 -
 .../drop_partition_filter_failure2.q.out|   40 -
 .../clientnegative/drop_table_failure3.q.out|   55 -
 ql/src/test/results/clientnegative/fatal.q.out  |5 -
 .../clientnegative/orc_replace_columns.q.out|   13 -
 .../clientnegative/protectmode_part.q.out   |   70 -
 .../clientnegative/protectmode_part1.q.out  |   87 -
 .../clientnegative/protectmode_part2.q.out  |   41 -
 .../protectmode_part_no_drop.q.out  |   49 -
 .../protectmode_part_no_drop2.q.out |   51 -
 .../clientnegative/protectmode_tbl1.q.out   |   33 -
 .../clientnegative/protectmode_tbl2.q.out   |   63 -
 .../clientnegative/protectmode_tbl3.q.out   |   42 -
 .../clientnegative/protectmode_tbl4.q.out   |   75 -
 .../clientnegative/protectmode_tbl5.q.out   |   75 -
 .../clientnegative/protectmode_tbl6.q.out   |   29 -
 .../clientnegative/protectmode_tbl7.q.out   |   55 -
 .../clientnegative/protectmode_tbl8.q.out   |   55 -
 .../protectmode_tbl_no_drop.q.out   |   46 -
 .../results/clientnegative/sa_fail_hook3.q.out  |   25 -
 .../truncate_column_archived.q.out  |   20 -
 .../alter_partition_protect_mode.q.out  |  190 -
 .../drop_partitions_ignore_protection.q.out |   60 -
 .../schema_evol_orc_acid_mapwork_part.q.out | 3540 ---
 .../schema_evol_orc_acid_mapwork_table.q.out| 3209 --
 .../schema_evol_orc_acidvec_mapwork_part.q.out  | 3540 ---
 .../schema_evol_orc_acidvec_mapwork_table.q.out | 3209 --
 .../schema_evol_orc_nonvec_fetchwork_part.q.out | 3651 
 ...schema_evol_orc_nonvec_fetchwork_table.q.out | 3403 --
 .../schema_evol_orc_nonvec_mapwork_part.q.out   | 3723 
 ...ol_orc_nonvec_mapwork_part_all_complex.q.out |  646 ---
 ..._orc_nonvec_mapwork_part_all_primitive.q.out | 2697 
 .../schema_evol_orc_nonvec_mapwork_table.q.out  | 3475 ---
 .../llap/schema_evol_orc_vec_mapwork_part.q.out | 3723 
 ..._evol_orc_vec_mapwork_part_all_complex.q.out |  646 ---
 ...vol_orc_vec_mapwork_part_all_primitive.q.out | 2697 
 .../schema_evol_orc_vec_mapwork_table.q.out | 3475 ---
 .../schema_evol_text_nonvec_mapwork_part.q.out  | 3723 
 ...l_text_nonvec_mapwork_part_all_complex.q.out |  646 ---
 ...text_nonvec_mapwork_part_all_primitive.q.out | 2697 
 .../schema_evol_text_nonvec_mapwork_table.q.out | 3475 ---
 .../schema_evol_text_vec_mapwork_part.q.out | 3727 
 ...evol_text_vec_mapwork_part_all_complex.q.out |  650 ---
 ...ol_text_vec_mapwork_part_all_primitive.q.out | 2701 
 .../schema_evol_text_vec_mapwork_table.q.out| 3479 ---
 .../schema_evol_text_vecrow_mapwork_part.q.out  | 3727 
 ...l_text_vecrow_mapwork_part_all_complex.q.out |  652 ---
 ...text_vecrow_mapwork_part_all_primitive.q.out | 2701 
 .../schema_evol_text_vecrow_mapwork_table.q.out | 3479 ---
 .../results/clientpositive/perf/query45.q.out   |  121 -
 .../results/clientpositive/protectmode.q.out|  409 --
 .../results/clientpositive/protectmode2.q.out   |  205 -
 .../schema_evol_orc_acid_mapwork_part.q.out | 3540 ---
 .../schema_evol_orc_acid_mapwork_table.q.out| 3209 --
 .../schema_evol_orc_acidvec_mapwork_part.q.out  | 3540 ---
 .../schema_evol_orc_acidvec_mapwork_table.q.out | 3209 --
 .../schema_evol_orc_nonvec_fetchwork_part.q.out | 3819 
 ...schema_evol_orc_nonvec_fetchwork_table.q.out | 3571 ---
 .../schema_evol_orc_nonvec_mapwork_part.q.out   | 4107 -
 ...ol_orc_nonvec_mapwork_part_all_complex.q.out |  694 ---
 ..._orc_nonvec_mapwork_part_all_primitive.

[31/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part.q.out
deleted file mode 100644
index 86181de..000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part.q.out
+++ /dev/null
@@ -1,3727 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
--- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the vector SERDE methods.
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
--- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the vector SERDE methods.
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@part_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Partition Information 
-# col_name data_type   comment 
-
-part   int 
-
-# Detailed Table Information
-Database:  default  
- A masked pattern was here 
-Retention: 0
- A masked pattern was here 
-Table Type:MANAGED_TABLE
-Table Parameters:   
- A masked pattern was here 
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
-InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
-OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter 
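
A minimal sketch of the setup these deleted tests exercised, using the two properties named in the NOTE above (the table and column names below are illustrative, not the test's own):

SET hive.vectorized.execution.enabled=true;
SET hive.vectorized.use.vector.serde.deserialize=true;

-- A plain TEXTFILE table; with the properties above, scans of it can be
-- vectorized through the vector SerDe path instead of row-mode deserialization.
CREATE TABLE demo_text (insert_num INT, a INT, b STRING)
PARTITIONED BY (part INT)
STORED AS TEXTFILE;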

[36/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_table.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_table.q.out
deleted file mode 100644
index 7a77747..000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_table.q.out
+++ /dev/null
@@ -1,3475 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Vectorized, MapWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Vectorized, MapWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@table_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Detailed Table Information
-Database:  default  
- A masked pattern was here 
-Retention: 0
- A masked pattern was here 
-Table Type:MANAGED_TABLE
-Table Parameters:   
-   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
-   numFiles0   
-   numRows 0   
-   rawDataSize 0   
-   totalSize   0   
- A masked pattern was here 
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
-InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
-OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@table_add_int_permute_select
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: 
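
The behavior this output pins down: a non-cascade ADD COLUMNS changes only the table-level schema, and rows written before the ALTER simply have no value for the new column. A short sketch (the ALTER is verbatim from the test; the SELECT is an illustrative read):

ALTER TABLE table_add_int_permute_select ADD COLUMNS (c INT);

-- Rows inserted before the ALTER read back with c = NULL;
-- rows inserted afterwards carry real values for c.
SELECT insert_num, a, b, c FROM table_add_int_permute_select;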

[26/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_complex.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_complex.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_complex.q.out
deleted file mode 100644
index fcfe969..000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_complex.q.out
+++ /dev/null
@@ -1,652 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
--- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the row SERDE methods.
-
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: STRUCT --> STRUCT
---
-CREATE TABLE part_change_various_various_struct1(insert_num int, s1 STRUCT, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_struct1
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
--- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the row SERDE methods.
-
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: STRUCT --> STRUCT
---
-CREATE TABLE part_change_various_various_struct1(insert_num int, s1 STRUCT, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_struct1
-PREHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-PREHOOK: type: LOAD
- A masked pattern was here 
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-POSTHOOK: type: LOAD
- A masked pattern was here 
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-PREHOOK: type: QUERY
-PREHOOK: Input: default@struct1_a_txt
-PREHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@struct1_a_txt
-POSTHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).b 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:b, type:string, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 
PARTITION(part=1).insert_num SIMPLE 
[(struct1_a_txt)struct1_a_txt.FieldSchema(name:insert_num, type:int, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).s1 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:s1, 
type:struct,
 comment:null), ]
-struct1_a_txt.insert_num   struct1_a_txt.s1   struct1_a_txt.b
-PREHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-PREHOOK: type: QUERY
-PREHOOK: Input: default@part_change_various_various_struct1
-PREHOOK: Input: 
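
The STRUCT type parameters in the DDL above were stripped by the archive's HTML rendering (everything between angle brackets is gone). A hypothetical DDL of the same shape, with placeholder field names and types where the originals were elided:

CREATE TABLE struct1_a_txt (
  insert_num INT,
  s1 STRUCT<c1:INT, c2:STRING>,  -- placeholder fields; the test's actual fields are not recoverable here
  b STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '|'
  COLLECTION ITEMS TERMINATED BY ','
  MAP KEYS TERMINATED BY ':'
STORED AS TEXTFILE;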

[24/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_table.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_table.q.out
deleted file mode 100644
index 9b2f805..000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_table.q.out
+++ /dev/null
@@ -1,3479 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Table
--- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the row SERDE methods.
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Table
--- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the row SERDE methods.
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@table_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Detailed Table Information
-Database:  default  
- A masked pattern was here 
-Retention: 0
- A masked pattern was here 
-Table Type:MANAGED_TABLE
-Table Parameters:   
-   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
-   numFiles0   
-   numRows 0   
-   rawDataSize 0   
-   totalSize   0   
- A masked pattern was here 
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
-InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
-OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS

[23/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/perf/query45.q.out
--
diff --git a/ql/src/test/results/clientpositive/perf/query45.q.out 
b/ql/src/test/results/clientpositive/perf/query45.q.out
deleted file mode 100644
index e2d0da5..000
--- a/ql/src/test/results/clientpositive/perf/query45.q.out
+++ /dev/null
@@ -1,121 +0,0 @@
-PREHOOK: query: explain select ca_zip, ca_county, sum(ws_sales_price) from 
web_sales JOIN customer ON web_sales.ws_bill_customer_sk = 
customer.c_customer_sk JOIN customer_address ON customer.c_current_addr_sk = 
customer_address.ca_address_sk JOIN date_dim ON web_sales.ws_sold_date_sk = 
date_dim.d_date_sk JOIN item ON web_sales.ws_item_sk = item.i_item_sk where ( 
item.i_item_id in (select i_item_id from item i2 where i2.i_item_sk in (2, 3, 
5, 7, 11, 13, 17, 19, 23, 29) ) ) and d_qoy = 2 and d_year = 2000 group by 
ca_zip, ca_county order by ca_zip, ca_county limit 100
-PREHOOK: type: QUERY
-POSTHOOK: query: explain select ca_zip, ca_county, sum(ws_sales_price) from 
web_sales JOIN customer ON web_sales.ws_bill_customer_sk = 
customer.c_customer_sk JOIN customer_address ON customer.c_current_addr_sk = 
customer_address.ca_address_sk JOIN date_dim ON web_sales.ws_sold_date_sk = 
date_dim.d_date_sk JOIN item ON web_sales.ws_item_sk = item.i_item_sk where ( 
item.i_item_id in (select i_item_id from item i2 where i2.i_item_sk in (2, 3, 
5, 7, 11, 13, 17, 19, 23, 29) ) ) and d_qoy = 2 and d_year = 2000 group by 
ca_zip, ca_county order by ca_zip, ca_county limit 100
-POSTHOOK: type: QUERY
-Plan optimized by CBO.
-
-Vertex dependency in root stage
-Reducer 11 <- Map 10 (SIMPLE_EDGE)
-Reducer 13 <- Map 12 (SIMPLE_EDGE), Map 14 (SIMPLE_EDGE)
-Reducer 2 <- Map 1 (SIMPLE_EDGE), Map 6 (SIMPLE_EDGE)
-Reducer 3 <- Reducer 2 (SIMPLE_EDGE), Reducer 9 (SIMPLE_EDGE)
-Reducer 4 <- Reducer 3 (SIMPLE_EDGE)
-Reducer 5 <- Reducer 4 (SIMPLE_EDGE)
-Reducer 8 <- Map 7 (SIMPLE_EDGE), Reducer 11 (SIMPLE_EDGE)
-Reducer 9 <- Reducer 13 (SIMPLE_EDGE), Reducer 8 (SIMPLE_EDGE)
-
-Stage-0
-  Fetch Operator
-limit:100
-Stage-1
-  Reducer 5
-  File Output Operator [FS_47]
-Limit [LIM_46] (rows=100 width=135)
-  Number of rows:100
-  Select Operator [SEL_45] (rows=95833781 width=135)
-Output:["_col0","_col1","_col2"]
-  <-Reducer 4 [SIMPLE_EDGE]
-SHUFFLE [RS_44]
-  Group By Operator [GBY_42] (rows=95833781 width=135)
-
Output:["_col0","_col1","_col2"],aggregations:["sum(VALUE._col0)"],keys:KEY._col0,
 KEY._col1
-  <-Reducer 3 [SIMPLE_EDGE]
-SHUFFLE [RS_41]
-  PartitionCols:_col0, _col1
-  Group By Operator [GBY_40] (rows=191667562 width=135)
-
Output:["_col0","_col1","_col2"],aggregations:["sum(_col11)"],keys:_col4, _col3
-Select Operator [SEL_39] (rows=191667562 width=135)
-  Output:["_col4","_col3","_col11"]
-  Merge Join Operator [MERGEJOIN_74] (rows=191667562 
width=135)
-
Conds:RS_36._col0=RS_37._col5(Inner),Output:["_col3","_col4","_col11"]
-  <-Reducer 2 [SIMPLE_EDGE]
-SHUFFLE [RS_36]
-  PartitionCols:_col0
-  Merge Join Operator [MERGEJOIN_70] (rows=8801 
width=860)
-
Conds:RS_33._col1=RS_34._col0(Inner),Output:["_col0","_col3","_col4"]
-  <-Map 1 [SIMPLE_EDGE]
-SHUFFLE [RS_33]
-  PartitionCols:_col1
-  Select Operator [SEL_2] (rows=8000 width=860)
-Output:["_col0","_col1"]
-Filter Operator [FIL_64] (rows=8000 
width=860)
-  predicate:(c_customer_sk is not null and 
c_current_addr_sk is not null)
-  TableScan [TS_0] (rows=8000 width=860)
-
default@customer,customer,Tbl:COMPLETE,Col:NONE,Output:["c_customer_sk","c_current_addr_sk"]
-  <-Map 6 [SIMPLE_EDGE]
-SHUFFLE [RS_34]
-  PartitionCols:_col0
-  Select Operator [SEL_5] (rows=4000 
width=1014)
-Output:["_col0","_col1","_col2"]
-Filter Operator [FIL_65] (rows=4000 
width=1014)
-  predicate:ca_address_sk is not null
-  TableScan [TS_3] (rows=4000 width=1014)
-
default@customer_address,customer_address,Tbl:COMPLETE,Col:NONE,Output:["ca_address_sk","ca_county","ca_zip"]
-  <-Reducer 9 
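
For orientation: the tree above is Tez EXPLAIN output after CBO. The "Reducer N <- ..." lines list vertex dependencies, the SHUFFLE/RS nodes are the exchanges between vertices, and each operator line carries row/width estimates. A plan of this shape comes from prefixing the query with EXPLAIN, e.g. (an abbreviated form of the query above):

EXPLAIN
SELECT ca_zip, ca_county, SUM(ws_sales_price)
FROM web_sales
JOIN customer ON web_sales.ws_bill_customer_sk = customer.c_customer_sk
JOIN customer_address ON customer.c_current_addr_sk = customer_address.ca_address_sk
GROUP BY ca_zip, ca_county;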

[22/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_orc_acid_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_orc_acid_mapwork_part.q.out 
b/ql/src/test/results/clientpositive/schema_evol_orc_acid_mapwork_part.q.out
deleted file mode 100644
index 5e84806..000
--- a/ql/src/test/results/clientpositive/schema_evol_orc_acid_mapwork_part.q.out
+++ /dev/null
@@ -1,3540 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Non-Vectorized, MapWork, Partitioned
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT) clustered by (a) into 2 buckets STORED AS ORC 
TBLPROPERTIES ('transactional'='true')
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Non-Vectorized, MapWork, Partitioned
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT) clustered by (a) into 2 buckets STORED AS ORC 
TBLPROPERTIES ('transactional'='true')
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@part_add_int_permute_select
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=2)
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=2
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=2)
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=2
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).a EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).b SIMPLE 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).c EXPRESSION 
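
The DDL pattern above is what full ACID required at the time: ORC storage, bucketing, and 'transactional'='true'. A compact sketch (hypothetical table name):

CREATE TABLE acid_demo (insert_num INT, a INT, b STRING)
PARTITIONED BY (part INT)
CLUSTERED BY (a) INTO 2 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

-- As the test's note says, readers apply schema evolution to ACID tables
-- unconditionally, which is why these tests can force
-- hive.exec.schema.evolution=false and still expect evolved reads to work.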

[35/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part.q.out
deleted file mode 100644
index 3fdbc3c..000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part.q.out
+++ /dev/null
@@ -1,3723 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@part_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Partition Information 
-# col_name data_type   comment 
-
-part   int 
-
-# Detailed Table Information
-Database:  default  
- A masked pattern was here 
-Retention: 0
- A masked pattern was here 
-Table Type:MANAGED_TABLE
-Table Parameters:   
- A masked pattern was here 
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
-InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
-OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@part_add_int_permute_select
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-POSTHOOK: type: 
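
The values__tmp__table__N names in the Lineage lines above are not user tables: at this point Hive implemented INSERT ... VALUES by staging the literals in a temporary text table and selecting from it, so every inserted column traces back to a tmp_values_colN string column. The statement itself is ordinary:

INSERT INTO TABLE part_add_int_permute_select PARTITION (part=1)
VALUES (1, 1, 'original'),
       (2, 2, 'original');
-- Lineage then reports e.g. insert_num as an EXPRESSION over
-- values__tmp__table__1.tmp_values_col1 rather than a direct column copy.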

[43/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part.q.out
deleted file mode 100644
index 0a7f1fc..000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part.q.out
+++ /dev/null
@@ -1,3723 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@part_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Partition Information 
-# col_name data_type   comment 
-
-part   int 
-
-# Detailed Table Information
-Database:  default  
- A masked pattern was here 
-Retention: 0
- A masked pattern was here 
-Table Type:MANAGED_TABLE
-Table Parameters:   
- A masked pattern was here 
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
-InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
-OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@part_add_int_permute_select
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: 

[46/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_table.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_table.q.out
deleted file mode 100644
index bca953c..000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_table.q.out
+++ /dev/null
@@ -1,3209 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Vectorized, MapWork, Table
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING) 
clustered by (a) into 2 buckets STORED AS ORC  TBLPROPERTIES 
('transactional'='true')
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Vectorized, MapWork, Table
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING) 
clustered by (a) into 2 buckets STORED AS ORC  TBLPROPERTIES 
('transactional'='true')
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@table_add_int_permute_select
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: insert into table table_add_int_permute_select
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.c EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
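
A sketch of the override called out in the *IMPORTANT NOTE* above (property name as given there):

SET hive.exec.schema.evolution=false;
-- For non-ACID tables this disables schema evolution on read; for ACID tables
-- the ORC reader evolves the file schema to the table schema regardless,
-- which is exactly what these tests verify.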

[38/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part_all_complex.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part_all_complex.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part_all_complex.q.out
deleted file mode 100644
index bdf243b..000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part_all_complex.q.out
+++ /dev/null
@@ -1,646 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned --> all complex 
conversions
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: STRUCT --> STRUCT
---
-CREATE TABLE part_change_various_various_struct1(insert_num int, s1 STRUCT, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_struct1
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned --> all complex 
conversions
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: STRUCT --> STRUCT
---
-CREATE TABLE part_change_various_various_struct1(insert_num int, s1 STRUCT, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_struct1
-PREHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-PREHOOK: type: LOAD
- A masked pattern was here 
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-POSTHOOK: type: LOAD
- A masked pattern was here 
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-PREHOOK: type: QUERY
-PREHOOK: Input: default@struct1_a_txt
-PREHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@struct1_a_txt
-POSTHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).b 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:b, type:string, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 
PARTITION(part=1).insert_num SIMPLE 
[(struct1_a_txt)struct1_a_txt.FieldSchema(name:insert_num, type:int, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).s1 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:s1, 
type:struct,
 comment:null), ]
-struct1_a_txt.insert_num   struct1_a_txt.s1   struct1_a_txt.b
-PREHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-PREHOOK: type: QUERY
-PREHOOK: Input: default@part_change_various_various_struct1
-PREHOOK: Input: default@part_change_various_various_struct1@part=1
- A masked pattern was here 
-POSTHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@part_change_various_various_struct1
-POSTHOOK: Input: default@part_change_various_various_struct1@part=1
- A masked pattern was here 
-insert_num  part  s1  b
-1  1
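
The staging pattern used throughout this file: load a delimited local file into a plain text table, then INSERT ... SELECT into the partitioned target so the target's own format (ORC here) does the write. Both statements as they appear in the test:

LOAD DATA LOCAL INPATH '../../data/files/struct1_a.txt'
OVERWRITE INTO TABLE struct1_a_txt;

INSERT INTO TABLE part_change_various_various_struct1 PARTITION (part=1)
SELECT * FROM struct1_a_txt;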

[25/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_primitive.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_primitive.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_primitive.q.out
deleted file mode 100644
index 38cbb39..000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vecrow_mapwork_part_all_primitive.q.out
+++ /dev/null
@@ -1,2701 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
--- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the row SERDE methods.
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_boolean
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
--- NOTE: the use of hive.vectorized.use.row.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the row SERDE methods.
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_boolean
-PREHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).b 
SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col11,
 type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c1 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c2 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c3 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c4 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col5, 
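
What this file tests: the partition's columns were written as TINYINT through TIMESTAMP, the table schema is then altered so those columns become BOOLEAN, and reads must coerce the old data (under Hive's cast rules, e.g., a nonzero number reads back as true). One way to express such a change (the test's exact ALTER is not shown in this excerpt; the column list below is abbreviated):

ALTER TABLE part_change_various_various_boolean
REPLACE COLUMNS (insert_num INT, c1 BOOLEAN, c2 BOOLEAN, c3 BOOLEAN, b STRING);

SELECT insert_num, part, c1, c2, c3, b
FROM part_change_various_various_boolean;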

[01/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
Repository: hive
Updated Branches:
  refs/heads/master b26065454 -> 8230b5793


http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_table.q.out 
b/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_table.q.out
deleted file mode 100644
index 093920d..000
--- 
a/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_table.q.out
+++ /dev/null
@@ -1,3887 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Table
--- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the vector SERDE methods.
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Table
--- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the vector SERDE methods.
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@table_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Detailed Table Information
-Database:  default  
- A masked pattern was here 
-Retention: 0
- A masked pattern was here 
-Table Type:MANAGED_TABLE
-Table Parameters:   
-   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
-   numFiles0   
-   numRows 0   
-   rawDataSize 0   
-   totalSize   0   
- A masked pattern was here 
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
-InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
-OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select 

[41/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_primitive.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_primitive.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_primitive.q.out
deleted file mode 100644
index c0f2014..000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_primitive.q.out
+++ /dev/null
@@ -1,2697 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned --> all primitive 
conversions
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_boolean
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned --> all primitive 
conversions
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_boolean
-PREHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).b 
SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col11,
 type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c1 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c2 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c3 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c4 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col5, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c5 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col6, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c6 
EXPRESSION 

[16/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_part.q.out 
b/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_part.q.out
deleted file mode 100644
index fa6e7df..000
--- 
a/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_part.q.out
+++ /dev/null
@@ -1,4107 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@part_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Partition Information 
-# col_name data_type   comment 
-
-part   int 
-
-# Detailed Table Information
-Database:  default  
-#### A masked pattern was here ####
-Retention: 0
-#### A masked pattern was here ####
-Table Type:MANAGED_TABLE
-Table Parameters:   
-#### A masked pattern was here ####
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
-InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
-OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format   1
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@part_add_int_permute_select
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: 
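
A note on the deleted test content above: it exercises the non-CASCADE form of ALTER TABLE ... ADD COLUMNS on a partitioned table. Only the table-level schema gains the new column; partitions created before the ALTER keep their original column list, and their rows read back NULL for the added column. A minimal sketch of that behavior, assuming a Hive 2.x session (demo_part and its columns are illustrative names, not taken from the deleted file):

  CREATE TABLE demo_part (insert_num INT, a INT, b STRING)
    PARTITIONED BY (part INT) STORED AS ORC;
  INSERT INTO TABLE demo_part PARTITION (part=1) VALUES (1, 1, 'original');
  -- Non-cascade: updates the table schema only, not existing partitions.
  ALTER TABLE demo_part ADD COLUMNS (c INT);
  INSERT INTO TABLE demo_part PARTITION (part=2) VALUES (5, 1, 'new', 10);
  -- part=1 rows come back with c = NULL; part=2 rows carry real values.
  SELECT insert_num, part, c FROM demo_part;
  -- The CASCADE variant would push the new column into existing partition
  -- metadata as well: ALTER TABLE ... ADD COLUMNS (c INT) CASCADE.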

[19/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_orc_acidvec_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_orc_acidvec_mapwork_table.q.out
 
b/ql/src/test/results/clientpositive/schema_evol_orc_acidvec_mapwork_table.q.out
deleted file mode 100644
index bca953c..0000000
--- 
a/ql/src/test/results/clientpositive/schema_evol_orc_acidvec_mapwork_table.q.out
+++ /dev/null
@@ -1,3209 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Vectorized, MapWork, Table
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING) 
clustered by (a) into 2 buckets STORED AS ORC  TBLPROPERTIES 
('transactional'='true')
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Vectorized, MapWork, Table
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING) 
clustered by (a) into 2 buckets STORED AS ORC  TBLPROPERTIES 
('transactional'='true')
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@table_add_int_permute_select
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: insert into table table_add_int_permute_select
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.c EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col1, 
type:string, 
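
The table above is a Hive ACID table: bucketed, stored as ORC, with TBLPROPERTIES ('transactional'='true'). Outside the q-file harness such a table also needs the transactional session setup; a sketch of the usual prerequisites (both settings are standard Hive configuration keys, the table name is illustrative):

  SET hive.support.concurrency=true;
  SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;

  CREATE TABLE demo_acid (insert_num INT, a INT, b STRING)
    CLUSTERED BY (a) INTO 2 BUCKETS
    STORED AS ORC
    TBLPROPERTIES ('transactional'='true');

As the deleted header comment says, schema evolution is always in force for ACID tables, which is why the test can set hive.exec.schema.evolution=false and still alter the table afterwards.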

[14/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_part_all_primitive.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_part_all_primitive.q.out
 
b/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_part_all_primitive.q.out
deleted file mode 100644
index 6c8e34a..0000000
--- 
a/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_part_all_primitive.q.out
+++ /dev/null
@@ -1,2953 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned --> all primitive 
conversions
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_boolean
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned --> all primitive 
conversions
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_boolean
-PREHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).b 
SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col11,
 type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c1 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c2 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c3 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c4 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col5, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c5 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col6, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c6 
EXPRESSION 
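
The "(BYTE, SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN" subsection above retypes columns in place; the old data files keep their physical types and values are converted while scanning. A single-column sketch of the same idea (names are illustrative, and the metastore's compatibility check has to be relaxed first, as these tests do):

  SET hive.metastore.disallow.incompatible.col.type.changes=false;
  CREATE TABLE demo_retype (insert_num INT, c1 TINYINT, b STRING)
    PARTITIONED BY (part INT);
  INSERT INTO TABLE demo_retype PARTITION (part=1) VALUES (1, 1, 'original');
  -- Retype c1; existing TINYINT data is coerced to BOOLEAN on read.
  ALTER TABLE demo_retype CHANGE COLUMN c1 c1 BOOLEAN;
  SELECT insert_num, c1, b FROM demo_retype;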

[07/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_part_all_complex.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_part_all_complex.q.out
 
b/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_part_all_complex.q.out
deleted file mode 100644
index d917a7f..0000000
--- 
a/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_part_all_complex.q.out
+++ /dev/null
@@ -1,694 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT
---
-CREATE TABLE part_change_various_various_struct1(insert_num int, s1 STRUCT, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_struct1
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT
---
-CREATE TABLE part_change_various_various_struct1(insert_num int, s1 STRUCT, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_struct1
-PREHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-PREHOOK: type: LOAD
-#### A masked pattern was here ####
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-POSTHOOK: type: LOAD
-#### A masked pattern was here ####
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-PREHOOK: type: QUERY
-PREHOOK: Input: default@struct1_a_txt
-PREHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@struct1_a_txt
-POSTHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).b 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:b, type:string, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 
PARTITION(part=1).insert_num SIMPLE 
[(struct1_a_txt)struct1_a_txt.FieldSchema(name:insert_num, type:int, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).s1 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:s1, 
type:struct,
 comment:null), ]
-struct1_a_txt.insert_num   struct1_a_txt.s1   struct1_a_txt.b
-PREHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-PREHOOK: type: QUERY
-PREHOOK: Input: default@part_change_various_various_struct1
-PREHOOK: Input: default@part_change_various_various_struct1@part=1
-#### A masked pattern was here ####
-POSTHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@part_change_various_various_struct1
-POSTHOOK: Input: default@part_change_various_various_struct1@part=1
-#### A masked pattern was here ####
-insert_num   part   s1 
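
The staging pattern above (a delimited TEXTFILE table, LOAD DATA LOCAL INPATH, then INSERT ... SELECT into the partitioned target) is how these complex-type tests get STRUCT data into Hive: fields are split on '|' and struct members on ',', so one text line encodes the struct positionally. A sketch with assumed struct members, since this archive dropped everything inside the STRUCT<...> brackets:

  -- A struct1_a.txt-style row could look like: 1|true,42,hello|original
  CREATE TABLE struct1_demo_txt (
    insert_num INT,
    s1 STRUCT<c1:BOOLEAN, c2:INT, c3:STRING>,  -- assumed member list
    b STRING)
  ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '|'
    COLLECTION ITEMS TERMINATED BY ','
    MAP KEYS TERMINATED BY ':'
  STORED AS TEXTFILE;
  LOAD DATA LOCAL INPATH '/tmp/struct1_demo.txt'
    OVERWRITE INTO TABLE struct1_demo_txt;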

[20/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_orc_acidvec_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_orc_acidvec_mapwork_part.q.out 
b/ql/src/test/results/clientpositive/schema_evol_orc_acidvec_mapwork_part.q.out
deleted file mode 100644
index 15ea22b..0000000
--- 
a/ql/src/test/results/clientpositive/schema_evol_orc_acidvec_mapwork_part.q.out
+++ /dev/null
@@ -1,3540 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Vectorized, MapWork, Partitioned
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT) clustered by (a) into 2 buckets STORED AS ORC 
TBLPROPERTIES ('transactional'='true')
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Vectorized, MapWork, Partitioned
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT) clustered by (a) into 2 buckets STORED AS ORC 
TBLPROPERTIES ('transactional'='true')
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@part_add_int_permute_select
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=2)
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=2
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=2)
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=2
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).a EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).b SIMPLE 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).c EXPRESSION 
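
Every insert above uses a static partition spec (partition(part=1), partition(part=2)), so the partition value never travels with the data. The dynamic-partition equivalent, where part is taken from the selected rows, needs nonstrict mode; a sketch with illustrative table names:

  SET hive.exec.dynamic.partition=true;
  SET hive.exec.dynamic.partition.mode=nonstrict;
  -- The partition column is selected last and routes each row:
  INSERT INTO TABLE demo_part PARTITION (part)
  SELECT insert_num, a, b, c, part FROM staging_rows;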

[29/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_primitive.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_primitive.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_primitive.q.out
deleted file mode 100644
index 8d4fb33..0000000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_primitive.q.out
+++ /dev/null
@@ -1,2701 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
--- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the vector SERDE methods.
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_boolean
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
--- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the vector SERDE methods.
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_boolean
-PREHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).b 
SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col11,
 type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c1 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c2 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c3 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c4 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col5, 
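
The NOTE in the header above is about vectorized reading of TEXTFILE data: by default, text tables are deserialized row by row even when vectorized execution is enabled, and the extra switch routes them through the vectorized SerDe path instead. As a sketch (both are standard Hive settings):

  SET hive.vectorized.execution.enabled=true;
  -- Without this, TEXTFILE scans fall back to row-mode deserialization:
  SET hive.vectorized.use.vector.serde.deserialize=true;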

[34/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_complex.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_complex.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_complex.q.out
deleted file mode 100644
index 0569b7c..0000000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_complex.q.out
+++ /dev/null
@@ -1,646 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT
---
-CREATE TABLE part_change_various_various_struct1(insert_num int, s1 STRUCT, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_struct1
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT
---
-CREATE TABLE part_change_various_various_struct1(insert_num int, s1 STRUCT, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_struct1
-PREHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-PREHOOK: type: LOAD
-#### A masked pattern was here ####
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-POSTHOOK: type: LOAD
-#### A masked pattern was here ####
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-PREHOOK: type: QUERY
-PREHOOK: Input: default@struct1_a_txt
-PREHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@struct1_a_txt
-POSTHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).b 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:b, type:string, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 
PARTITION(part=1).insert_num SIMPLE 
[(struct1_a_txt)struct1_a_txt.FieldSchema(name:insert_num, type:int, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).s1 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:s1, 
type:struct,
 comment:null), ]
-struct1_a_txt.insert_num   struct1_a_txt.s1struct1_a_txt.b
-PREHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-PREHOOK: type: QUERY
-PREHOOK: Input: default@part_change_various_various_struct1
-PREHOOK: Input: default@part_change_various_various_struct1@part=1
-#### A masked pattern was here ####
-POSTHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@part_change_various_various_struct1
-POSTHOOK: Input: default@part_change_various_various_struct1@part=1
-#### A masked pattern was here ####
-insert_num  

[32/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_table.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_table.q.out
deleted file mode 100644
index 94cea00..0000000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_table.q.out
+++ /dev/null
@@ -1,3475 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@table_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Detailed Table Information
-Database:  default  
-#### A masked pattern was here ####
-Retention: 0
-#### A masked pattern was here ####
-Table Type:MANAGED_TABLE
-Table Parameters:   
-   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
-   numFiles0   
-   numRows 0   
-   rawDataSize 0   
-   totalSize   0   
-#### A masked pattern was here ####
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
-InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
-OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format   1
-PREHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@table_add_int_permute_select
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: Output: 
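
The DESCRIBE FORMATTED output above carries COLUMN_STATS_ACCURATE {"BASIC_STATS":"true"} plus numFiles, numRows, rawDataSize and totalSize, the basic statistics Hive records on write. When stats are missing or stale they can be recomputed explicitly; a sketch (demo_table is an illustrative name):

  -- Basic table-level statistics (numRows, totalSize, ...):
  ANALYZE TABLE demo_table COMPUTE STATISTICS;
  -- Per-column statistics for the cost-based optimizer:
  ANALYZE TABLE demo_table COMPUTE STATISTICS FOR COLUMNS;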

[45/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_part.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_part.q.out
deleted file mode 100644
index 4b2bf1c..0000000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_part.q.out
+++ /dev/null
@@ -1,3651 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, FetchWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, FetchWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@part_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Partition Information 
-# col_name data_type   comment 
-
-part   int 
-
-# Detailed Table Information
-Database:  default  
-#### A masked pattern was here ####
-Retention: 0
-#### A masked pattern was here ####
-Table Type:MANAGED_TABLE
-Table Parameters:   
-#### A masked pattern was here ####
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
-InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
-OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format   1
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@part_add_int_permute_select
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS

[40/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_table.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_table.q.out
deleted file mode 100644
index 235a859..0000000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_table.q.out
+++ /dev/null
@@ -1,3475 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@table_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Detailed Table Information
-Database:  default  
-#### A masked pattern was here ####
-Retention: 0
-#### A masked pattern was here ####
-Table Type:MANAGED_TABLE
-Table Parameters:   
-   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
-   numFiles0   
-   numRows 0   
-   rawDataSize 0   
-   totalSize   0   
-#### A masked pattern was here ####
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
-InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
-OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format   1
-PREHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@table_add_int_permute_select
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: Output: default@table_add_int_permute_select

[30/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_complex.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_complex.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_complex.q.out
deleted file mode 100644
index d748cc6..0000000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_vec_mapwork_part_all_complex.q.out
+++ /dev/null
@@ -1,650 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
--- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the vector SERDE methods.
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT
---
-CREATE TABLE part_change_various_various_struct1(insert_num int, s1 STRUCT, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_struct1
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
--- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the vector SERDE methods.
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT
---
-CREATE TABLE part_change_various_various_struct1(insert_num int, s1 STRUCT, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_struct1
-PREHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-PREHOOK: type: LOAD
-#### A masked pattern was here ####
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-POSTHOOK: type: LOAD
-#### A masked pattern was here ####
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-PREHOOK: type: QUERY
-PREHOOK: Input: default@struct1_a_txt
-PREHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@struct1_a_txt
-POSTHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).b 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:b, type:string, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 
PARTITION(part=1).insert_num SIMPLE 
[(struct1_a_txt)struct1_a_txt.FieldSchema(name:insert_num, type:int, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).s1 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:s1, 
type:struct,
 comment:null), ]
-struct1_a_txt.insert_num   struct1_a_txt.s1   struct1_a_txt.b
-PREHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-PREHOOK: type: QUERY
-PREHOOK: Input: default@part_change_various_various_struct1
-PREHOOK: Input: 

[04/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_part.q.out 
b/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_part.q.out
deleted file mode 100644
index 8d1c898..0000000
--- a/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_part.q.out
+++ /dev/null
@@ -1,4135 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
--- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the vector SERDE methods.
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
--- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the vector SERDE methods.
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@part_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Partition Information 
-# col_name data_type   comment 
-
-part   int 
-
-# Detailed Table Information
-Database:  default  
-#### A masked pattern was here ####
-Retention: 0
-#### A masked pattern was here ####
-Table Type:MANAGED_TABLE
-Table Parameters:   
-#### A masked pattern was here ####
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
-InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
-OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format   1
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table 

[47/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_part.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_part.q.out
deleted file mode 100644
index 15ea22b..0000000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_acidvec_mapwork_part.q.out
+++ /dev/null
@@ -1,3540 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Vectorized, MapWork, Partitioned
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT) clustered by (a) into 2 buckets STORED AS ORC 
TBLPROPERTIES ('transactional'='true')
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Vectorized, MapWork, Partitioned
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT) clustered by (a) into 2 buckets STORED AS ORC 
TBLPROPERTIES ('transactional'='true')
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@part_add_int_permute_select
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=2)
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=2
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=2)
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=2
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).a EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).b SIMPLE 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).c EXPRESSION 

[39/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part.q.out
deleted file mode 100644
index 210589d..0000000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_vec_mapwork_part.q.out
+++ /dev/null
@@ -1,3723 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@part_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Partition Information 
-# col_name data_type   comment 
-
-part   int 
-
-# Detailed Table Information
-Database:  default  
-#### A masked pattern was here ####
-Retention: 0
-#### A masked pattern was here ####
-Table Type:MANAGED_TABLE
-Table Parameters:   
-#### A masked pattern was here ####
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
-InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
-OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@part_add_int_permute_select
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: 
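
For the vectorized variant above, the q file drives the same statements under vectorized execution. A sketch of the session setup: hive.exec.schema.evolution is named elsewhere in these outputs, while hive.vectorized.execution.enabled is a standard Hive flag supplied here as an assumption:

  set hive.vectorized.execution.enabled=true;
  set hive.exec.schema.evolution=true;

  -- ORC partitions written before the ALTER are reconciled to the
  -- current table schema inside the vectorized reader.
  select insert_num, part, a, b, c from part_add_int_permute_select;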

[10/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_part_all_primitive.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_part_all_primitive.q.out
 
b/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_part_all_primitive.q.out
deleted file mode 100644
index b25b22c..0000000
--- 
a/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_part_all_primitive.q.out
+++ /dev/null
@@ -1,2969 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned --> all primitive 
conversions
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_boolean
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned --> all primitive 
conversions
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_boolean
-PREHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).b 
SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col11,
 type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c1 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c2 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c3 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c4 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col5, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c5 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col6, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c6 
EXPRESSION 
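
Per the section header, each column of this table is subsequently retyped so the listed source types are read back as BOOLEAN. A hedged sketch of one way to express such a conversion (CHANGE COLUMN syntax; whether this q file uses CHANGE or REPLACE COLUMNS is not visible in the excerpt):

  -- Retype c1 in place; TINYINT data already on disk is converted to
  -- BOOLEAN on read under schema evolution.
  alter table part_change_various_various_boolean change c1 c1 boolean;

  select insert_num, part, c1, b from part_change_various_various_boolean;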

[42/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_complex.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_complex.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_complex.q.out
deleted file mode 100644
index 7c644bf..0000000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_mapwork_part_all_complex.q.out
+++ /dev/null
@@ -1,646 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned --> all complex 
conversions
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_struct1
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned --> all complex 
conversions
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_struct1
-PREHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-PREHOOK: type: LOAD
-#### A masked pattern was here ####
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-POSTHOOK: type: LOAD
-#### A masked pattern was here ####
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-PREHOOK: type: QUERY
-PREHOOK: Input: default@struct1_a_txt
-PREHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@struct1_a_txt
-POSTHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).b 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:b, type:string, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 
PARTITION(part=1).insert_num SIMPLE 
[(struct1_a_txt)struct1_a_txt.FieldSchema(name:insert_num, type:int, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).s1 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:s1, 
type:struct,
 comment:null), ]
-struct1_a_txt.insert_num   struct1_a_txt.s1struct1_a_txt.b
-PREHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-PREHOOK: type: QUERY
-PREHOOK: Input: default@part_change_various_various_struct1
-PREHOOK: Input: default@part_change_various_various_struct1@part=1
-#### A masked pattern was here ####
-POSTHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@part_change_various_various_struct1
-POSTHOOK: Input: default@part_change_various_various_struct1@part=1
-#### A masked pattern was here ####
-insert_num parts1 
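
The staging pattern above is used because the complex values are loaded from a delimited file rather than written inline. A condensed sketch of the recorded statements; the struct field list (f1, f2) is a placeholder, the real one having been stripped from this archive:

  create table struct1_a_txt(
    insert_num int,
    s1 struct<f1:int,f2:string>,  -- placeholder fields
    b string)
  row format delimited fields terminated by '|'
  collection items terminated by ','
  map keys terminated by ':' stored as textfile;

  load data local inpath '../../data/files/struct1_a.txt'
  overwrite into table struct1_a_txt;

  -- Copy into the partitioned table under test.
  insert into table part_change_various_various_struct1 partition(part=1)
  select * from struct1_a_txt;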

[17/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_fetchwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_fetchwork_table.q.out
 
b/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_fetchwork_table.q.out
deleted file mode 100644
index c030251..0000000
--- 
a/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_fetchwork_table.q.out
+++ /dev/null
@@ -1,3571 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, FetchWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, FetchWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@table_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Detailed Table Information
-Database:  default  
-#### A masked pattern was here ####
-Retention: 0
-#### A masked pattern was here ####
-Table Type:MANAGED_TABLE
-Table Parameters:   
-   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
-   numFiles0   
-   numRows 0   
-   rawDataSize 0   
-   totalSize   0   
-#### A masked pattern was here ####
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
-InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
-OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@table_add_int_permute_select
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: 
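
"FetchWork" in the variation name refers to simple queries being served by a direct fetch task rather than a MapReduce/Tez job. A sketch of how that mode is typically enabled, assuming the standard hive.fetch.task.conversion flag (it does not appear in this excerpt):

  -- 'more' lets plain projections and filters run as a direct fetch
  -- from storage, with no execution job launched.
  set hive.fetch.task.conversion=more;

  select insert_num, a, b, c from table_add_int_permute_select;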

[48/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_table.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_table.q.out
deleted file mode 100644
index ede8693..0000000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_table.q.out
+++ /dev/null
@@ -1,3209 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Non-Vectorized, MapWork, Table
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING) 
clustered by (a) into 2 buckets STORED AS ORC  TBLPROPERTIES 
('transactional'='true')
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Non-Vectorized, MapWork, Table
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING) 
clustered by (a) into 2 buckets STORED AS ORC  TBLPROPERTIES 
('transactional'='true')
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@table_add_int_permute_select
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: insert into table table_add_int_permute_select
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.c EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col1, 
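
The ACID variant layers transactional storage onto the same scenario. A minimal setup sketch, combining the CREATE TABLE recorded above with the session flags usually required for ACID (the two set statements are assumptions, not shown in this excerpt):

  set hive.support.concurrency=true;
  set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;

  -- Bucketed transactional ORC, as recorded in the output above.
  create table table_add_int_permute_select(insert_num int, a int, b string)
  clustered by (a) into 2 buckets
  stored as orc tblproperties ('transactional'='true');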

[49/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_part.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_part.q.out
deleted file mode 100644
index 5e84806..0000000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_acid_mapwork_part.q.out
+++ /dev/null
@@ -1,3540 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Non-Vectorized, MapWork, Partitioned
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT) clustered by (a) into 2 buckets STORED AS ORC 
TBLPROPERTIES ('transactional'='true')
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Non-Vectorized, MapWork, Partitioned
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT) clustered by (a) into 2 buckets STORED AS ORC 
TBLPROPERTIES ('transactional'='true')
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@part_add_int_permute_select
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=2)
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=2
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=2)
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=2
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).a EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).b SIMPLE 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=2).c EXPRESSION 

[08/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_part.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_part.q.out 
b/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_part.q.out
deleted file mode 100644
index c9b5a24..0000000
--- 
a/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_part.q.out
+++ /dev/null
@@ -1,4107 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@part_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Partition Information 
-# col_name data_type   comment 
-
-part   int 
-
-# Detailed Table Information
-Database:  default  
-#### A masked pattern was here ####
-Retention: 0
-#### A masked pattern was here ####
-Table Type:MANAGED_TABLE
-Table Parameters:   
-#### A masked pattern was here ####
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
-InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
-OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@part_add_int_permute_select
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: 
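
Since each ALTER/INSERT pair above leaves one partition per schema generation, the resulting layout is easy to inspect. A small illustrative sketch using standard HiveQL (not taken from the excerpt):

  show partitions part_add_int_permute_select;
  -- part=1   (written with the original three-column schema)
  -- part=2   (written after ADD COLUMNS)

  -- Partition pruning: only files under part=1 are scanned.
  select insert_num, a, b, c from part_add_int_permute_select where part = 1;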

[33/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_primitive.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_primitive.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_primitive.q.out
deleted file mode 100644
index 3792b3c..0000000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_text_nonvec_mapwork_part_all_primitive.q.out
+++ /dev/null
@@ -1,2697 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_boolean
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_boolean
-PREHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).b 
SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col11,
 type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c1 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c2 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c3 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c4 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col5, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c5 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col6, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c6 
EXPRESSION 
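
Note that every lineage line above points at values__tmp__table__N rather than at literals: Hive materializes INSERT ... VALUES into a temporary staging table of string columns and then runs an ordinary INSERT ... SELECT, which is why each target column's lineage resolves to a tmp_values_colK string column. A toy sketch of that expansion (all names here are illustrative):

  create table toy_target(insert_num int, c1 tinyint);
  create temporary table values_stage(
    tmp_values_col1 string,
    tmp_values_col2 string);
  insert into values_stage values ('1', '45');

  -- toy_target.c1's lineage is then an EXPRESSION over
  -- values_stage.tmp_values_col2, mirroring the POSTHOOK lines above.
  insert into table toy_target
  select cast(tmp_values_col1 as int), cast(tmp_values_col2 as tinyint)
  from values_stage;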

[09/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_table.q.out 
b/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_table.q.out
deleted file mode 100644
index 2959be6..0000000
--- a/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_table.q.out
+++ /dev/null
@@ -1,3883 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Vectorized, MapWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Vectorized, MapWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@table_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Detailed Table Information
-Database:  default  
-#### A masked pattern was here ####
-Retention: 0
-#### A masked pattern was here ####
-Table Type:MANAGED_TABLE
-Table Parameters:   
-   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
-   numFiles0   
-   numRows 0   
-   rawDataSize 0   
-   totalSize   0   
-#### A masked pattern was here ####
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
-InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
-OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@table_add_int_permute_select
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED 
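
The Table Parameters block above carries the basic statistics (numFiles, numRows, rawDataSize, totalSize) that Hive maintains on insert. A sketch of how they are refreshed by hand, using standard HiveQL not shown in this excerpt:

  -- Recompute basic table statistics.
  analyze table table_add_int_permute_select compute statistics;

  -- Column-level statistics, beyond the BASIC_STATS flag shown above.
  analyze table table_add_int_permute_select compute statistics for columns;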

[44/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_table.q.out
 
b/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_table.q.out
deleted file mode 100644
index 4b65c36..0000000
--- 
a/ql/src/test/results/clientpositive/llap/schema_evol_orc_nonvec_fetchwork_table.q.out
+++ /dev/null
@@ -1,3403 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, FetchWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, FetchWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@table_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Detailed Table Information
-Database:  default  
-#### A masked pattern was here ####
-Retention: 0
-#### A masked pattern was here ####
-Table Type:MANAGED_TABLE
-Table Parameters:   
-   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
-   numFiles0   
-   numRows 0   
-   rawDataSize 0   
-   totalSize   0   
-#### A masked pattern was here ####
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
-InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
-OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@table_add_int_permute_select
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: Output: 

[11/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_part_all_complex.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_part_all_complex.q.out
 
b/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_part_all_complex.q.out
deleted file mode 100644
index f8a1d14..0000000
--- 
a/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_part_all_complex.q.out
+++ /dev/null
@@ -1,694 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned --> all complex 
conversions
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_struct1
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned --> all complex 
conversions
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_struct1
-PREHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-PREHOOK: type: LOAD
-#### A masked pattern was here ####
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-POSTHOOK: type: LOAD
-#### A masked pattern was here ####
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-PREHOOK: type: QUERY
-PREHOOK: Input: default@struct1_a_txt
-PREHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@struct1_a_txt
-POSTHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).b 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:b, type:string, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 
PARTITION(part=1).insert_num SIMPLE 
[(struct1_a_txt)struct1_a_txt.FieldSchema(name:insert_num, type:int, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).s1 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:s1, 
type:struct,
 comment:null), ]
-struct1_a_txt.insert_num   struct1_a_txt.s1struct1_a_txt.b
-PREHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-PREHOOK: type: QUERY
-PREHOOK: Input: default@part_change_various_various_struct1
-PREHOOK: Input: default@part_change_various_various_struct1@part=1
-#### A masked pattern was here ####
-POSTHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@part_change_various_various_struct1
-POSTHOOK: Input: default@part_change_various_various_struct1@part=1
-#### A masked pattern was here ####
-insert_num parts1  b
-1  1   
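
The projection above returns whole struct values for s1; individual fields are reached with dot notation. A one-line sketch, where f1 is a placeholder since the struct's field list was stripped from this archive:

  select insert_num, part, s1.f1, b
  from part_change_various_various_struct1;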

[06/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_part_all_primitive.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_part_all_primitive.q.out
 
b/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_part_all_primitive.q.out
deleted file mode 100644
index c094459..0000000
--- 
a/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_part_all_primitive.q.out
+++ /dev/null
@@ -1,2953 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_boolean
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_boolean
-PREHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).b 
SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col11,
 type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c1 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c2 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c3 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c4 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col5, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c5 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col6, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c6 
EXPRESSION 

[21/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_orc_acid_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_orc_acid_mapwork_table.q.out 
b/ql/src/test/results/clientpositive/schema_evol_orc_acid_mapwork_table.q.out
deleted file mode 100644
index ede8693..0000000
--- 
a/ql/src/test/results/clientpositive/schema_evol_orc_acid_mapwork_table.q.out
+++ /dev/null
@@ -1,3209 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Non-Vectorized, MapWork, Table
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING) 
clustered by (a) into 2 buckets STORED AS ORC  TBLPROPERTIES 
('transactional'='true')
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, ACID Non-Vectorized, MapWork, Table
--- *IMPORTANT NOTE* We set hive.exec.schema.evolution=false above since schema 
evolution is always used for ACID.
--- Also, we don't do EXPLAINs on ACID files because the transaction id causes 
Q file statistics differences...
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING) 
clustered by (a) into 2 buckets STORED AS ORC  TBLPROPERTIES 
('transactional'='true')
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@table_add_int_permute_select
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: insert into table table_add_int_permute_select
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (5, 1, 'new', 10),
-   (6, 2, 'new', 20),
-   (7, 3, 'new', 30),
-   (8, 4, 'new', 40)
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.c EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__2)values__tmp__table__2.FieldSchema(name:tmp_values_col1, 
type:string, 

[13/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_table.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_table.q.out 
b/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_table.q.out
deleted file mode 100644
index b9433e9..0000000
--- 
a/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_table.q.out
+++ /dev/null
@@ -1,3859 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@table_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Detailed Table Information
-Database:  default  
- A masked pattern was here 
-Retention: 0
- A masked pattern was here 
-Table Type:MANAGED_TABLE
-Table Parameters:   
-   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
-   numFiles0   
-   numRows 0   
-   rawDataSize 0   
-   totalSize   0   
- A masked pattern was here 
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
-InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
-OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@table_add_int_permute_select
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: 
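
The "permute select" half of the test name, on a reasonable reading, is the
read side of the same scenario: projecting the evolved table in an order that
differs from the storage order. A hedged sketch against the table above:

-- The added column can appear anywhere in the projection, including ahead
-- of the original columns.
SELECT insert_num, c, b, a
FROM table_add_int_permute_select
ORDER BY insert_num;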

[12/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_part.q.out
--
diff --git a/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_part.q.out b/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_part.q.out
deleted file mode 100644
index b4ca786..0000000
--- a/ql/src/test/results/clientpositive/schema_evol_orc_vec_mapwork_part.q.out
+++ /dev/null
@@ -1,4131 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Vectorized, MapWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@part_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Partition Information 
-# col_name data_type   comment 
-
-part   int 
-
-# Detailed Table Information
-Database:  default  
- A masked pattern was here 
-Retention: 0
- A masked pattern was here 
-Table Type:MANAGED_TABLE
-Table Parameters:   
- A masked pattern was here 
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
-InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
-OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@part_add_int_permute_select
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@part_add_int_permute_select
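
For the partitioned variants the wrinkle is that a plain ADD COLUMNS only
updates the table-level schema; partitions created before the ALTER keep
their old column lists in the metastore, which is exactly what the
"Table-Non-Cascade" steps probe. A sketch of both forms:

CREATE TABLE part_add_int_permute_select (insert_num INT, a INT, b STRING)
PARTITIONED BY (part INT) STORED AS ORC;

INSERT INTO TABLE part_add_int_permute_select PARTITION (part = 1)
VALUES (1, 1, 'original');

-- Non-cascade: only the table schema gains 'c'; partition part=1 keeps its
-- original metadata and the reader reconciles the two at scan time.
ALTER TABLE part_add_int_permute_select ADD COLUMNS (c INT);

-- The CASCADE form would push the new column into existing partition
-- metadata as well:
-- ALTER TABLE part_add_int_permute_select ADD COLUMNS (c INT) CASCADE;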

[18/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_fetchwork_part.q.out
--
diff --git a/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_fetchwork_part.q.out b/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_fetchwork_part.q.out
deleted file mode 100644
index dd5352a..0000000
--- a/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_fetchwork_part.q.out
+++ /dev/null
@@ -1,3819 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, FetchWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, FetchWork, Partitioned
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE part_add_int_permute_select(insert_num int, a INT, b STRING) 
PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@part_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED part_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@part_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Partition Information 
-# col_name data_type   comment 
-
-part   int 
-
-# Detailed Table Information
-Database:  default  
- A masked pattern was here 
-Retention: 0
- A masked pattern was here 
-Table Type:MANAGED_TABLE
-Table Parameters:   
- A masked pattern was here 
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
-InputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcInputFormat  
-OutputFormat:  org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
 
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table part_add_int_permute_select partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: query: insert into table part_add_int_permute_select 
partition(part=1)
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_add_int_permute_select@part=1
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_add_int_permute_select PARTITION(part=1).insert_num 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@part_add_int_permute_select
-PREHOOK: Output: default@part_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table part_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: 
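
The variations named in these file names (Vectorized vs Non-Vectorized,
FetchWork vs MapWork) correspond to session settings the .q files flip before
running the same statements. Plausible values for the "Non-Vectorized,
FetchWork" case here, using real Hive knobs but reconstructed rather than
quoted from the deleted file:

SET hive.vectorized.execution.enabled=false;
-- 'more' lets simple SELECTs run as a client-side fetch task with no
-- MapReduce job; 'none' would force the MapWork variants instead.
SET hive.fetch.task.conversion=more;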

[02/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_part_all_primitive.q.out
--
diff --git a/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_part_all_primitive.q.out b/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_part_all_primitive.q.out
deleted file mode 100644
index 91cefe2..0000000
--- a/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_part_all_primitive.q.out
+++ /dev/null
@@ -1,2973 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
--- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the vector SERDE methods.
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_boolean
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
primitive conversions
--- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the vector SERDE methods.
---
---
--- SECTION: ALTER TABLE CHANGE COLUMNS Various --> Various
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: (BYTE, 
SHORT, INT, LONG, FLOAT, DOUBLE, DECIMAL, STRING, TIMESTAMP) --> BOOLEAN
---
-CREATE TABLE part_change_various_various_boolean(insert_num int, c1 TINYINT, 
c2 SMALLINT, c3 INT, c4 BIGINT, c5 FLOAT, c6 DOUBLE, c7 DECIMAL(38,18), c8 
STRING, c9 TIMESTAMP, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_boolean
-PREHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: query: insert into table part_change_various_various_boolean 
partition(part=1)
-values(1, 255, 2000, 72909, 3244222, -29.0764, 470614135, 470614135, 
'true', '0004-09-22 18:26:29.51954', 'original'),
-  (2, 45, 1000, 483777, -23866739993, -3651.672121, 46114.284799488, 
46114.284799488, '', '2007-02-09 05:17:29.368756876', 'original'),
-  (3, 200, 72909, 3244222, -93222, 30.774, -66475.561431, 
-66475.561431, '1', '6229-06-28 02:54:28.970117179', 'original'),
-  (4, 1, 9, 754072151, 3289094, 46114.284799488 ,9250340.75, 
9250340.75, 'time will come', '2002-05-10 05:29:48.990818073', 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@part_change_various_various_boolean@part=1
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).b 
SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col11,
 type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c1 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c2 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c3 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col4, 
type:string, comment:), ]
-POSTHOOK: Lineage: part_change_various_various_boolean PARTITION(part=1).c4 
EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col5, 
type:string, 
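
The Various --> BOOLEAN subsection retypes columns that already hold data and
re-reads the old partition through the new types. A reduced sketch, with
illustrative table and column names standing in for
part_change_various_various_boolean, and assuming the metastore's
incompatible-change guard is relaxed as these tests do:

SET hive.metastore.disallow.incompatible.col.type.changes=false;

-- Illustrative stand-in table, not the test's full nine-column schema.
CREATE TABLE part_change_to_boolean (insert_num INT, c1 TINYINT, c8 STRING, b STRING)
PARTITIONED BY (part INT);

INSERT INTO TABLE part_change_to_boolean PARTITION (part = 1)
VALUES (1, 1, 'true', 'original');

-- Retype in place: the data files are untouched and simply get
-- reinterpreted (nonzero numerics read back as true, and so on).
ALTER TABLE part_change_to_boolean CHANGE COLUMN c1 c1 BOOLEAN;
ALTER TABLE part_change_to_boolean CHANGE COLUMN c8 c8 BOOLEAN;

SELECT insert_num, part, c1, c8, b FROM part_change_to_boolean;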

[03/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_part_all_complex.q.out
--
diff --git a/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_part_all_complex.q.out b/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_part_all_complex.q.out
deleted file mode 100644
index 7f2ada9..0000000
--- a/ql/src/test/results/clientpositive/schema_evol_text_vec_mapwork_part_all_complex.q.out
+++ /dev/null
@@ -1,698 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
--- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the vector SERDE methods.
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_struct1
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Partitioned --> all 
complex conversions
--- NOTE: the use of hive.vectorized.use.vector.serde.deserialize above which 
enables doing
---  vectorized reading of TEXTFILE format files using the vector SERDE methods.
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_struct1
-PREHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-PREHOOK: type: LOAD
- A masked pattern was here 
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-POSTHOOK: type: LOAD
- A masked pattern was here 
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-PREHOOK: type: QUERY
-PREHOOK: Input: default@struct1_a_txt
-PREHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@struct1_a_txt
-POSTHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).b 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:b, type:string, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 
PARTITION(part=1).insert_num SIMPLE 
[(struct1_a_txt)struct1_a_txt.FieldSchema(name:insert_num, type:int, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).s1 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:s1, 
type:struct,
 comment:null), ]
-struct1_a_txt.insert_num   struct1_a_txt.s1struct1_a_txt.b
-PREHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-PREHOOK: type: QUERY
-PREHOOK: Input: default@part_change_various_various_struct1
-PREHOOK: Input: default@part_change_various_various_struct1@part=1
- A 
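
The archive has stripped the angle-bracketed field lists from every STRUCT
type above, so the exact shapes are unrecoverable here; the sketch below uses
a made-up two-field struct purely to show the staging pattern the test relies
on, where complex values come from a delimited text file rather than a VALUES
clause:

CREATE TABLE struct1_a_txt (
  insert_num INT,
  s1 STRUCT<c1:TINYINT, c2:STRING>,  -- illustrative fields, not the test's
  b STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
COLLECTION ITEMS TERMINATED BY ','
MAP KEYS TERMINATED BY ':'
STORED AS TEXTFILE;

LOAD DATA LOCAL INPATH '../../data/files/struct1_a.txt'
OVERWRITE INTO TABLE struct1_a_txt;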

[15/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_part_all_complex.q.out
--
diff --git a/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_part_all_complex.q.out b/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_part_all_complex.q.out
deleted file mode 100644
index 571b123..0000000
--- a/ql/src/test/results/clientpositive/schema_evol_orc_nonvec_mapwork_part_all_complex.q.out
+++ /dev/null
@@ -1,694 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned --> all complex 
conversions
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@part_change_various_various_struct1
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: ORC, Non-Vectorized, MapWork, Partitioned --> all complex 
conversions
---
---
---
--- SUBSECTION: ALTER TABLE CHANGE COLUMNS for Various --> Various: 
STRUCT --> STRUCT, b STRING) PARTITIONED BY(part INT)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@part_change_various_various_struct1
-PREHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: CREATE TABLE struct1_a_txt(insert_num int, s1 
STRUCT, b STRING)
-row format delimited fields terminated by '|'
-collection items terminated by ','
-map keys terminated by ':' stored as textfile
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-PREHOOK: type: LOAD
- A masked pattern was here 
-PREHOOK: Output: default@struct1_a_txt
-POSTHOOK: query: load data local inpath '../../data/files/struct1_a.txt' 
overwrite into table struct1_a_txt
-POSTHOOK: type: LOAD
- A masked pattern was here 
-POSTHOOK: Output: default@struct1_a_txt
-PREHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-PREHOOK: type: QUERY
-PREHOOK: Input: default@struct1_a_txt
-PREHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: query: insert into table part_change_various_various_struct1 
partition(part=1) select * from struct1_a_txt
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@struct1_a_txt
-POSTHOOK: Output: default@part_change_various_various_struct1@part=1
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).b 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:b, type:string, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 
PARTITION(part=1).insert_num SIMPLE 
[(struct1_a_txt)struct1_a_txt.FieldSchema(name:insert_num, type:int, 
comment:null), ]
-POSTHOOK: Lineage: part_change_various_various_struct1 PARTITION(part=1).s1 
SIMPLE [(struct1_a_txt)struct1_a_txt.FieldSchema(name:s1, 
type:struct,
 comment:null), ]
-struct1_a_txt.insert_num   struct1_a_txt.s1struct1_a_txt.b
-PREHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-PREHOOK: type: QUERY
-PREHOOK: Input: default@part_change_various_various_struct1
-PREHOOK: Input: default@part_change_various_various_struct1@part=1
- A masked pattern was here 
-POSTHOOK: query: select insert_num,part,s1,b from 
part_change_various_various_struct1
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@part_change_various_various_struct1
-POSTHOOK: Input: default@part_change_various_various_struct1@part=1
- A masked pattern was here 
-insert_num parts1  b
-1  1
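
One detail worth making explicit about the pattern repeated in these
complex-type files: Hive's INSERT ... VALUES path does not accept complex
types, which is why the tests stage rows in a text table and copy them
across, as in the statements above:

-- Staged copy: the practical way to get struct values into the
-- partitioned target table.
INSERT INTO TABLE part_change_various_various_struct1 PARTITION (part = 1)
SELECT * FROM struct1_a_txt;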

[05/51] [partial] hive git commit: HIVE-15560: clean up out files that do not correspond to any q files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-06 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/8230b579/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_table.q.out
--
diff --git a/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_table.q.out b/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_table.q.out
deleted file mode 100644
index 561f662..0000000
--- a/ql/src/test/results/clientpositive/schema_evol_text_nonvec_mapwork_table.q.out
+++ /dev/null
@@ -1,3859 +0,0 @@
-PREHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- SORT_QUERY_RESULTS
---
--- FILE VARIATION: TEXTFILE, Non-Vectorized, MapWork, Table
---
---
--- SECTION: ALTER TABLE ADD COLUMNS
---
---
--- SUBSECTION: ALTER TABLE ADD COLUMNS: INT PERMUTE SELECT
---
---
-CREATE TABLE table_add_int_permute_select(insert_num int, a INT, b STRING)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@table_add_int_permute_select
-PREHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-PREHOOK: type: DESCTABLE
-PREHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: query: DESCRIBE FORMATTED table_add_int_permute_select
-POSTHOOK: type: DESCTABLE
-POSTHOOK: Input: default@table_add_int_permute_select
-col_name   data_type   comment
-# col_name data_type   comment 
-
-insert_num int 
-a  int 
-b  string  
-
-# Detailed Table Information
-Database:  default  
- A masked pattern was here 
-Retention: 0
- A masked pattern was here 
-Table Type:MANAGED_TABLE
-Table Parameters:   
-   COLUMN_STATS_ACCURATE   {\"BASIC_STATS\":\"true\"}
-   numFiles0   
-   numRows 0   
-   rawDataSize 0   
-   totalSize   0   
- A masked pattern was here 
-
-# Storage Information   
-SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe  
 
-InputFormat:   org.apache.hadoop.mapred.TextInputFormat 
-OutputFormat:  
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat   
-Compressed:No   
-Num Buckets:   -1   
-Bucket Columns:[]   
-Sort Columns:  []   
-Storage Desc Params:
-   serialization.format1   
-PREHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-PREHOOK: type: QUERY
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: insert into table table_add_int_permute_select
-values (1, 1, 'original'),
-   (2, 2, 'original'),
-   (3, 3, 'original'),
-   (4, 4, 'original')
-POSTHOOK: type: QUERY
-POSTHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: Lineage: table_add_int_permute_select.a EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col2, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.b SIMPLE 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col3, 
type:string, comment:), ]
-POSTHOOK: Lineage: table_add_int_permute_select.insert_num EXPRESSION 
[(values__tmp__table__1)values__tmp__table__1.FieldSchema(name:tmp_values_col1, 
type:string, comment:), ]
-_col0  _col1   _col2
-PREHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-PREHOOK: type: ALTERTABLE_ADDCOLS
-PREHOOK: Input: default@table_add_int_permute_select
-PREHOOK: Output: default@table_add_int_permute_select
-POSTHOOK: query: -- Table-Non-Cascade ADD COLUMNS ...
-alter table table_add_int_permute_select add columns(c int)
-POSTHOOK: type: ALTERTABLE_ADDCOLS
-POSTHOOK: Input: default@table_add_int_permute_select
-POSTHOOK: Output: 
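
The storage section in this DESCRIBE output pins down what the TEXTFILE
variation means: LazySimpleSerDe with TextInputFormat and
HiveIgnoreKeyTextOutputFormat is exactly what a plain STORED AS TEXTFILE
produces, so the table reduces to:

CREATE TABLE table_add_int_permute_select (insert_num INT, a INT, b STRING)
STORED AS TEXTFILE;

-- The vectorized-TEXTFILE variations referenced elsewhere in this commit
-- additionally enable the vector SerDe read path:
SET hive.vectorized.use.vector.serde.deserialize=true;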

[16/51] [partial] hive git commit: HIVE-15790: Remove unused beeline golden files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-03 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/3890ed65/ql/src/test/results/beelinepositive/groupby9.q.out
--
diff --git a/ql/src/test/results/beelinepositive/groupby9.q.out b/ql/src/test/results/beelinepositive/groupby9.q.out
deleted file mode 100644
index 7b5f863..0000000
--- a/ql/src/test/results/beelinepositive/groupby9.q.out
+++ /dev/null
@@ -1,4204 +0,0 @@
-Saving all output to "!!{outputDirectory}!!/groupby9.q.raw". Enter "record" 
with no arguments to stop it.
->>>  !run !!{qFileDirectory}!!/groupby9.q
->>>  
->>>  CREATE TABLE DEST1(key INT, value STRING) STORED AS TEXTFILE;
-No rows affected 
->>>  CREATE TABLE DEST2(key INT, val1 STRING, val2 STRING) STORED AS TEXTFILE;
-No rows affected 
->>>  
->>>  EXPLAIN 
-FROM SRC 
-INSERT OVERWRITE TABLE DEST1 SELECT SRC.key, COUNT(DISTINCT 
SUBSTR(SRC.value,5)) GROUP BY SRC.key 
-INSERT OVERWRITE TABLE DEST2 SELECT SRC.key, SRC.value, COUNT(DISTINCT 
SUBSTR(SRC.value,5)) GROUP BY SRC.key, SRC.value;
-'Explain'
-'ABSTRACT SYNTAX TREE:'
-'  (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME SRC))) (TOK_INSERT 
(TOK_DESTINATION (TOK_TAB (TOK_TABNAME DEST1))) (TOK_SELECT (TOK_SELEXPR (. 
(TOK_TABLE_OR_COL SRC) key)) (TOK_SELEXPR (TOK_FUNCTIONDI COUNT (TOK_FUNCTION 
SUBSTR (. (TOK_TABLE_OR_COL SRC) value) 5 (TOK_GROUPBY (. (TOK_TABLE_OR_COL 
SRC) key))) (TOK_INSERT (TOK_DESTINATION (TOK_TAB (TOK_TABNAME DEST2))) 
(TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL SRC) key)) (TOK_SELEXPR (. 
(TOK_TABLE_OR_COL SRC) value)) (TOK_SELEXPR (TOK_FUNCTIONDI COUNT (TOK_FUNCTION 
SUBSTR (. (TOK_TABLE_OR_COL SRC) value) 5 (TOK_GROUPBY (. (TOK_TABLE_OR_COL 
SRC) key) (. (TOK_TABLE_OR_COL SRC) value'
-''
-'STAGE DEPENDENCIES:'
-'  Stage-2 is a root stage'
-'  Stage-3 depends on stages: Stage-2'
-'  Stage-0 depends on stages: Stage-3'
-'  Stage-4 depends on stages: Stage-0'
-'  Stage-5 depends on stages: Stage-2'
-'  Stage-1 depends on stages: Stage-5'
-'  Stage-6 depends on stages: Stage-1'
-''
-'STAGE PLANS:'
-'  Stage: Stage-2'
-'Map Reduce'
-'  Alias -> Map Operator Tree:'
-'src '
-'  TableScan'
-'alias: src'
-'Reduce Output Operator'
-'  key expressions:'
-'expr: substr(value, 5)'
-'type: string'
-'  sort order: +'
-'  Map-reduce partition columns:'
-'expr: substr(value, 5)'
-'type: string'
-'  tag: -1'
-'  value expressions:'
-'expr: key'
-'type: string'
-'expr: value'
-'type: string'
-'  Reduce Operator Tree:'
-'Forward'
-'  Group By Operator'
-'aggregations:'
-'  expr: count(DISTINCT KEY._col0)'
-'bucketGroup: false'
-'keys:'
-'  expr: VALUE._col0'
-'  type: string'
-'mode: hash'
-'outputColumnNames: _col0, _col1'
-'File Output Operator'
-'  compressed: false'
-'  GlobalTableId: 0'
-'  table:'
-'  input format: 
org.apache.hadoop.mapred.SequenceFileInputFormat'
-'  output format: 
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat'
-'  Group By Operator'
-'aggregations:'
-'  expr: count(DISTINCT KEY._col0)'
-'bucketGroup: false'
-'keys:'
-'  expr: VALUE._col0'
-'  type: string'
-'  expr: VALUE._col1'
-'  type: string'
-'mode: hash'
-'outputColumnNames: _col0, _col1, _col2'
-'File Output Operator'
-'  compressed: false'
-'  GlobalTableId: 0'
-'  table:'
-'  input format: 
org.apache.hadoop.mapred.SequenceFileInputFormat'
-'  output format: 
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat'
-''
-'  Stage: Stage-3'
-'Map Reduce'
-'  Alias -> Map Operator Tree:'
-'file:!!{hive.exec.scratchdir}!! '
-'Reduce Output Operator'
-'  key expressions:'
-'expr: _col0'
-'type: string'
-'  sort order: +'
-'  Map-reduce partition columns:'
-'expr: _col0'
-'type: string'
-'  tag: -1'
-'  value expressions:'
-'expr: _col1'
-'type: bigint'
-'  Reduce Operator Tree:'
-'Group By Operator'
-'  aggregations:'
-'expr: count(VALUE._col0)'
-'  bucketGroup: false'
-'  keys:'
-'expr: KEY._col0'
-'type: string'
-'  mode: final'
-'  outputColumnNames: _col0, _col1'
-'  Select Operator'
-'
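
The groupby9 plan above comes from a single multi-insert statement: one scan
of src feeds the Forward operator, which fans out into two independent
DISTINCT-aggregation pipelines, one per target table. The driving statement,
as quoted near the top of the deleted file:

-- dest1 and dest2 as created at the top of the script.
FROM src
INSERT OVERWRITE TABLE dest1
  SELECT src.key, COUNT(DISTINCT SUBSTR(src.value, 5))
  GROUP BY src.key
INSERT OVERWRITE TABLE dest2
  SELECT src.key, src.value, COUNT(DISTINCT SUBSTR(src.value, 5))
  GROUP BY src.key, src.value;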

[28/51] [partial] hive git commit: HIVE-15790: Remove unused beeline golden files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-03 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/3890ed65/ql/src/test/results/beelinepositive/bucketmapjoin5.q.out
--
diff --git a/ql/src/test/results/beelinepositive/bucketmapjoin5.q.out b/ql/src/test/results/beelinepositive/bucketmapjoin5.q.out
deleted file mode 100644
index 04ae695..0000000
--- a/ql/src/test/results/beelinepositive/bucketmapjoin5.q.out
+++ /dev/null
@@ -1,1008 +0,0 @@
-Saving all output to "!!{outputDirectory}!!/bucketmapjoin5.q.raw". Enter 
"record" with no arguments to stop it.
->>>  !run !!{qFileDirectory}!!/bucketmapjoin5.q
->>>  CREATE TABLE srcbucket_mapjoin(key int, value string) CLUSTERED BY (key) 
INTO 2 BUCKETS STORED AS TEXTFILE;
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket20.txt' INTO TABLE 
srcbucket_mapjoin;
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket21.txt' INTO TABLE 
srcbucket_mapjoin;
-No rows affected 
->>>  
->>>  CREATE TABLE srcbucket_mapjoin_part (key int, value string) partitioned 
by (ds string) CLUSTERED BY (key) INTO 4 BUCKETS STORED AS TEXTFILE;
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket20.txt' INTO TABLE 
srcbucket_mapjoin_part partition(ds='2008-04-08');
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket21.txt' INTO TABLE 
srcbucket_mapjoin_part partition(ds='2008-04-08');
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket22.txt' INTO TABLE 
srcbucket_mapjoin_part partition(ds='2008-04-08');
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket23.txt' INTO TABLE 
srcbucket_mapjoin_part partition(ds='2008-04-08');
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket20.txt' INTO TABLE 
srcbucket_mapjoin_part partition(ds='2008-04-09');
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket21.txt' INTO TABLE 
srcbucket_mapjoin_part partition(ds='2008-04-09');
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket22.txt' INTO TABLE 
srcbucket_mapjoin_part partition(ds='2008-04-09');
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket23.txt' INTO TABLE 
srcbucket_mapjoin_part partition(ds='2008-04-09');
-No rows affected 
->>>  
->>>  CREATE TABLE srcbucket_mapjoin_part_2 (key int, value string) partitioned 
by (ds string) CLUSTERED BY (key) INTO 2 BUCKETS STORED AS TEXTFILE;
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket22.txt' INTO TABLE 
srcbucket_mapjoin_part_2 partition(ds='2008-04-08');
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket23.txt' INTO TABLE 
srcbucket_mapjoin_part_2 partition(ds='2008-04-08');
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket22.txt' INTO TABLE 
srcbucket_mapjoin_part_2 partition(ds='2008-04-09');
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket23.txt' INTO TABLE 
srcbucket_mapjoin_part_2 partition(ds='2008-04-09');
-No rows affected 
->>>  
->>>  create table bucketmapjoin_hash_result_1 (key bigint , value1 bigint, 
value2 bigint);
-No rows affected 
->>>  create table bucketmapjoin_hash_result_2 (key bigint , value1 bigint, 
value2 bigint);
-No rows affected 
->>>  
->>>  set hive.optimize.bucketmapjoin = true;
-No rows affected 
->>>  create table bucketmapjoin_tmp_result (key string , value1 string, value2 
string);
-No rows affected 
->>>  
->>>  explain extended 
-insert overwrite table bucketmapjoin_tmp_result 
-select /*+mapjoin(a)*/ a.key, a.value, b.value 
-from srcbucket_mapjoin a join srcbucket_mapjoin_part b 
-on a.key=b.key;
-'Explain'
-'ABSTRACT SYNTAX TREE:'
-'  (TOK_QUERY (TOK_FROM (TOK_JOIN (TOK_TABREF (TOK_TABNAME srcbucket_mapjoin) 
a) (TOK_TABREF (TOK_TABNAME srcbucket_mapjoin_part) b) (= (. (TOK_TABLE_OR_COL 
a) key) (. (TOK_TABLE_OR_COL b) key (TOK_INSERT (TOK_DESTINATION (TOK_TAB 
(TOK_TABNAME bucketmapjoin_tmp_result))) (TOK_SELECT (TOK_HINTLIST (TOK_HINT 
TOK_MAPJOIN (TOK_HINTARGLIST a))) (TOK_SELEXPR (. (TOK_TABLE_OR_COL a) key)) 
(TOK_SELEXPR (. (TOK_TABLE_OR_COL a) value)) (TOK_SELEXPR (. (TOK_TABLE_OR_COL 
b) value)'
-''
-'STAGE DEPENDENCIES:'
-'  Stage-9 is a root stage'
-'  Stage-1 depends on stages: Stage-9'
-'  Stage-7 depends on stages: Stage-1 , consists of Stage-4, Stage-3, Stage-5'
-'  Stage-4'
-'  Stage-0 depends on stages: Stage-4, Stage-3, Stage-6'
-'  Stage-2 depends on stages: Stage-0'
-'  Stage-3'
-'  Stage-5'
-'  Stage-6 depends on stages: Stage-5'
-''
-'STAGE PLANS:'
-'  Stage: Stage-9'
-'Map Reduce Local Work'
-'  Alias -> Map Local Tables:'
-'a '
-'  Fetch Operator'
-'limit: -1'
-'  Alias -> Map Local Operator Tree:'
-'a '
-'  TableScan'
-'alias: a'
-'GatherStats: false'
-'HashTable Sink Operator'
-'  condition expressions:'
-'0 {key} {value}'
-'1 
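
bucketmapjoin5 checks that the bucketed map join engages when the small
side's buckets line up with each partition of the big side. The core of the
setup, condensed from the statements above:

SET hive.optimize.bucketmapjoin=true;

-- srcbucket_mapjoin has 2 buckets and srcbucket_mapjoin_part has 4 per
-- partition, both clustered by key, so each mapper of the big table only
-- needs the matching bucket files of the small one.
SELECT /*+ MAPJOIN(a) */ a.key, a.value, b.value
FROM srcbucket_mapjoin a
JOIN srcbucket_mapjoin_part b ON a.key = b.key;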

[07/51] [partial] hive git commit: HIVE-15790: Remove unused beeline golden files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-03 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/3890ed65/ql/src/test/results/beelinepositive/input20.q.out
--
diff --git a/ql/src/test/results/beelinepositive/input20.q.out b/ql/src/test/results/beelinepositive/input20.q.out
deleted file mode 100644
index f1f9c90..0000000
--- a/ql/src/test/results/beelinepositive/input20.q.out
+++ /dev/null
@@ -1,437 +0,0 @@
-Saving all output to "!!{outputDirectory}!!/input20.q.raw". Enter "record" 
with no arguments to stop it.
->>>  !run !!{qFileDirectory}!!/input20.q
->>>  CREATE TABLE dest1(key INT, value STRING) STORED AS TEXTFILE;
-No rows affected 
->>>  
->>>  ADD FILE ../data/scripts/input20_script.py;
-No rows affected 
->>>  
->>>  EXPLAIN 
-FROM ( 
-FROM src 
-MAP src.key, src.key 
-USING 'cat' 
-DISTRIBUTE BY key 
-SORT BY key, value 
-) tmap 
-INSERT OVERWRITE TABLE dest1 
-REDUCE tmap.key, tmap.value 
-USING 'python input20_script.py' 
-AS key, value;
-'Explain'
-'ABSTRACT SYNTAX TREE:'
-'  (TOK_QUERY (TOK_FROM (TOK_SUBQUERY (TOK_QUERY (TOK_FROM (TOK_TABREF 
(TOK_TABNAME src))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) 
(TOK_SELECT (TOK_SELEXPR (TOK_TRANSFORM (TOK_EXPLIST (. (TOK_TABLE_OR_COL src) 
key) (. (TOK_TABLE_OR_COL src) key)) TOK_SERDE TOK_RECORDWRITER 'cat' TOK_SERDE 
TOK_RECORDREADER))) (TOK_DISTRIBUTEBY (TOK_TABLE_OR_COL key)) (TOK_SORTBY 
(TOK_TABSORTCOLNAMEASC (TOK_TABLE_OR_COL key)) (TOK_TABSORTCOLNAMEASC 
(TOK_TABLE_OR_COL value) tmap)) (TOK_INSERT (TOK_DESTINATION (TOK_TAB 
(TOK_TABNAME dest1))) (TOK_SELECT (TOK_SELEXPR (TOK_TRANSFORM (TOK_EXPLIST (. 
(TOK_TABLE_OR_COL tmap) key) (. (TOK_TABLE_OR_COL tmap) value)) TOK_SERDE 
TOK_RECORDWRITER 'python input20_script.py' TOK_SERDE TOK_RECORDREADER 
(TOK_ALIASLIST key value))'
-''
-'STAGE DEPENDENCIES:'
-'  Stage-1 is a root stage'
-'  Stage-0 depends on stages: Stage-1'
-'  Stage-2 depends on stages: Stage-0'
-''
-'STAGE PLANS:'
-'  Stage: Stage-1'
-'Map Reduce'
-'  Alias -> Map Operator Tree:'
-'tmap:src '
-'  TableScan'
-'alias: src'
-'Select Operator'
-'  expressions:'
-'expr: key'
-'type: string'
-'expr: key'
-'type: string'
-'  outputColumnNames: _col0, _col1'
-'  Transform Operator'
-'command: cat'
-'output info:'
-'input format: org.apache.hadoop.mapred.TextInputFormat'
-'output format: 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
-'Reduce Output Operator'
-'  key expressions:'
-'expr: _col0'
-'type: string'
-'expr: _col1'
-'type: string'
-'  sort order: ++'
-'  Map-reduce partition columns:'
-'expr: _col0'
-'type: string'
-'  tag: -1'
-'  value expressions:'
-'expr: _col0'
-'type: string'
-'expr: _col1'
-'type: string'
-'  Reduce Operator Tree:'
-'Extract'
-'  Select Operator'
-'expressions:'
-'  expr: _col0'
-'  type: string'
-'  expr: _col1'
-'  type: string'
-'outputColumnNames: _col0, _col1'
-'Transform Operator'
-'  command: python input20_script.py'
-'  output info:'
-'  input format: org.apache.hadoop.mapred.TextInputFormat'
-'  output format: 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
-'  Select Operator'
-'expressions:'
-'  expr: UDFToInteger(_col0)'
-'  type: int'
-'  expr: _col1'
-'  type: string'
-'outputColumnNames: _col0, _col1'
-'File Output Operator'
-'  compressed: false'
-'  GlobalTableId: 1'
-'  table:'
-'  input format: org.apache.hadoop.mapred.TextInputFormat'
-'  output format: 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
-'  serde: 
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
-'  name: input20.dest1'
-''
-'  Stage: Stage-0'
-'Move Operator'
-'  tables:'
-'  replace: true'
-'  table:'
-'  input format: org.apache.hadoop.mapred.TextInputFormat'
-'  output format: 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
-'  serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
-'  name: input20.dest1'
-''
-'  Stage: Stage-2'
-'Stats-Aggr 
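
input20 wraps two streaming transforms: the inner MAP pipes rows through cat,
DISTRIBUTE BY plus SORT BY fixes how they reach the reducers, and the outer
REDUCE pipes each reducer's input through the Python script. The driving
statement, condensed from the EXPLAIN above:

-- dest1 and input20_script.py as created/added at the top of the script.
ADD FILE ../data/scripts/input20_script.py;

FROM (
  FROM src
  MAP src.key, src.key
  USING 'cat'
  DISTRIBUTE BY key
  SORT BY key, value
) tmap
INSERT OVERWRITE TABLE dest1
REDUCE tmap.key, tmap.value
USING 'python input20_script.py'
AS key, value;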

[33/51] [partial] hive git commit: HIVE-15790: Remove unused beeline golden files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-03 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/3890ed65/ql/src/test/results/beelinepositive/bucketmapjoin1.q.out
--
diff --git a/ql/src/test/results/beelinepositive/bucketmapjoin1.q.out b/ql/src/test/results/beelinepositive/bucketmapjoin1.q.out
deleted file mode 100644
index e7a798b..0000000
--- a/ql/src/test/results/beelinepositive/bucketmapjoin1.q.out
+++ /dev/null
@@ -1,1131 +0,0 @@
-Saving all output to "!!{outputDirectory}!!/bucketmapjoin1.q.raw". Enter 
"record" with no arguments to stop it.
->>>  !run !!{qFileDirectory}!!/bucketmapjoin1.q
->>>  CREATE TABLE srcbucket_mapjoin(key int, value string) CLUSTERED BY (key) 
INTO 2 BUCKETS STORED AS TEXTFILE;
-No rows affected 
->>>  
->>>  CREATE TABLE srcbucket_mapjoin_part (key int, value string) partitioned 
by (ds string) CLUSTERED BY (key) INTO 4 BUCKETS STORED AS TEXTFILE;
-No rows affected 
->>>  
->>>  CREATE TABLE srcbucket_mapjoin_part_2 (key int, value string) partitioned 
by (ds string) CLUSTERED BY (key) INTO 2 BUCKETS STORED AS TEXTFILE;
-No rows affected 
->>>  
->>>  set hive.optimize.bucketmapjoin = true;
-No rows affected 
->>>  
->>>  -- empty partitions (HIVE-3205)
->>>  explain extended 
-select /*+mapjoin(b)*/ a.key, a.value, b.value 
-from srcbucket_mapjoin_part a join srcbucket_mapjoin_part_2 b 
-on a.key=b.key where b.ds="2008-04-08";
-'Explain'
-'ABSTRACT SYNTAX TREE:'
-'  (TOK_QUERY (TOK_FROM (TOK_JOIN (TOK_TABREF (TOK_TABNAME 
srcbucket_mapjoin_part) a) (TOK_TABREF (TOK_TABNAME srcbucket_mapjoin_part_2) 
b) (= (. (TOK_TABLE_OR_COL a) key) (. (TOK_TABLE_OR_COL b) key (TOK_INSERT 
(TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_HINTLIST (TOK_HINT 
TOK_MAPJOIN (TOK_HINTARGLIST b))) (TOK_SELEXPR (. (TOK_TABLE_OR_COL a) key)) 
(TOK_SELEXPR (. (TOK_TABLE_OR_COL a) value)) (TOK_SELEXPR (. (TOK_TABLE_OR_COL 
b) value))) (TOK_WHERE (= (. (TOK_TABLE_OR_COL b) ds) "2008-04-08"'
-''
-'STAGE DEPENDENCIES:'
-'  Stage-3 is a root stage'
-'  Stage-1 depends on stages: Stage-3'
-'  Stage-0 is a root stage'
-''
-'STAGE PLANS:'
-'  Stage: Stage-3'
-'Map Reduce Local Work'
-'  Alias -> Map Local Tables:'
-'b '
-'  Fetch Operator'
-'limit: -1'
-'  Alias -> Map Local Operator Tree:'
-'b '
-'  TableScan'
-'alias: b'
-'GatherStats: false'
-'Filter Operator'
-'  isSamplingPred: false'
-'  predicate:'
-'  expr: (ds = '2008-04-08')'
-'  type: boolean'
-'  HashTable Sink Operator'
-'condition expressions:'
-'  0 {key} {value}'
-'  1 {value} {ds}'
-'handleSkewJoin: false'
-'keys:'
-'  0 [Column[key]]'
-'  1 [Column[key]]'
-'Position of Big Table: 0'
-'  Bucket Mapjoin Context:'
-'  Alias Bucket Base File Name Mapping:'
-'b {}'
-'  Alias Bucket File Name Mapping:'
-'b {}'
-''
-'  Stage: Stage-1'
-'Map Reduce'
-'  Alias -> Map Operator Tree:'
-'a '
-'  TableScan'
-'alias: a'
-'GatherStats: false'
-'Map Join Operator'
-'  condition map:'
-'   Inner Join 0 to 1'
-'  condition expressions:'
-'0 {key} {value}'
-'1 {value} {ds}'
-'  handleSkewJoin: false'
-'  keys:'
-'0 [Column[key]]'
-'1 [Column[key]]'
-'  outputColumnNames: _col0, _col1, _col6, _col7'
-'  Position of Big Table: 0'
-'  Select Operator'
-'expressions:'
-'  expr: _col0'
-'  type: int'
-'  expr: _col1'
-'  type: string'
-'  expr: _col6'
-'  type: string'
-'  expr: _col7'
-'  type: string'
-'outputColumnNames: _col0, _col1, _col6, _col7'
-'Select Operator'
-'  expressions:'
-'expr: _col0'
-'type: int'
-'expr: _col1'
-'type: string'
-'expr: _col6'
-'type: string'
-'  outputColumnNames: _col0, _col1, _col2'
-'  File Output Operator'
-'compressed: false'
-'GlobalTableId: 0'
-'directory: file:!!{hive.exec.scratchdir}!!'
-'NumFilesPerFileSink: 1'
-'Stats Publishing Key Prefix: 
file:!!{hive.exec.scratchdir}!!'
-'table:'
-'input format: 
org.apache.hadoop.mapred.TextInputFormat'
-'output format: 
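
bucketmapjoin1 starts from the empty-partition edge case (HIVE-3205) noted in
the script, but its DDL also states compactly when the optimization is legal:
both sides clustered on the join key, with bucket counts that divide one
another. From the statements above:

CREATE TABLE srcbucket_mapjoin_part (key INT, value STRING)
PARTITIONED BY (ds STRING)
CLUSTERED BY (key) INTO 4 BUCKETS STORED AS TEXTFILE;

-- 2 divides 4, so each bucket of this table pairs with exactly two buckets
-- of srcbucket_mapjoin_part.
CREATE TABLE srcbucket_mapjoin_part_2 (key INT, value STRING)
PARTITIONED BY (ds STRING)
CLUSTERED BY (key) INTO 2 BUCKETS STORED AS TEXTFILE;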

[17/51] [partial] hive git commit: HIVE-15790: Remove unused beeline golden files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-03 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/3890ed65/ql/src/test/results/beelinepositive/groupby8.q.out
--
diff --git a/ql/src/test/results/beelinepositive/groupby8.q.out b/ql/src/test/results/beelinepositive/groupby8.q.out
deleted file mode 100644
index 9e09e8e..0000000
--- a/ql/src/test/results/beelinepositive/groupby8.q.out
+++ /dev/null
@@ -1,1669 +0,0 @@
-Saving all output to "!!{outputDirectory}!!/groupby8.q.raw". Enter "record" 
with no arguments to stop it.
->>>  !run !!{qFileDirectory}!!/groupby8.q
->>>  set hive.map.aggr=false;
-No rows affected 
->>>  set hive.groupby.skewindata=true;
-No rows affected 
->>>  
->>>  CREATE TABLE DEST1(key INT, value STRING) STORED AS TEXTFILE;
-No rows affected 
->>>  CREATE TABLE DEST2(key INT, value STRING) STORED AS TEXTFILE;
-No rows affected 
->>>  
->>>  EXPLAIN 
-FROM SRC 
-INSERT OVERWRITE TABLE DEST1 SELECT SRC.key, COUNT(DISTINCT 
SUBSTR(SRC.value,5)) GROUP BY SRC.key 
-INSERT OVERWRITE TABLE DEST2 SELECT SRC.key, COUNT(DISTINCT 
SUBSTR(SRC.value,5)) GROUP BY SRC.key;
-'Explain'
-'ABSTRACT SYNTAX TREE:'
-'  (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME SRC))) (TOK_INSERT 
(TOK_DESTINATION (TOK_TAB (TOK_TABNAME DEST1))) (TOK_SELECT (TOK_SELEXPR (. 
(TOK_TABLE_OR_COL SRC) key)) (TOK_SELEXPR (TOK_FUNCTIONDI COUNT (TOK_FUNCTION 
SUBSTR (. (TOK_TABLE_OR_COL SRC) value) 5 (TOK_GROUPBY (. (TOK_TABLE_OR_COL 
SRC) key))) (TOK_INSERT (TOK_DESTINATION (TOK_TAB (TOK_TABNAME DEST2))) 
(TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL SRC) key)) (TOK_SELEXPR 
(TOK_FUNCTIONDI COUNT (TOK_FUNCTION SUBSTR (. (TOK_TABLE_OR_COL SRC) value) 
5 (TOK_GROUPBY (. (TOK_TABLE_OR_COL SRC) key'
-''
-'STAGE DEPENDENCIES:'
-'  Stage-2 is a root stage'
-'  Stage-3 depends on stages: Stage-2'
-'  Stage-0 depends on stages: Stage-3'
-'  Stage-4 depends on stages: Stage-0'
-'  Stage-5 depends on stages: Stage-2'
-'  Stage-1 depends on stages: Stage-5'
-'  Stage-6 depends on stages: Stage-1'
-''
-'STAGE PLANS:'
-'  Stage: Stage-2'
-'Map Reduce'
-'  Alias -> Map Operator Tree:'
-'src '
-'  TableScan'
-'alias: src'
-'Reduce Output Operator'
-'  key expressions:'
-'expr: substr(value, 5)'
-'type: string'
-'  sort order: +'
-'  Map-reduce partition columns:'
-'expr: substr(value, 5)'
-'type: string'
-'  tag: -1'
-'  value expressions:'
-'expr: key'
-'type: string'
-'  Reduce Operator Tree:'
-'Forward'
-'  Group By Operator'
-'aggregations:'
-'  expr: count(DISTINCT KEY._col0)'
-'bucketGroup: false'
-'keys:'
-'  expr: VALUE._col0'
-'  type: string'
-'mode: hash'
-'outputColumnNames: _col0, _col1'
-'File Output Operator'
-'  compressed: false'
-'  GlobalTableId: 0'
-'  table:'
-'  input format: 
org.apache.hadoop.mapred.SequenceFileInputFormat'
-'  output format: 
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat'
-'  Group By Operator'
-'aggregations:'
-'  expr: count(DISTINCT KEY._col0)'
-'bucketGroup: false'
-'keys:'
-'  expr: VALUE._col0'
-'  type: string'
-'mode: hash'
-'outputColumnNames: _col0, _col1'
-'File Output Operator'
-'  compressed: false'
-'  GlobalTableId: 0'
-'  table:'
-'  input format: 
org.apache.hadoop.mapred.SequenceFileInputFormat'
-'  output format: 
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat'
-''
-'  Stage: Stage-3'
-'Map Reduce'
-'  Alias -> Map Operator Tree:'
-'file:!!{hive.exec.scratchdir}!! '
-'Reduce Output Operator'
-'  key expressions:'
-'expr: _col0'
-'type: string'
-'  sort order: +'
-'  Map-reduce partition columns:'
-'expr: _col0'
-'type: string'
-'  tag: -1'
-'  value expressions:'
-'expr: _col1'
-'type: bigint'
-'  Reduce Operator Tree:'
-'Group By Operator'
-'  aggregations:'
-'expr: count(VALUE._col0)'
-'  bucketGroup: false'
-'  keys:'
-'expr: KEY._col0'
-'type: string'
-'  mode: final'
-'  outputColumnNames: _col0, _col1'
-'  Select Operator'
-'expressions:'
-'  expr: _col0'
-'  type: string'
-'  expr: _col1'
-'  type: bigint'
-'
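
groupby8 runs the same DISTINCT multi-insert shape as groupby9, but under
skew-tolerant settings, which is why each insert branch compiles into two
chained MapReduce stages in the plan above. The knobs, as set at the top of
the script:

SET hive.map.aggr=false;           -- no map-side partial aggregation
-- Stage one partitions on the DISTINCT expression (substr(value, 5) in this
-- plan) so duplicates collapse early; stage two regroups by the real key.
SET hive.groupby.skewindata=true;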

[25/51] [partial] hive git commit: HIVE-15790: Remove unused beeline golden files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-03 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/3890ed65/ql/src/test/results/beelinepositive/combine3.q.out
--
diff --git a/ql/src/test/results/beelinepositive/combine3.q.out b/ql/src/test/results/beelinepositive/combine3.q.out
deleted file mode 100644
index 82d91ad..0000000
--- a/ql/src/test/results/beelinepositive/combine3.q.out
+++ /dev/null
@@ -1,148 +0,0 @@
-Saving all output to "!!{outputDirectory}!!/combine3.q.raw". Enter "record" 
with no arguments to stop it.
->>>  !run !!{qFileDirectory}!!/combine3.q
->>>  set hive.exec.compress.output = true;
-No rows affected 
->>>  set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
-No rows affected 
->>>  set mapred.min.split.size=256;
-No rows affected 
->>>  set mapred.min.split.size.per.node=256;
-No rows affected 
->>>  set mapred.min.split.size.per.rack=256;
-No rows affected 
->>>  set mapred.max.split.size=256;
-No rows affected 
->>>  
->>>  
->>>  drop table combine_3_srcpart_seq_rc;
-No rows affected 
->>>  
->>>  create table combine_3_srcpart_seq_rc (key int , value string) 
partitioned by (ds string, hr string) stored as sequencefile;
-No rows affected 
->>>  
->>>  insert overwrite table combine_3_srcpart_seq_rc partition 
(ds="2010-08-03", hr="00") select * from src;
-'_col0','_col1'
-No rows selected 
->>>  
->>>  alter table combine_3_srcpart_seq_rc set fileformat rcfile;
-No rows affected 
->>>  insert overwrite table combine_3_srcpart_seq_rc partition 
(ds="2010-08-03", hr="001") select * from src;
-'_col0','_col1'
-No rows selected 
->>>  
->>>  desc extended combine_3_srcpart_seq_rc partition(ds="2010-08-03", 
hr="00");
-'col_name','data_type','comment'
-'key','int',''
-'value','string',''
-'ds','string',''
-'hr','string',''
-'','',''
-'Detailed Partition Information','Partition(values:[2010-08-03, 00], 
dbName:combine3, tableName:combine_3_srcpart_seq_rc, createTime:!!UNIXTIME!!, 
lastAccessTime:0, sd:StorageDescriptor(cols:[FieldSchema(name:key, type:int, 
comment:null), FieldSchema(name:value, type:string, comment:null), 
FieldSchema(name:ds, type:string, comment:null), FieldSchema(name:hr, 
type:string, comment:null)], 
location:!!{hive.metastore.warehouse.dir}!!/combine3.db/combine_3_srcpart_seq_rc/ds=2010-08-03/hr=00,
 inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, 
outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, 
compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, 
parameters:{serialization.format=1}), bucketCols:[], sortCols:[], 
parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], 
skewedColValueLocationMaps:{})), parameters:{numFiles=1, 
transient_lastDdlTime=!!UNIXTIME!!, num
 Rows=500, totalSize=15250, rawDataSize=5312})',''
-6 rows selected 
->>>  desc extended combine_3_srcpart_seq_rc partition(ds="2010-08-03", 
hr="001");
-'col_name','data_type','comment'
-'key','int',''
-'value','string',''
-'ds','string',''
-'hr','string',''
-'','',''
-'Detailed Partition Information','Partition(values:[2010-08-03, 001], 
dbName:combine3, tableName:combine_3_srcpart_seq_rc, createTime:!!UNIXTIME!!, 
lastAccessTime:0, sd:StorageDescriptor(cols:[FieldSchema(name:key, type:int, 
comment:null), FieldSchema(name:value, type:string, comment:null), 
FieldSchema(name:ds, type:string, comment:null), FieldSchema(name:hr, 
type:string, comment:null)], 
location:!!{hive.metastore.warehouse.dir}!!/combine3.db/combine_3_srcpart_seq_rc/ds=2010-08-03/hr=001,
 inputFormat:org.apache.hadoop.hive.ql.io.RCFileInputFormat, 
outputFormat:org.apache.hadoop.hive.ql.io.RCFileOutputFormat, compressed:false, 
numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
serializationLib:org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe, 
parameters:{serialization.format=1}), bucketCols:[], sortCols:[], 
parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], 
skewedColValueLocationMaps:{})), parameters:{numFiles=1, 
transient_lastDdlTime=!!UNIXTIME!!, numRows=500
 , totalSize=1981, rawDataSize=4812})',''
-6 rows selected 
->>>  
->>>  select key, value, ds, hr from combine_3_srcpart_seq_rc where 
ds="2010-08-03" order by key, hr limit 30;
-'key','value','ds','hr'
-'0','val_0','2010-08-03','00'
-'0','val_0','2010-08-03','00'
-'0','val_0','2010-08-03','00'
-'0','val_0','2010-08-03','001'
-'0','val_0','2010-08-03','001'
-'0','val_0','2010-08-03','001'
-'2','val_2','2010-08-03','00'
-'2','val_2','2010-08-03','001'
-'4','val_4','2010-08-03','00'
-'4','val_4','2010-08-03','001'
-'5','val_5','2010-08-03','00'
-'5','val_5','2010-08-03','00'
-'5','val_5','2010-08-03','00'
-'5','val_5','2010-08-03','001'
-'5','val_5','2010-08-03','001'
-'5','val_5','2010-08-03','001'
-'8','val_8','2010-08-03','00'
-'8','val_8','2010-08-03','001'
-'9','val_9','2010-08-03','00'
-'9','val_9','2010-08-03','001'
-'10','val_10','2010-08-03','00'
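
combine3's point is that CombineHiveInputFormat must not build splits that
mix files needing different readers: the same table ends up with a
SequenceFile partition (hr=00) and an RCFile partition (hr=001), as the two
DESC EXTENDED outputs confirm. Condensed from the statements above:

SET hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;

CREATE TABLE combine_3_srcpart_seq_rc (key INT, value STRING)
PARTITIONED BY (ds STRING, hr STRING) STORED AS SEQUENCEFILE;

INSERT OVERWRITE TABLE combine_3_srcpart_seq_rc PARTITION (ds='2010-08-03', hr='00')
SELECT * FROM src;

-- Switch the table-level format: partitions written after this use RCFile,
-- while hr=00 stays SequenceFile.
ALTER TABLE combine_3_srcpart_seq_rc SET FILEFORMAT RCFILE;

INSERT OVERWRITE TABLE combine_3_srcpart_seq_rc PARTITION (ds='2010-08-03', hr='001')
SELECT * FROM src;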

[26/51] [partial] hive git commit: HIVE-15790: Remove unused beeline golden files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-03 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/3890ed65/ql/src/test/results/beelinepositive/bucketmapjoin_negative3.q.out
--
diff --git a/ql/src/test/results/beelinepositive/bucketmapjoin_negative3.q.out b/ql/src/test/results/beelinepositive/bucketmapjoin_negative3.q.out
deleted file mode 100644
index 868c101..0000000
--- a/ql/src/test/results/beelinepositive/bucketmapjoin_negative3.q.out
+++ /dev/null
@@ -1,1449 +0,0 @@
-Saving all output to "!!{outputDirectory}!!/bucketmapjoin_negative3.q.raw". 
Enter "record" with no arguments to stop it.
->>>  !run !!{qFileDirectory}!!/bucketmapjoin_negative3.q
->>>  drop table test1;
-No rows affected 
->>>  drop table test2;
-No rows affected 
->>>  drop table test3;
-No rows affected 
->>>  drop table test4;
-No rows affected 
->>>  
->>>  create table test1 (key string, value string) clustered by (key) sorted 
by (key) into 3 buckets;
-No rows affected 
->>>  create table test2 (key string, value string) clustered by (value) sorted 
by (value) into 3 buckets;
-No rows affected 
->>>  create table test3 (key string, value string) clustered by (key, value) 
sorted by (key, value) into 3 buckets;
-No rows affected 
->>>  create table test4 (key string, value string) clustered by (value, key) 
sorted by (value, key) into 3 buckets;
-No rows affected 
->>>  
->>>  load data local inpath '../data/files/srcbucket20.txt' INTO TABLE test1;
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket21.txt' INTO TABLE test1;
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket22.txt' INTO TABLE test1;
-No rows affected 
->>>  
->>>  load data local inpath '../data/files/srcbucket20.txt' INTO TABLE test2;
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket21.txt' INTO TABLE test2;
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket22.txt' INTO TABLE test2;
-No rows affected 
->>>  
->>>  load data local inpath '../data/files/srcbucket20.txt' INTO TABLE test3;
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket21.txt' INTO TABLE test3;
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket22.txt' INTO TABLE test3;
-No rows affected 
->>>  
->>>  load data local inpath '../data/files/srcbucket20.txt' INTO TABLE test4;
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket21.txt' INTO TABLE test4;
-No rows affected 
->>>  load data local inpath '../data/files/srcbucket22.txt' INTO TABLE test4;
-No rows affected 
->>>  
->>>  set hive.optimize.bucketmapjoin = true;
-No rows affected 
->>>  -- should be allowed
->>>  explain extended select /* + MAPJOIN(R) */ * from test1 L join test1 R on 
L.key=R.key AND L.value=R.value;
-'Explain'
-'ABSTRACT SYNTAX TREE:'
-'  (TOK_QUERY (TOK_FROM (TOK_JOIN (TOK_TABREF (TOK_TABNAME test1) L) 
(TOK_TABREF (TOK_TABNAME test1) R) (AND (= (. (TOK_TABLE_OR_COL L) key) (. 
(TOK_TABLE_OR_COL R) key)) (= (. (TOK_TABLE_OR_COL L) value) (. 
(TOK_TABLE_OR_COL R) value) (TOK_INSERT (TOK_DESTINATION (TOK_DIR 
TOK_TMP_FILE)) (TOK_SELECT (TOK_HINTLIST (TOK_HINT TOK_MAPJOIN (TOK_HINTARGLIST 
R))) (TOK_SELEXPR TOK_ALLCOLREF'
-''
-'STAGE DEPENDENCIES:'
-'  Stage-3 is a root stage'
-'  Stage-1 depends on stages: Stage-3'
-'  Stage-0 is a root stage'
-''
-'STAGE PLANS:'
-'  Stage: Stage-3'
-'Map Reduce Local Work'
-'  Alias -> Map Local Tables:'
-'r '
-'  Fetch Operator'
-'limit: -1'
-'  Alias -> Map Local Operator Tree:'
-'r '
-'  TableScan'
-'alias: r'
-'GatherStats: false'
-'HashTable Sink Operator'
-'  condition expressions:'
-'0 {key} {value}'
-'1 {key} {value}'
-'  handleSkewJoin: false'
-'  keys:'
-'0 [Column[key], Column[value]]'
-'1 [Column[key], Column[value]]'
-'  Position of Big Table: 0'
-'  Bucket Mapjoin Context:'
-'  Alias Bucket Base File Name Mapping:'
-'r {srcbucket20.txt=[srcbucket20.txt], 
srcbucket21.txt=[srcbucket21.txt], srcbucket22.txt=[srcbucket22.txt]}'
-'  Alias Bucket File Name Mapping:'
-'r 
{!!{hive.metastore.warehouse.dir}!!/bucketmapjoin_negative3.db/test1/srcbucket20.txt=[!!{hive.metastore.warehouse.dir}!!/bucketmapjoin_negative3.db/test1/srcbucket20.txt],
 
!!{hive.metastore.warehouse.dir}!!/bucketmapjoin_negative3.db/test1/srcbucket21.txt=[!!{hive.metastore.warehouse.dir}!!/bucketmapjoin_negative3.db/test1/srcbucket21.txt],
 
!!{hive.metastore.warehouse.dir}!!/bucketmapjoin_negative3.db/test1/srcbucket22.txt=[!!{hive.metastore.warehouse.dir}!!/bucketmapjoin_negative3.db/test1/srcbucket22.txt]}'
-'  Alias Bucket Output File Name Mapping:'
-'
!!{hive.metastore.warehouse.dir}!!/bucketmapjoin_negative3.db/test1/srcbucket20.txt
 0'
-'
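
A condensed, hedged sketch of the bucket-map-join probe this deleted golden file ran; the comments restate what the plan fragment above shows, not guaranteed behavior:

set hive.optimize.bucketmapjoin = true;
-- Both sides are the same 3-bucket table, so each bucket file is paired
-- only with its counterpart (the srcbucket20/21/22.txt mapping above).
explain extended select /* + MAPJOIN(R) */ * from test1 L join test1 R on
L.key=R.key AND L.value=R.value;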

[34/51] [partial] hive git commit: HIVE-15790: Remove unused beeline golden files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-03 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/3890ed65/ql/src/test/results/beelinepositive/bucketcontext_7.q.out
--
diff --git a/ql/src/test/results/beelinepositive/bucketcontext_7.q.out 
b/ql/src/test/results/beelinepositive/bucketcontext_7.q.out
deleted file mode 100644
index 4c4b10a..000
--- a/ql/src/test/results/beelinepositive/bucketcontext_7.q.out
+++ /dev/null
@@ -1,547 +0,0 @@
-Saving all output to "!!{outputDirectory}!!/bucketcontext_7.q.raw". Enter 
"record" with no arguments to stop it.
->>>  !run !!{qFileDirectory}!!/bucketcontext_7.q
->>>  -- small 2 part, 4 bucket & big 2 part, 2 bucket
->>>  CREATE TABLE bucket_small (key string, value string) partitioned by (ds 
string) CLUSTERED BY (key) SORTED BY (key) INTO 4 BUCKETS STORED AS TEXTFILE;
-No rows affected 
->>>  load data local inpath '../data/files/srcsortbucket1outof4.txt' INTO 
TABLE bucket_small partition(ds='2008-04-08');
-No rows affected 
->>>  load data local inpath '../data/files/srcsortbucket2outof4.txt' INTO 
TABLE bucket_small partition(ds='2008-04-08');
-No rows affected 
->>>  load data local inpath '../data/files/srcsortbucket3outof4.txt' INTO 
TABLE bucket_small partition(ds='2008-04-08');
-No rows affected 
->>>  load data local inpath '../data/files/srcsortbucket4outof4.txt' INTO 
TABLE bucket_small partition(ds='2008-04-08');
-No rows affected 
->>>  
->>>  load data local inpath '../data/files/srcsortbucket1outof4.txt' INTO 
TABLE bucket_small partition(ds='2008-04-09');
-No rows affected 
->>>  load data local inpath '../data/files/srcsortbucket2outof4.txt' INTO 
TABLE bucket_small partition(ds='2008-04-09');
-No rows affected 
->>>  load data local inpath '../data/files/srcsortbucket3outof4.txt' INTO 
TABLE bucket_small partition(ds='2008-04-09');
-No rows affected 
->>>  load data local inpath '../data/files/srcsortbucket4outof4.txt' INTO 
TABLE bucket_small partition(ds='2008-04-09');
-No rows affected 
->>>  
->>>  CREATE TABLE bucket_big (key string, value string) partitioned by (ds 
string) CLUSTERED BY (key) SORTED BY (key) INTO 2 BUCKETS STORED AS TEXTFILE;
-No rows affected 
->>>  load data local inpath '../data/files/srcsortbucket1outof4.txt' INTO 
TABLE bucket_big partition(ds='2008-04-08');
-No rows affected 
->>>  load data local inpath '../data/files/srcsortbucket2outof4.txt' INTO 
TABLE bucket_big partition(ds='2008-04-08');
-No rows affected 
->>>  
->>>  load data local inpath '../data/files/srcsortbucket1outof4.txt' INTO 
TABLE bucket_big partition(ds='2008-04-09');
-No rows affected 
->>>  load data local inpath '../data/files/srcsortbucket2outof4.txt' INTO 
TABLE bucket_big partition(ds='2008-04-09');
-No rows affected 
->>>  
->>>  set hive.optimize.bucketmapjoin = true;
-No rows affected 
->>>  explain extended select /* + MAPJOIN(a) */ count(*) FROM bucket_small a 
JOIN bucket_big b ON a.key = b.key;
-'Explain'
-'ABSTRACT SYNTAX TREE:'
-'  (TOK_QUERY (TOK_FROM (TOK_JOIN (TOK_TABREF (TOK_TABNAME bucket_small) a) 
(TOK_TABREF (TOK_TABNAME bucket_big) b) (= (. (TOK_TABLE_OR_COL a) key) (. 
(TOK_TABLE_OR_COL b) key (TOK_INSERT (TOK_DESTINATION (TOK_DIR 
TOK_TMP_FILE)) (TOK_SELECT (TOK_HINTLIST (TOK_HINT TOK_MAPJOIN (TOK_HINTARGLIST 
a))) (TOK_SELEXPR (TOK_FUNCTIONSTAR count)'
-''
-'STAGE DEPENDENCIES:'
-'  Stage-4 is a root stage'
-'  Stage-1 depends on stages: Stage-4'
-'  Stage-2 depends on stages: Stage-1'
-'  Stage-0 is a root stage'
-''
-'STAGE PLANS:'
-'  Stage: Stage-4'
-'Map Reduce Local Work'
-'  Alias -> Map Local Tables:'
-'a '
-'  Fetch Operator'
-'limit: -1'
-'  Alias -> Map Local Operator Tree:'
-'a '
-'  TableScan'
-'alias: a'
-'GatherStats: false'
-'HashTable Sink Operator'
-'  condition expressions:'
-'0 '
-'1 '
-'  handleSkewJoin: false'
-'  keys:'
-'0 [Column[key]]'
-'1 [Column[key]]'
-'  Position of Big Table: 1'
-'  Bucket Mapjoin Context:'
-'  Alias Bucket Base File Name Mapping:'
-'a 
{ds=2008-04-08/srcsortbucket1outof4.txt=[ds=2008-04-08/srcsortbucket1outof4.txt,
 ds=2008-04-08/srcsortbucket3outof4.txt, 
ds=2008-04-09/srcsortbucket1outof4.txt, 
ds=2008-04-09/srcsortbucket3outof4.txt], 
ds=2008-04-08/srcsortbucket2outof4.txt=[ds=2008-04-08/srcsortbucket2outof4.txt, 
ds=2008-04-08/srcsortbucket4outof4.txt, ds=2008-04-09/srcsortbucket2outof4.txt, 
ds=2008-04-09/srcsortbucket4outof4.txt], 
ds=2008-04-09/srcsortbucket1outof4.txt=[ds=2008-04-08/srcsortbucket1outof4.txt, 
ds=2008-04-08/srcsortbucket3outof4.txt, ds=2008-04-09/srcsortbucket1outof4.txt, 
ds=2008-04-09/srcsortbucket3outof4.txt], 
ds=2008-04-09/srcsortbucket2outof4.txt=[ds=2008-04-08/srcsortbucket2outof4.txt, 
ds=2008-04-08/srcsortbucket4outof4.txt, ds=2008-04-09/srcsortbucket2outof4.txt, 
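
A hedged reading of the Alias Bucket Base File Name Mapping above: with bucket_big in 2 buckets and bucket_small in 4, each big-table bucket file is paired with the two small-table bucket files congruent to it modulo 2, across both ds partitions:

set hive.optimize.bucketmapjoin = true;
-- big bucket 0 (srcsortbucket1outof4.txt) -> small buckets 0 and 2
-- big bucket 1 (srcsortbucket2outof4.txt) -> small buckets 1 and 3
explain extended select /* + MAPJOIN(a) */ count(*) FROM bucket_small a
JOIN bucket_big b ON a.key = b.key;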

[12/51] [partial] hive git commit: HIVE-15790: Remove unused beeline golden files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-03 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/3890ed65/ql/src/test/results/beelinepositive/groupby_sort_skew_1.q.out
--
diff --git a/ql/src/test/results/beelinepositive/groupby_sort_skew_1.q.out 
b/ql/src/test/results/beelinepositive/groupby_sort_skew_1.q.out
deleted file mode 100644
index 766f127..000
--- a/ql/src/test/results/beelinepositive/groupby_sort_skew_1.q.out
+++ /dev/null
@@ -1,4891 +0,0 @@
-Saving all output to "!!{outputDirectory}!!/groupby_sort_skew_1.q.raw". Enter 
"record" with no arguments to stop it.
->>>  !run !!{qFileDirectory}!!/groupby_sort_skew_1.q
->>>  set hive.enforce.bucketing = true;
-No rows affected 
->>>  set hive.enforce.sorting = true;
-No rows affected 
->>>  set hive.exec.reducers.max = 10;
-No rows affected 
->>>  set hive.map.groupby.sorted=true;
-No rows affected 
->>>  set hive.groupby.skewindata=true;
-No rows affected 
->>>  
->>>  CREATE TABLE T1(key STRING, val STRING) 
-CLUSTERED BY (key) SORTED BY (key) INTO 2 BUCKETS STORED AS TEXTFILE;
-No rows affected 
->>>  
->>>  LOAD DATA LOCAL INPATH '../data/files/T1.txt' INTO TABLE T1;
-No rows affected 
->>>  
->>>  -- perform an insert to make sure there are 2 files
->>>  INSERT OVERWRITE TABLE T1 select key, val from T1;
-'key','val'
-No rows selected 
->>>  
->>>  CREATE TABLE outputTbl1(key int, cnt int);
-No rows affected 
->>>  
->>>  -- The plan should be converted to a map-side group by if the group by key
->>>  -- matches the skewed key
->>>  -- adding an order by at the end to make the test results deterministic
->>>  EXPLAIN EXTENDED 
-INSERT OVERWRITE TABLE outputTbl1 
-SELECT key, count(1) FROM T1 GROUP BY key;
-'Explain'
-'ABSTRACT SYNTAX TREE:'
-'  (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME T1))) (TOK_INSERT 
(TOK_DESTINATION (TOK_TAB (TOK_TABNAME outputTbl1))) (TOK_SELECT (TOK_SELEXPR 
(TOK_TABLE_OR_COL key)) (TOK_SELEXPR (TOK_FUNCTION count 1))) (TOK_GROUPBY 
(TOK_TABLE_OR_COL key'
-''
-'STAGE DEPENDENCIES:'
-'  Stage-1 is a root stage'
-'  Stage-0 depends on stages: Stage-1'
-'  Stage-2 depends on stages: Stage-0'
-''
-'STAGE PLANS:'
-'  Stage: Stage-1'
-'Map Reduce'
-'  Alias -> Map Operator Tree:'
-'t1 '
-'  TableScan'
-'alias: t1'
-'GatherStats: false'
-'Select Operator'
-'  expressions:'
-'expr: key'
-'type: string'
-'  outputColumnNames: key'
-'  Group By Operator'
-'aggregations:'
-'  expr: count(1)'
-'bucketGroup: false'
-'keys:'
-'  expr: key'
-'  type: string'
-'mode: final'
-'outputColumnNames: _col0, _col1'
-'Select Operator'
-'  expressions:'
-'expr: _col0'
-'type: string'
-'expr: _col1'
-'type: bigint'
-'  outputColumnNames: _col0, _col1'
-'  Select Operator'
-'expressions:'
-'  expr: UDFToInteger(_col0)'
-'  type: int'
-'  expr: UDFToInteger(_col1)'
-'  type: int'
-'outputColumnNames: _col0, _col1'
-'File Output Operator'
-'  compressed: false'
-'  GlobalTableId: 1'
-'  directory: pfile:!!{hive.exec.scratchdir}!!'
-'  NumFilesPerFileSink: 1'
-'  Stats Publishing Key Prefix: 
pfile:!!{hive.exec.scratchdir}!!'
-'  table:'
-'  input format: 
org.apache.hadoop.mapred.TextInputFormat'
-'  output format: 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
-'  properties:'
-'bucket_count -1'
-'columns key,cnt'
-'columns.types int:int'
-'file.inputformat 
org.apache.hadoop.mapred.TextInputFormat'
-'file.outputformat 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
-'location 
!!{hive.metastore.warehouse.dir}!!/groupby_sort_skew_1.db/outputtbl1'
-'name groupby_sort_skew_1.outputtbl1'
-'serialization.ddl struct outputtbl1 { i32 key, 
i32 cnt}'
-'serialization.format 1'
-'serialization.lib 
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
-'transient_lastDdlTime !!UNIXTIME!!'
-'  serde: 
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
-'  name: 
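
A hedged sketch of what this deleted test pinned down: because T1 is CLUSTERED BY (key) SORTED BY (key), the group-by key matches the sort key, so the plan above does the whole aggregation map-side (Group By Operator, mode: final) in a single MapReduce stage, skipping the extra skew-spray stage that hive.groupby.skewindata=true would otherwise introduce:

set hive.map.groupby.sorted = true;
set hive.groupby.skewindata = true;
-- With T1 sorted on the group-by key, aggregation finishes in the mappers.
INSERT OVERWRITE TABLE outputTbl1
SELECT key, count(1) FROM T1 GROUP BY key;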

[04/51] [partial] hive git commit: HIVE-15790: Remove unused beeline golden files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-03 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/3890ed65/ql/src/test/results/beelinepositive/input42.q.out
--
diff --git a/ql/src/test/results/beelinepositive/input42.q.out 
b/ql/src/test/results/beelinepositive/input42.q.out
deleted file mode 100644
index 380cd6c..000
--- a/ql/src/test/results/beelinepositive/input42.q.out
+++ /dev/null
@@ -1,2036 +0,0 @@
-Saving all output to "!!{outputDirectory}!!/input42.q.raw". Enter "record" 
with no arguments to stop it.
->>>  !run !!{qFileDirectory}!!/input42.q
->>>  explain extended 
-select * from srcpart a where a.ds='2008-04-08' order by a.key, a.hr;
-'Explain'
-'ABSTRACT SYNTAX TREE:'
-'  (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME srcpart) a)) (TOK_INSERT 
(TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR 
TOK_ALLCOLREF)) (TOK_WHERE (= (. (TOK_TABLE_OR_COL a) ds) '2008-04-08')) 
(TOK_ORDERBY (TOK_TABSORTCOLNAMEASC (. (TOK_TABLE_OR_COL a) key)) 
(TOK_TABSORTCOLNAMEASC (. (TOK_TABLE_OR_COL a) hr)'
-''
-'STAGE DEPENDENCIES:'
-'  Stage-1 is a root stage'
-'  Stage-0 is a root stage'
-''
-'STAGE PLANS:'
-'  Stage: Stage-1'
-'Map Reduce'
-'  Alias -> Map Operator Tree:'
-'a '
-'  TableScan'
-'alias: a'
-'GatherStats: false'
-'Select Operator'
-'  expressions:'
-'expr: key'
-'type: string'
-'expr: value'
-'type: string'
-'expr: ds'
-'type: string'
-'expr: hr'
-'type: string'
-'  outputColumnNames: _col0, _col1, _col2, _col3'
-'  Reduce Output Operator'
-'key expressions:'
-'  expr: _col0'
-'  type: string'
-'  expr: _col3'
-'  type: string'
-'sort order: ++'
-'tag: -1'
-'value expressions:'
-'  expr: _col0'
-'  type: string'
-'  expr: _col1'
-'  type: string'
-'  expr: _col2'
-'  type: string'
-'  expr: _col3'
-'  type: string'
-'  Needs Tagging: false'
-'  Path -> Alias:'
-'
!!{hive.metastore.warehouse.dir}!!/input42.db/srcpart/ds=2008-04-08/hr=11 [a]'
-'
!!{hive.metastore.warehouse.dir}!!/input42.db/srcpart/ds=2008-04-08/hr=12 [a]'
-'  Path -> Partition:'
-'
!!{hive.metastore.warehouse.dir}!!/input42.db/srcpart/ds=2008-04-08/hr=11 '
-'  Partition'
-'base file name: hr=11'
-'input format: org.apache.hadoop.mapred.TextInputFormat'
-'output format: 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
-'partition values:'
-'  ds 2008-04-08'
-'  hr 11'
-'properties:'
-'  bucket_count -1'
-'  columns key,value'
-'  columns.types string:string'
-'  file.inputformat org.apache.hadoop.mapred.TextInputFormat'
-'  file.outputformat 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
-'  location 
!!{hive.metastore.warehouse.dir}!!/input42.db/srcpart/ds=2008-04-08/hr=11'
-'  name input42.srcpart'
-'  numFiles 1'
-'  numPartitions 4'
-'  numRows 0'
-'  partition_columns ds/hr'
-'  rawDataSize 0'
-'  serialization.ddl struct srcpart { string key, string value}'
-'  serialization.format 1'
-'  serialization.lib 
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
-'  totalSize 5812'
-'  transient_lastDdlTime !!UNIXTIME!!'
-'serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
-'  '
-'  input format: org.apache.hadoop.mapred.TextInputFormat'
-'  output format: 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
-'  properties:'
-'bucket_count -1'
-'columns key,value'
-'columns.types string:string'
-'file.inputformat org.apache.hadoop.mapred.TextInputFormat'
-'file.outputformat 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
-'location 
!!{hive.metastore.warehouse.dir}!!/input42.db/srcpart'
-'name input42.srcpart'
-'numFiles 4'
-'numPartitions 4'
-'numRows 0'
-'partition_columns ds/hr'
-'rawDataSize 0'
-'serialization.ddl struct srcpart { string key, string value}'
-'serialization.format 1'
-'serialization.lib 
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
-'  
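
The fragment above is a textbook static-partition-pruning plan; a hedged restatement of the query it explains:

-- Only the two hr sub-directories of ds=2008-04-08 show up under
-- Path -> Alias, so the other ds partitions of srcpart are never scanned.
explain extended
select * from srcpart a where a.ds='2008-04-08' order by a.key, a.hr;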

[23/51] [partial] hive git commit: HIVE-15790: Remove unused beeline golden files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-03 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/3890ed65/ql/src/test/results/beelinepositive/cross_join.q.out
--
diff --git a/ql/src/test/results/beelinepositive/cross_join.q.out 
b/ql/src/test/results/beelinepositive/cross_join.q.out
deleted file mode 100644
index 125241f..000
--- a/ql/src/test/results/beelinepositive/cross_join.q.out
+++ /dev/null
@@ -1,183 +0,0 @@
-Saving all output to "!!{outputDirectory}!!/cross_join.q.raw". Enter "record" 
with no arguments to stop it.
->>>  !run !!{qFileDirectory}!!/cross_join.q
->>>  -- current
->>>  explain select src.key from src join src src2;
-'Explain'
-'ABSTRACT SYNTAX TREE:'
-'  (TOK_QUERY (TOK_FROM (TOK_JOIN (TOK_TABREF (TOK_TABNAME src)) (TOK_TABREF 
(TOK_TABNAME src) src2))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) 
(TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src) key)'
-''
-'STAGE DEPENDENCIES:'
-'  Stage-1 is a root stage'
-'  Stage-0 is a root stage'
-''
-'STAGE PLANS:'
-'  Stage: Stage-1'
-'Map Reduce'
-'  Alias -> Map Operator Tree:'
-'src '
-'  TableScan'
-'alias: src'
-'Reduce Output Operator'
-'  sort order: '
-'  tag: 0'
-'  value expressions:'
-'expr: key'
-'type: string'
-'src2 '
-'  TableScan'
-'alias: src2'
-'Reduce Output Operator'
-'  sort order: '
-'  tag: 1'
-'  Reduce Operator Tree:'
-'Join Operator'
-'  condition map:'
-'   Inner Join 0 to 1'
-'  condition expressions:'
-'0 {VALUE._col0}'
-'1 '
-'  handleSkewJoin: false'
-'  outputColumnNames: _col0'
-'  Select Operator'
-'expressions:'
-'  expr: _col0'
-'  type: string'
-'outputColumnNames: _col0'
-'File Output Operator'
-'  compressed: false'
-'  GlobalTableId: 0'
-'  table:'
-'  input format: org.apache.hadoop.mapred.TextInputFormat'
-'  output format: 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
-''
-'  Stage: Stage-0'
-'Fetch Operator'
-'  limit: -1'
-''
-''
-52 rows selected 
->>>  -- ansi cross join
->>>  explain select src.key from src cross join src src2;
-'Explain'
-'ABSTRACT SYNTAX TREE:'
-'  (TOK_QUERY (TOK_FROM (TOK_CROSSJOIN (TOK_TABREF (TOK_TABNAME src)) 
(TOK_TABREF (TOK_TABNAME src) src2))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR 
TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src) key)'
-''
-'STAGE DEPENDENCIES:'
-'  Stage-1 is a root stage'
-'  Stage-0 is a root stage'
-''
-'STAGE PLANS:'
-'  Stage: Stage-1'
-'Map Reduce'
-'  Alias -> Map Operator Tree:'
-'src '
-'  TableScan'
-'alias: src'
-'Reduce Output Operator'
-'  sort order: '
-'  tag: 0'
-'  value expressions:'
-'expr: key'
-'type: string'
-'src2 '
-'  TableScan'
-'alias: src2'
-'Reduce Output Operator'
-'  sort order: '
-'  tag: 1'
-'  Reduce Operator Tree:'
-'Join Operator'
-'  condition map:'
-'   Inner Join 0 to 1'
-'  condition expressions:'
-'0 {VALUE._col0}'
-'1 '
-'  handleSkewJoin: false'
-'  outputColumnNames: _col0'
-'  Select Operator'
-'expressions:'
-'  expr: _col0'
-'  type: string'
-'outputColumnNames: _col0'
-'File Output Operator'
-'  compressed: false'
-'  GlobalTableId: 0'
-'  table:'
-'  input format: org.apache.hadoop.mapred.TextInputFormat'
-'  output format: 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
-''
-'  Stage: Stage-0'
-'Fetch Operator'
-'  limit: -1'
-''
-''
-52 rows selected 
->>>  -- appending condition is allowed
->>>  explain select src.key from src cross join src src2 on src.key=src2.key;
-'Explain'
-'ABSTRACT SYNTAX TREE:'
-'  (TOK_QUERY (TOK_FROM (TOK_CROSSJOIN (TOK_TABREF (TOK_TABNAME src)) 
(TOK_TABREF (TOK_TABNAME src) src2) (= (. (TOK_TABLE_OR_COL src) key) (. 
(TOK_TABLE_OR_COL src2) key (TOK_INSERT (TOK_DESTINATION (TOK_DIR 
TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL src) key)'
-''
-'STAGE DEPENDENCIES:'
-'  Stage-1 is a root stage'
-'  Stage-0 is a root stage'
-''
-'STAGE PLANS:'
-'  Stage: Stage-1'
-'Map Reduce'
-'  Alias -> Map Operator Tree:'
-'src '
-'  TableScan'
-'alias: src'
-'Reduce Output Operator'
-'  key expressions:'
-'expr: key'
-'type: string'
-'  sort 
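
A hedged summary of the three deleted explains: a JOIN written without an ON clause and an ANSI CROSS JOIN compile to the same unkeyed shuffle join (empty sort order, no key expressions above), and appending an ON predicate simply turns the cross join back into a keyed equi-join:

explain select src.key from src join src src2;                               -- no condition
explain select src.key from src cross join src src2;                         -- same plan as above
explain select src.key from src cross join src src2 on src.key=src2.key;    -- becomes a keyed join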

[39/51] [partial] hive git commit: HIVE-15790: Remove unused beeline golden files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-03 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/3890ed65/ql/src/test/results/beelinepositive/auto_join_filters.q.out
--
diff --git a/ql/src/test/results/beelinepositive/auto_join_filters.q.out 
b/ql/src/test/results/beelinepositive/auto_join_filters.q.out
deleted file mode 100644
index a1573c2..000
--- a/ql/src/test/results/beelinepositive/auto_join_filters.q.out
+++ /dev/null
@@ -1,254 +0,0 @@
-Saving all output to "!!{outputDirectory}!!/auto_join_filters.q.raw". Enter 
"record" with no arguments to stop it.
->>>  !run !!{qFileDirectory}!!/auto_join_filters.q
->>>  set hive.auto.convert.join = true;
-No rows affected 
->>>  
->>>  CREATE TABLE myinput1(key int, value int);
-No rows affected 
->>>  LOAD DATA LOCAL INPATH '../data/files/in3.txt' INTO TABLE myinput1;
-No rows affected 
->>>  
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value))  FROM myinput1 a JOIN 
myinput1 b on a.key > 40 AND a.value > 50 AND a.key = a.value AND b.key > 40 
AND b.value > 50 AND b.key = b.value;
-'_c0'
-'3078400'
-1 row selected 
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value))  FROM myinput1 a LEFT OUTER 
JOIN myinput1 b on a.key > 40 AND a.value > 50 AND a.key = a.value AND b.key > 
40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'4937935'
-1 row selected 
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value))  FROM myinput1 a RIGHT 
OUTER JOIN myinput1 b on a.key > 40 AND a.value > 50 AND a.key = a.value AND 
b.key > 40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'3080335'
-1 row selected 
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value))  FROM myinput1 a FULL OUTER 
JOIN myinput1 b on a.key > 40 AND a.value > 50 AND a.key = a.value AND b.key > 
40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'19749880'
-1 row selected 
->>>  
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value)) FROM myinput1 a JOIN 
myinput1 b ON a.key = b.value AND a.key > 40 AND a.value > 50 AND a.key = 
a.value AND b.key > 40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'3078400'
-1 row selected 
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value)) FROM myinput1 a JOIN 
myinput1 b ON a.key = b.key AND a.key > 40 AND a.value > 50 AND a.key = a.value 
AND b.key > 40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'3078400'
-1 row selected 
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value)) FROM myinput1 a JOIN 
myinput1 b ON a.value = b.value AND a.key > 40 AND a.value > 50 AND a.key = 
a.value AND b.key > 40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'3078400'
-1 row selected 
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value)) FROM myinput1 a JOIN 
myinput1 b ON a.value = b.value and a.key=b.key AND a.key > 40 AND a.value > 50 
AND a.key = a.value AND b.key > 40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'3078400'
-1 row selected 
->>>  
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value)) FROM myinput1 a LEFT OUTER 
JOIN myinput1 b ON a.key = b.value AND a.key > 40 AND a.value > 50 AND a.key = 
a.value AND b.key > 40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'4937935'
-1 row selected 
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value)) FROM myinput1 a LEFT OUTER 
JOIN myinput1 b ON a.value = b.value AND a.key > 40 AND a.value > 50 AND a.key 
= a.value AND b.key > 40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'4937935'
-1 row selected 
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value)) FROM myinput1 a LEFT OUTER 
JOIN myinput1 b ON a.key = b.key AND a.key > 40 AND a.value > 50 AND a.key = 
a.value AND b.key > 40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'4937935'
-1 row selected 
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value)) FROM myinput1 a LEFT OUTER 
JOIN myinput1 b ON a.key = b.key and a.value=b.value AND a.key > 40 AND a.value 
> 50 AND a.key = a.value AND b.key > 40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'4937935'
-1 row selected 
->>>  
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value)) FROM myinput1 a RIGHT OUTER 
JOIN myinput1 b ON a.key = b.value AND a.key > 40 AND a.value > 50 AND a.key = 
a.value AND b.key > 40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'3080335'
-1 row selected 
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value)) FROM myinput1 a RIGHT OUTER 
JOIN myinput1 b ON a.key = b.key AND a.key > 40 AND a.value > 50 AND a.key = 
a.value AND b.key > 40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'3080335'
-1 row selected 
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value)) FROM myinput1 a RIGHT OUTER 
JOIN myinput1 b ON a.value = b.value AND a.key > 40 AND a.value > 50 AND a.key 
= a.value AND b.key > 40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'3080335'
-1 row selected 
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value)) FROM myinput1 a RIGHT OUTER 
JOIN myinput1 b ON a.key=b.key and a.value = b.value AND a.key > 40 AND a.value 
> 50 AND a.key = a.value AND b.key > 40 AND b.value > 50 AND b.key = b.value;
-'_c0'
-'3080335'
-1 row selected 
->>>  
->>>  SELECT sum(hash(a.key,a.value,b.key,b.value)) FROM myinput1 a 
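
A hedged gloss on why the checksums above differ by join type: predicates in an outer join's ON clause decide which rows match, not which rows appear, so unmatched rows survive NULL-padded. That is why the FULL OUTER sum (19749880) dwarfs the inner-join sum (3078400) for the same predicates:

set hive.auto.convert.join = true;
SELECT sum(hash(a.key,a.value,b.key,b.value)) FROM myinput1 a
FULL OUTER JOIN myinput1 b on a.key > 40 AND a.value > 50 AND a.key = a.value
AND b.key > 40 AND b.value > 50 AND b.key = b.value;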

[18/51] [partial] hive git commit: HIVE-15790: Remove unused beeline golden files (Gunther Hagleitner, reviewed by Sergey Shelukhin)

2017-02-03 Thread gunther
http://git-wip-us.apache.org/repos/asf/hive/blob/3890ed65/ql/src/test/results/beelinepositive/groupby7_map.q.out
--
diff --git a/ql/src/test/results/beelinepositive/groupby7_map.q.out 
b/ql/src/test/results/beelinepositive/groupby7_map.q.out
deleted file mode 100644
index 7674cc4..000
--- a/ql/src/test/results/beelinepositive/groupby7_map.q.out
+++ /dev/null
@@ -1,836 +0,0 @@
-Saving all output to "!!{outputDirectory}!!/groupby7_map.q.raw". Enter 
"record" with no arguments to stop it.
->>>  !run !!{qFileDirectory}!!/groupby7_map.q
->>>  set hive.map.aggr=true;
-No rows affected 
->>>  set hive.multigroupby.singlereducer=false;
-No rows affected 
->>>  set hive.groupby.skewindata=false;
-No rows affected 
->>>  set mapred.reduce.tasks=31;
-No rows affected 
->>>  
->>>  CREATE TABLE DEST1(key INT, value STRING) STORED AS TEXTFILE;
-No rows affected 
->>>  CREATE TABLE DEST2(key INT, value STRING) STORED AS TEXTFILE;
-No rows affected 
->>>  
->>>  SET hive.exec.compress.intermediate=true;
-No rows affected 
->>>  SET hive.exec.compress.output=true;
-No rows affected 
->>>  
->>>  EXPLAIN 
-FROM SRC 
-INSERT OVERWRITE TABLE DEST1 SELECT SRC.key, sum(SUBSTR(SRC.value,5)) GROUP BY 
SRC.key 
-INSERT OVERWRITE TABLE DEST2 SELECT SRC.key, sum(SUBSTR(SRC.value,5)) GROUP BY 
SRC.key;
-'Explain'
-'ABSTRACT SYNTAX TREE:'
-'  (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME SRC))) (TOK_INSERT 
(TOK_DESTINATION (TOK_TAB (TOK_TABNAME DEST1))) (TOK_SELECT (TOK_SELEXPR (. 
(TOK_TABLE_OR_COL SRC) key)) (TOK_SELEXPR (TOK_FUNCTION sum (TOK_FUNCTION 
SUBSTR (. (TOK_TABLE_OR_COL SRC) value) 5 (TOK_GROUPBY (. (TOK_TABLE_OR_COL 
SRC) key))) (TOK_INSERT (TOK_DESTINATION (TOK_TAB (TOK_TABNAME DEST2))) 
(TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL SRC) key)) (TOK_SELEXPR 
(TOK_FUNCTION sum (TOK_FUNCTION SUBSTR (. (TOK_TABLE_OR_COL SRC) value) 5 
(TOK_GROUPBY (. (TOK_TABLE_OR_COL SRC) key'
-''
-'STAGE DEPENDENCIES:'
-'  Stage-2 is a root stage'
-'  Stage-0 depends on stages: Stage-2'
-'  Stage-3 depends on stages: Stage-0'
-'  Stage-4 depends on stages: Stage-2'
-'  Stage-1 depends on stages: Stage-4'
-'  Stage-5 depends on stages: Stage-1'
-''
-'STAGE PLANS:'
-'  Stage: Stage-2'
-'Map Reduce'
-'  Alias -> Map Operator Tree:'
-'src '
-'  TableScan'
-'alias: src'
-'Select Operator'
-'  expressions:'
-'expr: key'
-'type: string'
-'expr: value'
-'type: string'
-'  outputColumnNames: key, value'
-'  Group By Operator'
-'aggregations:'
-'  expr: sum(substr(value, 5))'
-'bucketGroup: false'
-'keys:'
-'  expr: key'
-'  type: string'
-'mode: hash'
-'outputColumnNames: _col0, _col1'
-'Reduce Output Operator'
-'  key expressions:'
-'expr: _col0'
-'type: string'
-'  sort order: +'
-'  Map-reduce partition columns:'
-'expr: _col0'
-'type: string'
-'  tag: -1'
-'  value expressions:'
-'expr: _col1'
-'type: double'
-'Select Operator'
-'  expressions:'
-'expr: key'
-'type: string'
-'expr: value'
-'type: string'
-'  outputColumnNames: key, value'
-'  Group By Operator'
-'aggregations:'
-'  expr: sum(substr(value, 5))'
-'bucketGroup: false'
-'keys:'
-'  expr: key'
-'  type: string'
-'mode: hash'
-'outputColumnNames: _col0, _col1'
-'File Output Operator'
-'  compressed: true'
-'  GlobalTableId: 0'
-'  table:'
-'  input format: 
org.apache.hadoop.mapred.SequenceFileInputFormat'
-'  output format: 
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat'
-'  Reduce Operator Tree:'
-'Group By Operator'
-'  aggregations:'
-'expr: sum(VALUE._col0)'
-'  bucketGroup: false'
-'  keys:'
-'expr: KEY._col0'
-'type: string'
-'  mode: mergepartial'
-'  outputColumnNames: _col0, _col1'
-'  Select Operator'
-'expressions:'
-'  expr: _col0'
-'  type: string'
-'  expr: _col1'
-'  type: double'
-'outputColumnNames: _col0, _col1'
-'Select Operator'
-'  expressions:'
-'   
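
A hedged sketch of the multi-insert pattern in this deleted plan: a single scan of SRC fans out into two hash-mode Group By Operators (hive.map.aggr=true), and with hive.multigroupby.singlereducer=false each INSERT branch gets its own reduce and move stages, which is why Stage-2 has two dependents above:

set hive.map.aggr = true;
set hive.multigroupby.singlereducer = false;
FROM SRC
INSERT OVERWRITE TABLE DEST1 SELECT SRC.key, sum(SUBSTR(SRC.value,5)) GROUP BY SRC.key
INSERT OVERWRITE TABLE DEST2 SELECT SRC.key, sum(SUBSTR(SRC.value,5)) GROUP BY SRC.key;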
