hive git commit: HIVE-13701: LLAP: Use different prefix for llap task scheduler metrics (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-05-05 Thread prasanthj
Repository: hive
Updated Branches:
  refs/heads/master 3517a99ed -> 0cc404565


HIVE-13701: LLAP: Use different prefix for llap task scheduler metrics (Prasanth Jayachandran reviewed by Sergey Shelukhin)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/0cc40456
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/0cc40456
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/0cc40456

Branch: refs/heads/master
Commit: 0cc40456586aa5f3c54a34ceaf65eaef9a3d311b
Parents: 3517a99
Author: Prasanth Jayachandran 
Authored: Thu May 5 21:43:48 2016 -0500
Committer: Prasanth Jayachandran 
Committed: Thu May 5 21:43:48 2016 -0500

--
 ...doop-metrics2-llapdaemon.properties.template | 50 
 ...trics2-llaptaskscheduler.properties.template | 50 
 .../hadoop-metrics2.properties.template | 50 
 .../tezplugins/LlapTaskSchedulerService.java|  2 +-
 .../metrics/LlapTaskSchedulerMetrics.java   |  6 +--
 5 files changed, 104 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/0cc40456/llap-server/src/main/resources/hadoop-metrics2-llapdaemon.properties.template
--
diff --git a/llap-server/src/main/resources/hadoop-metrics2-llapdaemon.properties.template b/llap-server/src/main/resources/hadoop-metrics2-llapdaemon.properties.template
new file mode 100644
index 000..994acaa
--- /dev/null
+++ b/llap-server/src/main/resources/hadoop-metrics2-llapdaemon.properties.template
@@ -0,0 +1,50 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#}
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# syntax: [prefix].[source|sink].[instance].[options]
+# See javadoc of package-info.java for org.apache.hadoop.metrics2 for details
+
+#*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
+# default sampling period, in seconds
+#*.sink.file.period=10
+
+# 
*.sink.timeline.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
+# *.sink.timeline.period=60
+
+# llapdaemon metrics for all contexts (jvm,queue,executors,cache) will go to this file
+# llapdaemon.sink.file.filename=llapdaemon-metrics.out
+
+# to configure separate files per context define following for each context
+# llapdaemon.sink.file_jvm.class=org.apache.hadoop.metrics2.sink.FileSink
+# llapdaemon.sink.file_jvm.context=jvm
+# llapdaemon.sink.file_jvm.filename=llapdaemon-jvm-metrics.out
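The prefix in these template names is significant: Hadoop metrics2 derives its configuration file name from the prefix a component passes when it initializes the metrics system, loading hadoop-metrics2-<prefix>.properties before falling back to the generic hadoop-metrics2.properties. A minimal sketch of that wiring, assuming the standard metrics2 behavior (the prefix string here is illustrative, not necessarily the exact one LLAP registers):

import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

public class LlapMetricsInitSketch {
  public static void main(String[] args) {
    // Initializing with prefix "LlapTaskScheduler" makes metrics2 look for
    // hadoop-metrics2-llaptaskscheduler.properties on the classpath, so the
    // task scheduler's sinks can be configured separately from the daemon's
    // hadoop-metrics2-llapdaemon.properties.
    DefaultMetricsSystem.initialize("LlapTaskScheduler");
  }
}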

http://git-wip-us.apache.org/repos/asf/hive/blob/0cc40456/llap-server/src/main/resources/hadoop-metrics2-llaptaskscheduler.properties.template
--
diff --git a/llap-server/src/main/resources/hadoop-metrics2-llaptaskscheduler.properties.template b/llap-server/src/main/resources/hadoop-metrics2-llaptaskscheduler.properties.template
new file mode 100644
index 000..5cf71a7
--- /dev/null
+++ b/llap-server/src/main/resources/hadoop-metrics2-llaptaskscheduler.properties.template
@@ -0,0 +1,50 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under t

hive git commit: HIVE-13656 : need to set direct memory limit higher in LlapServiceDriver for certain edge case configurations (Sergey Shelukhin, reviewed by Vikram Dixit K and Siddharth Seth)

2016-05-05 Thread sershe
Repository: hive
Updated Branches:
  refs/heads/master eb2c54b3f -> 3517a99ed


HIVE-13656 : need to set direct memory limit higher in LlapServiceDriver for certain edge case configurations (Sergey Shelukhin, reviewed by Vikram Dixit K and Siddharth Seth)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/3517a99e
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/3517a99e
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/3517a99e

Branch: refs/heads/master
Commit: 3517a99edde061596d62b41339bacb5aac0e8290
Parents: eb2c54b
Author: Sergey Shelukhin 
Authored: Thu May 5 17:01:47 2016 -0700
Committer: Sergey Shelukhin 
Committed: Thu May 5 17:02:36 2016 -0700

--
 .../hadoop/hive/llap/cli/LlapServiceDriver.java | 21 +++-
 llap-server/src/main/resources/package.py   |  6 +-
 2 files changed, 17 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/3517a99e/llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapServiceDriver.java
--
diff --git a/llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapServiceDriver.java b/llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapServiceDriver.java
index de6d9b8..006f70f 100644
--- a/llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapServiceDriver.java
+++ b/llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapServiceDriver.java
@@ -236,20 +236,22 @@ public class LlapServiceDriver {
   String.valueOf(options.getIoThreads()));
 }
 
+long cache = -1, xmx = -1;
 if (options.getCache() != -1) {
-  conf.set(HiveConf.ConfVars.LLAP_IO_MEMORY_MAX_SIZE.varname,
-  Long.toString(options.getCache()));
+  cache = options.getCache();
+  conf.set(HiveConf.ConfVars.LLAP_IO_MEMORY_MAX_SIZE.varname, Long.toString(cache));
   propsDirectOptions.setProperty(HiveConf.ConfVars.LLAP_IO_MEMORY_MAX_SIZE.varname,
-  Long.toString(options.getCache()));
+  Long.toString(cache));
 }
 
 if (options.getXmx() != -1) {
   // Needs more explanation here
-  // Xmx is not the max heap value in JDK8
-  // You need to subtract 50% of the survivor fraction from this, to get actual usable memory before it goes into GC
-  long xmx = (long) (options.getXmx() / (1024 * 1024));
+  // Xmx is not the max heap value in JDK8. You need to subtract 50% of the survivor fraction
+  // from this, to get actual usable memory before it goes into GC
+  xmx = (long) (options.getXmx() / (1024 * 1024));
   conf.setLong(ConfVars.LLAP_DAEMON_MEMORY_PER_INSTANCE_MB.varname, xmx);
-  propsDirectOptions.setProperty(ConfVars.LLAP_DAEMON_MEMORY_PER_INSTANCE_MB.varname, String.valueOf(xmx));
+  propsDirectOptions.setProperty(ConfVars.LLAP_DAEMON_MEMORY_PER_INSTANCE_MB.varname,
+  String.valueOf(xmx));
 }
 
 if (options.getLlapQueueName() != null && !options.getLlapQueueName().isEmpty()) {
@@ -258,8 +260,6 @@ public class LlapServiceDriver {
   .setProperty(ConfVars.LLAP_DAEMON_QUEUE_NAME.varname, options.getLlapQueueName());
 }
 
-
-
 URL logger = conf.getResource(LlapDaemon.LOG4j2_PROPERTIES_FILE);
 
 if (null == logger) {
@@ -460,6 +460,9 @@ public class LlapServiceDriver {
 configs.put(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES,
 conf.getInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES, -1));
 
+long maxDirect = (xmx > 0 && cache > 0 && xmx < cache * 1.25) ? (long)(cache * 1.25) : -1;
+configs.put("max_direct_memory", Long.toString(maxDirect));
+
 FSDataOutputStream os = lfs.create(new Path(tmpDir, "config.json"));
 OutputStreamWriter w = new OutputStreamWriter(os);
 configs.write(w);

http://git-wip-us.apache.org/repos/asf/hive/blob/3517a99e/llap-server/src/main/resources/package.py
--
diff --git a/llap-server/src/main/resources/package.py 
b/llap-server/src/main/resources/package.py
index 63c0ef1..94c9d1a 100644
--- a/llap-server/src/main/resources/package.py
+++ b/llap-server/src/main/resources/package.py
@@ -101,6 +101,10 @@ def main(args):
return
config = json_parse(open(join(input, "config.json")).read())
java_home = config["java.home"]
+   max_direct_memory = config["max_direct_memory"]
+   daemon_args = args.args
+   if max_direct_memory > 0:
+   daemon_args = " -XX:MaxDirectMemorySize=%s %s" % (max_direct_memory, daemon_args)
resource = LlapResource(config)
# 5% container failure every monkey_interval seconds
monkey_percentage = 5 # 5%
@@ -114,7 +118,7 @@ def main(args):
"hadoop_home" : os.getenv("

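Taken together, the two hunks above implement a simple sizing rule: LlapServiceDriver only records an explicit direct-memory cap when the configured heap could not also absorb the IO cache, and package.py then turns a positive value into an -XX:MaxDirectMemorySize argument. A sketch of the rule, assuming xmx and cache are the already-resolved size values from the options (method and class names are illustrative):

public class DirectMemorySizingSketch {
  // Returns the direct-memory cap to record, or -1 for "leave unset".
  // The 1.25 factor from the patch gives the cache some headroom.
  static long maxDirectMemory(long xmx, long cache) {
    return (xmx > 0 && cache > 0 && xmx < cache * 1.25) ? (long) (cache * 1.25) : -1;
  }

  public static void main(String[] args) {
    System.out.println(maxDirectMemory(4096, 8192));  // 10240: heap smaller than cache * 1.25
    System.out.println(maxDirectMemory(16384, 8192)); // -1: heap already covers the cache
  }
}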
hive git commit: HIVE-13395 (addendum) Lost Update problem in ACID (Eugene Koifman, reviewed by Alan Gates)

2016-05-05 Thread ekoifman
Repository: hive
Updated Branches:
  refs/heads/master 794f161c1 -> eb2c54b3f


HIVE-13395 (addendum) Lost Update problem in ACID (Eugene Koifman, reviewed by Alan Gates)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/eb2c54b3
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/eb2c54b3
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/eb2c54b3

Branch: refs/heads/master
Commit: eb2c54b3f80d958c36c22dfb0ee962806e673830
Parents: 794f161
Author: Eugene Koifman 
Authored: Thu May 5 15:29:00 2016 -0700
Committer: Eugene Koifman 
Committed: Thu May 5 15:29:00 2016 -0700

--
 .../scripts/upgrade/mysql/hive-txn-schema-1.3.0.mysql.sql | 10 ++
 1 file changed, 10 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/eb2c54b3/metastore/scripts/upgrade/mysql/hive-txn-schema-1.3.0.mysql.sql
--
diff --git a/metastore/scripts/upgrade/mysql/hive-txn-schema-1.3.0.mysql.sql b/metastore/scripts/upgrade/mysql/hive-txn-schema-1.3.0.mysql.sql
index ea42757..d873012 100644
--- a/metastore/scripts/upgrade/mysql/hive-txn-schema-1.3.0.mysql.sql
+++ b/metastore/scripts/upgrade/mysql/hive-txn-schema-1.3.0.mysql.sql
@@ -34,6 +34,7 @@ CREATE TABLE TXN_COMPONENTS (
   TC_DATABASE varchar(128) NOT NULL,
   TC_TABLE varchar(128),
   TC_PARTITION varchar(767),
+  TC_OPERATION_TYPE char(1) NOT NULL,
   FOREIGN KEY (TC_TXNID) REFERENCES TXNS (TXN_ID)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
 
@@ -120,3 +121,12 @@ CREATE TABLE AUX_TABLE (
   PRIMARY KEY(MT_KEY1, MT_KEY2)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
 
+CREATE TABLE WRITE_SET (
+  WS_DATABASE varchar(128) NOT NULL,
+  WS_TABLE varchar(128) NOT NULL,
+  WS_PARTITION varchar(767),
+  WS_TXNID bigint NOT NULL,
+  WS_COMMIT_ID bigint NOT NULL,
+  WS_OPERATION_TYPE char(1) NOT NULL
+) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+
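The new WRITE_SET table is what commit-time conflict detection reads: each committed writer records the partitions it touched, keyed by txnid and commit id. As a rough illustration of how a committing transaction could consult it (a hypothetical JDBC helper, not the actual TxnHandler query):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class WriteSetConflictSketch {
  // True if some transaction that committed after 'txnid' began has already
  // written the same database/table/partition, i.e. an overlapping writer.
  static boolean hasConflict(Connection db, long txnid,
      String database, String table, String partition) throws SQLException {
    String sql = "select 1 from WRITE_SET where WS_COMMIT_ID >= ? and "
        + "WS_DATABASE = ? and WS_TABLE = ? and WS_PARTITION = ?";
    try (PreparedStatement ps = db.prepareStatement(sql)) {
      ps.setLong(1, txnid);
      ps.setString(2, database);
      ps.setString(3, table);
      ps.setString(4, partition);
      try (ResultSet rs = ps.executeQuery()) {
        return rs.next(); // any row means committing now would lose an update
      }
    }
  }
}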



[1/2] hive git commit: HIVE-13395 Lost Update problem in ACID (Eugene Koifman, reviewed by Alan Gates)

2016-05-05 Thread ekoifman
Repository: hive
Updated Branches:
  refs/heads/branch-1 8a59b85a6 -> 7dbc53da9


http://git-wip-us.apache.org/repos/asf/hive/blob/7dbc53da/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager.java
--
diff --git a/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager.java b/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager.java
index b355dbe..3f5d0b6 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager.java
@@ -22,6 +22,7 @@ import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
 import org.apache.hadoop.hive.metastore.api.ShowLocksResponse;
 import org.apache.hadoop.hive.metastore.api.ShowLocksResponseElement;
+import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
 import org.apache.hadoop.hive.metastore.txn.TxnDbUtil;
 import org.apache.hadoop.hive.metastore.txn.TxnStore;
 import org.apache.hadoop.hive.ql.Context;
@@ -508,6 +509,12 @@ public class TestDbTxnManager {
   partCols.add(fs);
   t.setPartCols(partCols);
 }
+Map<String, String> tblProps = t.getParameters();
+if(tblProps == null) {
+  tblProps = new HashMap<>();
+}
+tblProps.put(hive_metastoreConstants.TABLE_IS_TRANSACTIONAL, "true");
+t.setParameters(tblProps);
 return t;
   }
 

http://git-wip-us.apache.org/repos/asf/hive/blob/7dbc53da/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager2.java
--
diff --git a/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager2.java b/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager2.java
index 0e2bfc0..832606b 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager2.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager2.java
@@ -17,7 +17,13 @@
  */
 package org.apache.hadoop.hive.ql.lockmgr;
 
-import junit.framework.Assert;
+import org.apache.hadoop.hive.metastore.api.AddDynamicPartitions;
+import org.apache.hadoop.hive.metastore.txn.TxnStore;
+import org.apache.hadoop.hive.metastore.txn.TxnUtils;
+import org.apache.hadoop.hive.ql.TestTxnCommands2;
+import org.apache.hadoop.hive.ql.txn.AcidWriteSetService;
+import org.junit.After;
+import org.junit.Assert;
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.metastore.api.LockState;
 import org.apache.hadoop.hive.metastore.api.LockType;
@@ -29,23 +35,32 @@ import org.apache.hadoop.hive.ql.Driver;
 import org.apache.hadoop.hive.ql.ErrorMsg;
 import org.apache.hadoop.hive.ql.processors.CommandProcessorResponse;
 import org.apache.hadoop.hive.ql.session.SessionState;
-import org.junit.After;
 import org.junit.Before;
 import org.junit.BeforeClass;
+import org.junit.Ignore;
 import org.junit.Test;
 
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
 
 /**
  * See additional tests in {@link org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager}
  * Tests here are "end-to-end"ish and simulate concurrent queries.
+ * 
+ * The general approach is to use an instance of Driver to use Driver.run() to create tables
+ * Use Driver.compile() to generate QueryPlan which can then be passed to HiveTxnManager.acquireLocks().
+ * Same HiveTxnManager is used to openTxn()/commitTxn() etc.  This can exercise almost the entire
+ * code path that CLI would but with the advantage that you can create a 2nd HiveTxnManager and then
+ * simulate interleaved transactional/locking operations but all from within a single thread.
+ * The latter not only controls concurrency precisely but is the only way to run in UT env with DerbyDB.
  */
 public class TestDbTxnManager2 {
   private static HiveConf conf = new HiveConf(Driver.class);
   private HiveTxnManager txnMgr;
   private Context ctx;
   private Driver driver;
+  TxnStore txnHandler;
 
   @BeforeClass
   public static void setUpClass() throws Exception {
@@ -61,15 +76,17 @@ public class TestDbTxnManager2 {
 driver.init();
 TxnDbUtil.cleanDb();
 TxnDbUtil.prepDb();
-txnMgr = TxnManagerFactory.getTxnManagerFactory().getTxnManager(conf);
+SessionState ss = SessionState.get();
+ss.initTxnMgr(conf);
+txnMgr = ss.getTxnMgr();
 Assert.assertTrue(txnMgr instanceof DbTxnManager);
+txnHandler = TxnUtils.getTxnStore(conf);
+
   }
   @After
   public void tearDown() throws Exception {
 driver.close();
 if (txnMgr != null) txnMgr.closeTxnManager();
-TxnDbUtil.cleanDb();
-TxnDbUtil.prepDb();
   }
   @Test
   public void testLocksInSubquery() throws Exception {
@@ -193,22 +210,24 @@ public class TestDbTxnManager2 {
 checkCmdOnDriver(cpr);
 cpr = driver.compileAndRespond("update temp.T7 set a = 5 where b = 6");
 checkCmdOnDriver(cpr);
+txnMgr.openTxn("Fifer");
 txnMgr.acquireLocks(driver.getPlan(), 

[2/2] hive git commit: HIVE-13395 Lost Update problem in ACID (Eugene Koifman, reviewed by Alan Gates)

2016-05-05 Thread ekoifman
HIVE-13395 Lost Update problem in ACID (Eugene Koifman, reviewed by Alan Gates)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/7dbc53da
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/7dbc53da
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/7dbc53da

Branch: refs/heads/branch-1
Commit: 7dbc53da98fb343fc589ff887a8dcc8893a786da
Parents: 8a59b85
Author: Eugene Koifman 
Authored: Thu May 5 15:23:03 2016 -0700
Committer: Eugene Koifman 
Committed: Thu May 5 15:23:03 2016 -0700

--
 .../org/apache/hadoop/hive/conf/HiveConf.java   |   2 +
 .../hive/metastore/TestHiveMetaStoreTxns.java   |   2 +-
 .../upgrade/derby/035-HIVE-13395.derby.sql  |  11 +
 .../derby/hive-txn-schema-1.3.0.derby.sql   |  11 +-
 .../derby/upgrade-1.2.0-to-1.3.0.derby.sql  |   2 +
 .../upgrade/mssql/020-HIVE-13395.mssql.sql  |   9 +
 .../upgrade/mssql/hive-schema-1.3.0.mssql.sql   |  12 +-
 .../mssql/upgrade-1.2.0-to-1.3.0.mssql.sql  |   1 +
 .../upgrade/mysql/035-HIVE-13395.mysql.sql  |  10 +
 .../mysql/hive-txn-schema-1.3.0.mysql.sql   |   9 +
 .../mysql/upgrade-1.2.0-to-1.3.0.mysql.sql  |   1 +
 .../upgrade/oracle/035-HIVE-13395.oracle.sql|  10 +
 .../oracle/hive-txn-schema-1.3.0.oracle.sql |  12 +-
 .../oracle/upgrade-1.2.0-to-1.3.0.oracle.sql|   1 +
 .../postgres/034-HIVE-13395.postgres.sql|  10 +
 .../postgres/hive-txn-schema-1.3.0.postgres.sql |  11 +-
 .../upgrade-1.2.0-to-1.3.0.postgres.sql |   1 +
 .../hadoop/hive/metastore/HiveMetaStore.java|   1 +
 .../hadoop/hive/metastore/txn/TxnDbUtil.java| 131 ++--
 .../hadoop/hive/metastore/txn/TxnHandler.java   | 466 +++---
 .../hadoop/hive/metastore/txn/TxnStore.java |   8 +-
 .../hadoop/hive/metastore/txn/TxnUtils.java |   2 +
 .../metastore/txn/TestCompactionTxnHandler.java |   6 +-
 .../hive/metastore/txn/TestTxnHandler.java  |  29 +-
 .../org/apache/hadoop/hive/ql/ErrorMsg.java |   2 +-
 .../hadoop/hive/ql/lockmgr/DbLockManager.java   |   5 +-
 .../hadoop/hive/ql/lockmgr/DbTxnManager.java|  27 +-
 .../hadoop/hive/ql/txn/AcidWriteSetService.java |  61 ++
 .../txn/compactor/HouseKeeperServiceBase.java   |   2 +-
 .../hadoop/hive/ql/txn/compactor/Initiator.java |   2 +-
 .../hadoop/hive/ql/txn/compactor/Worker.java|   2 +-
 .../apache/hadoop/hive/ql/TestTxnCommands2.java |   2 +-
 .../apache/hadoop/hive/ql/io/TestAcidUtils.java |  20 +
 .../hive/ql/lockmgr/TestDbTxnManager.java   |   7 +
 .../hive/ql/lockmgr/TestDbTxnManager2.java  | 610 ++-
 .../hive/ql/txn/compactor/TestCleaner.java  |   2 +
 36 files changed, 1313 insertions(+), 187 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/7dbc53da/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
--
diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index 7c93e44..1086595 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -1576,6 +1576,8 @@ public class HiveConf extends Configuration {
   new TimeValidator(TimeUnit.MILLISECONDS), "Time delay of 1st reaper run after metastore start"),
 HIVE_TIMEDOUT_TXN_REAPER_INTERVAL("hive.timedout.txn.reaper.interval", "180s",
   new TimeValidator(TimeUnit.MILLISECONDS), "Time interval describing how often the reaper runs"),
+WRITE_SET_REAPER_INTERVAL("hive.writeset.reaper.interval", "60s",
+  new TimeValidator(TimeUnit.MILLISECONDS), "Frequency of WriteSet reaper runs"),
 
 // For HBase storage handler
 HIVE_HBASE_WAL_ENABLED("hive.hbase.wal.enabled", true,

http://git-wip-us.apache.org/repos/asf/hive/blob/7dbc53da/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStoreTxns.java
--
diff --git a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStoreTxns.java b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStoreTxns.java
index 5ad5f35..d5ecf98 100644
--- a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStoreTxns.java
+++ b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStoreTxns.java
@@ -186,7 +186,7 @@ public class TestHiveMetaStoreTxns {
 .setDbName("mydb")
 .setTableName("mytable")
 .setPartitionName("mypartition")
-.setExclusive()
+.setSemiShared()
 .build())
   .addLockComponent(new LockComponentBuilder()
 .setDbName("mydb")

http://git-wip-us.apache.org/repos/asf/hive/blob/7dbc53da/metastore/scripts/upg

hive git commit: HIVE-13393: Beeline: Print help message for the --incremental option (Vaibhav Gumashta reviewed by Thejas Nair)

2016-05-05 Thread vgumashta
Repository: hive
Updated Branches:
  refs/heads/master 4eb960305 -> 794f161c1


HIVE-13393: Beeline: Print help message for the --incremental option (Vaibhav Gumashta reviewed by Thejas Nair)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/794f161c
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/794f161c
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/794f161c

Branch: refs/heads/master
Commit: 794f161c136c4d99693eb60222c0f17b10948e0d
Parents: 4eb9603
Author: Vaibhav Gumashta 
Authored: Thu May 5 15:12:38 2016 -0700
Committer: Vaibhav Gumashta 
Committed: Thu May 5 15:12:38 2016 -0700

--
 beeline/src/main/resources/BeeLine.properties | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/794f161c/beeline/src/main/resources/BeeLine.properties
--
diff --git a/beeline/src/main/resources/BeeLine.properties b/beeline/src/main/resources/BeeLine.properties
index a118c09..bc40685 100644
--- a/beeline/src/main/resources/BeeLine.properties
+++ b/beeline/src/main/resources/BeeLine.properties
@@ -171,7 +171,14 @@ cmd-usage: Usage: java org.apache.hive.cli.beeline.BeeLine \n \
 \  --silent=[true/false]   be more silent\n \
 \  --autosave=[true/false] automatically save preferences\n \
 \  --outputformat=[table/vertical/csv2/tsv2/dsv/csv/tsv]  format mode for result display\n \
-\  Note that csv, and tsv are deprecated - use csv2, tsv2 instead\n\
+\  Note that csv, and tsv are deprecated - use csv2, tsv2 instead\n \
+\  --incremental=[true/false]  Defaults to false. When set to false, the entire result set\n \
+\  is fetched and buffered before being displayed, yielding optimal\n \
+\  display column sizing. When set to true, result rows are displayed\n \
+\  immediately as they are fetched, yielding lower latency and\n \
+\  memory usage at the price of extra display column padding.\n \
+\  Setting --incremental=true is recommended if you encounter an OutOfMemory\n \
+\  on the client side (due to the fetched result set size being large).\n \
 \  --truncateTable=[true/false]truncate table column when it exceeds length\n \
 \  --delimiterForDSV=DELIMITER specify the delimiter for delimiter-separated values output format (default: |)\n \
 \  --isolation=LEVEL   set the transaction isolation level\n \

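The tradeoff the new help text describes is easy to see in miniature: buffered mode must hold every row to compute exact column widths, while incremental mode prints each row with a fixed pad as it arrives. A toy model of the two display strategies (not BeeLine's code):

import java.util.Arrays;
import java.util.List;

public class IncrementalDisplaySketch {
  // Buffered mode: read everything, size columns exactly, then print.
  static void buffered(List<String[]> rows) {
    int width = rows.stream().flatMap(Arrays::stream)
        .mapToInt(String::length).max().orElse(0);
    for (String[] r : rows) {
      for (String c : r) System.out.printf("%-" + (width + 1) + "s", c);
      System.out.println();
    }
  }

  // Incremental mode: print each row as it arrives with a fixed pad
  // (bounded memory, possibly ragged or over-padded columns).
  static void incremental(List<String[]> rows) {
    for (String[] r : rows) {
      for (String c : r) System.out.printf("%-20s", c);
      System.out.println();
    }
  }

  public static void main(String[] args) {
    List<String[]> rows = Arrays.asList(
        new String[]{"id", "name"}, new String[]{"1", "averylongvalue"});
    buffered(rows);
    incremental(rows);
  }
}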


hive git commit: HIVE-13619: Bucket map join plan is incorrect (Vikram Dixit K, reviewed by Gunther Hagleitner)

2016-05-05 Thread vikram
Repository: hive
Updated Branches:
  refs/heads/master da82819bc -> 4eb960305


HIVE-13619: Bucket map join plan is incorrect (Vikram Dixit K, reviewed by Gunther Hagleitner)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/4eb96030
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/4eb96030
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/4eb96030

Branch: refs/heads/master
Commit: 4eb960305f6cf30aa6e1011ee09388b1ab4c4fd9
Parents: da82819
Author: vikram 
Authored: Thu May 5 14:35:58 2016 -0700
Committer: vikram 
Committed: Thu May 5 14:35:58 2016 -0700

--
 ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorUtils.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/4eb96030/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorUtils.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorUtils.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorUtils.java
index 41507b1..a8ed74c 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorUtils.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorUtils.java
@@ -83,7 +83,7 @@ public class OperatorUtils {
 
   public static <T> T findSingleOperatorUpstreamJoinAccounted(Operator<?> start, Class<T> clazz) {
 Set<T> found = findOperatorsUpstreamJoinAccounted(start, clazz, new HashSet<T>());
-return found.size() == 1 ? found.iterator().next(): null;
+return found.size() >= 1 ? found.iterator().next(): null;
   }
 
   public static <T> Set<T> findOperatorsUpstream(Collection<Operator<?>> starts, Class<T> clazz) {
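The one-character change above is the whole fix: previously a strictly unique upstream match was required and multiple matches yielded null; now any match is returned. A plain-Set model of the before/after behavior (toy code, not Hive's operator graph):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class FirstMatchSketch {
  static <T> T firstOrNull(Set<T> found) {
    // After the fix: any non-empty result yields an element; only an empty
    // result yields null. With "== 1", two matches used to return null.
    return found.size() >= 1 ? found.iterator().next() : null;
  }

  public static void main(String[] args) {
    Set<String> two = new HashSet<>(Arrays.asList("RS_1", "RS_2"));
    System.out.println(firstOrNull(two));             // some element, not null
    System.out.println(firstOrNull(new HashSet<>())); // null
  }
}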



hive git commit: HIVE-13637: Fold CASE into NVL when CBO optimized the plan (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jcamacho
Repository: hive
Updated Branches:
  refs/heads/master 10d054913 -> da82819bc


HIVE-13637: Fold CASE into NVL when CBO optimized the plan (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/da82819b
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/da82819b
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/da82819b

Branch: refs/heads/master
Commit: da82819bc112589e0d96874947c942e834681ed2
Parents: 10d0549
Author: Jesus Camacho Rodriguez 
Authored: Wed May 4 01:27:30 2016 +0100
Committer: Jesus Camacho Rodriguez 
Committed: Thu May 5 22:13:10 2016 +0100

--
 .../calcite/translator/JoinTypeCheckCtx.java|  2 +-
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java  | 17 -
 .../hadoop/hive/ql/parse/TypeCheckCtx.java  | 19 +-
 .../hive/ql/parse/TypeCheckProcFactory.java | 26 
 .../queries/clientpositive/constantPropWhen.q   |  2 ++
 5 files changed, 53 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/da82819b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/JoinTypeCheckCtx.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/JoinTypeCheckCtx.java b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/JoinTypeCheckCtx.java
index dccd1d9..f166bb6 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/JoinTypeCheckCtx.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/JoinTypeCheckCtx.java
@@ -53,7 +53,7 @@ public class JoinTypeCheckCtx extends TypeCheckCtx {
 
   public JoinTypeCheckCtx(RowResolver leftRR, RowResolver rightRR, JoinType hiveJoinType)
   throws SemanticException {
-super(RowResolver.getCombinedRR(leftRR, rightRR), true, false, false, false, false, false, false,
+super(RowResolver.getCombinedRR(leftRR, rightRR), true, false, false, false, false, false, false, false,
 false, false);
 this.inputRRLst = ImmutableList.of(leftRR, rightRR);
 this.outerJoin = (hiveJoinType == JoinType.LEFTOUTER) || (hiveJoinType == JoinType.RIGHTOUTER)

http://git-wip-us.apache.org/repos/asf/hive/blob/da82819b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
index 2983d38..f79a525 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
@@ -3143,8 +3143,8 @@ public class SemanticAnalyzer extends BaseSemanticAnalyzer {
 OpParseContext inputCtx = opParseCtx.get(input);
 RowResolver inputRR = inputCtx.getRowResolver();
 Operator output = putOpInsertMap(OperatorFactory.getAndMakeChild(
-new FilterDesc(genExprNodeDesc(condn, inputRR, useCaching), false), new RowSchema(
-inputRR.getColumnInfos()), input), inputRR);
+new FilterDesc(genExprNodeDesc(condn, inputRR, useCaching, isCBOExecuted()), false),
+new RowSchema(inputRR.getColumnInfos()), input), inputRR);
 
 if (LOG.isDebugEnabled()) {
   LOG.debug("Created Filter Plan for " + qb.getId() + " row schema: "
@@ -4146,7 +4146,7 @@ public class SemanticAnalyzer extends BaseSemanticAnalyzer {
  expr, col_list, null, inputRR, starRR, pos, out_rwsch, qb.getAliases(), false);
   } else {
 // Case when this is an expression
-TypeCheckCtx tcCtx = new TypeCheckCtx(inputRR);
+TypeCheckCtx tcCtx = new TypeCheckCtx(inputRR, true, isCBOExecuted());
 // We allow stateful functions in the SELECT list (but nowhere else)
 tcCtx.setAllowStatefulFunctions(true);
 tcCtx.setAllowDistinctFunctions(false);
@@ -,7 +,7 @@ public class SemanticAnalyzer extends BaseSemanticAnalyzer {
   List expressions = joinTree.getExpressions().get(i);
   joinKeys[i] = new ExprNodeDesc[expressions.size()];
   for (int j = 0; j < joinKeys[i].length; j++) {
-joinKeys[i][j] = genExprNodeDesc(expressions.get(j), inputRR);
+joinKeys[i][j] = genExprNodeDesc(expressions.get(j), inputRR, true, isCBOExecuted());
   }
 }
 // Type checking and implicit type conversion for join keys
@@ -10999,12 +10999,17 @@ public class SemanticAnalyzer extends BaseSemanticAnalyzer {
   throws SemanticException {
 // Since the user didn't supply a customized type-checking context,
 // use default settings.
-return genExprNodeDesc(expr, input, true);
+return g

[3/3] hive git commit: HIVE-13395 Lost Update problem in ACID (Eugene Koifman, reviewed by Alan Gates)

2016-05-05 Thread ekoifman
HIVE-13395 Lost Update problem in ACID (Eugene Koifman, reviewed by Alan Gates)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/10d05491
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/10d05491
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/10d05491

Branch: refs/heads/master
Commit: 10d05491379bb6f8e607a030811e8d4e530604de
Parents: 0927187
Author: Eugene Koifman 
Authored: Thu May 5 12:45:44 2016 -0700
Committer: Eugene Koifman 
Committed: Thu May 5 12:45:44 2016 -0700

--
 .../org/apache/hadoop/hive/conf/HiveConf.java   |   2 +
 .../hive/metastore/TestHiveMetaStoreTxns.java   |   2 +-
 .../upgrade/derby/035-HIVE-13395.derby.sql  |  11 +
 .../upgrade/derby/hive-schema-2.1.0.derby.sql   |   2 +-
 .../derby/hive-txn-schema-1.3.0.derby.sql   |  11 +-
 .../derby/hive-txn-schema-2.1.0.derby.sql   | 130 
 .../derby/upgrade-1.2.0-to-1.3.0.derby.sql  |   1 +
 .../derby/upgrade-2.0.0-to-2.1.0.derby.sql  |   1 +
 .../upgrade/mssql/020-HIVE-13395.mssql.sql  |   9 +
 .../upgrade/mssql/hive-schema-1.3.0.mssql.sql   |  12 +-
 .../upgrade/mssql/hive-schema-2.1.0.mssql.sql   |  12 +-
 .../mssql/upgrade-1.2.0-to-1.3.0.mssql.sql  |   1 +
 .../mssql/upgrade-2.0.0-to-2.1.0.mssql.sql  |   1 +
 .../upgrade/mysql/035-HIVE-13395.mysql.sql  |  10 +
 .../upgrade/mysql/hive-schema-2.1.0.mysql.sql   |   2 +-
 .../mysql/hive-txn-schema-2.1.0.mysql.sql   | 131 
 .../mysql/upgrade-1.2.0-to-1.3.0.mysql.sql  |   1 +
 .../mysql/upgrade-2.0.0-to-2.1.0.mysql.sql  |   1 +
 .../upgrade/oracle/035-HIVE-13395.oracle.sql|  10 +
 .../upgrade/oracle/hive-schema-2.1.0.oracle.sql |   2 +-
 .../oracle/hive-txn-schema-1.3.0.oracle.sql |  12 +-
 .../oracle/hive-txn-schema-2.1.0.oracle.sql | 129 
 .../oracle/upgrade-1.2.0-to-1.3.0.oracle.sql|   1 +
 .../oracle/upgrade-2.0.0-to-2.1.0.oracle.sql|   1 +
 .../postgres/034-HIVE-13395.postgres.sql|  10 +
 .../postgres/hive-schema-2.1.0.postgres.sql |   2 +-
 .../postgres/hive-txn-schema-1.3.0.postgres.sql |  11 +-
 .../postgres/hive-txn-schema-2.1.0.postgres.sql | 129 
 .../upgrade-1.2.0-to-1.3.0.postgres.sql |   1 +
 .../upgrade-2.0.0-to-2.1.0.postgres.sql |   1 +
 .../hadoop/hive/metastore/HiveMetaStore.java|   1 +
 .../hadoop/hive/metastore/txn/TxnDbUtil.java| 130 ++--
 .../hadoop/hive/metastore/txn/TxnHandler.java   | 466 +++---
 .../hadoop/hive/metastore/txn/TxnStore.java |   8 +-
 .../hadoop/hive/metastore/txn/TxnUtils.java |   2 +
 .../metastore/txn/TestCompactionTxnHandler.java |   6 +-
 .../hive/metastore/txn/TestTxnHandler.java  |  29 +-
 .../org/apache/hadoop/hive/ql/ErrorMsg.java |   2 +-
 .../hadoop/hive/ql/lockmgr/DbLockManager.java   |   5 +-
 .../hadoop/hive/ql/lockmgr/DbTxnManager.java|  27 +-
 .../hadoop/hive/ql/txn/AcidWriteSetService.java |  61 ++
 .../txn/compactor/HouseKeeperServiceBase.java   |   2 +-
 .../hadoop/hive/ql/txn/compactor/Initiator.java |   2 +-
 .../hadoop/hive/ql/txn/compactor/Worker.java|   2 +-
 .../apache/hadoop/hive/ql/TestTxnCommands2.java |   2 +-
 .../apache/hadoop/hive/ql/io/TestAcidUtils.java |  20 +
 .../hive/ql/lockmgr/TestDbTxnManager.java   |   7 +
 .../hive/ql/lockmgr/TestDbTxnManager2.java  | 610 ++-
 .../hive/ql/txn/compactor/TestCleaner.java  |   4 +
 49 files changed, 1843 insertions(+), 192 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/10d05491/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
--
diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index 06a6906..bb74d99 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -1769,6 +1769,8 @@ public class HiveConf extends Configuration {
   new TimeValidator(TimeUnit.MILLISECONDS), "Time delay of 1st reaper run after metastore start"),
 HIVE_TIMEDOUT_TXN_REAPER_INTERVAL("hive.timedout.txn.reaper.interval", "180s",
   new TimeValidator(TimeUnit.MILLISECONDS), "Time interval describing how often the reaper runs"),
+WRITE_SET_REAPER_INTERVAL("hive.writeset.reaper.interval", "60s",
+  new TimeValidator(TimeUnit.MILLISECONDS), "Frequency of WriteSet reaper runs"),
 
 // For HBase storage handler
 HIVE_HBASE_WAL_ENABLED("hive.hbase.wal.enabled", true,

http://git-wip-us.apache.org/repos/asf/hive/blob/10d05491/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStoreTxns.java
--
diff --git a/itests/hive-unit/src/test/java/org/apache/hadoo

[1/3] hive git commit: HIVE-13395 Lost Update problem in ACID (Eugene Koifman, reviewed by Alan Gates)

2016-05-05 Thread ekoifman
Repository: hive
Updated Branches:
  refs/heads/master 092718720 -> 10d054913


http://git-wip-us.apache.org/repos/asf/hive/blob/10d05491/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager2.java
--
diff --git a/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager2.java b/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager2.java
index e94af55..c956d78 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager2.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDbTxnManager2.java
@@ -17,7 +17,13 @@
  */
 package org.apache.hadoop.hive.ql.lockmgr;
 
-import junit.framework.Assert;
+import org.apache.hadoop.hive.metastore.api.AddDynamicPartitions;
+import org.apache.hadoop.hive.metastore.txn.TxnStore;
+import org.apache.hadoop.hive.metastore.txn.TxnUtils;
+import org.apache.hadoop.hive.ql.TestTxnCommands2;
+import org.apache.hadoop.hive.ql.txn.AcidWriteSetService;
+import org.junit.After;
+import org.junit.Assert;
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.metastore.api.LockState;
 import org.apache.hadoop.hive.metastore.api.LockType;
@@ -29,23 +35,32 @@ import org.apache.hadoop.hive.ql.Driver;
 import org.apache.hadoop.hive.ql.ErrorMsg;
 import org.apache.hadoop.hive.ql.processors.CommandProcessorResponse;
 import org.apache.hadoop.hive.ql.session.SessionState;
-import org.junit.After;
 import org.junit.Before;
 import org.junit.BeforeClass;
+import org.junit.Ignore;
 import org.junit.Test;
 
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
 
 /**
  * See additional tests in {@link org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager}
  * Tests here are "end-to-end"ish and simulate concurrent queries.
+ * 
+ * The general approach is to use an instance of Driver to use Driver.run() to create tables
+ * Use Driver.compile() to generate QueryPlan which can then be passed to HiveTxnManager.acquireLocks().
+ * Same HiveTxnManager is used to openTxn()/commitTxn() etc.  This can exercise almost the entire
+ * code path that CLI would but with the advantage that you can create a 2nd HiveTxnManager and then
+ * simulate interleaved transactional/locking operations but all from within a single thread.
+ * The latter not only controls concurrency precisely but is the only way to run in UT env with DerbyDB.
  */
 public class TestDbTxnManager2 {
   private static HiveConf conf = new HiveConf(Driver.class);
   private HiveTxnManager txnMgr;
   private Context ctx;
   private Driver driver;
+  TxnStore txnHandler;
 
   @BeforeClass
   public static void setUpClass() throws Exception {
@@ -60,15 +75,17 @@ public class TestDbTxnManager2 {
 driver.init();
 TxnDbUtil.cleanDb();
 TxnDbUtil.prepDb();
-txnMgr = TxnManagerFactory.getTxnManagerFactory().getTxnManager(conf);
+SessionState ss = SessionState.get();
+ss.initTxnMgr(conf);
+txnMgr = ss.getTxnMgr();
 Assert.assertTrue(txnMgr instanceof DbTxnManager);
+txnHandler = TxnUtils.getTxnStore(conf);
+
   }
   @After
   public void tearDown() throws Exception {
 driver.close();
 if (txnMgr != null) txnMgr.closeTxnManager();
-TxnDbUtil.cleanDb();
-TxnDbUtil.prepDb();
   }
   @Test
   public void testLocksInSubquery() throws Exception {
@@ -192,22 +209,24 @@ public class TestDbTxnManager2 {
 checkCmdOnDriver(cpr);
 cpr = driver.compileAndRespond("update temp.T7 set a = 5 where b = 6");
 checkCmdOnDriver(cpr);
+txnMgr.openTxn("Fifer");
 txnMgr.acquireLocks(driver.getPlan(), ctx, "Fifer");
-List updateLocks = ctx.getHiveLocks();
-cpr = driver.compileAndRespond("drop database if exists temp");
-LockState lockState = ((DbTxnManager) txnMgr).acquireLocks(driver.getPlan(), ctx, "Fiddler", false);//gets SS lock on T7
+checkCmdOnDriver(driver.compileAndRespond("drop database if exists temp"));
+HiveTxnManager txnMgr2 = TxnManagerFactory.getTxnManagerFactory().getTxnManager(conf);
+//txnMgr2.openTxn("Fiddler");
+((DbTxnManager)txnMgr2).acquireLocks(driver.getPlan(), ctx, "Fiddler", false);//gets SS lock on T7
 List locks = getLocks();
 Assert.assertEquals("Unexpected lock count", 2, locks.size());
 checkLock(LockType.SHARED_WRITE, LockState.ACQUIRED, "temp", "T7", null, locks.get(0));
 checkLock(LockType.EXCLUSIVE, LockState.WAITING, "temp", null, null, locks.get(1));
-txnMgr.getLockManager().releaseLocks(updateLocks);
-lockState = ((DbLockManager)txnMgr.getLockManager()).checkLock(locks.get(1).getLockid());
+txnMgr.commitTxn();
+((DbLockManager)txnMgr.getLockManager()).checkLock(locks.get(1).getLockid());
 locks = getLocks();
 Assert.assertEquals("Unexpected lock count", 1, locks.size());
 checkLock(LockType.EXCLUSIVE, LockState.ACQUIRED, "temp", null, null, locks.get(0));
 List xLock = new ArrayList(0);
 xLock.add(new DbLockManager

[2/3] hive git commit: HIVE-13395 Lost Update problem in ACID (Eugene Koifman, reviewed by Alan Gates)

2016-05-05 Thread ekoifman
http://git-wip-us.apache.org/repos/asf/hive/blob/10d05491/metastore/src/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
--
diff --git a/metastore/src/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java b/metastore/src/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
index c0fa97a..06cd4aa 100644
--- a/metastore/src/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
+++ b/metastore/src/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
@@ -72,7 +72,7 @@ import java.util.regex.Pattern;
  * used to properly sequence operations.  Most notably:
  * 1. various sequence IDs are generated with aid of this mutex
  * 2. ensuring that each (Hive) Transaction state is transitioned atomically.  Transaction state
- *  includes it's actual state (Open, Aborted) as well as it's lock list/component list.  Thus all
+ *  includes its actual state (Open, Aborted) as well as its lock list/component list.  Thus all
  *  per transaction ops, either start by update/delete of the relevant TXNS row or do S4U on that row.
  *  This allows almost all operations to run at READ_COMMITTED and minimizes DB deadlocks.
  * 3. checkLock() - this is mutexed entirely since we must ensure that while we check if some lock
@@ -126,6 +126,41 @@ abstract class TxnHandler implements TxnStore, TxnStore.MutexAPI {
 
   static private DataSource connPool;
   static private boolean doRetryOnConnPool = false;
+  
+  private enum OpertaionType {
+INSERT('i'), UPDATE('u'), DELETE('d');
+private final char sqlConst;
+OpertaionType(char sqlConst) {
+  this.sqlConst = sqlConst;
+}
+public String toString() {
+  return Character.toString(sqlConst);
+}
+public static OpertaionType fromString(char sqlConst) {
+  switch (sqlConst) {
+case 'i':
+  return INSERT;
+case 'u':
+  return UPDATE;
+case 'd':
+  return DELETE;
+default:
+  throw new IllegalArgumentException(quoteChar(sqlConst));
+  }
+}
+//we should instead just pass in OpertaionType from client (HIVE-13622)
+@Deprecated
+public static OpertaionType fromLockType(LockType lockType) {
+  switch (lockType) {
+case SHARED_READ:
+  return INSERT;
+case SHARED_WRITE:
+  return UPDATE;
+default:
+  throw new IllegalArgumentException("Unexpected lock type: " + lockType);
+  }
+}
+  }
 
   /**
* Number of consecutive deadlocks we have seen
@@ -454,6 +489,31 @@ abstract class TxnHandler implements TxnStore, TxnStore.MutexAPI {
 }
   }
 
+  /**
+   * Concurrency/isolation notes:
+   * This is mutexed with {@link #openTxns(OpenTxnRequest)} and other {@link #commitTxn(CommitTxnRequest)}
+   * operations using select4update on NEXT_TXN_ID.  Also, mutexes on TXNS table for specific txnid:X
+   * see more notes below.
+   * In order to prevent lost updates, we need to determine if any 2 transactions overlap.  Each txn
+   * is viewed as an interval [M,N]. M is the txnid and N is taken from the same NEXT_TXN_ID sequence
+   * so that we can compare commit time of txn T with start time of txn S.  This sequence can be thought of
+   * as a logical time counter.  If S.commitTime < T.startTime, T and S do NOT overlap.
+   *
+   * Motivating example:
+   * Suppose we have multi-statement transactions T and S both of which are attempting x = x + 1
+   * In order to prevent lost update problem, the non-overlapping txns must lock in the snapshot
+   * that they read appropriately.  In particular, if txns do not overlap, then one follows the other
+   * (assuming they write the same entity), and thus the 2nd must see changes of the 1st.  We ensure
+   * this by locking in snapshot after
+   * {@link #openTxns(OpenTxnRequest)} call is made (see {@link org.apache.hadoop.hive.ql.Driver#acquireLocksAndOpenTxn()})
+   * and mutexing openTxn() with commit().  In other words, once a S.commit() starts we must ensure
+   * that txn T which will be considered a later txn, locks in a snapshot that includes the result
+   * of S's commit (assuming no other txns).
+   * As a counter example, suppose we have S[3,3] and T[4,4] (commitId=txnid means no other transactions
+   * were running in parallel).  If T and S both locked in the same snapshot (for example commit of
+   * txnid:2, which is possible if commitTxn() and openTxn() is not mutexed)
+   * 'x' would be updated to the same value by both, i.e. lost update.
+   */
   public void commitTxn(CommitTxnRequest rqst)
 throws NoSuchTxnException, TxnAbortedException,  MetaException {
 long txnid = rqst.getTxnid();
@@ -461,40 +521,116 @@ abstract class TxnHandler implements TxnStore, TxnStore.MutexAPI {
   Connection dbConn = null;
   Statement stmt = null;
   ResultSet lockHandle = null;
+  ResultSet commitIdRs = null, rs;
   try {
   

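A small standalone model of the interval rule described in the javadoc above: a transaction is the interval [txnid, commitId] on one logical counter, and only overlapping intervals need their WRITE_SET entries compared (illustrative classes, not TxnHandler code):

public class TxnOverlapSketch {
  static final class Txn {
    final long txnid;    // start "time" (M)
    final long commitId; // commit "time" (N), from the same NEXT_TXN_ID sequence
    Txn(long txnid, long commitId) { this.txnid = txnid; this.commitId = commitId; }
  }

  // If S committed before T started (S.commitId < T.txnid) the two cannot
  // conflict: T's snapshot already includes S's writes. Otherwise they
  // overlap and a write-set comparison is required.
  static boolean overlaps(Txn s, Txn t) {
    return s.commitId >= t.txnid && t.commitId >= s.txnid;
  }

  public static void main(String[] args) {
    System.out.println(overlaps(new Txn(3, 5), new Txn(4, 6))); // true: must check write sets
    System.out.println(overlaps(new Txn(3, 4), new Txn(5, 6))); // false: T started after S committed
  }
}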
[06/20] hive git commit: HIVE-13351: Support drop Primary Key/Foreign Key constraints (Hari Subramaniyan, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jdere
http://git-wip-us.apache.org/repos/asf/hive/blob/212077b8/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
--
diff --git a/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp b/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
index 8da883d..36a0f96 100644
--- a/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
+++ b/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
@@ -8999,6 +8999,138 @@ void ForeignKeysResponse::printTo(std::ostream& out) const {
 }
 
 
+DropConstraintRequest::~DropConstraintRequest() throw() {
+}
+
+
+void DropConstraintRequest::__set_dbname(const std::string& val) {
+  this->dbname = val;
+}
+
+void DropConstraintRequest::__set_tablename(const std::string& val) {
+  this->tablename = val;
+}
+
+void DropConstraintRequest::__set_constraintname(const std::string& val) {
+  this->constraintname = val;
+}
+
+uint32_t DropConstraintRequest::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+  bool isset_dbname = false;
+  bool isset_tablename = false;
+  bool isset_constraintname = false;
+
+  while (true)
+  {
+xfer += iprot->readFieldBegin(fname, ftype, fid);
+if (ftype == ::apache::thrift::protocol::T_STOP) {
+  break;
+}
+switch (fid)
+{
+  case 1:
+if (ftype == ::apache::thrift::protocol::T_STRING) {
+  xfer += iprot->readString(this->dbname);
+  isset_dbname = true;
+} else {
+  xfer += iprot->skip(ftype);
+}
+break;
+  case 2:
+if (ftype == ::apache::thrift::protocol::T_STRING) {
+  xfer += iprot->readString(this->tablename);
+  isset_tablename = true;
+} else {
+  xfer += iprot->skip(ftype);
+}
+break;
+  case 3:
+if (ftype == ::apache::thrift::protocol::T_STRING) {
+  xfer += iprot->readString(this->constraintname);
+  isset_constraintname = true;
+} else {
+  xfer += iprot->skip(ftype);
+}
+break;
+  default:
+xfer += iprot->skip(ftype);
+break;
+}
+xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  if (!isset_dbname)
+throw TProtocolException(TProtocolException::INVALID_DATA);
+  if (!isset_tablename)
+throw TProtocolException(TProtocolException::INVALID_DATA);
+  if (!isset_constraintname)
+throw TProtocolException(TProtocolException::INVALID_DATA);
+  return xfer;
+}
+
+uint32_t DropConstraintRequest::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("DropConstraintRequest");
+
+  xfer += oprot->writeFieldBegin("dbname", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeString(this->dbname);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("tablename", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->tablename);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("constraintname", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString(this->constraintname);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+void swap(DropConstraintRequest &a, DropConstraintRequest &b) {
+  using ::std::swap;
+  swap(a.dbname, b.dbname);
+  swap(a.tablename, b.tablename);
+  swap(a.constraintname, b.constraintname);
+}
+
+DropConstraintRequest::DropConstraintRequest(const DropConstraintRequest& other377) {
+  dbname = other377.dbname;
+  tablename = other377.tablename;
+  constraintname = other377.constraintname;
+}
+DropConstraintRequest& DropConstraintRequest::operator=(const DropConstraintRequest& other378) {
+  dbname = other378.dbname;
+  tablename = other378.tablename;
+  constraintname = other378.constraintname;
+  return *this;
+}
+void DropConstraintRequest::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "DropConstraintRequest(";
+  out << "dbname=" << to_string(dbname);
+  out << ", " << "tablename=" << to_string(tablename);
+  out << ", " << "constraintname=" << to_string(constraintname);
+  out << ")";
+}
+
+
 PartitionsByExprResult::~PartitionsByExprResult() throw() {
 }
 
@@ -9038,14 +9170,14 @@ uint32_t PartitionsByExprResult::read(::apache::thrift::protocol::TProtocol* ipr
   {
 this->partitions.clear();
-uint32_t _size377;
-::apache::thrift::protocol::TType _etype380;
-

[13/20] hive git commit: HIVE-13671: Add PerfLogger to log4j2.properties logger (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2016-05-05 Thread jdere
HIVE-13671: Add PerfLogger to log4j2.properties logger (Prasanth Jayachandran reviewed by Sergey Shelukhin)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/a88050bd
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/a88050bd
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/a88050bd

Branch: refs/heads/llap
Commit: a88050bd9ae1f2cfec87a54e773a83cdb3de325f
Parents: f68b5db
Author: Prasanth Jayachandran 
Authored: Wed May 4 21:30:45 2016 -0500
Committer: Prasanth Jayachandran 
Committed: Wed May 4 21:30:45 2016 -0500

--
 common/src/main/resources/hive-log4j2.properties | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/a88050bd/common/src/main/resources/hive-log4j2.properties
--
diff --git a/common/src/main/resources/hive-log4j2.properties b/common/src/main/resources/hive-log4j2.properties
index 12cd9ac..cf0369a 100644
--- a/common/src/main/resources/hive-log4j2.properties
+++ b/common/src/main/resources/hive-log4j2.properties
@@ -23,6 +23,7 @@ property.hive.log.level = INFO
 property.hive.root.logger = DRFA
 property.hive.log.dir = ${sys:java.io.tmpdir}/${sys:user.name}
 property.hive.log.file = hive.log
+property.hive.perflogger.log.level = INFO
 
 # list of all appenders
 appenders = console, DRFA
@@ -50,7 +51,7 @@ appender.DRFA.strategy.type = DefaultRolloverStrategy
 appender.DRFA.strategy.max = 30
 
 # list of all loggers
-loggers = NIOServerCnxn, ClientCnxnSocketNIO, DataNucleus, Datastore, JPOX
+loggers = NIOServerCnxn, ClientCnxnSocketNIO, DataNucleus, Datastore, JPOX, PerfLogger
 
 logger.NIOServerCnxn.name = org.apache.zookeeper.server.NIOServerCnxn
 logger.NIOServerCnxn.level = WARN
@@ -67,6 +68,9 @@ logger.Datastore.level = ERROR
 logger.JPOX.name = JPOX
 logger.JPOX.level = ERROR
 
+logger.PerfLogger.name = org.apache.hadoop.hive.ql.log.PerfLogger
+logger.PerfLogger.level = ${sys:hive.perflogger.log.level}
+
 # root logger
 rootLogger.level = ${sys:hive.log.level}
 rootLogger.appenderRefs = root
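With this logger defined, PerfLogger output can be tuned via the hive.perflogger.log.level system property without touching the root level. A sketch of how timings reach that logger category, assuming the PerfLogger API of this era (SessionState.getPerfLogger() with PerfLogBegin/PerfLogEnd pairs):

import org.apache.hadoop.hive.ql.log.PerfLogger;
import org.apache.hadoop.hive.ql.session.SessionState;

public class PerfLoggerSketch {
  private static final String CLASS_NAME = PerfLoggerSketch.class.getName();

  public static void main(String[] args) {
    // Begin/end pairs are logged via the org.apache.hadoop.hive.ql.log.PerfLogger
    // category, which the new logger.PerfLogger.level property now controls.
    PerfLogger perfLogger = SessionState.getPerfLogger();
    perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.COMPILE);
    perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.COMPILE);
  }
}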



[20/20] hive git commit: Merge branch 'master' into llap

2016-05-05 Thread jdere
Merge branch 'master' into llap


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/763e6969
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/763e6969
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/763e6969

Branch: refs/heads/llap
Commit: 763e6969d0e78806db0fc875830395c783f18b0c
Parents: 03ee048 0927187
Author: Jason Dere 
Authored: Thu May 5 13:03:53 2016 -0700
Committer: Jason Dere 
Committed: Thu May 5 13:03:53 2016 -0700

--
 .../src/main/resources/hive-log4j2.properties   |6 +-
 .../antlr4/org/apache/hive/hplsql/Hplsql.g4 |  108 +-
 .../main/java/org/apache/hive/hplsql/Exec.java  |   67 +-
 .../java/org/apache/hive/hplsql/Expression.java |   31 +-
 .../java/org/apache/hive/hplsql/Select.java |   31 +-
 .../java/org/apache/hive/hplsql/Signal.java |2 +-
 .../main/java/org/apache/hive/hplsql/Stmt.java  |  154 +-
 hplsql/src/main/resources/hplsql-site.xml   |2 -
 .../org/apache/hive/hplsql/TestHplsqlLocal.java |5 +
 .../apache/hive/hplsql/TestHplsqlOffline.java   |   20 +
 hplsql/src/test/queries/local/if3_bteq.sql  |3 +
 .../test/queries/offline/create_table_td.sql|   45 +
 hplsql/src/test/queries/offline/delete_all.sql  |1 +
 hplsql/src/test/queries/offline/select.sql  |   42 +
 .../test/queries/offline/select_teradata.sql|   12 +
 hplsql/src/test/results/db/select_into.out.txt  |3 +-
 hplsql/src/test/results/db/select_into2.out.txt |4 +-
 hplsql/src/test/results/local/if3_bteq.out.txt  |3 +
 hplsql/src/test/results/local/lang.out.txt  |   10 +-
 .../results/offline/create_table_mssql.out.txt  |   39 +-
 .../results/offline/create_table_mssql2.out.txt |   13 +-
 .../results/offline/create_table_mysql.out.txt  |5 +-
 .../results/offline/create_table_ora.out.txt|   65 +-
 .../results/offline/create_table_ora2.out.txt   |9 +-
 .../results/offline/create_table_pg.out.txt |7 +-
 .../results/offline/create_table_td.out.txt |   31 +
 .../src/test/results/offline/delete_all.out.txt |2 +
 hplsql/src/test/results/offline/select.out.txt  |   34 +
 .../src/test/results/offline/select_db2.out.txt |3 +-
 .../results/offline/select_teradata.out.txt |   10 +
 .../hadoop/hive/llap/cache/BuddyAllocator.java  |   43 +-
 .../hive/llap/daemon/impl/LlapDaemon.java   |5 +-
 metastore/if/hive_metastore.thrift  |8 +
 .../gen/thrift/gen-cpp/ThriftHiveMetastore.cpp  | 2431 ++
 .../gen/thrift/gen-cpp/ThriftHiveMetastore.h|  133 +
 .../ThriftHiveMetastore_server.skeleton.cpp |5 +
 .../gen/thrift/gen-cpp/hive_metastore_types.cpp | 2180 
 .../gen/thrift/gen-cpp/hive_metastore_types.h   |   52 +
 .../metastore/api/DropConstraintRequest.java|  591 +
 .../hive/metastore/api/ThriftHiveMetastore.java | 1966 ++
 .../gen-php/metastore/ThriftHiveMetastore.php   |  242 ++
 .../src/gen/thrift/gen-php/metastore/Types.php  |  121 +
 .../hive_metastore/ThriftHiveMetastore-remote   |7 +
 .../hive_metastore/ThriftHiveMetastore.py   |  212 ++
 .../gen/thrift/gen-py/hive_metastore/ttypes.py  |   97 +
 .../gen/thrift/gen-rb/hive_metastore_types.rb   |   23 +
 .../gen/thrift/gen-rb/thrift_hive_metastore.rb  |   63 +
 .../hadoop/hive/metastore/HiveMetaStore.java|   29 +
 .../hive/metastore/HiveMetaStoreClient.java |6 +
 .../hadoop/hive/metastore/IMetaStoreClient.java |3 +
 .../hadoop/hive/metastore/ObjectStore.java  |   46 +-
 .../apache/hadoop/hive/metastore/RawStore.java  |2 +
 .../hive/metastore/RetryingMetaStoreClient.java |   17 +-
 .../hadoop/hive/metastore/hbase/HBaseStore.java |6 +
 .../DummyRawStoreControlledCommit.java  |6 +
 .../DummyRawStoreForJdoConnection.java  |6 +
 .../org/apache/hadoop/hive/ql/exec/DDLTask.java |   21 +-
 .../persistence/HybridHashTableContainer.java   |   60 +-
 .../ql/exec/persistence/KeyValueContainer.java  |4 +
 .../ql/exec/vector/VectorizationContext.java|7 +
 .../hadoop/hive/ql/hooks/WriteEntity.java   |3 +-
 .../serde/AbstractParquetMapInspector.java  |4 +-
 .../serde/ParquetHiveArrayInspector.java|4 +-
 .../ql/io/parquet/write/DataWritableWriter.java |   67 +-
 .../apache/hadoop/hive/ql/metadata/Hive.java|   12 +-
 .../rules/HiveReduceExpressionsRule.java|  125 +
 .../rules/HiveSortLimitPullUpConstantsRule.java |  157 ++
 .../rules/HiveUnionPullUpConstantsRule.java |  133 +
 .../hadoop/hive/ql/parse/CalcitePlanner.java|5 +
 .../hive/ql/parse/DDLSemanticAnalyzer.java  |   13 +-
 .../apache/hadoop/hive/ql/parse/HiveParser.g|9 +
 .../hive/ql/parse/SemanticAnalyzerFactory.java  |2 +
 .../hadoop/hive/ql/plan/AlterTableDesc.java |   25 +-
 .../hadoop/hive/ql/plan/HiveOperation.java  |2 +
 .../ql/io/parquet/TestDataWritableWriter.

[12/20] hive git commit: HIVE-13592 : metastore calls map is not thread safe (Sergey Shelukhin, reviewed by Aihua Xu)

2016-05-05 Thread jdere
HIVE-13592 : metastore calls map is not thread safe (Sergey Shelukhin, reviewed by Aihua Xu)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/f68b5dbb
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/f68b5dbb
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/f68b5dbb

Branch: refs/heads/llap
Commit: f68b5dbb59a9e837209e64aefe5aa994476c0bdc
Parents: e68783c
Author: Sergey Shelukhin 
Authored: Wed May 4 17:05:20 2016 -0700
Committer: Sergey Shelukhin 
Committed: Wed May 4 17:05:39 2016 -0700

--
 .../hive/metastore/RetryingMetaStoreClient.java| 17 +
 .../org/apache/hadoop/hive/ql/metadata/Hive.java   |  3 ++-
 2 files changed, 11 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/f68b5dbb/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java
--
diff --git 
a/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java
 
b/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java
index f672adf..3c125e0 100644
--- 
a/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java
+++ 
b/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java
@@ -25,6 +25,7 @@ import java.lang.reflect.Method;
 import java.lang.reflect.Proxy;
 import java.lang.reflect.UndeclaredThrowableException;
 import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.TimeUnit;
 
 import org.slf4j.Logger;
@@ -55,14 +56,14 @@ public class RetryingMetaStoreClient implements 
InvocationHandler {
   private final IMetaStoreClient base;
   private final int retryLimit;
   private final long retryDelaySeconds;
-  private final Map<String, Long> metaCallTimeMap;
+  private final ConcurrentHashMap<String, Long> metaCallTimeMap;
   private final long connectionLifeTimeInMillis;
   private long lastConnectionTime;
   private boolean localMetaStore;
 
 
  protected RetryingMetaStoreClient(HiveConf hiveConf, Class<?>[] constructorArgTypes,
-      Object[] constructorArgs, Map<String, Long> metaCallTimeMap,
+      Object[] constructorArgs, ConcurrentHashMap<String, Long> metaCallTimeMap,
      Class<? extends IMetaStoreClient> msClientClass) throws MetaException {
 
 this.retryLimit = 
hiveConf.getIntVar(HiveConf.ConfVars.METASTORETHRIFTFAILURERETRIES);
@@ -94,7 +95,7 @@ public class RetryingMetaStoreClient implements 
InvocationHandler {
   }
 
   public static IMetaStoreClient getProxy(HiveConf hiveConf, 
HiveMetaHookLoader hookLoader,
-      Map<String, Long> metaCallTimeMap, String mscClassName, boolean allowEmbedded)
+      ConcurrentHashMap<String, Long> metaCallTimeMap, String mscClassName, boolean allowEmbedded)
   throws MetaException {
 
 return getProxy(hiveConf,
@@ -119,7 +120,7 @@ public class RetryingMetaStoreClient implements 
InvocationHandler {
* Please use getProxy(HiveConf hiveConf, HiveMetaHookLoader hookLoader) for 
external purpose.
*/
  public static IMetaStoreClient getProxy(HiveConf hiveConf, Class<?>[] constructorArgTypes,
-      Object[] constructorArgs, Map<String, Long> metaCallTimeMap,
+      Object[] constructorArgs, ConcurrentHashMap<String, Long> metaCallTimeMap,
   String mscClassName) throws MetaException {
 
 @SuppressWarnings("unchecked")
@@ -202,11 +203,11 @@ public class RetryingMetaStoreClient implements 
InvocationHandler {
 
   private void addMethodTime(Method method, long timeTaken) {
 String methodStr = getMethodString(method);
-Long curTime = metaCallTimeMap.get(methodStr);
-if (curTime != null) {
-  timeTaken += curTime;
+while (true) {
+  Long curTime = metaCallTimeMap.get(methodStr), newTime = timeTaken;
+  if (curTime != null && metaCallTimeMap.replace(methodStr, curTime, 
newTime + curTime)) break;
+  if (curTime == null && (null == metaCallTimeMap.putIfAbsent(methodStr, 
newTime))) break;
 }
-metaCallTimeMap.put(methodStr, timeTaken);
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hive/blob/f68b5dbb/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 
b/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
index 6862f70..f4a9772 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
@@ -48,6 +48,7 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.ConcurrentHashMap;
 
 import com.google.common.collect.ImmutableMap;
 
@@ -162,7 +163,7 @@ public class Hive {
   private UserGroupInformation owner;
 
   // m
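
The addMethodTime() change above is the heart of the fix: the old get()-then-put() pair could lose an update when two threads timed the same method concurrently, so it becomes a compare-and-swap retry loop over the ConcurrentHashMap. A minimal standalone sketch of the same pattern (class and method names here are illustrative, not Hive's):

    import java.util.concurrent.ConcurrentHashMap;

    public class CallTimes {
      private final ConcurrentHashMap<String, Long> times = new ConcurrentHashMap<>();

      // Accumulates timeTaken into times[method] without locking. Each pass
      // either swaps the old sum for the new one via replace(), or installs
      // the first value via putIfAbsent(); either call fails only if another
      // thread raced in between, in which case we loop and retry.
      public void add(String method, long timeTaken) {
        while (true) {
          Long cur = times.get(method);
          if (cur != null && times.replace(method, cur, cur + timeTaken)) break;
          if (cur == null && times.putIfAbsent(method, timeTaken) == null) break;
        }
      }
    }

On Java 8 the same accumulation collapses to times.merge(method, timeTaken, Long::sum); the explicit loop also compiles on Java 7.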

[16/20] hive git commit: HIVE-13653 : improve config error messages for LLAP cache size/etc (Sergey Shelukhin, reviewed by Prasanth Jayachandran)

2016-05-05 Thread jdere
HIVE-13653 : improve config error messages for LLAP cache size/etc (Sergey 
Shelukhin, reviewed by Prasanth Jayachandran)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/f41d693b
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/f41d693b
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/f41d693b

Branch: refs/heads/llap
Commit: f41d693b5b984ea55b01394af0dbb6c7121db90a
Parents: 96f2dc7
Author: Sergey Shelukhin 
Authored: Thu May 5 10:41:47 2016 -0700
Committer: Sergey Shelukhin 
Committed: Thu May 5 10:41:47 2016 -0700

--
 .../hadoop/hive/llap/cache/BuddyAllocator.java  | 43 +++-
 1 file changed, 32 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/f41d693b/llap-server/src/java/org/apache/hadoop/hive/llap/cache/BuddyAllocator.java
--
diff --git 
a/llap-server/src/java/org/apache/hadoop/hive/llap/cache/BuddyAllocator.java 
b/llap-server/src/java/org/apache/hadoop/hive/llap/cache/BuddyAllocator.java
index d78c1e0..1d5a7db 100644
--- a/llap-server/src/java/org/apache/hadoop/hive/llap/cache/BuddyAllocator.java
+++ b/llap-server/src/java/org/apache/hadoop/hive/llap/cache/BuddyAllocator.java
@@ -44,6 +44,8 @@ public final class BuddyAllocator implements 
EvictionAwareAllocator, BuddyAlloca
   // We don't know the acceptable size for Java array, so we'll use 1Gb 
boundary.
   // That is guaranteed to fit any maximum allocation.
   private static final int MAX_ARENA_SIZE = 1024*1024*1024;
+  // Don't try to operate with less than MIN_TOTAL_MEMORY_SIZE of allocator space;
+  // it will just give you grief.
+  private static final int MIN_TOTAL_MEMORY_SIZE = 64*1024*1024;
 
 
   public BuddyAllocator(Configuration conf, MemoryManager mm, 
LlapDaemonCacheMetrics metrics) {
@@ -51,8 +53,19 @@ public final class BuddyAllocator implements 
EvictionAwareAllocator, BuddyAlloca
 (int)HiveConf.getSizeVar(conf, ConfVars.LLAP_ALLOCATOR_MIN_ALLOC),
 (int)HiveConf.getSizeVar(conf, ConfVars.LLAP_ALLOCATOR_MAX_ALLOC),
 HiveConf.getIntVar(conf, ConfVars.LLAP_ALLOCATOR_ARENA_COUNT),
-HiveConf.getSizeVar(conf, ConfVars.LLAP_IO_MEMORY_MAX_SIZE),
-mm, metrics);
+getMaxTotalMemorySize(conf), mm, metrics);
+  }
+
+  private static long getMaxTotalMemorySize(Configuration conf) {
+long maxSize = HiveConf.getSizeVar(conf, ConfVars.LLAP_IO_MEMORY_MAX_SIZE);
+if (maxSize > MIN_TOTAL_MEMORY_SIZE || HiveConf.getBoolVar(conf, 
ConfVars.HIVE_IN_TEST)) {
+  return maxSize;
+}
+throw new RuntimeException("Allocator space is too small for reasonable 
operation; "
++ ConfVars.LLAP_IO_MEMORY_MAX_SIZE.varname + "=" + maxSize + ", but at 
least "
++ MIN_TOTAL_MEMORY_SIZE + " is required. If you cannot spare any 
memory, you can "
++ "disable LLAP IO entirely via " + ConfVars.LLAP_IO_ENABLED.varname + 
"; or set "
++ ConfVars.LLAP_IO_MEMORY_MODE.varname + " to 'none'");
   }
 
   @VisibleForTesting
@@ -69,16 +82,19 @@ public final class BuddyAllocator implements 
EvictionAwareAllocator, BuddyAlloca
   + ", arena size " + arenaSizeVal + ". total size " + maxSizeVal);
 }
 
+String minName = ConfVars.LLAP_ALLOCATOR_MIN_ALLOC.varname,
+maxName = ConfVars.LLAP_ALLOCATOR_MAX_ALLOC.varname;
 if (minAllocation < 8) {
-  throw new AssertionError("Min allocation must be at least 8 bytes: " + 
minAllocation);
+  throw new RuntimeException(minName + " must be at least 8 bytes: " + 
minAllocation);
 }
-if (maxSizeVal < arenaSizeVal || maxAllocation < minAllocation) {
-  throw new AssertionError("Inconsistent sizes of cache, arena and 
allocations: "
-  + minAllocation + ", " + maxAllocation + ", " + arenaSizeVal + ", " 
+ maxSizeVal);
+if (maxSizeVal < maxAllocation || maxAllocation < minAllocation) {
+  throw new RuntimeException("Inconsistent sizes; expecting " + minName + 
" <= " + maxName
+  + " <= " + ConfVars.LLAP_IO_MEMORY_MAX_SIZE.varname + "; configured 
with min="
+  + minAllocation + ", max=" + maxAllocation + " and total=" + 
maxSizeVal);
 }
 if ((Integer.bitCount(minAllocation) != 1) || 
(Integer.bitCount(maxAllocation) != 1)) {
-  throw new AssertionError("Allocation sizes must be powers of two: "
-  + minAllocation + ", " + maxAllocation);
+  throw new RuntimeException("Allocation sizes must be powers of two; 
configured with "
+  + minName + "=" + minAllocation + ", " + maxName + "=" + 
maxAllocation);
 }
 if ((arenaSizeVal % maxAllocation) > 0) {
   long oldArenaSize = arenaSizeVal;
@@ -94,8 +110,8 @@ public final class BuddyAllocator implements 
EvictionAwareAllocator, BuddyAlloca
   + " to be divisible by aren

[02/20] hive git commit: HIVE-13638: CBO rule to pull up constants through Sort/Limit (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jdere
HIVE-13638: CBO rule to pull up constants through Sort/Limit (Jesus Camacho 
Rodriguez, reviewed by Ashutosh Chauhan)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/b04dc95f
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/b04dc95f
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/b04dc95f

Branch: refs/heads/llap
Commit: b04dc95f4fa7dda9d4806c45dbe52aed4b9f1a18
Parents: 2d33d09
Author: Jesus Camacho Rodriguez 
Authored: Sat Apr 30 11:49:47 2016 +0100
Committer: Jesus Camacho Rodriguez 
Committed: Wed May 4 18:57:30 2016 +0100

--
 .../rules/HiveReduceExpressionsRule.java| 125 
 .../rules/HiveSortLimitPullUpConstantsRule.java | 157 +
 .../hadoop/hive/ql/parse/CalcitePlanner.java|   3 +
 .../test/queries/clientpositive/cbo_input26.q   |  54 ++
 .../results/clientpositive/cbo_input26.q.out| 596 +++
 .../clientpositive/load_dyn_part14.q.out|   6 +-
 .../clientpositive/spark/load_dyn_part14.q.out  |   6 +-
 .../clientpositive/spark/union_remove_25.q.out  |  60 +-
 .../clientpositive/union_remove_25.q.out|  20 +-
 9 files changed, 985 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/b04dc95f/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveReduceExpressionsRule.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveReduceExpressionsRule.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveReduceExpressionsRule.java
index 9006f45..2fe9b75 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveReduceExpressionsRule.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveReduceExpressionsRule.java
@@ -396,6 +396,131 @@ public abstract class HiveReduceExpressionsRule extends 
RelOptRule {
 assert constExps.size() == addCasts.size();
   }
 
+  /** Creates a map containing each (e, constant) pair that occurs within
+   * a predicate list.
+   *
+   * @param clazz Class of expression that is considered constant
+   * @param rexBuilder Rex builder
+   * @param predicates Predicate list
+   * @param <C> what to consider a constant: {@link RexLiteral} to use a narrow
+   *   definition of constant, or {@link RexNode} to use
+   *   {@link RexUtil#isConstant(RexNode)}
+   * @return Map from values to constants
+   */
+  public static <C extends RexNode> ImmutableMap<RexNode, C> predicateConstants(
+      Class<C> clazz, RexBuilder rexBuilder, RelOptPredicateList predicates) {
+// We cannot use an ImmutableMap.Builder here. If there are multiple 
entries
+// with the same key (e.g. "WHERE deptno = 1 AND deptno = 2"), it doesn't
+// matter which we take, so the latter will replace the former.
+// The basic idea is to find all the pairs of RexNode = RexLiteral
+// (1) If 'predicates' contain a non-EQUALS, we bail out.
+// (2) It is OK if a RexNode is equal to the same RexLiteral several times,
+// (e.g. "WHERE deptno = 1 AND deptno = 1")
+// (3) Expressions equated to two inconsistent constants (e.g.
+// "WHERE deptno = 1 AND deptno = 2") are excluded from the result
+final Map<RexNode, C> map = new HashMap<>();
+final Set<RexNode> excludeSet = new HashSet<>();
+for (RexNode predicate : predicates.pulledUpPredicates) {
+  gatherConstraints(clazz, predicate, map, excludeSet, rexBuilder);
+}
+final ImmutableMap.Builder<RexNode, C> builder =
+    ImmutableMap.builder();
+for (Map.Entry<RexNode, C> entry : map.entrySet()) {
+  RexNode rexNode = entry.getKey();
+  if (!overlap(rexNode, excludeSet)) {
+builder.put(rexNode, entry.getValue());
+  }
+}
+return builder.build();
+  }
+
+  private static <C extends RexNode> void gatherConstraints(Class<C> clazz,
+      RexNode predicate, Map<RexNode, C> map, Set<RexNode> excludeSet,
+      RexBuilder rexBuilder) {
+if (predicate.getKind() != SqlKind.EQUALS) {
+  decompose(excludeSet, predicate);
+  return;
+}
+final List<RexNode> operands = ((RexCall) predicate).getOperands();
+if (operands.size() != 2) {
+  decompose(excludeSet, predicate);
+  return;
+}
+// if it reaches here, we have rexNode equals rexNode
+final RexNode left = operands.get(0);
+final RexNode right = operands.get(1);
+// note that literals are immutable too and they can only be compared 
through
+// values.
+gatherConstraint(clazz, left, right, map, excludeSet, rexBuilder);
+gatherConstraint(clazz, right, left, map, excludeSet, rexBuilder);
+  }
+
+  /** Returns whether a value of {@code type2} can be assigned to a variable
+   * of {@code type1}.
+   *
+   * For example:
+   * 
+   *   {@code canAssignFrom(BIGINT, TINYINT)} returns {@code true}
+   *   {@code canAssignFrom(TINYINT, BIGINT)} retur
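
predicateConstants() above pairs a candidate map with an exclude set so that contradictory equalities knock an expression out entirely rather than silently picking one value. A self-contained sketch of that bookkeeping, with plain strings standing in for RexNode and RexLiteral:

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class ConstantBindings {
      // Each element of equalities is {expression, constant}. An expression
      // equated to two different constants (deptno = 1 AND deptno = 2) is
      // excluded; repeating the same binding (deptno = 1 AND deptno = 1) is fine.
      static Map<String, String> gather(List<String[]> equalities) {
        Map<String, String> map = new HashMap<>();
        Set<String> exclude = new HashSet<>();
        for (String[] eq : equalities) {
          String prev = map.put(eq[0], eq[1]);
          if (prev != null && !prev.equals(eq[1])) {
            exclude.add(eq[0]);
          }
        }
        map.keySet().removeAll(exclude);
        return map;
      }
    }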

[08/20] hive git commit: HIVE-13351: Support drop Primary Key/Foreign Key constraints (Hari Subramaniyan, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jdere
http://git-wip-us.apache.org/repos/asf/hive/blob/212077b8/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
--
diff --git a/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp 
b/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
index 690c895..2734a1c 100644
--- a/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
+++ b/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
@@ -1240,14 +1240,14 @@ uint32_t 
ThriftHiveMetastore_get_databases_result::read(::apache::thrift::protoc
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 this->success.clear();
-uint32_t _size749;
-::apache::thrift::protocol::TType _etype752;
-xfer += iprot->readListBegin(_etype752, _size749);
-this->success.resize(_size749);
-uint32_t _i753;
-for (_i753 = 0; _i753 < _size749; ++_i753)
+uint32_t _size751;
+::apache::thrift::protocol::TType _etype754;
+xfer += iprot->readListBegin(_etype754, _size751);
+this->success.resize(_size751);
+uint32_t _i755;
+for (_i755 = 0; _i755 < _size751; ++_i755)
 {
-  xfer += iprot->readString(this->success[_i753]);
+  xfer += iprot->readString(this->success[_i755]);
 }
 xfer += iprot->readListEnd();
   }
@@ -1286,10 +1286,10 @@ uint32_t 
ThriftHiveMetastore_get_databases_result::write(::apache::thrift::proto
 xfer += oprot->writeFieldBegin("success", 
::apache::thrift::protocol::T_LIST, 0);
 {
   xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-  std::vector<std::string> ::const_iterator _iter754;
-  for (_iter754 = this->success.begin(); _iter754 != this->success.end(); 
++_iter754)
+  std::vector<std::string> ::const_iterator _iter756;
+  for (_iter756 = this->success.begin(); _iter756 != this->success.end(); 
++_iter756)
   {
-xfer += oprot->writeString((*_iter754));
+xfer += oprot->writeString((*_iter756));
   }
   xfer += oprot->writeListEnd();
 }
@@ -1334,14 +1334,14 @@ uint32_t 
ThriftHiveMetastore_get_databases_presult::read(::apache::thrift::proto
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 (*(this->success)).clear();
-uint32_t _size755;
-::apache::thrift::protocol::TType _etype758;
-xfer += iprot->readListBegin(_etype758, _size755);
-(*(this->success)).resize(_size755);
-uint32_t _i759;
-for (_i759 = 0; _i759 < _size755; ++_i759)
+uint32_t _size757;
+::apache::thrift::protocol::TType _etype760;
+xfer += iprot->readListBegin(_etype760, _size757);
+(*(this->success)).resize(_size757);
+uint32_t _i761;
+for (_i761 = 0; _i761 < _size757; ++_i761)
 {
-  xfer += iprot->readString((*(this->success))[_i759]);
+  xfer += iprot->readString((*(this->success))[_i761]);
 }
 xfer += iprot->readListEnd();
   }
@@ -1458,14 +1458,14 @@ uint32_t 
ThriftHiveMetastore_get_all_databases_result::read(::apache::thrift::pr
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 this->success.clear();
-uint32_t _size760;
-::apache::thrift::protocol::TType _etype763;
-xfer += iprot->readListBegin(_etype763, _size760);
-this->success.resize(_size760);
-uint32_t _i764;
-for (_i764 = 0; _i764 < _size760; ++_i764)
+uint32_t _size762;
+::apache::thrift::protocol::TType _etype765;
+xfer += iprot->readListBegin(_etype765, _size762);
+this->success.resize(_size762);
+uint32_t _i766;
+for (_i766 = 0; _i766 < _size762; ++_i766)
 {
-  xfer += iprot->readString(this->success[_i764]);
+  xfer += iprot->readString(this->success[_i766]);
 }
 xfer += iprot->readListEnd();
   }
@@ -1504,10 +1504,10 @@ uint32_t 
ThriftHiveMetastore_get_all_databases_result::write(::apache::thrift::p
 xfer += oprot->writeFieldBegin("success", 
::apache::thrift::protocol::T_LIST, 0);
 {
   xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-  std::vector<std::string> ::const_iterator _iter765;
-  for (_iter765 = this->success.begin(); _iter765 != this->success.end(); 
++_iter765)
+  std::vector<std::string> ::const_iterator _iter767;
+  for (_iter767 = this->success.begin(); _iter767 != this->success.end(); 
++_iter767)
   {
-xfer += oprot->writeString((*_iter765));
+xfer += oprot->writeString((*_iter767));
   }
   xfer += oprot->writeListEnd();
 }
@@ -155

[15/20] hive git commit: HIVE-13632: Hive failing on insert empty array into parquet table. (Yongzhi Chen, reviewed by Sergio Pena)

2016-05-05 Thread jdere
HIVE-13632: Hive failing on insert empty array into parquet table. (Yongzhi 
Chen, reviewed by Sergio Pena)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/96f2dc72
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/96f2dc72
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/96f2dc72

Branch: refs/heads/llap
Commit: 96f2dc723270bb4c38e5ab842371929c2c1c849a
Parents: cbebb4d
Author: Yongzhi Chen 
Authored: Thu Apr 28 14:52:16 2016 -0400
Committer: Yongzhi Chen 
Committed: Thu May 5 09:58:39 2016 -0400

--
 .../serde/AbstractParquetMapInspector.java  |  4 +-
 .../serde/ParquetHiveArrayInspector.java|  4 +-
 .../ql/io/parquet/write/DataWritableWriter.java | 67 ---
 .../ql/io/parquet/TestDataWritableWriter.java   | 29 +++
 .../serde/TestAbstractParquetMapInspector.java  |  4 +-
 .../serde/TestParquetHiveArrayInspector.java|  4 +-
 .../parquet_array_map_emptynullvals.q   | 20 +
 .../parquet_array_map_emptynullvals.q.out   | 87 
 8 files changed, 180 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/96f2dc72/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
index 49bf1c5..e80206e 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
@@ -60,7 +60,7 @@ public abstract class AbstractParquetMapInspector implements 
SettableMapObjectIn
 
 if (data instanceof ArrayWritable) {
   final Writable[] mapArray = ((ArrayWritable) data).get();
-  if (mapArray == null || mapArray.length == 0) {
+  if (mapArray == null) {
 return null;
   }
 
@@ -90,7 +90,7 @@ public abstract class AbstractParquetMapInspector implements 
SettableMapObjectIn
 if (data instanceof ArrayWritable) {
   final Writable[] mapArray = ((ArrayWritable) data).get();
 
-  if (mapArray == null || mapArray.length == 0) {
+  if (mapArray == null) {
 return -1;
   } else {
 return mapArray.length;

http://git-wip-us.apache.org/repos/asf/hive/blob/96f2dc72/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
index 05e92b5..55614a3 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
@@ -83,7 +83,7 @@ public class ParquetHiveArrayInspector implements 
SettableListObjectInspector {
 
 if (data instanceof ArrayWritable) {
   final Writable[] array = ((ArrayWritable) data).get();
-  if (array == null || array.length == 0) {
+  if (array == null) {
 return -1;
   }
 
@@ -105,7 +105,7 @@ public class ParquetHiveArrayInspector implements 
SettableListObjectInspector {
 
 if (data instanceof ArrayWritable) {
   final Writable[] array = ((ArrayWritable) data).get();
-  if (array == null || array.length == 0) {
+  if (array == null) {
 return null;
   }
 

http://git-wip-us.apache.org/repos/asf/hive/blob/96f2dc72/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
index 69272dc..1e26c19 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
@@ -259,21 +259,24 @@ public class DataWritableWriter {
 @Override
 public void write(Object value) {
   recordConsumer.startGroup();
-  recordConsumer.startField(repeatedGroupName, 0);
-
   int listLength = inspector.getListLength(value);
-  for (int i = 0; i < listLength; i++) {
-Object element = inspector.getListElement(value, i);
-recordConsumer.startGroup();
-if (element != null) {
-  recordConsumer.startField(elementName, 0);
-  elementWriter.write(element);
-  recordConsumer.e
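
The inspector changes above look tiny but carry the whole fix: a null backing array still means SQL NULL, while a zero-length array is now a legitimate empty list instead of being collapsed into NULL (which is what made the insert fail). A minimal illustration of the convention the inspectors now follow (standalone helper, not Hive's API):

    import org.apache.hadoop.io.ArrayWritable;
    import org.apache.hadoop.io.Writable;

    public class ListLengthDemo {
      // Mirrors getListLength() after the patch: -1 only for a genuine NULL,
      // 0 for an array that is present but empty.
      static int listLength(Object data) {
        if (!(data instanceof ArrayWritable)) {
          return -1;
        }
        Writable[] array = ((ArrayWritable) data).get();
        return array == null ? -1 : array.length;
      }
    }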

[19/20] hive git commit: HIVE-13639: CBO rule to pull up constants through Union (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jdere
HIVE-13639: CBO rule to pull up constants through Union (Jesus Camacho 
Rodriguez, reviewed by Ashutosh Chauhan)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/09271872
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/09271872
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/09271872

Branch: refs/heads/llap
Commit: 092718720a4abc77ce74c2efcf42cfef0243e9d4
Parents: f41d693
Author: Jesus Camacho Rodriguez 
Authored: Wed May 4 22:01:52 2016 +0100
Committer: Jesus Camacho Rodriguez 
Committed: Thu May 5 20:21:50 2016 +0100

--
 .../rules/HiveUnionPullUpConstantsRule.java | 133 
 .../hadoop/hive/ql/parse/CalcitePlanner.java|   2 +
 .../queries/clientpositive/cbo_union_view.q |  19 +
 .../results/clientpositive/cbo_input26.q.out|  64 +-
 .../results/clientpositive/cbo_union_view.q.out | 228 ++
 .../results/clientpositive/groupby_ppd.q.out|  28 +-
 .../results/clientpositive/perf/query66.q.out   | 328 -
 .../results/clientpositive/perf/query75.q.out   | 692 ++-
 .../clientpositive/spark/union_remove_25.q.out  |  48 +-
 .../clientpositive/spark/union_view.q.out   |  60 +-
 .../results/clientpositive/union_view.q.out |  60 +-
 11 files changed, 1021 insertions(+), 641 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/09271872/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
new file mode 100644
index 000..3155cb1
--- /dev/null
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
@@ -0,0 +1,133 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.optimizer.calcite.rules;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.calcite.plan.RelOptPredicateList;
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.plan.RelOptUtil;
+import org.apache.calcite.rel.core.Union;
+import org.apache.calcite.rel.metadata.RelMetadataQuery;
+import org.apache.calcite.rel.type.RelDataTypeField;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+import org.apache.calcite.tools.RelBuilderFactory;
+import org.apache.calcite.util.Pair;
+import org.apache.calcite.util.mapping.Mappings;
+import org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelFactories;
+import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveUnion;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.collect.ImmutableList;
+
+/**
+ * Planner rule that pulls up constants through a Union operator.
+ */
+public class HiveUnionPullUpConstantsRule extends RelOptRule {
+
+  protected static final Logger LOG = 
LoggerFactory.getLogger(HiveUnionPullUpConstantsRule.class);
+
+
+  public static final HiveUnionPullUpConstantsRule INSTANCE =
+  new HiveUnionPullUpConstantsRule(HiveUnion.class,
+  HiveRelFactories.HIVE_BUILDER);
+
+  private HiveUnionPullUpConstantsRule(
+      Class<? extends Union> unionClass,
+  RelBuilderFactory relBuilderFactory) {
+super(operand(unionClass, any()),
+relBuilderFactory, null);
+  }
+
+  @Override
+  public void onMatch(RelOptRuleCall call) {
+final Union union = call.rel(0);
+
+final int count = union.getRowType().getFieldCount();
+if (count == 1) {
+  // No room for optimization since we cannot create an empty
+  // Project operator.
+  return;
+}
+
+final RexBuilder rexBuilder = union.getCluster().getRexBuilder();
+final Rel
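
The onMatch() body is cut off above, but the shape of the transformation is: fields that the pulled-up predicates prove constant are dropped from below the Union and re-added by a single Project on top. A toy sketch of that bookkeeping, with integers and strings standing in for field indexes and RexNodes:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    public class PullUpConstantsDemo {
      // constants maps a field index to its proven literal. Non-constant
      // fields stay below the Union; the top Project restores the literals
      // and remaps the surviving fields to their new positions.
      static List<String> topProject(int fieldCount, Map<Integer, String> constants) {
        List<Integer> kept = new ArrayList<>();
        for (int i = 0; i < fieldCount; i++) {
          if (!constants.containsKey(i)) {
            kept.add(i);
          }
        }
        List<String> exprs = new ArrayList<>();
        for (int i = 0; i < fieldCount; i++) {
          exprs.add(constants.containsKey(i) ? constants.get(i) : "$" + kept.indexOf(i));
        }
        return exprs;
      }
    }

This is also why the union_view.q.out hunks later in this batch rename _col1 to _col0: once the constants (86 and '1') are projected above the Union, each branch keeps only the value column.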

[14/20] hive git commit: HIVE-12837 : Better memory estimation/allocation for hybrid grace hash join during hash table loading (Wei Zheng, reviewed by Vikram Dixit K)

2016-05-05 Thread jdere
HIVE-12837 : Better memory estimation/allocation for hybrid grace hash join 
during hash table loading (Wei Zheng, reviewed by Vikram Dixit K)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/cbebb4d7
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/cbebb4d7
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/cbebb4d7

Branch: refs/heads/llap
Commit: cbebb4d78064a9098e4145a0f7532f08885c9b27
Parents: a88050b
Author: Wei Zheng 
Authored: Wed May 4 23:09:08 2016 -0700
Committer: Wei Zheng 
Committed: Wed May 4 23:09:08 2016 -0700

--
 .../persistence/HybridHashTableContainer.java   | 60 +++-
 .../ql/exec/persistence/KeyValueContainer.java  |  4 ++
 2 files changed, 51 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/cbebb4d7/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/HybridHashTableContainer.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/HybridHashTableContainer.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/HybridHashTableContainer.java
index f5da5a4..5552dfb 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/HybridHashTableContainer.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/HybridHashTableContainer.java
@@ -90,6 +90,7 @@ public class HybridHashTableContainer
   private boolean lastPartitionInMem;   // only one (last one) 
partition is left in memory
   private final int memoryCheckFrequency;   // how often (# of rows apart) 
to check if memory is full
   private final HybridHashTableConf nwayConf; // configuration for 
n-way join
+  private int writeBufferSize;  // write buffer size for 
BytesBytesMultiHashMap
 
   /** The OI used to deserialize values. We never deserialize keys. */
   private LazyBinaryStructObjectInspector internalValueOi;
@@ -294,7 +295,6 @@ public class HybridHashTableContainer
 this.spillLocalDirs = spillLocalDirs;
 
 this.nwayConf = nwayConf;
-int writeBufferSize;
 int numPartitions;
 if (nwayConf == null) { // binary join
   numPartitions = calcNumPartitions(memoryThreshold, estimatedTableSize, 
minNumParts, minWbSize);
@@ -327,7 +327,9 @@ public class HybridHashTableContainer
 writeBufferSize : Integer.highestOneBit(writeBufferSize);
 
 // Cap WriteBufferSize to avoid large preallocations
-writeBufferSize = writeBufferSize < minWbSize ? minWbSize : 
Math.min(maxWbSize, writeBufferSize);
+// We also want to limit the size of the write buffer, because we normally have 16
+// partitions, which makes the spilling prediction (isMemoryFull) too defensive and
+// results in unnecessary spilling
+writeBufferSize = writeBufferSize < minWbSize ? minWbSize : 
Math.min(maxWbSize / numPartitions, writeBufferSize);
 
 this.bloom1 = new BloomFilter(newKeyCount);
 
@@ -417,6 +419,11 @@ public class HybridHashTableContainer
 for (HashPartition hp : hashPartitions) {
   if (hp.hashMap != null) {
 memUsed += hp.hashMap.memorySize();
+  } else {
+// also include the still-in-memory sidefile, before it has been truly spilled
+if (hp.sidefileKVContainer != null) {
+  memUsed += hp.sidefileKVContainer.numRowsInReadBuffer() * 
tableRowSize;
+}
   }
 }
 return memoryUsed = memUsed;
@@ -454,6 +461,8 @@ public class HybridHashTableContainer
   private MapJoinKey internalPutRow(KeyValueHelper keyValueHelper,
   Writable currentKey, Writable currentValue) throws SerDeException, 
IOException {
 
+boolean putToSidefile = false; // by default the row goes into the in-memory partition
+
 // Next, put row into corresponding hash partition
 int keyHash = keyValueHelper.getHashFromKey();
 int partitionId = keyHash & (hashPartitions.length - 1);
@@ -461,15 +470,13 @@ public class HybridHashTableContainer
 
 bloom1.addLong(keyHash);
 
-if (isOnDisk(partitionId) || isHashMapSpilledOnCreation(partitionId)) {
-  KeyValueContainer kvContainer = hashPartition.getSidefileKVContainer();
-  kvContainer.add((HiveKey) currentKey, (BytesWritable) currentValue);
-} else {
-  hashPartition.hashMap.put(keyValueHelper, keyHash); // Pass along 
hashcode to avoid recalculation
-  totalInMemRowCount++;
-
-  if ((totalInMemRowCount & (this.memoryCheckFrequency - 1)) == 0 &&  // 
check periodically
-  !lastPartitionInMem) { // If this is the only partition in memory, 
proceed without check
+if (isOnDisk(partitionId) || isHashMapSpilledOnCreation(partitionId)) { // 
destination on disk
+  putToSidefile = true;
+} else {  // destination in memory
+  if (!lastPartitionInMem &&// If
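
The decisive line is the new divisor: all (normally 16) partitions preallocate their write buffers up front, so capping each buffer at maxWbSize rather than at an equal share of it inflated memory use and made isMemoryFull spill too eagerly. A sketch of the resulting arithmetic (method name is illustrative):

    // Round the requested size down to a power of two, then clamp it to at
    // least minWbSize and at most an equal share of maxWbSize per partition.
    static int writeBufferSize(int requested, int minWbSize, int maxWbSize, int numPartitions) {
      int size = Integer.highestOneBit(requested);
      return size < minWbSize ? minWbSize : Math.min(maxWbSize / numPartitions, size);
    }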

[11/20] hive git commit: HIVE-13660: Vectorizing IN expression with list of columns throws java.lang.ClassCastException ExprNodeColumnDesc cannot be cast to ExprNodeConstantDesc (Matt McCline, reviewe

2016-05-05 Thread jdere
HIVE-13660: Vectorizing IN expression with list of columns throws 
java.lang.ClassCastException ExprNodeColumnDesc cannot be cast to 
ExprNodeConstantDesc (Matt McCline, reviewed by Prasanth Jayachandran)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/e68783c8
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/e68783c8
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/e68783c8

Branch: refs/heads/llap
Commit: e68783c8e5cdb0cc00db6d725f15392bd5a6fe06
Parents: 652f88a
Author: Matt McCline 
Authored: Wed May 4 14:59:00 2016 -0700
Committer: Matt McCline 
Committed: Wed May 4 14:59:30 2016 -0700

--
 .../ql/exec/vector/VectorizationContext.java|  7 
 .../vector_non_constant_in_expr.q   |  4 +++
 .../vector_non_constant_in_expr.q.out   | 36 
 3 files changed, 47 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/e68783c8/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
index 5454ba3..9558d31 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
@@ -1519,6 +1519,13 @@ public class VectorizationContext {
 
 VectorExpression expr = null;
 
+// Validate the IN items are only constants.
+for (ExprNodeDesc inListChild : childrenForInList) {
+  if (!(inListChild instanceof ExprNodeConstantDesc)) {
+throw new HiveException("Vectorizing IN expression only supported for 
constant values");
+  }
+}
+
 // determine class
 Class<?> cl = null;
 if (isIntFamily(colType)) {

http://git-wip-us.apache.org/repos/asf/hive/blob/e68783c8/ql/src/test/queries/clientpositive/vector_non_constant_in_expr.q
--
diff --git a/ql/src/test/queries/clientpositive/vector_non_constant_in_expr.q 
b/ql/src/test/queries/clientpositive/vector_non_constant_in_expr.q
new file mode 100644
index 000..69142bf
--- /dev/null
+++ b/ql/src/test/queries/clientpositive/vector_non_constant_in_expr.q
@@ -0,0 +1,4 @@
+SET hive.vectorized.execution.enabled=true;
+set hive.fetch.task.conversion=none;
+
+explain SELECT * FROM alltypesorc WHERE cint in (ctinyint, cbigint);
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hive/blob/e68783c8/ql/src/test/results/clientpositive/vector_non_constant_in_expr.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/vector_non_constant_in_expr.q.out 
b/ql/src/test/results/clientpositive/vector_non_constant_in_expr.q.out
new file mode 100644
index 000..8845cb2
--- /dev/null
+++ b/ql/src/test/results/clientpositive/vector_non_constant_in_expr.q.out
@@ -0,0 +1,36 @@
+PREHOOK: query: explain SELECT * FROM alltypesorc WHERE cint in (ctinyint, 
cbigint)
+PREHOOK: type: QUERY
+POSTHOOK: query: explain SELECT * FROM alltypesorc WHERE cint in (ctinyint, 
cbigint)
+POSTHOOK: type: QUERY
+STAGE DEPENDENCIES:
+  Stage-1 is a root stage
+  Stage-0 depends on stages: Stage-1
+
+STAGE PLANS:
+  Stage: Stage-1
+Map Reduce
+  Map Operator Tree:
+  TableScan
+alias: alltypesorc
+Statistics: Num rows: 12288 Data size: 2641964 Basic stats: 
COMPLETE Column stats: NONE
+Filter Operator
+  predicate: (cint) IN (ctinyint, cbigint) (type: boolean)
+  Statistics: Num rows: 6144 Data size: 1320982 Basic stats: 
COMPLETE Column stats: NONE
+  Select Operator
+expressions: ctinyint (type: tinyint), csmallint (type: 
smallint), cint (type: int), cbigint (type: bigint), cfloat (type: float), 
cdouble (type: double), cstring1 (type: string), cstring2 (type: string), 
ctimestamp1 (type: timestamp), ctimestamp2 (type: timestamp), cboolean1 (type: 
boolean), cboolean2 (type: boolean)
+outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, 
_col6, _col7, _col8, _col9, _col10, _col11
+Statistics: Num rows: 6144 Data size: 1320982 Basic stats: 
COMPLETE Column stats: NONE
+File Output Operator
+  compressed: false
+  Statistics: Num rows: 6144 Data size: 1320982 Basic stats: 
COMPLETE Column stats: NONE
+  table:
+  input format: 
org.apache.hadoop.mapred.SequenceFileInputFormat
+  output format: 
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
+  
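
For context on why the guard helps: the vectorized IN implementations build their lookup structure from the IN list once, at plan time, which is only possible when every item is a constant; a column operand such as ctinyint would require a fresh value set on every row. A toy illustration of that plan-time assumption:

    import java.util.Arrays;

    public class InListDemo {
      public static void main(String[] args) {
        // Constants can be sorted once, up front...
        long[] inList = {100L, 200L, 300L};
        Arrays.sort(inList);
        // ...making each row's check a cheap membership probe. With column
        // operands the list would differ per row, so the expression now fails
        // fast with HiveException instead of a ClassCastException deep inside
        // the vectorizer.
        long cint = 200L;
        System.out.println(Arrays.binarySearch(inList, cint) >= 0);
      }
    }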

[10/20] hive git commit: HIVE-13669 : LLAP: io.enabled config is ignored on the server side (Sergey Shelukhin, reviewed by Prasanth Jayachandran)

2016-05-05 Thread jdere
HIVE-13669 : LLAP: io.enabled config is ignored on the server side (Sergey 
Shelukhin, reviewed by Prasanth Jayachandran)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/652f88ad
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/652f88ad
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/652f88ad

Branch: refs/heads/llap
Commit: 652f88ad973ebe1668b5663617259795cc007953
Parents: 212077b
Author: Sergey Shelukhin 
Authored: Wed May 4 14:55:01 2016 -0700
Committer: Sergey Shelukhin 
Committed: Wed May 4 14:55:01 2016 -0700

--
 .../org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/652f88ad/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java
--
diff --git 
a/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java 
b/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java
index d23a44a..e662de9 100644
--- 
a/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java
+++ 
b/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java
@@ -322,8 +322,9 @@ public class LlapDaemon extends CompositeService implements 
ContainerRunner, Lla
   fnLocalizer.init();
   fnLocalizer.startLocalizeAllFunctions();
 }
-LlapProxy.initializeLlapIo(conf);
-
+if (isIoEnabled()) {
+  LlapProxy.initializeLlapIo(conf);
+}
   }
 
   @Override
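
The body of isIoEnabled() is outside this hunk. A hedged sketch of what such a gate presumably looks like (the helper below is a guess for illustration, not the committed code):

    // Hypothetical gate: read the optional LLAP_IO_ENABLED boolean from the
    // daemon configuration, defaulting to enabled. HiveConf.getBoolVar(conf,
    // var, default) is the standard accessor for an optional boolean ConfVar.
    private boolean isIoEnabled() {
      return HiveConf.getBoolVar(conf, HiveConf.ConfVars.LLAP_IO_ENABLED, true);
    }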



[17/20] hive git commit: HIVE-13639: CBO rule to pull up constants through Union (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jdere
http://git-wip-us.apache.org/repos/asf/hive/blob/09271872/ql/src/test/results/clientpositive/union_view.q.out
--
diff --git a/ql/src/test/results/clientpositive/union_view.q.out 
b/ql/src/test/results/clientpositive/union_view.q.out
index badd209..530739e 100644
--- a/ql/src/test/results/clientpositive/union_view.q.out
+++ b/ql/src/test/results/clientpositive/union_view.q.out
@@ -358,12 +358,12 @@ STAGE PLANS:
   Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE 
Column stats: NONE
   Select Operator
 expressions: value (type: string)
-outputColumnNames: _col1
+outputColumnNames: _col0
 Statistics: Num rows: 250 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
 Union
   Statistics: Num rows: 252 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
   Select Operator
-expressions: 86 (type: int), _col1 (type: string), '1' 
(type: string)
+expressions: 86 (type: int), _col0 (type: string), '1' 
(type: string)
 outputColumnNames: _col0, _col1, _col2
 Statistics: Num rows: 252 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
 File Output Operator
@@ -382,12 +382,12 @@ STAGE PLANS:
   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
   Select Operator
 expressions: value (type: string)
-outputColumnNames: _col1
+outputColumnNames: _col0
 Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
Column stats: NONE
 Union
   Statistics: Num rows: 252 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
   Select Operator
-expressions: 86 (type: int), _col1 (type: string), '1' 
(type: string)
+expressions: 86 (type: int), _col0 (type: string), '1' 
(type: string)
 outputColumnNames: _col0, _col1, _col2
 Statistics: Num rows: 252 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
 File Output Operator
@@ -406,12 +406,12 @@ STAGE PLANS:
   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
   Select Operator
 expressions: value (type: string)
-outputColumnNames: _col1
+outputColumnNames: _col0
 Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
Column stats: NONE
 Union
   Statistics: Num rows: 252 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
   Select Operator
-expressions: 86 (type: int), _col1 (type: string), '1' 
(type: string)
+expressions: 86 (type: int), _col0 (type: string), '1' 
(type: string)
 outputColumnNames: _col0, _col1, _col2
 Statistics: Num rows: 252 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
 File Output Operator
@@ -471,12 +471,12 @@ STAGE PLANS:
   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
   Select Operator
 expressions: value (type: string)
-outputColumnNames: _col1
+outputColumnNames: _col0
 Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
Column stats: NONE
 Union
   Statistics: Num rows: 502 Data size: 5312 Basic stats: 
COMPLETE Column stats: NONE
   Select Operator
-expressions: 86 (type: int), _col1 (type: string), '2' 
(type: string)
+expressions: 86 (type: int), _col0 (type: string), '2' 
(type: string)
 outputColumnNames: _col0, _col1, _col2
 Statistics: Num rows: 502 Data size: 5312 Basic stats: 
COMPLETE Column stats: NONE
 File Output Operator
@@ -495,12 +495,12 @@ STAGE PLANS:
   Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE 
Column stats: NONE
   Select Operator
 expressions: value (type: string)
-outputColumnNames: _col1
+outputColumnNames: _col0
 Statistics: Num rows: 500 Data size: 5312 Basic stats: 
COMPLETE Column stats: NONE
 Union
   Statistics: Num rows: 502 Data size: 5312 Basic stats: 
COMPLETE Column stats: NONE
   Select Operator
-expressions: 86 (type: int), _col1 (type: string), '2' 
(type: string)
+expressions: 86 (type: int), _col0 (type: string), '2' 
(type: string)
 outputCol

[03/20] hive git commit: HIVE-13351: Support drop Primary Key/Foreign Key constraints (Hari Subramaniyan, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jdere
http://git-wip-us.apache.org/repos/asf/hive/blob/212077b8/metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
--
diff --git a/metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php 
b/metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
index 4f0c8fd..0e7b745 100644
--- a/metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
+++ b/metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
@@ -167,6 +167,12 @@ interface ThriftHiveMetastoreIf extends \FacebookServiceIf 
{
*/
   public function create_table_with_constraints(\metastore\Table $tbl, array 
$primaryKeys, array $foreignKeys);
   /**
+   * @param \metastore\DropConstraintRequest $req
+   * @throws \metastore\NoSuchObjectException
+   * @throws \metastore\MetaException
+   */
+  public function drop_constraint(\metastore\DropConstraintRequest $req);
+  /**
* @param string $dbname
* @param string $name
* @param bool $deleteData
@@ -2250,6 +2256,60 @@ class ThriftHiveMetastoreClient extends 
\FacebookServiceClient implements \metas
 return;
   }
 
+  public function drop_constraint(\metastore\DropConstraintRequest $req)
+  {
+$this->send_drop_constraint($req);
+$this->recv_drop_constraint();
+  }
+
+  public function send_drop_constraint(\metastore\DropConstraintRequest $req)
+  {
+$args = new \metastore\ThriftHiveMetastore_drop_constraint_args();
+$args->req = $req;
+$bin_accel = ($this->output_ instanceof TBinaryProtocolAccelerated) && 
function_exists('thrift_protocol_write_binary');
+if ($bin_accel)
+{
+  thrift_protocol_write_binary($this->output_, 'drop_constraint', 
TMessageType::CALL, $args, $this->seqid_, $this->output_->isStrictWrite());
+}
+else
+{
+  $this->output_->writeMessageBegin('drop_constraint', TMessageType::CALL, 
$this->seqid_);
+  $args->write($this->output_);
+  $this->output_->writeMessageEnd();
+  $this->output_->getTransport()->flush();
+}
+  }
+
+  public function recv_drop_constraint()
+  {
+$bin_accel = ($this->input_ instanceof TBinaryProtocolAccelerated) && 
function_exists('thrift_protocol_read_binary');
+if ($bin_accel) $result = thrift_protocol_read_binary($this->input_, 
'\metastore\ThriftHiveMetastore_drop_constraint_result', 
$this->input_->isStrictRead());
+else
+{
+  $rseqid = 0;
+  $fname = null;
+  $mtype = 0;
+
+  $this->input_->readMessageBegin($fname, $mtype, $rseqid);
+  if ($mtype == TMessageType::EXCEPTION) {
+$x = new TApplicationException();
+$x->read($this->input_);
+$this->input_->readMessageEnd();
+throw $x;
+  }
+  $result = new \metastore\ThriftHiveMetastore_drop_constraint_result();
+  $result->read($this->input_);
+  $this->input_->readMessageEnd();
+}
+if ($result->o1 !== null) {
+  throw $result->o1;
+}
+if ($result->o3 !== null) {
+  throw $result->o3;
+}
+return;
+  }
+
   public function drop_table($dbname, $name, $deleteData)
   {
 $this->send_drop_table($dbname, $name, $deleteData);
@@ -13889,6 +13949,188 @@ class 
ThriftHiveMetastore_create_table_with_constraints_result {
 
 }
 
+class ThriftHiveMetastore_drop_constraint_args {
+  static $_TSPEC;
+
+  /**
+   * @var \metastore\DropConstraintRequest
+   */
+  public $req = null;
+
+  public function __construct($vals=null) {
+if (!isset(self::$_TSPEC)) {
+  self::$_TSPEC = array(
+1 => array(
+  'var' => 'req',
+  'type' => TType::STRUCT,
+  'class' => '\metastore\DropConstraintRequest',
+  ),
+);
+}
+if (is_array($vals)) {
+  if (isset($vals['req'])) {
+$this->req = $vals['req'];
+  }
+}
+  }
+
+  public function getName() {
+return 'ThriftHiveMetastore_drop_constraint_args';
+  }
+
+  public function read($input)
+  {
+$xfer = 0;
+$fname = null;
+$ftype = 0;
+$fid = 0;
+$xfer += $input->readStructBegin($fname);
+while (true)
+{
+  $xfer += $input->readFieldBegin($fname, $ftype, $fid);
+  if ($ftype == TType::STOP) {
+break;
+  }
+  switch ($fid)
+  {
+case 1:
+  if ($ftype == TType::STRUCT) {
+$this->req = new \metastore\DropConstraintRequest();
+$xfer += $this->req->read($input);
+  } else {
+$xfer += $input->skip($ftype);
+  }
+  break;
+default:
+  $xfer += $input->skip($ftype);
+  break;
+  }
+  $xfer += $input->readFieldEnd();
+}
+$xfer += $input->readStructEnd();
+return $xfer;
+  }
+
+  public function write($output) {
+$xfer = 0;
+$xfer += 
$output->writeStructBegin('ThriftHiveMetastore_drop_constraint_args');
+if ($this->req !== null) {
+  if (!is_object($this->req)) {
+throw new TProtocolException('Bad type in struct

[09/20] hive git commit: HIVE-13351: Support drop Primary Key/Foreign Key constraints (Hari Subramaniyan, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jdere
HIVE-13351: Support drop Primary Key/Foreign Key constraints (Hari 
Subramaniyan, reviewed by Ashutosh Chauhan)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/212077b8
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/212077b8
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/212077b8

Branch: refs/heads/llap
Commit: 212077b8ae4aed130d8fea38febfc86c2bc55bbb
Parents: b04dc95
Author: Hari Subramaniyan 
Authored: Wed May 4 12:26:38 2016 -0700
Committer: Hari Subramaniyan 
Committed: Wed May 4 12:26:38 2016 -0700

--
 metastore/if/hive_metastore.thrift  |8 +
 .../gen/thrift/gen-cpp/ThriftHiveMetastore.cpp  | 2431 ++
 .../gen/thrift/gen-cpp/ThriftHiveMetastore.h|  133 +
 .../ThriftHiveMetastore_server.skeleton.cpp |5 +
 .../gen/thrift/gen-cpp/hive_metastore_types.cpp | 2180 
 .../gen/thrift/gen-cpp/hive_metastore_types.h   |   52 +
 .../metastore/api/DropConstraintRequest.java|  591 +
 .../hive/metastore/api/ThriftHiveMetastore.java | 1966 ++
 .../gen-php/metastore/ThriftHiveMetastore.php   |  242 ++
 .../src/gen/thrift/gen-php/metastore/Types.php  |  121 +
 .../hive_metastore/ThriftHiveMetastore-remote   |7 +
 .../hive_metastore/ThriftHiveMetastore.py   |  212 ++
 .../gen/thrift/gen-py/hive_metastore/ttypes.py  |   97 +
 .../gen/thrift/gen-rb/hive_metastore_types.rb   |   23 +
 .../gen/thrift/gen-rb/thrift_hive_metastore.rb  |   63 +
 .../hadoop/hive/metastore/HiveMetaStore.java|   29 +
 .../hive/metastore/HiveMetaStoreClient.java |6 +
 .../hadoop/hive/metastore/IMetaStoreClient.java |3 +
 .../hadoop/hive/metastore/ObjectStore.java  |   46 +-
 .../apache/hadoop/hive/metastore/RawStore.java  |2 +
 .../hadoop/hive/metastore/hbase/HBaseStore.java |6 +
 .../DummyRawStoreControlledCommit.java  |6 +
 .../DummyRawStoreForJdoConnection.java  |6 +
 .../org/apache/hadoop/hive/ql/exec/DDLTask.java |   21 +-
 .../hadoop/hive/ql/hooks/WriteEntity.java   |3 +-
 .../apache/hadoop/hive/ql/metadata/Hive.java|9 +
 .../hive/ql/parse/DDLSemanticAnalyzer.java  |   13 +-
 .../apache/hadoop/hive/ql/parse/HiveParser.g|9 +
 .../hive/ql/parse/SemanticAnalyzerFactory.java  |2 +
 .../hadoop/hive/ql/plan/AlterTableDesc.java |   25 +-
 .../hadoop/hive/ql/plan/HiveOperation.java  |2 +
 .../clientnegative/drop_invalid_constraint1.q   |3 +
 .../clientnegative/drop_invalid_constraint2.q   |2 +
 .../clientnegative/drop_invalid_constraint3.q   |2 +
 .../clientnegative/drop_invalid_constraint4.q   |3 +
 .../clientpositive/create_with_constraints.q|   12 +
 .../drop_invalid_constraint1.q.out  |   15 +
 .../drop_invalid_constraint2.q.out  |   11 +
 .../drop_invalid_constraint3.q.out  |   11 +
 .../drop_invalid_constraint4.q.out  |   19 +
 .../create_with_constraints.q.out   |   68 +
 service/src/gen/thrift/gen-py/__init__.py   |0
 42 files changed, 5925 insertions(+), 2540 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/212077b8/metastore/if/hive_metastore.thrift
--
diff --git a/metastore/if/hive_metastore.thrift 
b/metastore/if/hive_metastore.thrift
index acebf7a..c8d78b6 100755
--- a/metastore/if/hive_metastore.thrift
+++ b/metastore/if/hive_metastore.thrift
@@ -487,6 +487,11 @@ struct ForeignKeysResponse {
  1: required list<SQLForeignKey> foreignKeys
 }
 
+struct DropConstraintRequest {
+  1: required string dbname, 
+  2: required string tablename,
+  3: required string constraintname
+}
 
 // Return type for get_partitions_by_expr
 struct PartitionsByExprResult {
@@ -993,6 +998,9 @@ service ThriftHiveMetastore extends fb303.FacebookService
   throws (1:AlreadyExistsException o1,
   2:InvalidObjectException o2, 3:MetaException o3,
   4:NoSuchObjectException o4)
+  void drop_constraint(1:DropConstraintRequest req)
+  throws(1:NoSuchObjectException o1, 2:MetaException o3)
+
   // drops the table and all the partitions associated with it if the table 
has partitions
   // delete data (including partitions) if deleteData is set to true
   void drop_table(1:string dbname, 2:string name, 3:bool deleteData)
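
At the wire level the new API is a single call taking a DropConstraintRequest whose three fields (dbname, tablename, constraintname) are all required, per the IDL above. A minimal client-side sketch against the generated Java bindings (connection setup elided; assumes an open ThriftHiveMetastore.Client named client):

    // Thrift generates a constructor over the required fields.
    DropConstraintRequest req = new DropConstraintRequest("default", "my_table", "pk1");
    // Throws NoSuchObjectException if the constraint (or table) does not exist.
    client.drop_constraint(req);

The HiveParser.g and DDLSemanticAnalyzer entries in the diffstat wire this up to an ALTER TABLE ... DROP CONSTRAINT statement on the SQL side.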



[04/20] hive git commit: HIVE-13351: Support drop Primary Key/Foreign Key constraints (Hari Subramaniyan, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jdere
http://git-wip-us.apache.org/repos/asf/hive/blob/212077b8/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
--
diff --git 
a/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 
b/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
index 051c1f2..2a81c4b 100644
--- 
a/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
+++ 
b/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
@@ -80,6 +80,8 @@ public class ThriftHiveMetastore {
 
 public void create_table_with_constraints(Table tbl, List<SQLPrimaryKey> primaryKeys, List<SQLForeignKey> foreignKeys) throws AlreadyExistsException, InvalidObjectException, MetaException, NoSuchObjectException, org.apache.thrift.TException;
 
+public void drop_constraint(DropConstraintRequest req) throws 
NoSuchObjectException, MetaException, org.apache.thrift.TException;
+
 public void drop_table(String dbname, String name, boolean deleteData) 
throws NoSuchObjectException, MetaException, org.apache.thrift.TException;
 
 public void drop_table_with_environment_context(String dbname, String 
name, boolean deleteData, EnvironmentContext environment_context) throws 
NoSuchObjectException, MetaException, org.apache.thrift.TException;
@@ -376,6 +378,8 @@ public class ThriftHiveMetastore {
 
 public void create_table_with_constraints(Table tbl, List<SQLPrimaryKey> primaryKeys, List<SQLForeignKey> foreignKeys, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
+public void drop_constraint(DropConstraintRequest req, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
+
 public void drop_table(String dbname, String name, boolean deleteData, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
 
 public void drop_table_with_environment_context(String dbname, String 
name, boolean deleteData, EnvironmentContext environment_context, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
@@ -1218,6 +1222,32 @@ public class ThriftHiveMetastore {
   return;
 }
 
+public void drop_constraint(DropConstraintRequest req) throws 
NoSuchObjectException, MetaException, org.apache.thrift.TException
+{
+  send_drop_constraint(req);
+  recv_drop_constraint();
+}
+
+public void send_drop_constraint(DropConstraintRequest req) throws 
org.apache.thrift.TException
+{
+  drop_constraint_args args = new drop_constraint_args();
+  args.setReq(req);
+  sendBase("drop_constraint", args);
+}
+
+public void recv_drop_constraint() throws NoSuchObjectException, 
MetaException, org.apache.thrift.TException
+{
+  drop_constraint_result result = new drop_constraint_result();
+  receiveBase(result, "drop_constraint");
+  if (result.o1 != null) {
+throw result.o1;
+  }
+  if (result.o3 != null) {
+throw result.o3;
+  }
+  return;
+}
+
 public void drop_table(String dbname, String name, boolean deleteData) 
throws NoSuchObjectException, MetaException, org.apache.thrift.TException
 {
   send_drop_table(dbname, name, deleteData);
@@ -5535,6 +5565,38 @@ public class ThriftHiveMetastore {
   }
 }
 
+public void drop_constraint(DropConstraintRequest req, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException {
+  checkReady();
+  drop_constraint_call method_call = new drop_constraint_call(req, 
resultHandler, this, ___protocolFactory, ___transport);
+  this.___currentMethod = method_call;
+  ___manager.call(method_call);
+}
+
+public static class drop_constraint_call extends 
org.apache.thrift.async.TAsyncMethodCall {
+  private DropConstraintRequest req;
+  public drop_constraint_call(DropConstraintRequest req, 
org.apache.thrift.async.AsyncMethodCallback resultHandler, 
org.apache.thrift.async.TAsyncClient client, 
org.apache.thrift.protocol.TProtocolFactory protocolFactory, 
org.apache.thrift.transport.TNonblockingTransport transport) throws 
org.apache.thrift.TException {
+super(client, protocolFactory, transport, resultHandler, false);
+this.req = req;
+  }
+
+  public void write_args(org.apache.thrift.protocol.TProtocol prot) throws 
org.apache.thrift.TException {
+prot.writeMessageBegin(new 
org.apache.thrift.protocol.TMessage("drop_constraint", 
org.apache.thrift.protocol.TMessageType.CALL, 0));
+drop_constraint_args args = new drop_constraint_args();
+args.setReq(req);
+args.write(prot);
+prot.writeMessageEnd();
+  }
+
+  public void getResult() throws NoSuchObjectException, MetaExc

[07/20] hive git commit: HIVE-13351: Support drop Primary Key/Foreign Key constraints (Hari Subramaniyan, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jdere
http://git-wip-us.apache.org/repos/asf/hive/blob/212077b8/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
--
diff --git a/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h 
b/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
index 11d3322..990be15 100644
--- a/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
+++ b/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
@@ -41,6 +41,7 @@ class ThriftHiveMetastoreIf : virtual public  
::facebook::fb303::FacebookService
   virtual void create_table(const Table& tbl) = 0;
   virtual void create_table_with_environment_context(const Table& tbl, const 
EnvironmentContext& environment_context) = 0;
  virtual void create_table_with_constraints(const Table& tbl, const std::vector<SQLPrimaryKey> & primaryKeys, const std::vector<SQLForeignKey> & foreignKeys) = 0;
+  virtual void drop_constraint(const DropConstraintRequest& req) = 0;
   virtual void drop_table(const std::string& dbname, const std::string& name, 
const bool deleteData) = 0;
   virtual void drop_table_with_environment_context(const std::string& dbname, 
const std::string& name, const bool deleteData, const EnvironmentContext& 
environment_context) = 0;
  virtual void get_tables(std::vector<std::string> & _return, const std::string& db_name, const std::string& pattern) = 0;
@@ -256,6 +257,9 @@ class ThriftHiveMetastoreNull : virtual public 
ThriftHiveMetastoreIf , virtual p
  void create_table_with_constraints(const Table& /* tbl */, const std::vector<SQLPrimaryKey> & /* primaryKeys */, const std::vector<SQLForeignKey> & /* foreignKeys */) {
 return;
   }
+  void drop_constraint(const DropConstraintRequest& /* req */) {
+return;
+  }
   void drop_table(const std::string& /* dbname */, const std::string& /* name 
*/, const bool /* deleteData */) {
 return;
   }
@@ -3032,6 +3036,118 @@ class 
ThriftHiveMetastore_create_table_with_constraints_presult {
 
 };
 
+typedef struct _ThriftHiveMetastore_drop_constraint_args__isset {
+  _ThriftHiveMetastore_drop_constraint_args__isset() : req(false) {}
+  bool req :1;
+} _ThriftHiveMetastore_drop_constraint_args__isset;
+
+class ThriftHiveMetastore_drop_constraint_args {
+ public:
+
+  ThriftHiveMetastore_drop_constraint_args(const 
ThriftHiveMetastore_drop_constraint_args&);
+  ThriftHiveMetastore_drop_constraint_args& operator=(const 
ThriftHiveMetastore_drop_constraint_args&);
+  ThriftHiveMetastore_drop_constraint_args() {
+  }
+
+  virtual ~ThriftHiveMetastore_drop_constraint_args() throw();
+  DropConstraintRequest req;
+
+  _ThriftHiveMetastore_drop_constraint_args__isset __isset;
+
+  void __set_req(const DropConstraintRequest& val);
+
+  bool operator == (const ThriftHiveMetastore_drop_constraint_args & rhs) const
+  {
+if (!(req == rhs.req))
+  return false;
+return true;
+  }
+  bool operator != (const ThriftHiveMetastore_drop_constraint_args &rhs) const 
{
+return !(*this == rhs);
+  }
+
+  bool operator < (const ThriftHiveMetastore_drop_constraint_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class ThriftHiveMetastore_drop_constraint_pargs {
+ public:
+
+
+  virtual ~ThriftHiveMetastore_drop_constraint_pargs() throw();
+  const DropConstraintRequest* req;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _ThriftHiveMetastore_drop_constraint_result__isset {
+  _ThriftHiveMetastore_drop_constraint_result__isset() : o1(false), o3(false) 
{}
+  bool o1 :1;
+  bool o3 :1;
+} _ThriftHiveMetastore_drop_constraint_result__isset;
+
+class ThriftHiveMetastore_drop_constraint_result {
+ public:
+
+  ThriftHiveMetastore_drop_constraint_result(const 
ThriftHiveMetastore_drop_constraint_result&);
+  ThriftHiveMetastore_drop_constraint_result& operator=(const 
ThriftHiveMetastore_drop_constraint_result&);
+  ThriftHiveMetastore_drop_constraint_result() {
+  }
+
+  virtual ~ThriftHiveMetastore_drop_constraint_result() throw();
+  NoSuchObjectException o1;
+  MetaException o3;
+
+  _ThriftHiveMetastore_drop_constraint_result__isset __isset;
+
+  void __set_o1(const NoSuchObjectException& val);
+
+  void __set_o3(const MetaException& val);
+
+  bool operator == (const ThriftHiveMetastore_drop_constraint_result & rhs) 
const
+  {
+if (!(o1 == rhs.o1))
+  return false;
+if (!(o3 == rhs.o3))
+  return false;
+return true;
+  }
+  bool operator != (const ThriftHiveMetastore_drop_constraint_result &rhs) 
const {
+return !(*this == rhs);
+  }
+
+  bool operator < (const ThriftHiveMetastore_drop_constraint_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _ThriftHiveMetastore_drop_constraint_presult__isset {
+  _ThriftHiveMetastore_drop_constraint_presult__isset() : o1(false), o3(false) 
{}
+  bool o1 :1;
+  bool 
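
On the Java client side the two result slots surface as checked exceptions; a hedged sketch of the mapping (client assumed to be a connected ThriftHiveMetastore.Client, and the ignore-on-missing policy is the example's choice, not the patch's):

    import org.apache.hadoop.hive.metastore.api.DropConstraintRequest;
    import org.apache.hadoop.hive.metastore.api.MetaException;
    import org.apache.hadoop.hive.metastore.api.NoSuchObjectException;
    import org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore;
    import org.apache.thrift.TException;

    public class DropConstraintErrorHandling {
      static void dropIfPresent(ThriftHiveMetastore.Client client, DropConstraintRequest req)
          throws TException {
        try {
          client.drop_constraint(req);
        } catch (NoSuchObjectException e) {
          // o1: the table or the named constraint does not exist; safe to ignore here
        } catch (MetaException e) {
          // o3: genuine metastore failure; propagate
          throw e;
        }
      }
    }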

[01/20] hive git commit: HIVE-13516: Adding BTEQ .IF, .QUIT, ERRORCODE to HPL/SQL (Dmitry Tolpeko reviewed by Alan Gates)

2016-05-05 Thread jdere
Repository: hive
Updated Branches:
  refs/heads/llap 03ee0481a -> 763e6969d


HIVE-13516: Adding BTEQ .IF, .QUIT, ERRORCODE to HPL/SQL (Dmitry Tolpeko reviewed by Alan Gates)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/2d33d091
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/2d33d091
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/2d33d091

Branch: refs/heads/llap
Commit: 2d33d091b61dce092543970e62f41b63af1f32d1
Parents: 8729966
Author: Dmitry Tolpeko 
Authored: Wed May 4 03:13:18 2016 -0700
Committer: Dmitry Tolpeko 
Committed: Wed May 4 03:13:18 2016 -0700

--
 .../antlr4/org/apache/hive/hplsql/Hplsql.g4 | 108 ++---
 .../main/java/org/apache/hive/hplsql/Exec.java  |  67 +++-
 .../java/org/apache/hive/hplsql/Expression.java |  31 ++--
 .../java/org/apache/hive/hplsql/Select.java |  31 ++--
 .../java/org/apache/hive/hplsql/Signal.java |   2 +-
 .../main/java/org/apache/hive/hplsql/Stmt.java  | 154 ---
 hplsql/src/main/resources/hplsql-site.xml   |   2 -
 .../org/apache/hive/hplsql/TestHplsqlLocal.java |   5 +
 .../apache/hive/hplsql/TestHplsqlOffline.java   |  20 +++
 hplsql/src/test/queries/local/if3_bteq.sql  |   3 +
 .../test/queries/offline/create_table_td.sql|  45 ++
 hplsql/src/test/queries/offline/delete_all.sql  |   1 +
 hplsql/src/test/queries/offline/select.sql  |  42 +
 .../test/queries/offline/select_teradata.sql|  12 ++
 hplsql/src/test/results/db/select_into.out.txt  |   3 +-
 hplsql/src/test/results/db/select_into2.out.txt |   4 +-
 hplsql/src/test/results/local/if3_bteq.out.txt  |   3 +
 hplsql/src/test/results/local/lang.out.txt  |  10 +-
 .../results/offline/create_table_mssql.out.txt  |  39 ++---
 .../results/offline/create_table_mssql2.out.txt |  13 +-
 .../results/offline/create_table_mysql.out.txt  |   5 +-
 .../results/offline/create_table_ora.out.txt|  65 
 .../results/offline/create_table_ora2.out.txt   |   9 +-
 .../results/offline/create_table_pg.out.txt |   7 +-
 .../results/offline/create_table_td.out.txt |  31 
 .../src/test/results/offline/delete_all.out.txt |   2 +
 hplsql/src/test/results/offline/select.out.txt  |  34 
 .../src/test/results/offline/select_db2.out.txt |   3 +-
 .../results/offline/select_teradata.out.txt |  10 ++
 29 files changed, 589 insertions(+), 172 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/2d33d091/hplsql/src/main/antlr4/org/apache/hive/hplsql/Hplsql.g4
--
diff --git a/hplsql/src/main/antlr4/org/apache/hive/hplsql/Hplsql.g4 
b/hplsql/src/main/antlr4/org/apache/hive/hplsql/Hplsql.g4
index b84116f..5ce0e23 100644
--- a/hplsql/src/main/antlr4/org/apache/hive/hplsql/Hplsql.g4
+++ b/hplsql/src/main/antlr4/org/apache/hive/hplsql/Hplsql.g4
@@ -30,7 +30,7 @@ single_block_stmt :  // Single BEGIN END block
T_BEGIN block exception_block? block_end
  | stmt T_SEMICOLON?
  ;
-
+
 block_end :
{!_input.LT(2).getText().equalsIgnoreCase("TRANSACTION")}? T_END 
  ;
@@ -48,6 +48,7 @@ stmt :
  | begin_transaction_stmt
  | break_stmt
  | call_stmt
+ | collect_stats_stmt
  | close_stmt
  | cmp_stmt
  | copy_from_ftp_stmt
@@ -83,6 +84,7 @@ stmt :
  | merge_stmt
  | open_stmt
  | print_stmt
+ | quit_stmt
  | raise_stmt
  | resignal_stmt
  | return_stmt
@@ -181,9 +183,9 @@ declare_block_inplace :
  
 declare_stmt_item :
declare_cursor_item
- | declare_var_item 
  | declare_condition_item  
  | declare_handler_item
+ | declare_var_item 
  | declare_temporary_table_item
  ;
 
@@ -213,15 +215,19 @@ declare_handler_item : // Condition handler 
declaration
  ;
  
 declare_temporary_table_item : // DECLARE TEMPORARY TABLE statement
-   T_GLOBAL? T_TEMPORARY T_TABLE ident (T_AS? T_OPEN_P select_stmt 
T_CLOSE_P | T_AS? select_stmt | T_OPEN_P create_table_columns T_CLOSE_P) 
create_table_options?
+   T_GLOBAL? T_TEMPORARY T_TABLE ident create_table_preoptions? 
create_table_definition
  ;
  
 create_table_stmt :
-   T_CREATE T_TABLE (T_IF T_NOT T_EXISTS)? table_name T_OPEN_P 
create_table_columns T_CLOSE_P create_table_options?
+   T_CREATE T_TABLE (T_IF T_NOT T_EXISTS)? table_name 
create_table_preoptions? create_table_definition
  ;
  
 create_local_temp_table_stmt :
-   T_CREATE (T_LOCAL T_TEMPORARY | (T_SET | T_MULTISET)? T_VOLATILE) 
T_TABLE ident create_table_preoptions? T_OPEN_P create_table_columns T_CLOSE_P 
create_table_options?
+   T_CREATE (T_LOCAL T_TEMPORARY | (T_SET | T_MULTISET)? T_VOLATILE) 
T_TABLE ident create_table_preoptions? create_tab

[05/20] hive git commit: HIVE-13351: Support drop Primary Key/Foreign Key constraints (Hari Subramaniyan, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jdere
http://git-wip-us.apache.org/repos/asf/hive/blob/212077b8/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
--
diff --git a/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h 
b/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
index d392f67..3b3e05e 100644
--- a/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
+++ b/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
@@ -251,6 +251,8 @@ class ForeignKeysRequest;
 
 class ForeignKeysResponse;
 
+class DropConstraintRequest;
+
 class PartitionsByExprResult;
 
 class PartitionsByExprRequest;
@@ -3779,6 +3781,56 @@ inline std::ostream& operator<<(std::ostream& out, const 
ForeignKeysResponse& ob
 }
 
 
+class DropConstraintRequest {
+ public:
+
+  DropConstraintRequest(const DropConstraintRequest&);
+  DropConstraintRequest& operator=(const DropConstraintRequest&);
+  DropConstraintRequest() : dbname(), tablename(), constraintname() {
+  }
+
+  virtual ~DropConstraintRequest() throw();
+  std::string dbname;
+  std::string tablename;
+  std::string constraintname;
+
+  void __set_dbname(const std::string& val);
+
+  void __set_tablename(const std::string& val);
+
+  void __set_constraintname(const std::string& val);
+
+  bool operator == (const DropConstraintRequest & rhs) const
+  {
+if (!(dbname == rhs.dbname))
+  return false;
+if (!(tablename == rhs.tablename))
+  return false;
+if (!(constraintname == rhs.constraintname))
+  return false;
+return true;
+  }
+  bool operator != (const DropConstraintRequest &rhs) const {
+return !(*this == rhs);
+  }
+
+  bool operator < (const DropConstraintRequest & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+  virtual void printTo(std::ostream& out) const;
+};
+
+void swap(DropConstraintRequest &a, DropConstraintRequest &b);
+
+inline std::ostream& operator<<(std::ostream& out, const 
DropConstraintRequest& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
+
 class PartitionsByExprResult {
  public:
 

http://git-wip-us.apache.org/repos/asf/hive/blob/212077b8/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropConstraintRequest.java
--
diff --git 
a/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropConstraintRequest.java
 
b/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropConstraintRequest.java
new file mode 100644
index 000..4519dac
--- /dev/null
+++ 
b/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropConstraintRequest.java
@@ -0,0 +1,591 @@
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.hadoop.hive.metastore.api;
+
+import org.apache.thrift.scheme.IScheme;
+import org.apache.thrift.scheme.SchemeFactory;
+import org.apache.thrift.scheme.StandardScheme;
+
+import org.apache.thrift.scheme.TupleScheme;
+import org.apache.thrift.protocol.TTupleProtocol;
+import org.apache.thrift.protocol.TProtocolException;
+import org.apache.thrift.EncodingUtils;
+import org.apache.thrift.TException;
+import org.apache.thrift.async.AsyncMethodCallback;
+import org.apache.thrift.server.AbstractNonblockingServer.*;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import javax.annotation.Generated;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class DropConstraintRequest implements org.apache.thrift.TBase<DropConstraintRequest, DropConstraintRequest._Fields>, java.io.Serializable, Cloneable, Comparable<DropConstraintRequest> {
+  private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new 
org.apache.thrift.protocol.TStruct("DropConstraintRequest");
+
+  private static final org.apache.thrift.protocol.TField DBNAME_FIELD_DESC = 
new org.apache.thrift.protocol.TField("dbname", 
org.apache.thrift.protocol.TType.STRING, (short)1);
+  private static final org.apache.thrift.protocol.TField TABLENAME_FIELD_DESC 
= new org.apache.thrift.protocol.TField("tablename", 
org.apache.thrift.protocol.TType.STRING, (short)2);
+  private static final org.apache.thrift.protocol.TField 
CONSTRAINTNAME_FIELD_DESC = new 
org.apache.thrift.protocol.TField("constraintname", 
org.apache.thrift.protocol.TType.STRING, (short)3);
+
+  private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+  static {
+schemes.put(StandardScheme.class, new 
DropConstraintR
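
Because the generated class implements TBase, it round-trips through any Thrift protocol; a small self-contained sketch using libthrift's TSerializer/TDeserializer (field values are illustrative only):

    import org.apache.hadoop.hive.metastore.api.DropConstraintRequest;
    import org.apache.thrift.TDeserializer;
    import org.apache.thrift.TSerializer;
    import org.apache.thrift.protocol.TBinaryProtocol;

    public class DropConstraintRoundTrip {
      public static void main(String[] args) throws Exception {
        DropConstraintRequest req = new DropConstraintRequest();
        req.setDbname("default");            // field 1, TType.STRING
        req.setTablename("orders");          // field 2, TType.STRING
        req.setConstraintname("pk_orders");  // field 3, TType.STRING

        byte[] wire = new TSerializer(new TBinaryProtocol.Factory()).serialize(req);
        DropConstraintRequest copy = new DropConstraintRequest();
        new TDeserializer(new TBinaryProtocol.Factory()).deserialize(copy, wire);
        System.out.println(req.equals(copy));  // true: all three fields survive
      }
    }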

[18/20] hive git commit: HIVE-13639: CBO rule to pull up constants through Union (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jdere
http://git-wip-us.apache.org/repos/asf/hive/blob/09271872/ql/src/test/results/clientpositive/perf/query75.q.out
--
diff --git a/ql/src/test/results/clientpositive/perf/query75.q.out 
b/ql/src/test/results/clientpositive/perf/query75.q.out
index 15c46c2..731ff62 100644
--- a/ql/src/test/results/clientpositive/perf/query75.q.out
+++ b/ql/src/test/results/clientpositive/perf/query75.q.out
@@ -41,363 +41,367 @@ Stage-0
   <-Reducer 7 [SIMPLE_EDGE]
 SHUFFLE [RS_153]
   Select Operator [SEL_152] (rows=169103 width=1436)
-
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9"]
+
Output:["_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9"]
 Filter Operator [FIL_151] (rows=169103 width=1436)
                  predicate:(UDFToDouble((CAST( _col5 AS decimal(17,2)) / CAST( _col12 AS decimal(17,2)))) < 0.9)
   Merge Join Operator [MERGEJOIN_259] (rows=507310 width=1436)
-Conds:RS_148._col1, _col2, _col3, _col4=RS_149._col1, 
_col2, _col3, 
_col4(Inner),Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col12","_col13"]
+Conds:RS_148._col1, _col2, _col3, _col4=RS_149._col1, 
_col2, _col3, 
_col4(Inner),Output:["_col1","_col2","_col3","_col4","_col5","_col6","_col12","_col13"]
   <-Reducer 31 [SIMPLE_EDGE]
 SHUFFLE [RS_149]
   PartitionCols:_col1, _col2, _col3, _col4
-  Group By Operator [GBY_146] (rows=461191 width=1436)
-
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6"],aggregations:["sum(VALUE._col0)","sum(VALUE._col1)"],keys:KEY._col0,
 KEY._col1, KEY._col2, KEY._col3, KEY._col4
-  <-Union 30 [SIMPLE_EDGE]
-<-Reducer 29 [CONTAINS]
-  Reduce Output Operator [RS_145]
-PartitionCols:_col0, _col1, _col2, _col3, _col4
-Group By Operator [GBY_144] (rows=922383 
width=1436)
-  
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6"],aggregations:["sum(_col5)","sum(_col6)"],keys:_col0,
 _col1, _col2, _col3, _col4
-  Select Operator [SEL_142] (rows=922383 
width=1436)
-
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6"]
-Select Operator [SEL_95] (rows=307461 
width=1436)
+  Select Operator [SEL_147] (rows=461191 width=1436)
+
Output:["_col1","_col2","_col3","_col4","_col5","_col6"]
+Group By Operator [GBY_146] (rows=461191 width=1436)
+  
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6"],aggregations:["sum(VALUE._col0)","sum(VALUE._col1)"],keys:2001,
 KEY._col1, KEY._col2, KEY._col3, KEY._col4
+<-Union 30 [SIMPLE_EDGE]
+  <-Reducer 29 [CONTAINS]
+Reduce Output Operator [RS_145]
+  PartitionCols:2001, _col1, _col2, _col3, _col4
+  Group By Operator [GBY_144] (rows=922383 
width=1436)
+
Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6"],aggregations:["sum(_col5)","sum(_col6)"],keys:2001,
 _col1, _col2, _col3, _col4
+Select Operator [SEL_142] (rows=922383 
width=1436)
   
Output:["_col1","_col2","_col3","_col4","_col5","_col6"]
-  Merge Join Operator [MERGEJOIN_252] 
(rows=307461 width=1436)
-Conds:RS_92._col2, _col1=RS_93._col1, 
_col0(Left 
Outer),Output:["_col3","_col4","_col6","_col7","_col8","_col10","_col15","_col16"]
-  <-Map 34 [SIMPLE_EDGE]
-SHUFFLE [RS_93]
-  PartitionCols:_col1, _col0
-  Select Operator [SEL_85] (rows=1 width=0)
-
Output:["_col0","_col1","_col2","_col3"]
-Filter Operator [FIL_232] (rows=1 
width=0)
-  predicate:cr_item_sk is not null
-  TableScan [TS_83] (rows=1 width=0)
-
default@catalog_returns,catalog_returns,Tbl:PARTIAL,Col:NONE,Output:["cr_item_sk","cr_order_number","cr_return_quantity","cr_return_amount"]
-  <-Reducer 28 [SIMPLE_EDGE]
-SHUFFLE [RS_92]
-  PartitionCols:_col2, _col1
-

hive git commit: HIVE-13695: LlapOutputFormatService port should be able to be set via conf

2016-05-05 Thread jdere
Repository: hive
Updated Branches:
  refs/heads/llap 2a03f1f46 -> 03ee0481a


HIVE-13695: LlapOutputFormatService port should be able to be set via conf


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/03ee0481
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/03ee0481
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/03ee0481

Branch: refs/heads/llap
Commit: 03ee0481a518585a4a92875d88c560ff525d75d4
Parents: 2a03f1f
Author: Jason Dere 
Authored: Thu May 5 12:56:20 2016 -0700
Committer: Jason Dere 
Committed: Thu May 5 12:56:20 2016 -0700

--
 .../hive/llap/daemon/impl/LlapDaemon.java   |  6 +++
 .../hive/llap/daemon/MiniLlapCluster.java   |  3 ++
 .../hive/llap/LlapOutputFormatService.java  | 44 +---
 3 files changed, 38 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/03ee0481/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java
--
diff --git 
a/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java 
b/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java
index 223c390..b3c1abf 100644
--- 
a/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java
+++ 
b/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapDaemon.java
@@ -132,6 +132,10 @@ public class LlapDaemon extends CompositeService 
implements ContainerRunner, Lla
 "Work dirs must be specified");
 Preconditions.checkArgument(shufflePort == 0 || (shufflePort > 1024 && 
shufflePort < 65536),
 "Shuffle Port must be betwee 1024 and 65535, or 0 for automatic 
selection");
+int outputFormatServicePort = HiveConf.getIntVar(daemonConf, 
HiveConf.ConfVars.LLAP_DAEMON_OUTPUT_SERVICE_PORT);
+Preconditions.checkArgument(outputFormatServicePort == 0
+|| (outputFormatServicePort > 1024 && outputFormatServicePort < 65536),
+"OutputFormatService Port must be between 1024 and 65535, or 0 for 
automatic selection");
 String hosts = HiveConf.getTrimmedVar(daemonConf, 
ConfVars.LLAP_DAEMON_SERVICE_HOSTS);
 if (hosts.startsWith("@")) {
   String zkHosts = HiveConf.getTrimmedVar(daemonConf, 
ConfVars.HIVE_ZOOKEEPER_QUORUM);
@@ -165,6 +169,7 @@ public class LlapDaemon extends CompositeService implements 
ContainerRunner, Lla
 ", rpcListenerPort=" + srvPort +
 ", mngListenerPort=" + mngPort +
 ", webPort=" + webPort +
+", outputFormatSvcPort=" + outputFormatServicePort +
 ", workDirs=" + Arrays.toString(localDirs) +
 ", shufflePort=" + shufflePort +
 ", executorMemory=" + executorMemoryBytes +
@@ -335,6 +340,7 @@ public class LlapDaemon extends CompositeService implements 
ContainerRunner, Lla
 this.shufflePort.set(ShuffleHandler.get().getPort());
 getConfig()
 .setInt(ConfVars.LLAP_DAEMON_YARN_SHUFFLE_PORT.varname, 
ShuffleHandler.get().getPort());
+LlapOutputFormatService.initializeAndStart(getConfig());
 super.serviceStart();
 
 // Setup the actual ports in the configuration.

http://git-wip-us.apache.org/repos/asf/hive/blob/03ee0481/llap-server/src/test/org/apache/hadoop/hive/llap/daemon/MiniLlapCluster.java
--
diff --git 
a/llap-server/src/test/org/apache/hadoop/hive/llap/daemon/MiniLlapCluster.java 
b/llap-server/src/test/org/apache/hadoop/hive/llap/daemon/MiniLlapCluster.java
index dde5be0..e394191 100644
--- 
a/llap-server/src/test/org/apache/hadoop/hive/llap/daemon/MiniLlapCluster.java
+++ 
b/llap-server/src/test/org/apache/hadoop/hive/llap/daemon/MiniLlapCluster.java
@@ -166,6 +166,7 @@ public class MiniLlapCluster extends AbstractService {
 int mngPort = 0;
 int shufflePort = 0;
 int webPort = 0;
+int outputFormatServicePort = 0;
 boolean usePortsFromConf = conf.getBoolean("minillap.usePortsFromConf", 
false);
 LOG.info("MiniLlap configured to use ports from conf: {}", 
usePortsFromConf);
 if (usePortsFromConf) {
@@ -173,7 +174,9 @@ public class MiniLlapCluster extends AbstractService {
   mngPort = HiveConf.getIntVar(conf, 
HiveConf.ConfVars.LLAP_MANAGEMENT_RPC_PORT);
   shufflePort = conf.getInt(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY, 
ShuffleHandler.DEFAULT_SHUFFLE_PORT);
   webPort = HiveConf.getIntVar(conf, ConfVars.LLAP_DAEMON_WEB_PORT);
+  outputFormatServicePort = HiveConf.getIntVar(conf, 
ConfVars.LLAP_DAEMON_OUTPUT_SERVICE_PORT);
 }
+HiveConf.setIntVar(conf, ConfVars.LLAP_DAEMON_OUTPUT_SERVICE_PORT, 
outputFormatServicePort);
 
 if (ownZkCluster) {
   miniZooKeeperCluster = new MiniZooKeeperCluster();

http://git-wip-us.apache.org/repos/asf/hive/blob
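
A quick sketch of driving the new knob programmatically (the value 15003 is an arbitrary illustration, not a documented default):

    import org.apache.hadoop.hive.conf.HiveConf;

    public class OutputServicePortConfig {
      public static void main(String[] args) {
        HiveConf conf = new HiveConf();
        // Any port in (1024, 65536) passes the new Preconditions check;
        // 0 requests automatic selection, mirroring the shuffle-port convention.
        HiveConf.setIntVar(conf, HiveConf.ConfVars.LLAP_DAEMON_OUTPUT_SERVICE_PORT, 15003);
        System.out.println(
            HiveConf.getIntVar(conf, HiveConf.ConfVars.LLAP_DAEMON_OUTPUT_SERVICE_PORT));
      }
    }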

hive git commit: HIVE-13620: Merge llap branch work to master (committing changes from review feedback)

2016-05-05 Thread jdere
Repository: hive
Updated Branches:
  refs/heads/llap e05790973 -> 2a03f1f46


HIVE-13620: Merge llap branch work to master (committing changes from review 
feedback)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/2a03f1f4
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/2a03f1f4
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/2a03f1f4

Branch: refs/heads/llap
Commit: 2a03f1f4648c683414c0b23be0aebbfd614d105c
Parents: e057909
Author: Jason Dere 
Authored: Thu May 5 12:29:14 2016 -0700
Committer: Jason Dere 
Committed: Thu May 5 12:29:14 2016 -0700

--
 .../hive/llap/ext/TestLlapInputSplit.java   |  18 ++
 .../apache/hive/jdbc/TestJdbcWithMiniLlap.java  |  37 +
 .../org/apache/hadoop/hive/ql/QTestUtil.java|   3 +-
 .../hadoop/hive/llap/LlapBaseRecordReader.java  |  38 +++--
 .../hadoop/hive/llap/LlapRowRecordReader.java   |  48 --
 .../apache/hadoop/hive/llap/SubmitWorkInfo.java |  16 ++
 .../ext/LlapTaskUmbilicalExternalClient.java|  46 +++---
 .../helpers/LlapTaskUmbilicalServer.java|  16 ++
 .../hadoop/hive/llap/LlapBaseInputFormat.java   |  20 +--
 .../org/apache/hadoop/hive/llap/LlapDump.java   |  11 +-
 .../hadoop/hive/llap/LlapRowInputFormat.java|  18 ++
 .../hive/llap/daemon/impl/LlapDaemon.java   |   2 +-
 .../llap/daemon/impl/TaskRunnerCallable.java|   6 +-
 .../daemon/impl/TaskExecutorTestHelpers.java|   2 +-
 .../hadoop/hive/llap/LlapDataOutputBuffer.java  | 165 ---
 .../hive/llap/LlapOutputFormatService.java  |  27 +--
 .../hive/ql/exec/tez/HiveSplitGenerator.java|   4 +-
 .../hive/ql/io/HivePassThroughRecordWriter.java |   4 -
 .../hive/ql/parse/TypeCheckProcFactory.java |   9 +-
 .../ql/udf/generic/GenericUDTFGetSplits.java|   1 -
 .../org/apache/tez/dag/api/TaskSpecBuilder.java |  17 +-
 .../hadoop/hive/llap/TestLlapOutputFormat.java  |   2 +-
 .../results/clientpositive/show_functions.q.out |   1 +
 23 files changed, 209 insertions(+), 302 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/2a03f1f4/itests/hive-unit/src/test/java/org/apache/hadoop/hive/llap/ext/TestLlapInputSplit.java
--
diff --git 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/llap/ext/TestLlapInputSplit.java
 
b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/llap/ext/TestLlapInputSplit.java
index 8264190..1de8aa6 100644
--- 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/llap/ext/TestLlapInputSplit.java
+++ 
b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/llap/ext/TestLlapInputSplit.java
@@ -1,3 +1,21 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 package org.apache.hadoop.hive.llap.ext;
 
 import java.io.ByteArrayInputStream;

http://git-wip-us.apache.org/repos/asf/hive/blob/2a03f1f4/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlap.java
--
diff --git 
a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlap.java 
b/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlap.java
index 5b4ba49..48b9493 100644
--- 
a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlap.java
+++ 
b/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniLlap.java
@@ -161,39 +161,7 @@ public class TestJdbcWithMiniLlap {
 stmt.close();
   }
 
-  private static boolean timedOut = false;
-
-  private static class TestTimerTask extends TimerTask {
-private boolean timedOut = false;
-private Thread threadToInterrupt;
-
-public TestTimerTask(Thread threadToInterrupt) {
-  this.threadToInterrupt = threadToInterrupt;
-}
-
-@Override
-public void run() {
-  System.out.println("Test timed out!");
-  timedOut = true;
-  threadToInterrupt.interrupt();
-}
-
-public boolean isTimedOut() {
-  return timedOut;
-}
-
-public void setTimedOut(boolean timedOut) {
-  this
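
The hand-rolled TimerTask machinery being deleted here has an idiomatic JUnit equivalent; whether the patch substitutes exactly this is cut off above, so treat it as an assumed replacement, not a quote from the diff:

    import org.junit.Test;

    public class TimeoutSketch {
      // JUnit interrupts and fails the test when the deadline passes,
      // which is what TestTimerTask implemented by hand.
      @Test(timeout = 300_000)  // five minutes, in milliseconds
      public void longRunningLlapQuery() throws Exception {
        // test body elided
      }
    }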

[3/3] hive git commit: HIVE-13639: CBO rule to pull up constants through Union (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jcamacho
HIVE-13639: CBO rule to pull up constants through Union (Jesus Camacho 
Rodriguez, reviewed by Ashutosh Chauhan)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/09271872
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/09271872
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/09271872

Branch: refs/heads/master
Commit: 092718720a4abc77ce74c2efcf42cfef0243e9d4
Parents: f41d693
Author: Jesus Camacho Rodriguez 
Authored: Wed May 4 22:01:52 2016 +0100
Committer: Jesus Camacho Rodriguez 
Committed: Thu May 5 20:21:50 2016 +0100

--
 .../rules/HiveUnionPullUpConstantsRule.java | 133 
 .../hadoop/hive/ql/parse/CalcitePlanner.java|   2 +
 .../queries/clientpositive/cbo_union_view.q |  19 +
 .../results/clientpositive/cbo_input26.q.out|  64 +-
 .../results/clientpositive/cbo_union_view.q.out | 228 ++
 .../results/clientpositive/groupby_ppd.q.out|  28 +-
 .../results/clientpositive/perf/query66.q.out   | 328 -
 .../results/clientpositive/perf/query75.q.out   | 692 ++-
 .../clientpositive/spark/union_remove_25.q.out  |  48 +-
 .../clientpositive/spark/union_view.q.out   |  60 +-
 .../results/clientpositive/union_view.q.out |  60 +-
 11 files changed, 1021 insertions(+), 641 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/09271872/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
new file mode 100644
index 000..3155cb1
--- /dev/null
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveUnionPullUpConstantsRule.java
@@ -0,0 +1,133 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.optimizer.calcite.rules;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.calcite.plan.RelOptPredicateList;
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.plan.RelOptUtil;
+import org.apache.calcite.rel.core.Union;
+import org.apache.calcite.rel.metadata.RelMetadataQuery;
+import org.apache.calcite.rel.type.RelDataTypeField;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexUtil;
+import org.apache.calcite.tools.RelBuilder;
+import org.apache.calcite.tools.RelBuilderFactory;
+import org.apache.calcite.util.Pair;
+import org.apache.calcite.util.mapping.Mappings;
+import org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelFactories;
+import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveUnion;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.collect.ImmutableList;
+
+/**
+ * Planner rule that pulls up constants through a Union operator.
+ */
+public class HiveUnionPullUpConstantsRule extends RelOptRule {
+
+  protected static final Logger LOG = 
LoggerFactory.getLogger(HiveUnionPullUpConstantsRule.class);
+
+
+  public static final HiveUnionPullUpConstantsRule INSTANCE =
+  new HiveUnionPullUpConstantsRule(HiveUnion.class,
+  HiveRelFactories.HIVE_BUILDER);
+
+  private HiveUnionPullUpConstantsRule(
+  Class<? extends Union> unionClass,
+  RelBuilderFactory relBuilderFactory) {
+super(operand(unionClass, any()),
+relBuilderFactory, null);
+  }
+
+  @Override
+  public void onMatch(RelOptRuleCall call) {
+final Union union = call.rel(0);
+
+final int count = union.getRowType().getFieldCount();
+if (count == 1) {
+  // No room for optimization since we cannot create an empty
+  // Project operator.
+  return;
+}
+
+final RexBuilder rexBuilder = union.getCluster().getRexBuilder();
+final R
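
The effect of the rule is easiest to see away from Calcite; a toy sketch in plain Java (schema assumed as [year, key, qty] with the year literal identical across branches, cf. the keys:2001 lines in the query75 plan diff):

    import java.util.ArrayList;
    import java.util.List;

    public class UnionPullUpConstantsToy {
      public static void main(String[] args) {
        // Both union branches emit the literal 2001 in column 0.
        List<int[]> branch1 = List.of(new int[]{2001, 1, 10}, new int[]{2001, 2, 20});
        List<int[]> branch2 = List.of(new int[]{2001, 3, 30});

        // Rewritten plan: project the constant away below the union...
        List<int[]> union = new ArrayList<>();
        for (int[] r : branch1) union.add(new int[]{r[1], r[2]});
        for (int[] r : branch2) union.add(new int[]{r[1], r[2]});

        // ...and re-attach it once, in a single Select above the union.
        for (int[] r : union) {
          System.out.printf("%d %d %d%n", 2001, r[0], r[1]);
        }
      }
    }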

[2/3] hive git commit: HIVE-13639: CBO rule to pull up constants through Union (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jcamacho

[1/3] hive git commit: HIVE-13639: CBO rule to pull up constants through Union (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan)

2016-05-05 Thread jcamacho
Repository: hive
Updated Branches:
  refs/heads/master f41d693b5 -> 092718720


http://git-wip-us.apache.org/repos/asf/hive/blob/09271872/ql/src/test/results/clientpositive/union_view.q.out
--
diff --git a/ql/src/test/results/clientpositive/union_view.q.out 
b/ql/src/test/results/clientpositive/union_view.q.out
index badd209..530739e 100644
--- a/ql/src/test/results/clientpositive/union_view.q.out
+++ b/ql/src/test/results/clientpositive/union_view.q.out
@@ -358,12 +358,12 @@ STAGE PLANS:
   Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE 
Column stats: NONE
   Select Operator
 expressions: value (type: string)
-outputColumnNames: _col1
+outputColumnNames: _col0
 Statistics: Num rows: 250 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
 Union
   Statistics: Num rows: 252 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
   Select Operator
-expressions: 86 (type: int), _col1 (type: string), '1' 
(type: string)
+expressions: 86 (type: int), _col0 (type: string), '1' 
(type: string)
 outputColumnNames: _col0, _col1, _col2
 Statistics: Num rows: 252 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
 File Output Operator
@@ -382,12 +382,12 @@ STAGE PLANS:
   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
   Select Operator
 expressions: value (type: string)
-outputColumnNames: _col1
+outputColumnNames: _col0
 Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
Column stats: NONE
 Union
   Statistics: Num rows: 252 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
   Select Operator
-expressions: 86 (type: int), _col1 (type: string), '1' 
(type: string)
+expressions: 86 (type: int), _col0 (type: string), '1' 
(type: string)
 outputColumnNames: _col0, _col1, _col2
 Statistics: Num rows: 252 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
 File Output Operator
@@ -406,12 +406,12 @@ STAGE PLANS:
   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
   Select Operator
 expressions: value (type: string)
-outputColumnNames: _col1
+outputColumnNames: _col0
 Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
Column stats: NONE
 Union
   Statistics: Num rows: 252 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
   Select Operator
-expressions: 86 (type: int), _col1 (type: string), '1' 
(type: string)
+expressions: 86 (type: int), _col0 (type: string), '1' 
(type: string)
 outputColumnNames: _col0, _col1, _col2
 Statistics: Num rows: 252 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
 File Output Operator
@@ -471,12 +471,12 @@ STAGE PLANS:
   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
   Select Operator
 expressions: value (type: string)
-outputColumnNames: _col1
+outputColumnNames: _col0
 Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
Column stats: NONE
 Union
   Statistics: Num rows: 502 Data size: 5312 Basic stats: 
COMPLETE Column stats: NONE
   Select Operator
-expressions: 86 (type: int), _col1 (type: string), '2' 
(type: string)
+expressions: 86 (type: int), _col0 (type: string), '2' 
(type: string)
 outputColumnNames: _col0, _col1, _col2
 Statistics: Num rows: 502 Data size: 5312 Basic stats: 
COMPLETE Column stats: NONE
 File Output Operator
@@ -495,12 +495,12 @@ STAGE PLANS:
   Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE 
Column stats: NONE
   Select Operator
 expressions: value (type: string)
-outputColumnNames: _col1
+outputColumnNames: _col0
 Statistics: Num rows: 500 Data size: 5312 Basic stats: 
COMPLETE Column stats: NONE
 Union
   Statistics: Num rows: 502 Data size: 5312 Basic stats: 
COMPLETE Column stats: NONE
   Select Operator
-expressions: 86 (type: int), _col1 (type: string), '2' 
(type: string)
+expressions: 86 (type

hive git commit: HIVE-13653 : improve config error messages for LLAP cache size/etc (Sergey Shelukhin, reviewed by Prasanth Jayachandran)

2016-05-05 Thread sershe
Repository: hive
Updated Branches:
  refs/heads/master 96f2dc723 -> f41d693b5


HIVE-13653 : improve config error messages for LLAP cache size/etc (Sergey 
Shelukhin, reviewed by Prasanth Jayachandran)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/f41d693b
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/f41d693b
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/f41d693b

Branch: refs/heads/master
Commit: f41d693b5b984ea55b01394af0dbb6c7121db90a
Parents: 96f2dc7
Author: Sergey Shelukhin 
Authored: Thu May 5 10:41:47 2016 -0700
Committer: Sergey Shelukhin 
Committed: Thu May 5 10:41:47 2016 -0700

--
 .../hadoop/hive/llap/cache/BuddyAllocator.java  | 43 +++-
 1 file changed, 32 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/f41d693b/llap-server/src/java/org/apache/hadoop/hive/llap/cache/BuddyAllocator.java
--
diff --git 
a/llap-server/src/java/org/apache/hadoop/hive/llap/cache/BuddyAllocator.java 
b/llap-server/src/java/org/apache/hadoop/hive/llap/cache/BuddyAllocator.java
index d78c1e0..1d5a7db 100644
--- a/llap-server/src/java/org/apache/hadoop/hive/llap/cache/BuddyAllocator.java
+++ b/llap-server/src/java/org/apache/hadoop/hive/llap/cache/BuddyAllocator.java
@@ -44,6 +44,8 @@ public final class BuddyAllocator implements 
EvictionAwareAllocator, BuddyAlloca
   // We don't know the acceptable size for Java array, so we'll use 1Gb 
boundary.
   // That is guaranteed to fit any maximum allocation.
   private static final int MAX_ARENA_SIZE = 1024*1024*1024;
+  // Don't try to operate with less than MIN_SIZE allocator space, it will 
just give you grief.
+  private static final int MIN_TOTAL_MEMORY_SIZE = 64*1024*1024;
 
 
   public BuddyAllocator(Configuration conf, MemoryManager mm, 
LlapDaemonCacheMetrics metrics) {
@@ -51,8 +53,19 @@ public final class BuddyAllocator implements 
EvictionAwareAllocator, BuddyAlloca
 (int)HiveConf.getSizeVar(conf, ConfVars.LLAP_ALLOCATOR_MIN_ALLOC),
 (int)HiveConf.getSizeVar(conf, ConfVars.LLAP_ALLOCATOR_MAX_ALLOC),
 HiveConf.getIntVar(conf, ConfVars.LLAP_ALLOCATOR_ARENA_COUNT),
-HiveConf.getSizeVar(conf, ConfVars.LLAP_IO_MEMORY_MAX_SIZE),
-mm, metrics);
+getMaxTotalMemorySize(conf), mm, metrics);
+  }
+
+  private static long getMaxTotalMemorySize(Configuration conf) {
+long maxSize = HiveConf.getSizeVar(conf, ConfVars.LLAP_IO_MEMORY_MAX_SIZE);
+if (maxSize > MIN_TOTAL_MEMORY_SIZE || HiveConf.getBoolVar(conf, 
ConfVars.HIVE_IN_TEST)) {
+  return maxSize;
+}
+throw new RuntimeException("Allocator space is too small for reasonable 
operation; "
++ ConfVars.LLAP_IO_MEMORY_MAX_SIZE.varname + "=" + maxSize + ", but at 
least "
++ MIN_TOTAL_MEMORY_SIZE + " is required. If you cannot spare any 
memory, you can "
++ "disable LLAP IO entirely via " + ConfVars.LLAP_IO_ENABLED.varname + 
"; or set "
++ ConfVars.LLAP_IO_MEMORY_MODE.varname + " to 'none'");
   }
 
   @VisibleForTesting
@@ -69,16 +82,19 @@ public final class BuddyAllocator implements 
EvictionAwareAllocator, BuddyAlloca
   + ", arena size " + arenaSizeVal + ". total size " + maxSizeVal);
 }
 
+String minName = ConfVars.LLAP_ALLOCATOR_MIN_ALLOC.varname,
+maxName = ConfVars.LLAP_ALLOCATOR_MAX_ALLOC.varname;
 if (minAllocation < 8) {
-  throw new AssertionError("Min allocation must be at least 8 bytes: " + 
minAllocation);
+  throw new RuntimeException(minName + " must be at least 8 bytes: " + 
minAllocation);
 }
-if (maxSizeVal < arenaSizeVal || maxAllocation < minAllocation) {
-  throw new AssertionError("Inconsistent sizes of cache, arena and 
allocations: "
-  + minAllocation + ", " + maxAllocation + ", " + arenaSizeVal + ", " 
+ maxSizeVal);
+if (maxSizeVal < maxAllocation || maxAllocation < minAllocation) {
+  throw new RuntimeException("Inconsistent sizes; expecting " + minName + 
" <= " + maxName
+  + " <= " + ConfVars.LLAP_IO_MEMORY_MAX_SIZE.varname + "; configured 
with min="
+  + minAllocation + ", max=" + maxAllocation + " and total=" + 
maxSizeVal);
 }
 if ((Integer.bitCount(minAllocation) != 1) || 
(Integer.bitCount(maxAllocation) != 1)) {
-  throw new AssertionError("Allocation sizes must be powers of two: "
-  + minAllocation + ", " + maxAllocation);
+  throw new RuntimeException("Allocation sizes must be powers of two; 
configured with "
+  + minName + "=" + minAllocation + ", " + maxName + "=" + 
maxAllocation);
 }
 if ((arenaSizeVal % maxAllocation) > 0) {
   long oldArenaSize = arenaSizeVal;
@@ -94,8 +110,8 @@ public final class BuddyAllocator impl
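
Distilled, the new checks are plain arithmetic; a standalone sketch of the same rules (constants copied from the hunks above, method name invented for illustration):

    public class AllocatorConfigCheck {
      private static final int MIN_TOTAL_MEMORY_SIZE = 64 * 1024 * 1024;

      static void validate(int minAllocation, int maxAllocation, long totalSize) {
        if (totalSize < MIN_TOTAL_MEMORY_SIZE) {
          throw new RuntimeException("Allocator space is too small for reasonable operation");
        }
        if (minAllocation < 8) {
          throw new RuntimeException("Min allocation must be at least 8 bytes: " + minAllocation);
        }
        // Integer.bitCount(x) == 1 is the power-of-two test the patch relies on.
        if (Integer.bitCount(minAllocation) != 1 || Integer.bitCount(maxAllocation) != 1) {
          throw new RuntimeException("Allocation sizes must be powers of two");
        }
        if (totalSize < maxAllocation || maxAllocation < minAllocation) {
          throw new RuntimeException("Expecting min <= max <= total");
        }
      }

      public static void main(String[] args) {
        validate(256 * 1024, 16 * 1024 * 1024, 1024L * 1024 * 1024);  // passes
        System.out.println("config ok");
      }
    }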

hive git commit: HIVE-13632: Hive failing on insert empty array into parquet table. (Yongzhi Chen, reviewed by Sergio Pena)

2016-05-05 Thread ychena
Repository: hive
Updated Branches:
  refs/heads/branch-1 32069e334 -> 8a59b85a6


HIVE-13632: Hive failing on insert empty array into parquet table. (Yongzhi 
Chen, reviewed by Sergio Pena)

Conflicts:

ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/8a59b85a
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/8a59b85a
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/8a59b85a

Branch: refs/heads/branch-1
Commit: 8a59b85a6beadee51d6bdb57ced55d46a6ed9556
Parents: 32069e3
Author: Yongzhi Chen 
Authored: Thu Apr 28 14:52:16 2016 -0400
Committer: Yongzhi Chen 
Committed: Thu May 5 11:05:44 2016 -0400

--
 .../serde/AbstractParquetMapInspector.java  |  4 +-
 .../serde/ParquetHiveArrayInspector.java|  4 +-
 .../ql/io/parquet/write/DataWritableWriter.java | 90 ++--
 .../ql/io/parquet/TestDataWritableWriter.java   | 29 +++
 .../serde/TestAbstractParquetMapInspector.java  |  4 +-
 .../serde/TestParquetHiveArrayInspector.java|  4 +-
 .../parquet_array_map_emptynullvals.q   | 20 +
 .../parquet_array_map_emptynullvals.q.out   | 87 +++
 8 files changed, 189 insertions(+), 53 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/8a59b85a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
index 49bf1c5..e80206e 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
@@ -60,7 +60,7 @@ public abstract class AbstractParquetMapInspector implements 
SettableMapObjectIn
 
 if (data instanceof ArrayWritable) {
   final Writable[] mapArray = ((ArrayWritable) data).get();
-  if (mapArray == null || mapArray.length == 0) {
+  if (mapArray == null) {
 return null;
   }
 
@@ -90,7 +90,7 @@ public abstract class AbstractParquetMapInspector implements 
SettableMapObjectIn
 if (data instanceof ArrayWritable) {
   final Writable[] mapArray = ((ArrayWritable) data).get();
 
-  if (mapArray == null || mapArray.length == 0) {
+  if (mapArray == null) {
 return -1;
   } else {
 return mapArray.length;

http://git-wip-us.apache.org/repos/asf/hive/blob/8a59b85a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
index 05e92b5..55614a3 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
@@ -83,7 +83,7 @@ public class ParquetHiveArrayInspector implements 
SettableListObjectInspector {
 
 if (data instanceof ArrayWritable) {
   final Writable[] array = ((ArrayWritable) data).get();
-  if (array == null || array.length == 0) {
+  if (array == null) {
 return -1;
   }
 
@@ -105,7 +105,7 @@ public class ParquetHiveArrayInspector implements 
SettableListObjectInspector {
 
 if (data instanceof ArrayWritable) {
   final Writable[] array = ((ArrayWritable) data).get();
-  if (array == null || array.length == 0) {
+  if (array == null) {
 return null;
   }
 

http://git-wip-us.apache.org/repos/asf/hive/blob/8a59b85a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
index c195c3e..24ad948 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
@@ -163,28 +163,28 @@ public class DataWritableWriter {
   private void writeArray(final Object value, final ListObjectInspector 
inspector, final GroupType type) {
 // Get the internal array structure
 GroupType repeatedType = type.getType(0).asGroupType();
-
 recordConsumer.startGroup();
-recordConsumer.startField(repeatedType.getName(), 0);
 
 L
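
The inspector-side change is visible directly in the hunks: an empty backing array now reports its true size instead of collapsing to null or -1. A runnable illustration with Hadoop's ArrayWritable:

    import org.apache.hadoop.io.ArrayWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;

    public class EmptyArraySemantics {
      // Mirrors the fixed getMapSize/getListLength logic: only a null backing
      // array means "no data"; an empty array is a legitimate zero-length value.
      static int size(ArrayWritable data) {
        Writable[] array = data.get();
        return array == null ? -1 : array.length;
      }

      public static void main(String[] args) {
        System.out.println(size(new ArrayWritable(Text.class, new Writable[0])));  // 0, no longer -1
      }
    }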

hive git commit: HIVE-13632: Hive failing on insert empty array into parquet table. (Yongzhi Chen, reviewed by Sergio Pena)

2016-05-05 Thread ychena
Repository: hive
Updated Branches:
  refs/heads/master cbebb4d78 -> 96f2dc723


HIVE-13632: Hive failing on insert empty array into parquet table. (Yongzhi 
Chen, reviewed by Sergio Pena)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/96f2dc72
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/96f2dc72
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/96f2dc72

Branch: refs/heads/master
Commit: 96f2dc723270bb4c38e5ab842371929c2c1c849a
Parents: cbebb4d
Author: Yongzhi Chen 
Authored: Thu Apr 28 14:52:16 2016 -0400
Committer: Yongzhi Chen 
Committed: Thu May 5 09:58:39 2016 -0400

--
 .../serde/AbstractParquetMapInspector.java  |  4 +-
 .../serde/ParquetHiveArrayInspector.java|  4 +-
 .../ql/io/parquet/write/DataWritableWriter.java | 67 ---
 .../ql/io/parquet/TestDataWritableWriter.java   | 29 +++
 .../serde/TestAbstractParquetMapInspector.java  |  4 +-
 .../serde/TestParquetHiveArrayInspector.java|  4 +-
 .../parquet_array_map_emptynullvals.q   | 20 +
 .../parquet_array_map_emptynullvals.q.out   | 87 
 8 files changed, 180 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/96f2dc72/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
index 49bf1c5..e80206e 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java
@@ -60,7 +60,7 @@ public abstract class AbstractParquetMapInspector implements 
SettableMapObjectIn
 
 if (data instanceof ArrayWritable) {
   final Writable[] mapArray = ((ArrayWritable) data).get();
-  if (mapArray == null || mapArray.length == 0) {
+  if (mapArray == null) {
 return null;
   }
 
@@ -90,7 +90,7 @@ public abstract class AbstractParquetMapInspector implements 
SettableMapObjectIn
 if (data instanceof ArrayWritable) {
   final Writable[] mapArray = ((ArrayWritable) data).get();
 
-  if (mapArray == null || mapArray.length == 0) {
+  if (mapArray == null) {
 return -1;
   } else {
 return mapArray.length;

http://git-wip-us.apache.org/repos/asf/hive/blob/96f2dc72/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
index 05e92b5..55614a3 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java
@@ -83,7 +83,7 @@ public class ParquetHiveArrayInspector implements 
SettableListObjectInspector {
 
 if (data instanceof ArrayWritable) {
   final Writable[] array = ((ArrayWritable) data).get();
-  if (array == null || array.length == 0) {
+  if (array == null) {
 return -1;
   }
 
@@ -105,7 +105,7 @@ public class ParquetHiveArrayInspector implements 
SettableListObjectInspector {
 
 if (data instanceof ArrayWritable) {
   final Writable[] array = ((ArrayWritable) data).get();
-  if (array == null || array.length == 0) {
+  if (array == null) {
 return null;
   }
 

http://git-wip-us.apache.org/repos/asf/hive/blob/96f2dc72/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
index 69272dc..1e26c19 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
@@ -259,21 +259,24 @@ public class DataWritableWriter {
 @Override
 public void write(Object value) {
   recordConsumer.startGroup();
-  recordConsumer.startField(repeatedGroupName, 0);
-
   int listLength = inspector.getListLength(value);
-  for (int i = 0; i < listLength; i++) {
-Object element = inspector.getListElement(value, i);
-recordConsumer.startGroup();
-if (element != null) {
-  recordConsumer.startField(ele
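
The writer-side shape of the fix, reduced to a toy that runs on its own (the real logic lives in DataWritableWriter against Parquet's RecordConsumer; the Consumer interface below is a stand-in): the group is always opened and closed, but the repeated field is opened only when the list is non-empty, so an empty array is written as an empty group rather than being dropped as null.

    import java.util.List;

    public class WriteArraySketch {
      interface Consumer {  // minimal stand-in for Parquet's RecordConsumer
        void startGroup(); void endGroup();
        void startField(String name, int index); void endField(String name, int index);
        void addInteger(int value);
      }

      static void writeArray(Consumer c, String repeatedName, List<Integer> values) {
        c.startGroup();
        if (!values.isEmpty()) {  // the key change: guard the field, not the group
          c.startField(repeatedName, 0);
          for (int v : values) c.addInteger(v);
          c.endField(repeatedName, 0);
        }
        c.endGroup();
      }

      public static void main(String[] args) {
        Consumer tracing = new Consumer() {
          public void startGroup() { System.out.println("startGroup"); }
          public void endGroup() { System.out.println("endGroup"); }
          public void startField(String n, int i) { System.out.println("startField " + n); }
          public void endField(String n, int i) { System.out.println("endField " + n); }
          public void addInteger(int v) { System.out.println("addInteger " + v); }
        };
        writeArray(tracing, "bag", List.of());  // prints only startGroup/endGroup
      }
    }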