[hive] branch storage-branch-2.8 updated: Preparing for development storage api post-2.8.1

2021-08-03 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch storage-branch-2.8
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/storage-branch-2.8 by this 
push:
 new 2aace4f  Preparing for development storage api post-2.8.1
2aace4f is described below

commit 2aace4fee36632af7087463f1094dccf6cc1d70a
Author: Owen O'Malley 
AuthorDate: Tue Aug 3 10:49:01 2021 -0700

Preparing for development storage api post-2.8.1

Signed-off-by: Owen O'Malley 
---
 pom.xml  | 2 +-
 standalone-metastore/pom.xml | 2 +-
 storage-api/pom.xml  | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/pom.xml b/pom.xml
index 9369c9c..93283a5 100644
--- a/pom.xml
+++ b/pom.xml
@@ -198,7 +198,7 @@
 1.0.1
 1.7.30
 4.0.4
-2.8.1
+2.8.2-SNAPSHOT
 0.10.0
 2.2.0
 2.4.5
diff --git a/standalone-metastore/pom.xml b/standalone-metastore/pom.xml
index afe36c2..a3ec42c 100644
--- a/standalone-metastore/pom.xml
+++ b/standalone-metastore/pom.xml
@@ -99,7 +99,7 @@
 1.9.0
 2.14.6
 4.0.4
-2.8.1
+2.8.2-SNAPSHOT
 1.9.4
 1.3
 4.2.0
diff --git a/storage-api/pom.xml b/storage-api/pom.xml
index 1eec76a..56f9f9f 100644
--- a/storage-api/pom.xml
+++ b/storage-api/pom.xml
@@ -24,7 +24,7 @@
 
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-storage-api</artifactId>
-  <version>2.8.1</version>
+  <version>2.8.2-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>Hive Storage API</name>
 


[hive] annotated tag rel/storage-release-2.8.1 updated (02017b0 -> 97452aa)

2021-08-03 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to annotated tag rel/storage-release-2.8.1
in repository https://gitbox.apache.org/repos/asf/hive.git.


*** WARNING: tag rel/storage-release-2.8.1 was modified! ***

from 02017b0  (commit)
  to 97452aa  (tag)
 tagging 02017b04737de040b56ab40dacd120325ca6cc6f (commit)
 replaces storage-release-2.7.3-rc0
  by Owen O'Malley
  on Tue Aug 3 10:41:23 2021 -0700

- Log -
Hive Storage API 2.8.1
-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEEdeuq7Peil/8i6MGN0Z6wna0cWHcFAmEJf8MACgkQ0Z6wna0c
WHdREw/+PgWA0o2Pm6MpvmAYfXE/bEn4c+EQYIWF/8RTKCBR0T/iX4JyANugiAzl
aIzI5kTwyBUZ6TcdZHcRiSR9klG/kS5XFN5nat1ycfbzqN5k8FWdl51A7KQ7lfmB
d0egeFAepzDqnbh1exb4hhYHVu6T96x2i/SMweraEys8G5Il42RHveTP+jNj/OJs
nnrhWRO5vItClMdk0vJ3e3otem9RktXx6AIJ/mqMiTykr5PmwrG4rtpiDbKdzumL
HW3pNKGI88zfjk4hN5bMjjXn7c3pzWUmFpIXLhJCqP9Nrc3/IEPe5Dmk4mX977nD
S0oDBbSmna9Q1Yjp5ol3IEemNEW781EZQ6aWJ3a6m4BywBu6hX53qNuH54p7MNzO
XfelZ/Gj/C5M0Mxq8n3u3munhKQGKSN3BjPEsJsYpc6hvUK/TaZJY22dRjfhAwzy
ONIzAjtBe8Uw8SkxKSrG2ZJX5vBvVZ68kOgEseDSqT7VoALp2YgAE3ZCcuPAXyam
2OnoM/M3OW6TDTRZO3m9t9R0CD/nMs0OCKoCpvUGv2TgViJV7zEofHBWOJ8RXgbg
Ben9DpkOC7v8vFc1ym1zfWy/I/H7l+nzsjNQcJURNTxGNDePPUR0fuDlcJoZn2Pd
z4JhpRswyt4TBmwrW6HzTTDfqdqWrDMy4bMyvZ1v7CtPwlz/zPw=
=VpN2
-----END PGP SIGNATURE-----
---


No new revisions were added by this update.

Summary of changes:


[hive] branch storage-branch-2.7 updated: Preparing for development post storage-2.7.3

2021-08-03 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch storage-branch-2.7
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/storage-branch-2.7 by this 
push:
 new 614ed05  Preparing for development post storage-2.7.3
614ed05 is described below

commit 614ed05e50bc2274cf3a3de683d35fa625ea60f9
Author: Owen O'Malley 
AuthorDate: Tue Aug 3 10:39:56 2021 -0700

Preparing for development post storage-2.7.3

Signed-off-by: Owen O'Malley 
---
 pom.xml  | 2 +-
 standalone-metastore/pom.xml | 2 +-
 storage-api/pom.xml  | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/pom.xml b/pom.xml
index 48d45a6..adf0dba 100644
--- a/pom.xml
+++ b/pom.xml
@@ -196,7 +196,7 @@
 1.0.1
 1.7.10
 4.0.4
-2.7.3
+2.7.4-SNAPSHOT
 0.9.1
 2.2.0
 2.3.0
diff --git a/standalone-metastore/pom.xml b/standalone-metastore/pom.xml
index 71dcde7..ea4fa6c 100644
--- a/standalone-metastore/pom.xml
+++ b/standalone-metastore/pom.xml
@@ -83,7 +83,7 @@
 1.5.1
 2.5.0
 1.3.0
-2.7.3
+2.7.4-SNAPSHOT
 
 
 you-must-set-this-to-run-thrift
diff --git a/storage-api/pom.xml b/storage-api/pom.xml
index 1a14bfa..022f4dd 100644
--- a/storage-api/pom.xml
+++ b/storage-api/pom.xml
@@ -25,7 +25,7 @@
 
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-storage-api</artifactId>
-  <version>2.7.3</version>
+  <version>2.7.4-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>Hive Storage API</name>
 


svn commit: r49176 - in /release/hive: hive-storage-2.6.1/ hive-storage-2.7.2/ hive-storage-2.7.3/ hive-storage-2.8.0/

2021-08-03 Thread omalley
Author: omalley
Date: Tue Aug  3 17:25:39 2021
New Revision: 49176

Log:
Clean up old releases and add hive storage 2.7.3.

Added:
release/hive/hive-storage-2.7.3/
release/hive/hive-storage-2.7.3/hive-storage-2.7.3.tar.gz   (with props)
release/hive/hive-storage-2.7.3/hive-storage-2.7.3.tar.gz.asc   (with props)
release/hive/hive-storage-2.7.3/hive-storage-2.7.3.tar.gz.sha256
Removed:
release/hive/hive-storage-2.6.1/
release/hive/hive-storage-2.7.2/
release/hive/hive-storage-2.8.0/

Added: release/hive/hive-storage-2.7.3/hive-storage-2.7.3.tar.gz
==
Binary file - no diff available.

Propchange: release/hive/hive-storage-2.7.3/hive-storage-2.7.3.tar.gz
--
svn:mime-type = application/gzip

Added: release/hive/hive-storage-2.7.3/hive-storage-2.7.3.tar.gz.asc
==
Binary file - no diff available.

Propchange: release/hive/hive-storage-2.7.3/hive-storage-2.7.3.tar.gz.asc
--
svn:mime-type = application/pgp-signature

Added: release/hive/hive-storage-2.7.3/hive-storage-2.7.3.tar.gz.sha256
==
--- release/hive/hive-storage-2.7.3/hive-storage-2.7.3.tar.gz.sha256 (added)
+++ release/hive/hive-storage-2.7.3/hive-storage-2.7.3.tar.gz.sha256 Tue Aug  3 
17:25:39 2021
@@ -0,0 +1 @@
+1254205cb4c586da4209a85fb7b3e1982f4bf4ffa2d0cbfb17d21e3085776cf8  
hive-storage-2.7.3.tar.gz
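
To verify a download against the published digest, recompute the SHA-256 of
the tarball and compare it with the contents of the .sha256 file above. A
minimal Java sketch, assuming the tarball sits in the working directory (the
file name below is illustrative):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    public class Sha256Check {
      public static void main(String[] args) throws Exception {
        // Assumed local copy of the release artifact; adjust the path as needed.
        byte[] data = Files.readAllBytes(Paths.get("hive-storage-2.7.3.tar.gz"));
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
          hex.append(String.format("%02x", b));
        }
        // Should match the digest recorded in hive-storage-2.7.3.tar.gz.sha256.
        System.out.println(hex + "  hive-storage-2.7.3.tar.gz");
      }
    }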




svn commit: r49175 - in /release/hive/hive-storage-2.8.1: ./ hive-storage-2.8.1.tar.gz hive-storage-2.8.1.tar.gz.asc hive-storage-2.8.1.tar.gz.sha256

2021-08-03 Thread omalley
Author: omalley
Date: Tue Aug  3 17:25:15 2021
New Revision: 49175

Log:
Add Hive storage api 2.8.1

Added:
release/hive/hive-storage-2.8.1/
release/hive/hive-storage-2.8.1/hive-storage-2.8.1.tar.gz   (with props)
release/hive/hive-storage-2.8.1/hive-storage-2.8.1.tar.gz.asc   (with props)
release/hive/hive-storage-2.8.1/hive-storage-2.8.1.tar.gz.sha256

Added: release/hive/hive-storage-2.8.1/hive-storage-2.8.1.tar.gz
==
Binary file - no diff available.

Propchange: release/hive/hive-storage-2.8.1/hive-storage-2.8.1.tar.gz
--
svn:mime-type = application/gzip

Added: release/hive/hive-storage-2.8.1/hive-storage-2.8.1.tar.gz.asc
==
Binary file - no diff available.

Propchange: release/hive/hive-storage-2.8.1/hive-storage-2.8.1.tar.gz.asc
--
svn:mime-type = application/pgp-signature

Added: release/hive/hive-storage-2.8.1/hive-storage-2.8.1.tar.gz.sha256
==
--- release/hive/hive-storage-2.8.1/hive-storage-2.8.1.tar.gz.sha256 (added)
+++ release/hive/hive-storage-2.8.1/hive-storage-2.8.1.tar.gz.sha256 Tue Aug  3 
17:25:15 2021
@@ -0,0 +1 @@
+92721f70546a839d7e9d793fa992c81d457aeca0218f102f5dc263e2f7ac0861  
hive-storage-2.8.1.tar.gz




[hive] annotated tag rel/storage-release-2.7.3 updated (640a366 -> 5dfef6d)

2021-08-03 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to annotated tag rel/storage-release-2.7.3
in repository https://gitbox.apache.org/repos/asf/hive.git.


*** WARNING: tag rel/storage-release-2.7.3 was modified! ***

from 640a366  (commit)
  to 5dfef6d  (tag)
 tagging 640a366be25e3fe8e33eb7324b44e6b6ee2db645 (commit)
 replaces rel/storage-release-2.7.2
  by Owen O'Malley
  on Tue Aug 3 10:12:32 2021 -0700

- Log -
Storage release 2.7.3
-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEEdeuq7Peil/8i6MGN0Z6wna0cWHcFAmEJeQsACgkQ0Z6wna0c
WHe8+g/9Eief+lSPShTxfd0czdfJfedMYMPtf3TaGuAfwCI+LExisRwCXuw8QZcy
SoHOSq0cpy1J2FLHd+a4soGW6KS6oJ9QQoUivi2Oj0G32IRpYO2vELaj3MYvkDyK
IUTKZE16LruCrkxXhdEM6Nf5oBvAdaNHUpcIInH1sPD5dHDzdWs/8IiGHIVoqVdA
/+ufJM8vwK8SwdAYPluHisiaBMfaNUOmRuzs8lWjW9DP+cQxX14p3jOMQy2m+FPp
neUGLlpqn0xW6cDY9k9x8xI3GByl4cMC11FbfSIZgFgjPaWQ0HBI1xJlC5I1xmka
MQ/EaB+fGY/maHS4EZdqUf3Miomb8vvY+MGlPMOsC0QEmrjiS587oXjsjoXXxZtz
0hFlR8odZOCbLtinOeJ98f787SKhE/Re0i+b1uCBoE2XwKOdro+n9IFIhIUV22ll
wQqYEiBx9CGhCSC48TyDLw/pqZl/j2Q7/QQgIiP+KyRRV7oyzfqA15CbN8mL32nv
SFb5v1Fw0F03tLKsiqZujtawPxSrk05GX4k3QkAYJhOpgNfSs1s/TkMaHfsQA+x5
JVWKVibARvWhjNY0OMTWUJK3l0H0Bt0rqbOi/R8yzmr27PVBsL2jdOcIqe0E/Kri
iO8GOnLhnBMJ+wz09geeKtyl15HIQ7iE3uGW6vNi3esL3PbxBWA=
=cVDF
-----END PGP SIGNATURE-----
---


No new revisions were added by this update.

Summary of changes:


[hive] tag storage-release-2.8.1-rc2 created (now 02017b0)

2021-07-29 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to tag storage-release-2.8.1-rc2
in repository https://gitbox.apache.org/repos/asf/hive.git.


  at 02017b0  (commit)
No new revisions were added by this update.


[hive] branch storage-branch-2.8 updated: HIVE-25400: Move the offset updating in BytesColumnVector to setValPreallocated

2021-07-29 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch storage-branch-2.8
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/storage-branch-2.8 by this 
push:
 new 02017b0  HIVE-25400: Move the offset updating in BytesColumnVector to 
setValPreallocated
02017b0 is described below

commit 02017b04737de040b56ab40dacd120325ca6cc6f
Author: Owen O'Malley 
AuthorDate: Wed Jul 28 10:41:04 2021 -0700

HIVE-25400: Move the offset updating in BytesColumnVector to 
setValPreallocated

This change moves the update to the sharedBufferOffset to 
setValPreallocated. It
also means the internal code also needs to call setValPreallocated rather 
than
use the direct access to the values.

Fixes #2543

Signed-off-by: Owen O'Malley 
---
 .../hive/ql/exec/vector/BytesColumnVector.java | 34 +---
 .../hive/ql/exec/vector/TestBytesColumnVector.java | 37 ++
 2 files changed, 52 insertions(+), 19 deletions(-)

diff --git 
a/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
 
b/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
index a8c58ac..3b26ac7 100644
--- 
a/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
+++ 
b/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
@@ -187,9 +187,7 @@ public class BytesColumnVector extends ColumnVector {
 if (length > 0) {
   System.arraycopy(sourceBuf, start, currentValue, currentOffset, length);
 }
-vector[elementNum] = currentValue;
-this.start[elementNum] = currentOffset;
-this.length[elementNum] = length;
+setValPreallocated(elementNum, length);
   }
 
   /**
@@ -213,18 +211,18 @@ public class BytesColumnVector extends ColumnVector {
* Ensures that we have space allocated for the next value, which has size
* length bytes.
*
-   * Updates currentValue, currentOffset, and sharedBufferOffset for this 
value.
+   * Updates currentValue and currentOffset for this value.
*
-   * Always use before getValPreallocatedBytes, getValPreallocatedStart,
-   * and setValPreallocated.
+   * Always use before getValPreallocatedBytes, getValPreallocatedStart.
+   * setValPreallocated must be called to actually reserve the bytes.
*/
   public void ensureValPreallocated(int length) {
 if ((sharedBufferOffset + length) > sharedBuffer.length) {
-  currentValue = allocateBuffer(length);
+  // sets currentValue and currentOffset
+  allocateBuffer(length);
 } else {
   currentValue = sharedBuffer;
   currentOffset = sharedBufferOffset;
-  sharedBufferOffset += length;
 }
   }
 
@@ -246,6 +244,10 @@ public class BytesColumnVector extends ColumnVector {
 vector[elementNum] = currentValue;
 this.start[elementNum] = currentOffset;
 this.length[elementNum] = length;
+// If the current value is the shared buffer, move the next offset forward.
+if (currentValue == sharedBuffer) {
+  sharedBufferOffset += length;
+}
   }
 
   /**
@@ -264,9 +266,7 @@ public class BytesColumnVector extends ColumnVector {
   byte[] rightSourceBuf, int rightStart, int rightLen) {
 int newLen = leftLen + rightLen;
 ensureValPreallocated(newLen);
-vector[elementNum] = currentValue;
-this.start[elementNum] = currentOffset;
-this.length[elementNum] = newLen;
+setValPreallocated(elementNum, newLen);
 
 System.arraycopy(leftSourceBuf, leftStart, currentValue, currentOffset, 
leftLen);
 System.arraycopy(rightSourceBuf, rightStart, currentValue,
@@ -275,9 +275,7 @@ public class BytesColumnVector extends ColumnVector {
 
   /**
* Allocate/reuse enough buffer space to accommodate next element.
-   * currentOffset is set to the first available byte in the returned array.
-   * If sharedBuffer is used, sharedBufferOffset is updated to point after the
-   * current record.
+   * Sets currentValue and currentOffset to the correct buffer and offset.
*
* This uses an exponential increase mechanism to rapidly
* increase buffer size to enough to hold all data.
@@ -285,9 +283,8 @@ public class BytesColumnVector extends ColumnVector {
* stabilize.
*
* @param nextElemLength size of next element to be added
-   * @return the buffer to use for the next element
*/
-  private byte[] allocateBuffer(int nextElemLength) {
+  private void allocateBuffer(int nextElemLength) {
 // If this is a large value or shared buffer is maxed out, allocate a
 // single use buffer. Assumes that sharedBuffer length and
 // MAX_SIZE_FOR_SHARED_BUFFER are powers of 2.
@@ -295,8 +292,8 @@ public class BytesColumnVector extends ColumnVector {
 sharedBufferOffset + nextElemLength >= MAX_SIZE_FOR_SHARED_BUFFER) {
   // allocate a value for the next value
   ++bufferAlloca
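
The contract this patch establishes: ensureValPreallocated now only selects a
buffer, and sharedBufferOffset advances only when setValPreallocated commits
the value. A sketch of the intended caller sequence, as a hypothetical helper
rather than code from the patch:

    // Illustrative caller of the BytesColumnVector preallocation API after
    // HIVE-25400; row, src, and len are assumed caller-supplied values.
    static void writeValue(BytesColumnVector col, int row, byte[] src, int len) {
      col.ensureValPreallocated(len);              // picks a buffer; reserves nothing yet
      byte[] buf = col.getValPreallocatedBytes();  // buffer chosen above
      int off = col.getValPreallocatedStart();
      System.arraycopy(src, 0, buf, off, len);
      col.setValPreallocated(row, len);            // commits; advances sharedBufferOffset
    }                                              // only when the shared buffer was used

Skipping setValPreallocated now leaves the shared offset untouched, which is
why the patch also switches the internal writers to call it instead of
assigning vector/start/length directly.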

[hive] tag storage-release-2.7.3-rc2 created (now 640a366)

2021-07-29 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to tag storage-release-2.7.3-rc2
in repository https://gitbox.apache.org/repos/asf/hive.git.


  at 640a366  (commit)
No new revisions were added by this update.


[hive] branch storage-branch-2.7 updated: HIVE-25400: Move the offset updating in BytesColumnVector to setValPreallocated

2021-07-29 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch storage-branch-2.7
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/storage-branch-2.7 by this 
push:
 new 640a366  HIVE-25400: Move the offset updating in BytesColumnVector to 
setValPreallocated
640a366 is described below

commit 640a366be25e3fe8e33eb7324b44e6b6ee2db645
Author: Owen O'Malley 
AuthorDate: Wed Jul 28 10:41:04 2021 -0700

HIVE-25400: Move the offset updating in BytesColumnVector to 
setValPreallocated

This change moves the update to the sharedBufferOffset to 
setValPreallocated. It
also means the internal code also needs to call setValPreallocated rather 
than
use the direct access to the values.

Fixes #2543

Signed-off-by: Owen O'Malley 
---
 .../hive/ql/exec/vector/BytesColumnVector.java | 34 +---
 .../hive/ql/exec/vector/TestBytesColumnVector.java | 37 ++
 2 files changed, 52 insertions(+), 19 deletions(-)

diff --git 
a/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
 
b/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
index 4782661..e541233 100644
--- 
a/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
+++ 
b/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
@@ -183,9 +183,7 @@ public class BytesColumnVector extends ColumnVector {
 if (length > 0) {
   System.arraycopy(sourceBuf, start, currentValue, currentOffset, length);
 }
-vector[elementNum] = currentValue;
-this.start[elementNum] = currentOffset;
-this.length[elementNum] = length;
+setValPreallocated(elementNum, length);
   }
 
   /**
@@ -209,18 +207,18 @@ public class BytesColumnVector extends ColumnVector {
* Ensures that we have space allocated for the next value, which has size
* length bytes.
*
-   * Updates currentValue, currentOffset, and sharedBufferOffset for this 
value.
+   * Updates currentValue and currentOffset for this value.
*
-   * Always use before getValPreallocatedBytes, getValPreallocatedStart,
-   * and setValPreallocated.
+   * Always use before getValPreallocatedBytes, getValPreallocatedStart.
+   * setValPreallocated must be called to actually reserve the bytes.
*/
   public void ensureValPreallocated(int length) {
 if ((sharedBufferOffset + length) > sharedBuffer.length) {
-  currentValue = allocateBuffer(length);
+  // sets currentValue and currentOffset
+  allocateBuffer(length);
 } else {
   currentValue = sharedBuffer;
   currentOffset = sharedBufferOffset;
-  sharedBufferOffset += length;
 }
   }
 
@@ -241,6 +239,10 @@ public class BytesColumnVector extends ColumnVector {
 vector[elementNum] = currentValue;
 this.start[elementNum] = currentOffset;
 this.length[elementNum] = length;
+// If the current value is the shared buffer, move the next offset forward.
+if (currentValue == sharedBuffer) {
+  sharedBufferOffset += length;
+}
   }
 
   /**
@@ -259,9 +261,7 @@ public class BytesColumnVector extends ColumnVector {
   byte[] rightSourceBuf, int rightStart, int rightLen) {
 int newLen = leftLen + rightLen;
 ensureValPreallocated(newLen);
-vector[elementNum] = currentValue;
-this.start[elementNum] = currentOffset;
-this.length[elementNum] = newLen;
+setValPreallocated(elementNum, newLen);
 
 System.arraycopy(leftSourceBuf, leftStart, currentValue, currentOffset, 
leftLen);
 System.arraycopy(rightSourceBuf, rightStart, currentValue,
@@ -270,9 +270,7 @@ public class BytesColumnVector extends ColumnVector {
 
   /**
* Allocate/reuse enough buffer space to accommodate next element.
-   * currentOffset is set to the first available byte in the returned array.
-   * If sharedBuffer is used, sharedBufferOffset is updated to point after the
-   * current record.
+   * Sets currentValue and currentOffset to the correct buffer and offset.
*
* This uses an exponential increase mechanism to rapidly
* increase buffer size to enough to hold all data.
@@ -280,9 +278,8 @@ public class BytesColumnVector extends ColumnVector {
* stabilize.
*
* @param nextElemLength size of next element to be added
-   * @return the buffer to use for the next element
*/
-  private byte[] allocateBuffer(int nextElemLength) {
+  private void allocateBuffer(int nextElemLength) {
 // If this is a large value or shared buffer is maxed out, allocate a
 // single use buffer. Assumes that sharedBuffer length and
 // MAX_SIZE_FOR_SHARED_BUFFER are powers of 2.
@@ -290,8 +287,8 @@ public class BytesColumnVector extends ColumnVector {
 sharedBufferOffset + nextElemLength >= MAX_SIZE_FOR_SHARED_BUFFER) {
   // allocate a value for the next value
   ++bufferAlloca

[hive] branch master updated (0e05fd7 -> eef2a5d)

2021-07-29 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 0e05fd7  Hive-24467: ConditionalTask removing tasks that are not selected 
has a thread safety problem (Xi Chen reviewed by Peter Vary)
 add eef2a5d  HIVE-25400: Move the offset updating in BytesColumnVector to 
setValPreallocated

No new revisions were added by this update.

Summary of changes:
 .../hive/ql/exec/vector/BytesColumnVector.java | 34 +---
 .../hive/ql/exec/vector/TestBytesColumnVector.java | 37 ++
 2 files changed, 52 insertions(+), 19 deletions(-)


[hive] tag storage-release-2.8.1-rc1 created (now a4b0e88)

2021-07-26 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to tag storage-release-2.8.1-rc1
in repository https://gitbox.apache.org/repos/asf/hive.git.


  at a4b0e88  (commit)
No new revisions were added by this update.


[hive] tag storage-release-2.7.3-rc1 created (now f61ad98)

2021-07-26 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to tag storage-release-2.7.3-rc1
in repository https://gitbox.apache.org/repos/asf/hive.git.


  at f61ad98  (commit)
No new revisions were added by this update.


[hive] branch storage-branch-2.7 updated: Update outside build to compile and move to 2.7.3.

2021-07-26 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch storage-branch-2.7
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/storage-branch-2.7 by this 
push:
 new f61ad98  Update outside build to compile and move to 2.7.3.
f61ad98 is described below

commit f61ad98096833caac1bf3626a2c40fc35ecb1a56
Author: Owen O'Malley 
AuthorDate: Fri Jul 23 16:59:11 2021 -0700

Update outside build to compile and move to 2.7.3.

Signed-off-by: Owen O'Malley 
---
 pom.xml  | 8 
 service/pom.xml  | 6 ++
 standalone-metastore/pom.xml | 2 +-
 storage-api/pom.xml  | 2 +-
 upgrade-acid/pom.xml | 9 -
 5 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/pom.xml b/pom.xml
index 6dee287..48d45a6 100644
--- a/pom.xml
+++ b/pom.xml
@@ -196,7 +196,7 @@
 1.0.1
 1.7.10
 4.0.4
-2.7.3-SNAPSHOT
+2.7.3
 0.9.1
 2.2.0
 2.3.0
@@ -221,7 +221,7 @@
 
   datanucleus
   datanucleus maven repository
-  http://www.datanucleus.org/downloads/maven2
+  https://www.datanucleus.org/downloads/maven2
   default
   
 true
@@ -233,7 +233,7 @@
 
 
   glassfish-repository
-  http://maven.glassfish.org/content/groups/glassfish
+  https://maven.glassfish.org/content/groups/glassfish
   
 false
   
@@ -243,7 +243,7 @@
 
 
   glassfish-repo-archive
-  http://maven.glassfish.org/content/groups/glassfish
+  https://maven.glassfish.org/content/groups/glassfish
   
 false
   
diff --git a/service/pom.xml b/service/pom.xml
index 652e582..016233f 100644
--- a/service/pom.xml
+++ b/service/pom.xml
@@ -303,6 +303,12 @@
   apacheds-server-integ
   ${apache-directory-server.version}
   test
+  <exclusions>
+    <exclusion>
+      <groupId>org.apache.directory.client.ldap</groupId>
+      <artifactId>ldap-client-api</artifactId>
+    </exclusion>
+  </exclusions>
 
 
 
diff --git a/standalone-metastore/pom.xml b/standalone-metastore/pom.xml
index 130311d..71dcde7 100644
--- a/standalone-metastore/pom.xml
+++ b/standalone-metastore/pom.xml
@@ -83,7 +83,7 @@
 1.5.1
 2.5.0
 1.3.0
-2.7.1
+2.7.3
 
 
 you-must-set-this-to-run-thrift
diff --git a/storage-api/pom.xml b/storage-api/pom.xml
index 9ec0890..1a14bfa 100644
--- a/storage-api/pom.xml
+++ b/storage-api/pom.xml
@@ -25,7 +25,7 @@
 
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-storage-api</artifactId>
-  <version>2.7.3-SNAPSHOT</version>
+  <version>2.7.3</version>
  <packaging>jar</packaging>
  <name>Hive Storage API</name>
 
diff --git a/upgrade-acid/pom.xml b/upgrade-acid/pom.xml
index f53d096..8ef583a 100644
--- a/upgrade-acid/pom.xml
+++ b/upgrade-acid/pom.xml
@@ -122,6 +122,13 @@ java.io.IOException: Cannot initialize Cluster. Please 
check your configuration
 
 
 
+    <repositories>
+      <repository>
+        <id>conjars</id>
+        <name>conjarsc</name>
+        <url>https://conjars.wensel.net/repo/</url>
+      </repository>
+    </repositories>
 
 
 
@@ -293,4 +300,4 @@ java.io.IOException: Cannot initialize Cluster. Please 
check your configuration
 
 
 
-</project>
\ No newline at end of file
+</project>


[hive] branch storage-branch-2.8 updated: HIVE-25386: hive-storage-api should not have guava compile dependency

2021-07-26 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch storage-branch-2.8
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/storage-branch-2.8 by this 
push:
 new a4b0e88  HIVE-25386: hive-storage-api should not have guava compile 
dependency
a4b0e88 is described below

commit a4b0e88199d5c66023ddcdc251d1fe9c517bee3f
Author: Dongjoon Hyun 
AuthorDate: Sun Jul 25 23:45:48 2021 -0700

HIVE-25386: hive-storage-api should not have guava compile dependency

Fixes #2531

Signed-off-by: Owen O'Malley 
---
 storage-api/pom.xml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/storage-api/pom.xml b/storage-api/pom.xml
index 306073e..1eec76a 100644
--- a/storage-api/pom.xml
+++ b/storage-api/pom.xml
@@ -118,6 +118,7 @@
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>${guava.version}</version>
+  <scope>test</scope>
 
 
 


[hive] branch master updated (ce1b638 -> 4ce585f)

2021-07-26 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from ce1b638  Disable flaky test
 add 4ce585f  HIVE-25386: hive-storage-api should not have guava compile 
dependency

No new revisions were added by this update.

Summary of changes:
 storage-api/pom.xml | 1 +
 1 file changed, 1 insertion(+)
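
The practical effect of HIVE-25386 is that guava no longer rides along on
hive-storage-api's compile classpath. A hypothetical downstream snippet that
previously compiled only because of that transitive leak now needs to declare
its own guava dependency:

    // Hypothetical downstream code: with guava test-scoped in hive-storage-api,
    // this import no longer resolves transitively.
    import com.google.common.collect.ImmutableList;

    public class Downstream {
      public static void main(String[] args) {
        System.out.println(ImmutableList.of("hive", "storage", "api"));
      }
    }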


[hive] branch storage-branch-2.8 updated (b304acd -> ab76b0c)

2021-07-23 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to branch storage-branch-2.8
in repository https://gitbox.apache.org/repos/asf/hive.git.


from b304acd  HIVE-25190: Fix many small allocations in BytesColumnVector
 add ab76b0c  Update storage api to version 2.8.1

No new revisions were added by this update.

Summary of changes:
 pom.xml  | 2 +-
 standalone-metastore/pom.xml | 2 +-
 storage-api/pom.xml  | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)


[hive] tag storage-release-2.8.1-rc0 created (now ab76b0c)

2021-07-23 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to tag storage-release-2.8.1-rc0
in repository https://gitbox.apache.org/repos/asf/hive.git.


  at ab76b0c  (commit)
No new revisions were added by this update.


svn commit: r48975 - in /release/hive: KEYS hive-storage-2.8.0/ hive-storage-2.8.0/hive-storage-2.8.0.tar.gz hive-storage-2.8.0/hive-storage-2.8.0.tar.gz.asc hive-storage-2.8.0/hive-storage-2.8.0.tar.

2021-07-23 Thread omalley
Author: omalley
Date: Fri Jul 23 17:02:02 2021
New Revision: 48975

Log:
Add Panos' key and hive storage 2.8.0

Added:
release/hive/hive-storage-2.8.0/
release/hive/hive-storage-2.8.0/hive-storage-2.8.0.tar.gz   (with props)
release/hive/hive-storage-2.8.0/hive-storage-2.8.0.tar.gz.asc   (with props)
release/hive/hive-storage-2.8.0/hive-storage-2.8.0.tar.gz.sha256
Modified:
release/hive/KEYS

Modified: release/hive/KEYS
==
--- release/hive/KEYS (original)
+++ release/hive/KEYS Fri Jul 23 17:02:02 2021
@@ -1477,3 +1477,51 @@ xTcYYEy4zJRK8LGZWnJl8qIK5jkvmb4EM/y2dZiK
 oQ==
 =U9CT
 -----END PGP PUBLIC KEY BLOCK-----
+pub   rsa3072 2020-11-17 [SC] [expires: 2022-11-17]
+  7DFAB216AB7D96B3B2072184DC11DE4D00F8FA1D
+uid   [ unknown] Panagiotis Garefalakis 
+sig 3DC11DE4D00F8FA1D 2020-11-17  Panagiotis Garefalakis 

+sub   rsa3072 2020-11-17 [E] [expires: 2022-11-17]
+sig  DC11DE4D00F8FA1D 2020-11-17  Panagiotis Garefalakis 

+
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+
+mQGNBF+zqdsBDAC8OwCBo4OoJAIlz+jE7x8JuxkZLtzFHved+4nz2XNZJtt+/j76
+FTmItVsjs73HVqssEP1e7r8g9Q5qt4SvangE1CvCrEpuatXBYvslc1bwzoq36wGZ
+Eq1O8S3UjZYFQF/3+FeFyTlsZK2Z5wUm/L0qQ/llD1DleJCbvqpGjxIldlopI0r/
+Hps/RJML5AiaVYwLtmdvz+QI2jvp7sLnzwrDd68sziWlqdOvghibdeRiRZX9Yc8n
+y84AtLz/cxhh2admL8GFnjiF7Zd0nbRKXdstnlYGuMsRtN8g+AHGkM6npk0PfN3P
+L1OafNne0UDMKZJICTbJ7WqcHT+45qCobxGEUE+nU2HC9wngmvvfqN/W5WZBwxm5
+Mc/UXOl3HFRFsue7mg1fHxDerLVp7c8e1Glj9iv1W3XefVCFU5/E94/ledI1uc9r
+wzLUOzD4MiF0CqC814O/tJEuT0dDCYwfE8HlKd+rm3y9LGgrJvnRULG9DDpn0Rdd
+RNjWjGmLy6dGmcUAEQEAAbQqUGFuYWdpb3RpcyBHYXJlZmFsYWtpcyA8cGdhcmVm
+QGFwYWNoZS5vcmc+iQHUBBMBCAA+FiEEffqyFqt9lrOyByGE3BHeTQD4+h0FAl+z
+qdsCGwMFCQPCZwAFCwkIBwIGFQoJCAsCBBYCAwECHgECF4AACgkQ3BHeTQD4+h1Y
+/gv/b0RD0R//Umw5rnOO+mEgg/uUU8tAwnYfYLz1yZb/L41fp8EXWaHZlgB2low1
+T0ubt5hbLdMPvSJBmtDyPU5skov+oDhCqMuzbqzhefbuAArt/OYRQyxpuWDAsFJz
+Gd/ROajCN7Y2rnOfSCKkboZxvucSyF/3mxNsH1jvQ/zh9DRcRaLAv0y/sx9t3e5F
+Cmi94On1fP+ANbPEzbuvjRMO8BpxMIgz4pZEke0WrW1X70spVoQQ6NL4oD/9HP7X
+aMMDP6mXspcugKJt0rl8YCiGZ8YY3Bjr5oY6LrVtsjtX5fjQqUVY8SBJvnRXpF7s
+y5of894uJwzFIU/XzRdgi7ubXWGn0xHuyoJ3ZrOIvLI1OKzP4rrYbYwJipJeF9m/
+sL/1NGfgkUT5iu+ntgqFZwcI1OR+Cri9Bz42t0xTlUUM1XX8shRnDnlh/W5KF/vp
+D1WhKV9cuOg/WMOdEaRj28CaReLPBCBULet7zQATEXR+zghdO+MxzYJoCI6TKqOp
+/R46uQGNBF+zqdsBDADW3TTziY88GsV2GmyfYwe/LmlHEXN0RFYxHum5d56XB8CU
+1OyR7EUnh4HCmQwTU7k/CMFo25ALP4RBmKvlKbWsHsJgyGT7jg6fKEnx37/PHgqZ
+5MCRmoaCSYjyhTPl4BITm7MnHrTC205bE9HpRLXqzXi/09ecbE+XLSFthBZHcvMH
+DqiNNgvM5BZldjjm8ILX4V3e0Br93GooToEyunrtvZTY1LNsuCjbLE1m33q62B7i
+qZFIQG2+RXQ9TTetlnNbdQn0LOrho8Et/SYi8CdUQt4Y5W6uAxA7KLwJXpavV1nW
+Ub1lFB8Ng/22VugHOqTclxdxHJrHk8hitCDYlsDU7fdyUkpqEcmx2RFB6O0rWg0B
+yPfZQFjY+rqZZjD5Rq36jUHgLcMvFwKVtMH2uyX3I5eXhLiGM6P58yNmVcxIDv/t
+Aq5ST7wPU2vM7/Q+LulSgfadCTuXLwwGOOeNQ6m2ijyDnvCIlCS5rmVFw9JhqQ+Y
+j5AwtqQArTcnB/SeSaMAEQEAAYkBvAQYAQgAJhYhBH36sharfZazsgchhNwR3k0A
++PodBQJfs6nbAhsMBQkDwmcAAAoJENwR3k0A+Pod5PkL/jshoxCmT4mYmBbpMOcG
+rdC+d4jnZPr16VYVNZ2gCYUBYSSFAFUvNW+TgqOfJ4eD2X7Bfl++jGLaBRnezTL8
+ID98lbD9g6NsUsMcqhTbIpxVZbRt2tfRXr/HL+vUhAqYurTeOSIRn33PR8I3hVxa
+AS/CQiUaRvCCIGoafITkE61okLQ7UoEZPZOff2rEBK6LF6cmH4lM/cTEyoUzOcOm
+ANA3NYrEHgeY5awTW+tm2ToIl1nXnWlu703exLPY9yBhSi1Y5QAp6ZkPuEAQ7PhT
+lQDA7ZDvZVDGvrKjmXtRkAfeFYXYOPEVbkJw7A1QV65rk2zW+3lTIpP9+dkg/ILA
+ZVOOYpBIQ9Z+d1Hj4E8vM3EN5yrMF666/PUr2rIKc2kGwfLrd/J3/TAlOoxZjADu
+snhufW8+VCMO+w/ZjoFCNpU5cff2bPFDrj7oyLGQGXccqkMScq2/nMyzjLxT6RAq
+tUvO4BP/JuwHpWR4tFv6ZyCKLtEq13k/mkCFjMrtTx5b/Q==
+=2yVM
+-----END PGP PUBLIC KEY BLOCK-----

Added: release/hive/hive-storage-2.8.0/hive-storage-2.8.0.tar.gz
==
Binary file - no diff available.

Propchange: release/hive/hive-storage-2.8.0/hive-storage-2.8.0.tar.gz
--
svn:mime-type = application/gzip

Added: release/hive/hive-storage-2.8.0/hive-storage-2.8.0.tar.gz.asc
==
Binary file - no diff available.

Propchange: release/hive/hive-storage-2.8.0/hive-storage-2.8.0.tar.gz.asc
--
svn:mime-type = application/pgp-signature

Added: release/hive/hive-storage-2.8.0/hive-storage-2.8.0.tar.gz.sha256
==
--- release/hive/hive-storage-2.8.0/hive-storage-2.8.0.tar.gz.sha256 (added)
+++ release/hive/hive-storage-2.8.0/hive-storage-2.8.0.tar.gz.sha256 Fri Jul 23 
17:02:02 2021
@@ -0,0 +1 @@
+6f75fc28552a4a3d37e75363c57bbad5a00b8745ea7316a27005001b2d20d7a1  
hive-storage-2.8.0.tar.gz




[hive] branch storage-branch-2.7 updated: HIVE-25190: Fix many small allocations in BytesColumnVector

2021-07-19 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch storage-branch-2.7
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/storage-branch-2.7 by this 
push:
 new 89eaded  HIVE-25190: Fix many small allocations in BytesColumnVector
89eaded is described below

commit 89eadedb89c098b1c66e8d21e74a13ffdc6dc74d
Author: Owen O'Malley 
AuthorDate: Fri Jun 18 16:30:13 2021 -0700

HIVE-25190: Fix many small allocations in BytesColumnVector

Fixes #2408

Signed-off-by: Owen O'Malley 
---
 storage-api/pom.xml|   2 +-
 .../hive/ql/exec/vector/BytesColumnVector.java | 161 ++---
 .../hive/ql/exec/vector/TestBytesColumnVector.java | 125 ++--
 3 files changed, 187 insertions(+), 101 deletions(-)

diff --git a/storage-api/pom.xml b/storage-api/pom.xml
index de68292..9ec0890 100644
--- a/storage-api/pom.xml
+++ b/storage-api/pom.xml
@@ -178,7 +178,7 @@
 2.19.1
 
   false
-  -Xmx2048m
+  -Xmx3g
   false
   
 ${project.build.directory}/tmp
diff --git 
a/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
 
b/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
index e386109..4782661 100644
--- 
a/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
+++ 
b/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
@@ -46,14 +46,15 @@ public class BytesColumnVector extends ColumnVector {
*/
   public int[] length;
 
-  // A call to increaseBufferSpace() or ensureValPreallocated() will ensure 
that buffer[] points to
-  // a byte[] with sufficient space for the specified size.
-  private byte[] buffer;   // optional buffer to use when actually copying in 
data
-  private int nextFree;// next free position in buffer
+  // Calls to ensureValPreallocated() ensure that currentValue and 
currentOffset
+  // are set to enough space for the value.
+  private byte[] currentValue;   // bytes for the next value
+  private int currentOffset;// starting position in the current buffer
 
-  // Hang onto a byte array for holding smaller byte values
-  private byte[] smallBuffer;
-  private int smallBufferNextFree;
+  // A shared static buffer allocation that we use for the small values
+  private byte[] sharedBuffer;
+  // The next unused offset in the sharedBuffer.
+  private int sharedBufferOffset;
 
   private int bufferAllocationCount;
 
@@ -63,8 +64,11 @@ public class BytesColumnVector extends ColumnVector {
   // Proportion of extra space to provide when allocating more buffer space.
   static final float EXTRA_SPACE_FACTOR = (float) 1.2;
 
-  // Largest size allowed in smallBuffer
-  static final int MAX_SIZE_FOR_SMALL_BUFFER = 1024 * 1024;
+  // Largest item size allowed in sharedBuffer
+  static final int MAX_SIZE_FOR_SMALL_ITEM = 1024 * 1024;
+
+  // Largest size allowed for sharedBuffer
+  static final int MAX_SIZE_FOR_SHARED_BUFFER = 1024 * 1024 * 1024;
 
   /**
* Use this constructor for normal operation.
@@ -118,29 +122,29 @@ public class BytesColumnVector extends ColumnVector {
* Provide the estimated number of bytes needed to hold
* a full column vector worth of byte string data.
*
-   * @param estimatedValueSize  Estimated size of buffer space needed
+   * @param estimatedValueSize  Estimated size of buffer space needed per row
*/
   public void initBuffer(int estimatedValueSize) {
-nextFree = 0;
-smallBufferNextFree = 0;
+sharedBufferOffset = 0;
 
 // if buffer is already allocated, keep using it, don't re-allocate
-if (buffer != null) {
+if (sharedBuffer != null) {
   // Free up any previously allocated buffers that are referenced by vector
   if (bufferAllocationCount > 0) {
 for (int idx = 0; idx < vector.length; ++idx) {
   vector[idx] = null;
 }
-buffer = smallBuffer; // In case last row was a large bytes value
   }
 } else {
   // allocate a little extra space to limit need to re-allocate
-  int bufferSize = this.vector.length * (int)(estimatedValueSize * 
EXTRA_SPACE_FACTOR);
+  long bufferSize = (long) (this.vector.length * estimatedValueSize * 
EXTRA_SPACE_FACTOR);
   if (bufferSize < DEFAULT_BUFFER_SIZE) {
 bufferSize = DEFAULT_BUFFER_SIZE;
   }
-  buffer = new byte[bufferSize];
-  smallBuffer = buffer;
+  if (bufferSize > MAX_SIZE_FOR_SHARED_BUFFER) {
+bufferSize = MAX_SIZE_FOR_SHARED_BUFFER;
+  }
+  sharedBuffer = new byte[(int) bufferSize];
 }
 bufferAllocationCount = 0;
   }
@@ -156,10 +160,7 @@ public class BytesColumnVector extends ColumnVector {
* @return amount of buffer space currently allocated
*/
   public int bufferSize() {
-if (bu
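
Restated standalone, the initBuffer sizing in the hunk above computes the
estimate as a long and clamps it at both ends, where the old int arithmetic
could silently misbehave for large batches. A sketch with the constants
copied from the diff (DEFAULT_BUFFER_SIZE is not shown in the hunk, so its
value here is an assumption):

    // Shared-buffer sizing from the HIVE-25190 initBuffer diff, restated.
    // vectorLength and estimatedValueSize are hypothetical caller inputs.
    static int sharedBufferSize(int vectorLength, int estimatedValueSize) {
      final float EXTRA_SPACE_FACTOR = 1.2f;                     // from the diff
      final int MAX_SIZE_FOR_SHARED_BUFFER = 1024 * 1024 * 1024; // from the diff
      final int DEFAULT_BUFFER_SIZE = 16 * 1024;                 // assumed value
      long bufferSize = (long) (vectorLength * estimatedValueSize * EXTRA_SPACE_FACTOR);
      if (bufferSize < DEFAULT_BUFFER_SIZE) {
        bufferSize = DEFAULT_BUFFER_SIZE;
      }
      if (bufferSize > MAX_SIZE_FOR_SHARED_BUFFER) {
        bufferSize = MAX_SIZE_FOR_SHARED_BUFFER;
      }
      return (int) bufferSize;
    }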

[hive] branch storage-branch-2.8 updated: HIVE-25190: Fix many small allocations in BytesColumnVector

2021-07-19 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch storage-branch-2.8
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/storage-branch-2.8 by this 
push:
 new b304acd  HIVE-25190: Fix many small allocations in BytesColumnVector
b304acd is described below

commit b304acd36334e54d276e9ded5851ae24d2f23595
Author: Owen O'Malley 
AuthorDate: Fri Jun 18 16:30:13 2021 -0700

HIVE-25190: Fix many small allocations in BytesColumnVector

Fixes #2408

Signed-off-by: Owen O'Malley 
---
 storage-api/pom.xml|   2 +-
 .../hive/ql/exec/vector/BytesColumnVector.java | 163 ++---
 .../hive/ql/exec/vector/TestBytesColumnVector.java | 124 ++--
 3 files changed, 187 insertions(+), 102 deletions(-)

diff --git a/storage-api/pom.xml b/storage-api/pom.xml
index 53fa3c0..c87aed7 100644
--- a/storage-api/pom.xml
+++ b/storage-api/pom.xml
@@ -185,7 +185,7 @@
 3.0.0-M4
 
   false
-  -Xmx2048m
+  -Xmx3g
   false
   
 ${project.build.directory}/tmp
diff --git 
a/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
 
b/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
index 6618807..a8c58ac 100644
--- 
a/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
+++ 
b/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java
@@ -49,14 +49,15 @@ public class BytesColumnVector extends ColumnVector {
*/
   public int[] length;
 
-  // A call to increaseBufferSpace() or ensureValPreallocated() will ensure 
that buffer[] points to
-  // a byte[] with sufficient space for the specified size.
-  private byte[] buffer;   // optional buffer to use when actually copying in 
data
-  private int nextFree;// next free position in buffer
+  // Calls to ensureValPreallocated() ensure that currentValue and 
currentOffset
+  // are set to enough space for the value.
+  private byte[] currentValue;   // bytes for the next value
+  private int currentOffset;// starting position in the current buffer
 
-  // Hang onto a byte array for holding smaller byte values
-  private byte[] smallBuffer;
-  private int smallBufferNextFree;
+  // A shared static buffer allocation that we use for the small values
+  private byte[] sharedBuffer;
+  // The next unused offset in the sharedBuffer.
+  private int sharedBufferOffset;
 
   private int bufferAllocationCount;
 
@@ -66,8 +67,11 @@ public class BytesColumnVector extends ColumnVector {
   // Proportion of extra space to provide when allocating more buffer space.
   static final float EXTRA_SPACE_FACTOR = (float) 1.2;
 
-  // Largest size allowed in smallBuffer
-  static final int MAX_SIZE_FOR_SMALL_BUFFER = 1024 * 1024;
+  // Largest item size allowed in sharedBuffer
+  static final int MAX_SIZE_FOR_SMALL_ITEM = 1024 * 1024;
+
+  // Largest size allowed for sharedBuffer
+  static final int MAX_SIZE_FOR_SHARED_BUFFER = 1024 * 1024 * 1024;
 
   /**
* Use this constructor for normal operation.
@@ -121,30 +125,30 @@ public class BytesColumnVector extends ColumnVector {
* Provide the estimated number of bytes needed to hold
* a full column vector worth of byte string data.
*
-   * @param estimatedValueSize  Estimated size of buffer space needed
+   * @param estimatedValueSize  Estimated size of buffer space needed per row
*/
   public void initBuffer(int estimatedValueSize) {
-nextFree = 0;
-smallBufferNextFree = 0;
+sharedBufferOffset = 0;
 
 // if buffer is already allocated, keep using it, don't re-allocate
-if (buffer != null) {
+if (sharedBuffer != null) {
   // Free up any previously allocated buffers that are referenced by vector
   if (bufferAllocationCount > 0) {
 for (int idx = 0; idx < vector.length; ++idx) {
   vector[idx] = null;
   length[idx] = 0;
 }
-buffer = smallBuffer; // In case last row was a large bytes value
   }
 } else {
   // allocate a little extra space to limit need to re-allocate
-  int bufferSize = this.vector.length * (int)(estimatedValueSize * 
EXTRA_SPACE_FACTOR);
+  long bufferSize = (long) (this.vector.length * estimatedValueSize * 
EXTRA_SPACE_FACTOR);
   if (bufferSize < DEFAULT_BUFFER_SIZE) {
 bufferSize = DEFAULT_BUFFER_SIZE;
   }
-  buffer = new byte[bufferSize];
-  smallBuffer = buffer;
+  if (bufferSize > MAX_SIZE_FOR_SHARED_BUFFER) {
+bufferSize = MAX_SIZE_FOR_SHARED_BUFFER;
+  }
+  sharedBuffer = new byte[(int) bufferSize];
 }
 bufferAllocationCount = 0;
   }
@@ -160,10 +164,7 @@ public class BytesColumnVector extends ColumnVector {
* @return amount of buffer space currently allocated
*/
   public in

[hive] branch master updated (7553a60 -> 2d1bf27)

2021-07-19 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 7553a60  HIVE-25336. Use single call to get tables in 
DropDatabaseAnalyzer. (#2481)(Ayush Saxena, reviewed by Miklos Gergely)
 add 2d1bf27  HIVE-25190: Fix many small allocations in BytesColumnVector

No new revisions were added by this update.

Summary of changes:
 storage-api/pom.xml|   2 +-
 .../hive/ql/exec/vector/BytesColumnVector.java | 163 ++---
 .../hive/ql/exec/vector/TestBytesColumnVector.java | 124 ++--
 3 files changed, 187 insertions(+), 102 deletions(-)


[hive] 01/01: Update storage-api to 2.8.0

2021-06-18 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch storage-branch-2.8
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 08c0b2a45f4044da411bec5772905f2fdb8014a5
Author: Owen O'Malley 
AuthorDate: Fri Jun 18 10:27:34 2021 -0700

Update storage-api to 2.8.0
---
 pom.xml  | 2 +-
 standalone-metastore/pom.xml | 2 +-
 storage-api/pom.xml  | 2 +-
 upgrade-acid/pre-upgrade/pom.xml | 4 
 4 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/pom.xml b/pom.xml
index f03368b..b17d6da 100644
--- a/pom.xml
+++ b/pom.xml
@@ -198,7 +198,7 @@
 1.0.1
 1.7.30
 4.0.4
-2.7.3-SNAPSHOT
+2.8.0
 0.10.0
 2.2.0
 2.4.5
diff --git a/standalone-metastore/pom.xml b/standalone-metastore/pom.xml
index d3ae036..e1b2f57 100644
--- a/standalone-metastore/pom.xml
+++ b/standalone-metastore/pom.xml
@@ -99,7 +99,7 @@
 1.9.0
 2.14.6
 4.0.4
-2.7.3-SNAPSHOT
+2.8.0
 1.9.4
 1.3
 4.2.0
diff --git a/storage-api/pom.xml b/storage-api/pom.xml
index f53fcf7..53fa3c0 100644
--- a/storage-api/pom.xml
+++ b/storage-api/pom.xml
@@ -24,7 +24,7 @@
 
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-storage-api</artifactId>
-  <version>2.7.3</version>
+  <version>2.8.0</version>
  <packaging>jar</packaging>
  <name>Hive Storage API</name>
 
diff --git a/upgrade-acid/pre-upgrade/pom.xml b/upgrade-acid/pre-upgrade/pom.xml
index 165856b..d0b5889 100644
--- a/upgrade-acid/pre-upgrade/pom.xml
+++ b/upgrade-acid/pre-upgrade/pom.xml
@@ -99,6 +99,10 @@
 org.apache.curator
 curator-framework
   
+  <exclusion>
+    <groupId>org.pentaho</groupId>
+    <artifactId>pentaho-aggdesigner-algorithm</artifactId>
+  </exclusion>
 
 
 


[hive] branch storage-branch-2.8 created (now 08c0b2a)

2021-06-18 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to branch storage-branch-2.8
in repository https://gitbox.apache.org/repos/asf/hive.git.


  at 08c0b2a  Update storage-api to 2.8.0

This branch includes the following new commits:

 new 08c0b2a  Update storage-api to 2.8.0

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



[hive] branch master updated (1873b39 -> 539291f)

2020-12-01 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.


from 1873b39  HIVE-24436: Fix Avro NULL_DEFAULT_VALUE compatibility issue 
(#1722)
 add 539291f  HIVE-24455. Fix broken junit framework in storage-api.

No new revisions were added by this update.

Summary of changes:
 storage-api/pom.xml | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)



[hive] branch master updated: Move to github pages

2020-08-27 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new c40a357  Move to github pages
c40a357 is described below

commit c40a357aa456d5b4541e7a5cbb5d3f6365e4e866
Author: Owen O'Malley 
AuthorDate: Thu Aug 27 09:37:37 2020 -0700

Move to github pages

Signed-off-by: Owen O'Malley 
---
 .asf.yaml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/.asf.yaml b/.asf.yaml
index 64a28ff..5f1317b 100644
--- a/.asf.yaml
+++ b/.asf.yaml
@@ -17,6 +17,8 @@
 github:
   description: "Apache Hive"
   homepage: https://hive.apache.org/
+  ghp_branch: master
+  ghp_path: /docs
   labels:
 - hive
 - java



[hive] branch master updated: Move the Hive website to github pages.

2020-08-27 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/master by this push:
 new 49ce067  Move the Hive website to github pages.
49ce067 is described below

commit 49ce067e3852e284c59742103da04e6bf0f689ef
Author: Owen O'Malley 
AuthorDate: Mon Aug 17 22:59:24 2020 -0700

Move the Hive website to github pages.

Fixes #1410

Signed-off-by: Owen O'Malley 
---
 docs/Dockerfile  |  51 ++
 docs/Gemfile |   3 +
 docs/README.md   |  24 +++
 docs/_config.yml |  17 ++
 docs/_includes/footer.html   |  14 ++
 docs/_includes/header.html   |   5 +
 docs/_includes/sidenav.html  |  50 ++
 docs/_includes/top.html  |  21 +++
 docs/_layouts/default.html   |  19 ++
 docs/css/hive.css| 365 +++
 docs/doap_Hive.rdf   |  58 +++
 docs/downloads.md| 211 ++
 docs/favicon.ico | Bin 0 -> 1150 bytes
 docs/hcatalog_downloads.md   |  43 +
 docs/images/feather_small.gif| Bin 0 -> 7500 bytes
 docs/images/hive_logo_medium.jpg | Bin 0 -> 4372 bytes
 docs/index.md|  62 +++
 docs/issue_tracking.md   |  31 
 docs/javadoc.md  |  32 
 docs/mailing_lists.md|  78 +
 docs/people.md   | 156 +
 docs/privacy_policy.md   |  48 +
 docs/version_control.md  |  27 +++
 pom.xml  |  14 +-
 24 files changed, 1323 insertions(+), 6 deletions(-)

diff --git a/docs/Dockerfile b/docs/Dockerfile
new file mode 100644
index 000..1889955
--- /dev/null
+++ b/docs/Dockerfile
@@ -0,0 +1,51 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Hive site builder
+#
+
+FROM ubuntu:18.04
+MAINTAINER Hive team 
+
+RUN ln -fs /usr/share/zoneinfo/America/Los_Angeles /etc/localtime
+RUN apt-get update
+RUN apt-get install -y \
+  g++ \
+  gcc \
+  git \
+  libssl-dev \
+  libz-dev \
+  make \
+  ruby-dev \
+  rubygems \
+  tzdata
+RUN gem install \
+  bundler \
+  liquid \
+  listen \
+  rouge
+RUN gem install jekyll -v 3.8.6
+RUN gem install github-pages
+
+RUN useradd -ms /bin/bash hive
+COPY . /home/hive/site
+RUN chown -R hive:hive /home/hive
+USER hive
+WORKDIR /home/hive/site
+
+EXPOSE 4000
+CMD bundle exec jekyll serve -H 0.0.0.0
+
diff --git a/docs/Gemfile b/docs/Gemfile
new file mode 100644
index 000..1c529c9
--- /dev/null
+++ b/docs/Gemfile
@@ -0,0 +1,3 @@
+source 'https://rubygems.org'
+gem 'rouge'
+gem 'jekyll', "~> 3.8.3"
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 000..6100928
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,24 @@
+# Apache Hive docs site
+
+This directory contains the code for the Apache Hive web site,
+[hive.apache.org](https://hive.apache.org/). The easiest way to build
+the site is to use docker to use a standard environment.
+
+## Run the docker container with the preview of the site.
+
+1. `docker build -t hive-site .`
+2. `CONTAINER=$(docker run -d -p 4000:4000 hive-site)`
+
+## Browsing
+
+Look at the site by navigating to
+[http://0.0.0.0:4000/](http://0.0.0.0:4000/) .
+
+## Pushing to site
+
+Commit and push the changes to the main branch. The site is automatically 
deployed
+from the site directory.
+
+## Shutting down the docker container
+
+1. `docker stop $CONTAINER`
\ No newline at end of file
diff --git a/docs/_config.yml b/docs/_config.yml
new file mode 100644
index 000..68bc8f0
--- /dev/null
+++ b/docs/_config.yml
@@ -0,0 +1,17 @@
+markdown: kramdown
+highlighter: rouge
+permalink: /news/:year/:month/:day/:title
+excerpt_separator: ""
+encoding: utf-8
+exclude: [README.md, Gemfile*, Dockerfile]
+
+repository: https://github.com/apache/hive
+jira: https://issues.apache.org/jira/browse
+dist: https://downloads.apache.org/hive
+dist_mirror: https://www.apache.org/dyn/closer.cgi/hive
+tag_url: https://github.com/apache/hive/releases/tag/rel

[hive] branch storage-branch-2.7 updated: Preparing for development post-2.7.1

2019-12-01 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch storage-branch-2.7
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/storage-branch-2.7 by this 
push:
 new 3638231  Preparing for development post-2.7.1
3638231 is described below

commit 363823132d5fccec6fd1a872e74b1bf8333faa43
Author: Owen O'Malley 
AuthorDate: Sun Dec 1 09:58:53 2019 -0800

Preparing for development post-2.7.1

Signed-off-by: Owen O'Malley 
---
 pom.xml | 2 +-
 storage-api/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/pom.xml b/pom.xml
index c1e22ef..6bf537b 100644
--- a/pom.xml
+++ b/pom.xml
@@ -196,7 +196,7 @@
 1.0.1
 1.7.10
 4.0.4
-2.7.1
+2.7.2-SNAPSHOT
 0.9.1
 2.2.0
 2.3.0
diff --git a/storage-api/pom.xml b/storage-api/pom.xml
index 13b2277..0b23bfa 100644
--- a/storage-api/pom.xml
+++ b/storage-api/pom.xml
@@ -25,7 +25,7 @@
 
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-storage-api</artifactId>
-  <version>2.7.1</version>
+  <version>2.7.2-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>Hive Storage API</name>
 



svn commit: r37004 - in /release/hive: hive-storage-2.4.0/ hive-storage-2.7.0/ hive-storage-2.7.1/

2019-12-01 Thread omalley
Author: omalley
Date: Sun Dec  1 17:42:08 2019
New Revision: 37004

Log:
Publish Hive Storage API 2.7.1.

Added:
release/hive/hive-storage-2.7.1/
release/hive/hive-storage-2.7.1/hive-storage-api-2.7.1.tar.gz   (with props)
release/hive/hive-storage-2.7.1/hive-storage-api-2.7.1.tar.gz.asc
release/hive/hive-storage-2.7.1/hive-storage-api-2.7.1.tar.gz.sha256
Removed:
release/hive/hive-storage-2.4.0/
release/hive/hive-storage-2.7.0/

Added: release/hive/hive-storage-2.7.1/hive-storage-api-2.7.1.tar.gz
==
Binary file - no diff available.

Propchange: release/hive/hive-storage-2.7.1/hive-storage-api-2.7.1.tar.gz
--
svn:mime-type = application/octet-stream

Added: release/hive/hive-storage-2.7.1/hive-storage-api-2.7.1.tar.gz.asc
==
--- release/hive/hive-storage-2.7.1/hive-storage-api-2.7.1.tar.gz.asc (added)
+++ release/hive/hive-storage-2.7.1/hive-storage-api-2.7.1.tar.gz.asc Sun Dec  
1 17:42:08 2019
@@ -0,0 +1,16 @@
+-----BEGIN PGP SIGNATURE-----
+
+iQIzBAABCgAdFiEER2YLyYvEM/AeXJBYEgnn8T0MkrkFAl3dwxkACgkQEgnn8T0M
+krkrTg//ecIV16FmdHIhQ5RFDJWGKNEOy1Jc0mYV3W12mSGYZygpRbfowlKOWxvT
+j7iCJHYjabTTjMgaUWuh9Oz3yepsd4dblKa5maocho9xCPJgeOY0oo3HStFUA4J5
+FXFNe7husQkbJjAGFeFZg5dacLcawfxiTWEw5DCHdScBaW0MobkAjzKAnlo8gm2p
+LiclXSueBq65r+oPU+N7kSuaVA0WcZv4yXCBsrdw26Fnhy70LMp8IR2CfpNmjbCp
+Ge7W/Tz6KiX33KT1kaPVhXr9dWH89Qre6zcRPIPVpFGCw52+eU3zPlfLHqxgup9+
+4Am2+noVavT0WeJYDxw4t8SrIve+dEsJ9audoF3odPRh3workODW/6aALpBiKNrI
+hnu67CNhLLONh46mxpAqK5zixA4Fz6dydDc/zqvALs8PkhdG+jGVnTGN7XpKZyHJ
+KQxCVn/5m/4kRa0MZMECJ8LW6qf15DdKfOcF/ygVcfLqzhX4uvOVQcRqCE+xGn+g
+LCCdKkdpQ44LrpTfw3/1WJru3/WlbNQjX7ZJ0Rh7oSK51RBBYkRtjt+XOudcqMrk
+LngmAH7JNvtuJOwq1pq4Rcyk61KMO0IS0rZ8W9IM7puRzjEFZF+rBseCM7h/mrxD
+sJJv6EtvJQeGg8vHShFycIiz79WnCeWtJECfx7yeoAgNvFUpcJk=
+=AY/l
+-----END PGP SIGNATURE-----

Added: release/hive/hive-storage-2.7.1/hive-storage-api-2.7.1.tar.gz.sha256
==
--- release/hive/hive-storage-2.7.1/hive-storage-api-2.7.1.tar.gz.sha256 (added)
+++ release/hive/hive-storage-2.7.1/hive-storage-api-2.7.1.tar.gz.sha256 Sun 
Dec  1 17:42:08 2019
@@ -0,0 +1 @@
+ce3b9be7ae5d35037545166d27415bc44595ddf2fa1949d55cb8c8e293c48d68  
hive-storage-api-2.7.1.tar.gz




[hive] annotated tag rel/storage-release-2.7.1 updated (80fe235 -> f8636f5)

2019-12-01 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to annotated tag rel/storage-release-2.7.1
in repository https://gitbox.apache.org/repos/asf/hive.git.


*** WARNING: tag rel/storage-release-2.7.1 was modified! ***

from 80fe235  (commit)
  to f8636f5  (tag)
 tagging 80fe235690cd88ff939555a0a4aefe918b0662b5 (commit)
 replaces rel/storage-release-2.7.0
  by Owen O'Malley
  on Sun Dec 1 09:26:30 2019 -0800

- Log -
Hive Storage API 2.7.1
-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEER2YLyYvEM/AeXJBYEgnn8T0MkrkFAl3j98YACgkQEgnn8T0M
krkrDg/+MijDbkJUuYNjOXCHo7wZceSJpLqCnl7BWa8QK7etmmHaJQT5cBLwkg6P
2hIthRkRsYXifB/+q3gaL53jWrBoEXxvVDK/10lePLz6PQ4Mn2TLGr5JwmHFTTFb
KunzOxK3/8BK1VTj9QpgsAuJZWnF4TB+dWkp/o1+cQtdlxa+SdE0TUdhFiVhvWZh
jWrI7Y6IZYU6TPtGTnjtIVWCDRelxkm2PZ6NtRm7hWIzN/T7Qguy+ZOvsUC5SiSl
/Fz8mIfNQq/l9khdlTkN+I8Rty+QWuU9YDB3PWk4aoa3X/YWftcGvRDjX8lJ+/qf
dsUrx3dcwKujVO1BgzBQftMXf6EyGt/z7/MBcDdmQd+pR+L1hy9DltaVvL5xeoAk
AU9OkuCHV/vlcOe3BQ1tM9skKLe2QBezkwh51aUpBKRFmha9gVCbb7MkPcBX/KpL
ADPOc+SNEvptjVtJPl1XSvm5n/1ACBaC878gbxUHsZUA9Qbb5vtKIsaGjrSaQKN7
cNB9Jw8u5cvRTJWcDwXE0ojpfuipfYPHyLb3E5D6L5DmEaJf7KzDj1T1Dex8XgAn
qt6H+GONJZyP3mejMoNcD+mL6a5/pTmIXKJodLC/AWrTjeB4qGrCQUzmKaETb3RQ
nZ9eR8Z0N/HuTlRvVVywlwmonPVTMVDm2nAzWX7YrdUlWyT5vRU=
=L1gr
-----END PGP SIGNATURE-----
---


No new revisions were added by this update.

Summary of changes:



[hive] tag storage-release-2.7.1rc0 created (now 80fe235)

2019-11-26 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to tag storage-release-2.7.1rc0
in repository https://gitbox.apache.org/repos/asf/hive.git.


  at 80fe235  (commit)
No new revisions were added by this update.



[hive] 01/01: Update outer hive build to use the updated storage api.

2019-11-26 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch storage-branch-2.7
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 80fe235690cd88ff939555a0a4aefe918b0662b5
Author: Owen O'Malley 
AuthorDate: Mon Nov 25 12:28:57 2019 -0800

Update outer hive build to use the updated storage api.
---
 pom.xml  | 2 +-
 standalone-metastore/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/pom.xml b/pom.xml
index 19a9b71..c1e22ef 100644
--- a/pom.xml
+++ b/pom.xml
@@ -196,7 +196,7 @@
 1.0.1
 1.7.10
 4.0.4
-2.6.1-SNAPSHOT
+2.7.1
 0.9.1
 2.2.0
 2.3.0
diff --git a/standalone-metastore/pom.xml b/standalone-metastore/pom.xml
index 4176834..130311d 100644
--- a/standalone-metastore/pom.xml
+++ b/standalone-metastore/pom.xml
@@ -83,7 +83,7 @@
 1.5.1
 2.5.0
 1.3.0
-2.6.1-SNAPSHOT
+2.7.1
 
 
 you-must-set-this-to-run-thrift



[hive] branch storage-branch-2.7 updated (c742556 -> 80fe235)

2019-11-26 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to branch storage-branch-2.7
in repository https://gitbox.apache.org/repos/asf/hive.git.


 discard c742556  Update outer hive build to use the updated storage api.
 new 80fe235  Update outer hive build to use the updated storage api.

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (c742556)
            \
             N -- N -- N   refs/heads/storage-branch-2.7 (80fe235)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 pom.xml  | 2 +-
 standalone-metastore/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)



[hive] branch storage-branch-2.7 updated: Update outer hive build to use the updated storage api.

2019-11-25 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch storage-branch-2.7
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/storage-branch-2.7 by this 
push:
 new c742556  Update outer hive build to use the updated storage api.
c742556 is described below

commit c7425569f0a63e23e917c0cf29cabbfa691d741f
Author: Owen O'Malley 
AuthorDate: Mon Nov 25 12:28:57 2019 -0800

Update outer hive build to use the updated storage api.
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 19a9b71..b0e4d75 100644
--- a/pom.xml
+++ b/pom.xml
@@ -196,7 +196,7 @@
 1.0.1
 1.7.10
 4.0.4
-2.6.1-SNAPSHOT
+2.7.2
 0.9.1
 2.2.0
 2.3.0



[hive] 01/02: HIVE-22405: Add ColumnVector support for ProlepticCalendar (László Bodor via Owen O'Malley, Jesus Camacho Rodriguez)

2019-11-25 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch storage-branch-2.7
in repository https://gitbox.apache.org/repos/asf/hive.git

commit d390c582c8321586e5d130f0b053cc19b6b908da
Author: László Bodor 
AuthorDate: Sat Nov 23 08:49:23 2019 -0800

HIVE-22405: Add ColumnVector support for ProlepticCalendar (László Bodor 
via Owen O'Malley, Jesus Camacho Rodriguez)
---
 .../hive/ql/exec/vector/DateColumnVector.java  | 126 +++
 .../hive/ql/exec/vector/TimestampColumnVector.java |  83 +++-
 .../hive/ql/exec/vector/TestDateColumnVector.java  |  80 
 .../ql/exec/vector/TestTimestampColumnVector.java  | 140 +
 4 files changed, 407 insertions(+), 22 deletions(-)

diff --git 
a/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/DateColumnVector.java
 
b/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/DateColumnVector.java
new file mode 100644
index 0000000..3dac667
--- /dev/null
+++ 
b/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/DateColumnVector.java
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.exec.vector;
+
+import java.text.SimpleDateFormat;
+import java.util.GregorianCalendar;
+import java.util.TimeZone;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * This class extends LongColumnVector in order to introduce some 
date-specific semantics. In
+ * DateColumnVector, the elements of vector[] represent the days since 
1970-01-01
+ */
+public class DateColumnVector extends LongColumnVector {
+  private static final TimeZone UTC = TimeZone.getTimeZone("UTC");
+  private static final GregorianCalendar PROLEPTIC_GREGORIAN_CALENDAR = new 
GregorianCalendar(UTC);
+  private static final GregorianCalendar GREGORIAN_CALENDAR = new 
GregorianCalendar(UTC);
+
+  private static final SimpleDateFormat PROLEPTIC_GREGORIAN_DATE_FORMATTER =
+  new SimpleDateFormat("-MM-dd");
+  private static final SimpleDateFormat GREGORIAN_DATE_FORMATTER =
+  new SimpleDateFormat("-MM-dd");
+
+  /**
+  * -141427: hybrid: 1582-10-15 proleptic: 1582-10-15
+  * -141428: hybrid: 1582-10-04 proleptic: 1582-10-14
+  */
+  private static final int CUTOVER_DAY_EPOCH = -141427; // it's 1582-10-15 in 
both calendars
+
+  static {
+PROLEPTIC_GREGORIAN_CALENDAR.setGregorianChange(new 
java.util.Date(Long.MIN_VALUE));
+
+
PROLEPTIC_GREGORIAN_DATE_FORMATTER.setCalendar(PROLEPTIC_GREGORIAN_CALENDAR);
+GREGORIAN_DATE_FORMATTER.setCalendar(GREGORIAN_CALENDAR);
+  }
+
+  private boolean usingProlepticCalendar = false;
+
+  public DateColumnVector() {
+this(VectorizedRowBatch.DEFAULT_SIZE);
+  }
+
+  /**
+   * Change the calendar to or from proleptic. If the new and old values of 
the flag are the same,
+   * nothing is done. useProleptic - set the flag for the proleptic calendar 
updateData - change the
+   * data to match the new value of the flag.
+   */
+  public void changeCalendar(boolean useProleptic, boolean updateData) {
+if (useProleptic == usingProlepticCalendar) {
+  return;
+}
+usingProlepticCalendar = useProleptic;
+if (updateData) {
+  try {
+updateDataAccordingProlepticSetting();
+  } catch (Exception e) {
+throw new RuntimeException(e);
+  }
+}
+  }
+
+  private void updateDataAccordingProlepticSetting() throws Exception {
+for (int i = 0; i < vector.length; i++) {
+  if (vector[i] >= CUTOVER_DAY_EPOCH) { // no need for conversion
+continue;
+  }
+  long millis = TimeUnit.DAYS.toMillis(vector[i]);
+  String originalFormatted = usingProlepticCalendar ? 
GREGORIAN_DATE_FORMATTER.format(millis)
+: PROLEPTIC_GREGORIAN_DATE_FORMATTER.format(millis);
+
+  millis = (usingProlepticCalendar ? 
PROLEPTIC_GREGORIAN_DATE_FORMATTER.parse(originalFormatted)
+: GREGORIAN_DATE_FORMATTER.parse(originalFormatted)).getTime();
+
+  vector[i] = TimeUnit.MILLISECONDS.toDays(millis);
+}
+  }
+
+  public String formatDate(int i) {
+long millis = TimeUnit.DAYS.toMillis(vector[i]);
+return usingProlepticCale

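(For readers following the HIVE-22405 diff above: a minimal sketch of how the
new calendar switch is driven. It assumes a storage-api build that already
contains DateColumnVector; the pre-cutover epoch days come from the
CUTOVER_DAY_EPOCH comment in the patch, and the class name below is
illustrative only.)

import org.apache.hadoop.hive.ql.exec.vector.DateColumnVector;

public class ProlepticSwitchSketch {
  public static void main(String[] args) {
    DateColumnVector dates = new DateColumnVector();
    // Only days before CUTOVER_DAY_EPOCH (-141427, 1582-10-15) are rewritten.
    dates.vector[0] = -141428; // 1582-10-04 in the default hybrid calendar
    dates.vector[1] = -141427; // 1582-10-15 in both calendars
    dates.vector[2] = 0;       // 1970-01-01, unaffected by the switch
    // Flip the flag and rewrite the stored days so that each element keeps
    // its printed date when reinterpreted under the proleptic calendar.
    dates.changeCalendar(true, true);
    for (int i = 0; i < 3; i++) {
      System.out.println(dates.vector[i] + " -> " + dates.formatDate(i));
    }
  }
}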
[hive] 02/02: Preparing for a storage-api release 2.7.1.

2019-11-25 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch storage-branch-2.7
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 834f68520ebb62249c834760d9d467d8f3e07d21
Author: Owen O'Malley 
AuthorDate: Mon Nov 25 08:46:48 2019 -0800

Preparing for a storage-api release 2.7.1.
---
 storage-api/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/storage-api/pom.xml b/storage-api/pom.xml
index 5d3c7d4..13b2277 100644
--- a/storage-api/pom.xml
+++ b/storage-api/pom.xml
@@ -25,7 +25,7 @@
 
   org.apache.hive
   hive-storage-api
-  2.7.0
+  2.7.1
   jar
   Hive Storage API
 



[hive] branch storage-branch-2.7 updated (e59fdf9 -> 834f685)

2019-11-25 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to branch storage-branch-2.7
in repository https://gitbox.apache.org/repos/asf/hive.git.


from e59fdf9  Preparing for storage-api 2.7.0 release
 new d390c58  HIVE-22405: Add ColumnVector support for ProlepticCalendar 
(László Bodor via Owen O'Malley, Jesus Camacho Rodriguez)
 new 834f685  Preparing for a storage-api release 2.7.1.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 storage-api/pom.xml|   2 +-
 .../hive/ql/exec/vector/DateColumnVector.java  | 126 +++
 .../hive/ql/exec/vector/TimestampColumnVector.java |  83 +++-
 .../hive/ql/exec/vector/TestDateColumnVector.java  |  80 
 .../ql/exec/vector/TestTimestampColumnVector.java  | 140 +
 5 files changed, 408 insertions(+), 23 deletions(-)
 create mode 100644 
storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/DateColumnVector.java
 create mode 100644 
storage-api/src/test/org/apache/hadoop/hive/ql/exec/vector/TestDateColumnVector.java



[hive] branch branch-2.3 updated: HIVE-21585: Upgrade Hive branch-2.3 to ORC.1.3.4.

2019-04-16 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/hive.git


The following commit(s) were added to refs/heads/branch-2.3 by this push:
 new dc40659  HIVE-21585: Upgrade Hive branch-2.3 to ORC.1.3.4.
dc40659 is described below

commit dc40659c627528a174e00f018724360e1f066bab
Author: Owen O'Malley 
AuthorDate: Fri Apr 5 10:41:15 2019 -0700

HIVE-21585: Upgrade Hive branch-2.3 to ORC.1.3.4.

Fixes #590

Signed-off-by: Owen O'Malley 
---
 .travis.yml | 4 +---
 .../apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java| 6 ++
 .../apache/hadoop/hive/llap/io/encoded/SerDeEncodedDataReader.java  | 5 +
 .../hadoop/hive/llap/cache/TestIncrementalObjectSizeEstimator.java  | 6 ++
 pom.xml | 2 +-
 .../org/apache/hadoop/hive/ql/io/orc/encoded/EncodedReaderImpl.java | 4 ++--
 .../java/org/apache/hadoop/hive/ql/io/orc/encoded/ReaderImpl.java   | 2 +-
 7 files changed, 22 insertions(+), 7 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index d0e1568..98057b6 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -23,10 +23,8 @@ dist: trusty
 # that requires full git history, enable this
 # before_install: git fetch --unshallow
 
-# parallel builds on jdk7 and jdk8
 language: java
 jdk:
-  - oraclejdk7
   - oraclejdk8
 
 cache:
@@ -44,4 +42,4 @@ before_install:
 
 install: true
 
-script: mvn clean install -DskipTests -T 4 -q -Pitests
+script: travis_wait 30 mvn clean install -DskipTests -T 4 -q -Pitests
diff --git 
a/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java
 
b/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java
index 03bc3ce..07318da 100644
--- 
a/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java
+++ 
b/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java
@@ -26,6 +26,7 @@ import java.util.Collections;
 import java.util.List;
 
 import org.apache.hadoop.hive.llap.counters.LlapIOCounters;
+import org.apache.orc.CompressionCodec;
 import org.apache.orc.TypeDescription;
 import org.apache.orc.impl.DataReaderProperties;
 import org.apache.orc.impl.OrcIndex;
@@ -884,6 +885,11 @@ public class OrcEncodedDataReader extends 
CallableWithNdc<Void>
 }
 
 @Override
+public CompressionCodec getCompressionCodec() {
+  return orcDataReader.getCompressionCodec();
+}
+
+@Override
 public DiskRangeList readFileData(DiskRangeList range, long baseOffset,
 boolean doForceDirect) throws IOException {
   long startTime = counters.startTimeCounter();
diff --git 
a/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/SerDeEncodedDataReader.java
 
b/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/SerDeEncodedDataReader.java
index 907200a..dd39fd7 100644
--- 
a/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/SerDeEncodedDataReader.java
+++ 
b/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/SerDeEncodedDataReader.java
@@ -546,6 +546,11 @@ public class SerDeEncodedDataReader extends 
CallableWithNdc<Void>
   throw new UnsupportedOperationException(); // Only used in ACID writer.
 }
 
+@Override
+public CompressionCodec getCompressionCodec() {
+  return null;
+}
+
 public void setCurrentStripeOffsets(long currentKnownTornStart,
 long firstStartOffset, long lastStartOffset, long currentFileOffset) {
   currentStripe.knownTornStart = currentKnownTornStart;
diff --git 
a/llap-server/src/test/org/apache/hadoop/hive/llap/cache/TestIncrementalObjectSizeEstimator.java
 
b/llap-server/src/test/org/apache/hadoop/hive/llap/cache/TestIncrementalObjectSizeEstimator.java
index 13c7767..2c41d21 100644
--- 
a/llap-server/src/test/org/apache/hadoop/hive/llap/cache/TestIncrementalObjectSizeEstimator.java
+++ 
b/llap-server/src/test/org/apache/hadoop/hive/llap/cache/TestIncrementalObjectSizeEstimator.java
@@ -28,6 +28,7 @@ import java.util.ArrayList;
 import java.util.LinkedHashSet;
 
 import org.apache.hadoop.hive.common.io.DiskRangeList;
+import org.apache.orc.CompressionCodec;
 import org.apache.orc.DataReader;
 import org.apache.orc.OrcFile;
 import org.apache.orc.TypeDescription;
@@ -159,6 +160,11 @@ public class TestIncrementalObjectSizeEstimator {
 @Override
 public void close() throws IOException {
 }
+
+@Override
+public CompressionCodec getCompressionCodec() {
+  return null;
+}
   }
 
   @Test
diff --git a/pom.xml b/pom.xml
index 736eef7..adc2541 100644
--- a/pom.xml
+++ b/pom.xml
@@ -175,7 +175,7 @@
 0.9.3
 2.6.2
 2.3
-1.3.3
+1.3.4
 1.9.5
 2.0.0-M5
 4.0.52.Final
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/io/orc/encoded/EncodedReaderImpl.java 
b/ql/src/java/org/apache

[hive] 04/04: HIVE-18624: Parsing time is extremely high (~10 min) for queries with complex select expressions (Zoltan Haindrich reviewed by Ashutosh Chauhan)

2019-04-05 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 1b803f8050b0dc7b1771bdb7ef22556e6df71cef
Author: Zoltan Haindrich 
AuthorDate: Wed Aug 22 18:20:41 2018 +0200

HIVE-18624: Parsing time is extremely high (~10 min) for queries with 
complex select expressions (Zoltan Haindrich reviewed by Ashutosh Chauhan)

Signed-off-by: Zoltan Haindrich 
(cherry picked from commit 4408661c0501bf1e7991e144f65b49732f4c641b)
(cherry picked from commit a4b913360d6086b5da8d1c84a2d3cfd847131056)
---
 .../hadoop/hive/ql/parse/IdentifiersParser.g   |   2 +-
 .../hadoop/hive/ql/parse/TestParseDriver.java  | 100 +
 .../hive/ql/parse/TestParseDriverIntervals.java|   3 +-
 .../clientnegative/char_pad_convert_fail2.q.out|   2 +-
 .../ptf_negative_DistributeByOrderBy.q.out |   3 +-
 .../ptf_negative_PartitionBySortBy.q.out   |   3 +-
 .../clientnegative/ptf_window_boundaries.q.out |   2 +-
 .../clientnegative/ptf_window_boundaries2.q.out|   2 +-
 8 files changed, 110 insertions(+), 7 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
index 8c4ee8a..071676a 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
@@ -420,7 +420,7 @@ atomExpression
 | whenExpression
 | (subQueryExpression)=> (subQueryExpression)
 -> ^(TOK_SUBQUERY_EXPR TOK_SUBQUERY_OP subQueryExpression)
-| (function) => function
+| (functionName LPAREN) => function
 | tableOrColumn
 | expressionsInParenthesis[true]
 ;
diff --git a/ql/src/test/org/apache/hadoop/hive/ql/parse/TestParseDriver.java 
b/ql/src/test/org/apache/hadoop/hive/ql/parse/TestParseDriver.java
index cd9db19..827921d 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/parse/TestParseDriver.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/parse/TestParseDriver.java
@@ -19,13 +19,22 @@ package org.apache.hadoop.hive.ql.parse;
 
 import static org.junit.Assert.assertEquals;
 
+import org.junit.FixMethodOrder;
 import org.junit.Test;
+import org.junit.runners.MethodSorters;
 
 
+@FixMethodOrder(MethodSorters.NAME_ASCENDING)
 public class TestParseDriver {
   ParseDriver parseDriver = new ParseDriver();
 
   @Test
+  public void atFirstWarmup() throws Exception {
+// this test method is here to do an initial call to parsedriver; and 
prevent any tests with timeouts to be the first.
+parseDriver.parse("select 1");
+  }
+
+  @Test
   public void testParse() throws Exception {
 String selectStr = "select field1, field2, sum(field3+field4)";
 String whereStr = "field5=1 and field6 in ('a', 'b')";
@@ -114,4 +123,95 @@ public class TestParseDriver {
   assertTree((ASTNode) astNode1.getChild(i), (ASTNode) 
astNode2.getChild(i));
 }
   }
+
+  @Test(timeout = 1000)
+  public void testNestedFunctionCalls() throws Exception {
+// Expectation here is not to run into a timeout
+parseDriver.parse(
+"select 
greatest(1,greatest(1,greatest(1,greatest(1,greatest(1,greatest(1,greatest(1,"
++ 
"greatest(1,greatest(1,greatest(1,greatest(1,greatest(1,greatest(1,greatest(1,"
++ 
"greatest(1,greatest(1,(greatest(1,greatest(1,2)))");
+  }
+
+  @Test(timeout = 1000)
+  public void testHIVE18624() throws Exception {
+// Expectation here is not to run into a timeout
+parseDriver.parse("EXPLAIN\n" +
+"SELECT DISTINCT\n" +
+"\n" +
+"\n" +
+"  IF(lower('a') <= lower('a')\n" +
+"  ,'a'\n" +
+"  ,IF(('a' IS NULL AND from_unixtime(UNIX_TIMESTAMP()) <= 'a')\n" +
+"  ,'a'\n" +
+"  ,IF(if('a' = 'a', TRUE, FALSE) = 1\n" +
+"  ,'a'\n" +
+"  ,IF(('a' = 1 and lower('a') NOT IN ('a', 'a')\n" +
+"   and lower(if('a' = 'a','a','a')) <= lower('a'))\n" +
+"  OR ('a' like 'a' OR 'a' like 'a')\n" +
+"  OR 'a' in ('a','a')\n" +
+"  ,'a'\n" +
+"  ,IF(if(lower('a') in ('a', 'a') and 'a'='a', TRUE, FALSE) = 1\n" +
+"  ,'a'\n" +
+"  ,IF('a'='a' and unix_timestamp(if('a' = 'a',cast('a' as 
string),coalesce('a',cast('a' as string),from_unixtime(unix_timestamp() <= 
unix_timestamp(concat_ws('a',cast(lower('a') as string),'00:00:00')) + 9*3600\n"
++
+"  ,'a'\n" +
+"\n" +
+"  ,If(lower('a') <= lower('a')\n" +
+"  an

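(The pathology HIVE-18624 fixes is easy to reproduce by timing the parser on
nested calls. A rough harness is sketched below; it assumes hive-exec, with
the ParseDriver shown in the diff above, on the classpath. On a pre-fix build
the deeply nested greatest() expression backtracks for minutes; with the
(functionName LPAREN) predicate it parses in milliseconds.)

import org.apache.hadoop.hive.ql.parse.ParseDriver;

public class ParseTimingSketch {
  public static void main(String[] args) throws Exception {
    ParseDriver pd = new ParseDriver();
    // Build: select greatest(1,greatest(1,...greatest(1,2)...)) 16 levels deep.
    StringBuilder sql = new StringBuilder("select ");
    for (int i = 0; i < 16; i++) {
      sql.append("greatest(1,");
    }
    sql.append("2");
    for (int i = 0; i < 16; i++) {
      sql.append(")");
    }
    long t0 = System.nanoTime();
    pd.parse(sql.toString());
    System.out.println("parsed in " + (System.nanoTime() - t0) / 1_000_000 + " ms");
  }
}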
[hive] branch branch-2.3 updated (a2cdcf2 -> 1b803f8)

2019-04-05 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/hive.git.


 discard a2cdcf2  HIVE-18624: Parsing time is extremely high (~10 min) for 
queries with complex select expressions (Zoltan Haindrich reviewed by Ashutosh 
Chauhan)
 discard 9547246  HIVE-20126: OrcInputFormat does not pass conf to orc reader 
options (Prasanth Jayachandran reviewed by Sergey Shelukhin)
 new 56acdd2  Preparing for 2.3.4 release
 new 432960e  prepare for post 2.3.4 development
 new 2c0f948  HIVE-20126: OrcInputFormat does not pass conf to orc reader 
options (Prasanth Jayachandran reviewed by Sergey Shelukhin)
 new 1b803f8  HIVE-18624: Parsing time is extremely high (~10 min) for 
queries with complex select expressions (Zoltan Haindrich reviewed by Ashutosh 
Chauhan)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (a2cdcf2)
\
 N -- N -- N   refs/heads/branch-2.3 (1b803f8)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 RELEASE_NOTES.txt  | 64 ++
 accumulo-handler/pom.xml   |  2 +-
 beeline/pom.xml|  2 +-
 cli/pom.xml|  2 +-
 common/pom.xml |  2 +-
 contrib/pom.xml|  2 +-
 druid-handler/pom.xml  |  2 +-
 hbase-handler/pom.xml  |  2 +-
 hcatalog/core/pom.xml  |  2 +-
 hcatalog/hcatalog-pig-adapter/pom.xml  |  2 +-
 hcatalog/pom.xml   |  2 +-
 hcatalog/server-extensions/pom.xml |  2 +-
 hcatalog/streaming/pom.xml |  2 +-
 hcatalog/webhcat/java-client/pom.xml   |  2 +-
 hcatalog/webhcat/svr/pom.xml   |  2 +-
 hplsql/pom.xml |  2 +-
 itests/custom-serde/pom.xml|  2 +-
 itests/custom-udfs/pom.xml |  2 +-
 itests/custom-udfs/udf-classloader-udf1/pom.xml|  2 +-
 itests/custom-udfs/udf-classloader-udf2/pom.xml|  2 +-
 itests/custom-udfs/udf-classloader-util/pom.xml|  2 +-
 .../custom-udfs/udf-vectorized-badexample/pom.xml  |  2 +-
 itests/hcatalog-unit/pom.xml   |  2 +-
 itests/hive-blobstore/pom.xml  |  2 +-
 itests/hive-jmh/pom.xml|  2 +-
 itests/hive-minikdc/pom.xml|  2 +-
 itests/hive-unit-hadoop2/pom.xml   |  2 +-
 itests/hive-unit/pom.xml   |  2 +-
 itests/pom.xml |  2 +-
 itests/qtest-accumulo/pom.xml  |  2 +-
 itests/qtest-spark/pom.xml |  2 +-
 itests/qtest/pom.xml   |  2 +-
 itests/test-serde/pom.xml  |  2 +-
 itests/util/pom.xml|  2 +-
 jdbc-handler/pom.xml   |  2 +-
 jdbc/pom.xml   |  2 +-
 llap-client/pom.xml|  2 +-
 llap-common/pom.xml|  2 +-
 llap-ext-client/pom.xml|  2 +-
 llap-server/pom.xml|  2 +-
 llap-tez/pom.xml   |  2 +-
 metastore/pom.xml  |  2 +-
 packaging/pom.xml  |  2 +-
 pom.xml|  2 +-
 ql/pom.xml |  2 +-
 serde/pom.xml  |  2 +-
 service-rpc/pom.xml|  2 +-
 service/pom.xml|  2 +-
 shims/0.23/pom.xml |  2 +-
 shims/aggregator/pom.xml   |  2 +-
 shims/common/pom.xml   |  2 +-
 shims/pom.xml  

[hive] 01/04: Preparing for 2.3.4 release

2019-04-05 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 56acdd2120b9ce6790185c679223b8b5e884aaf2
Author: Daniel Dai 
AuthorDate: Wed Oct 31 14:19:11 2018 -0700

Preparing for 2.3.4 release
---
 RELEASE_NOTES.txt  | 64 ++
 accumulo-handler/pom.xml   |  2 +-
 beeline/pom.xml|  2 +-
 cli/pom.xml|  2 +-
 common/pom.xml |  2 +-
 contrib/pom.xml|  2 +-
 druid-handler/pom.xml  |  2 +-
 hbase-handler/pom.xml  |  2 +-
 hcatalog/core/pom.xml  |  2 +-
 hcatalog/hcatalog-pig-adapter/pom.xml  |  2 +-
 hcatalog/pom.xml   |  2 +-
 hcatalog/server-extensions/pom.xml |  2 +-
 hcatalog/streaming/pom.xml |  2 +-
 hcatalog/webhcat/java-client/pom.xml   |  2 +-
 hcatalog/webhcat/svr/pom.xml   |  2 +-
 hplsql/pom.xml |  2 +-
 itests/custom-serde/pom.xml|  2 +-
 itests/custom-udfs/pom.xml |  2 +-
 itests/custom-udfs/udf-classloader-udf1/pom.xml|  2 +-
 itests/custom-udfs/udf-classloader-udf2/pom.xml|  2 +-
 itests/custom-udfs/udf-classloader-util/pom.xml|  2 +-
 .../custom-udfs/udf-vectorized-badexample/pom.xml  |  2 +-
 itests/hcatalog-unit/pom.xml   |  2 +-
 itests/hive-blobstore/pom.xml  |  2 +-
 itests/hive-jmh/pom.xml|  2 +-
 itests/hive-minikdc/pom.xml|  2 +-
 itests/hive-unit-hadoop2/pom.xml   |  2 +-
 itests/hive-unit/pom.xml   |  2 +-
 itests/pom.xml |  2 +-
 itests/qtest-accumulo/pom.xml  |  2 +-
 itests/qtest-spark/pom.xml |  2 +-
 itests/qtest/pom.xml   |  2 +-
 itests/test-serde/pom.xml  |  2 +-
 itests/util/pom.xml|  2 +-
 jdbc-handler/pom.xml   |  2 +-
 jdbc/pom.xml   |  2 +-
 llap-client/pom.xml|  2 +-
 llap-common/pom.xml|  2 +-
 llap-ext-client/pom.xml|  2 +-
 llap-server/pom.xml|  2 +-
 llap-tez/pom.xml   |  2 +-
 metastore/pom.xml  |  2 +-
 packaging/pom.xml  |  2 +-
 pom.xml|  2 +-
 ql/pom.xml |  2 +-
 serde/pom.xml  |  2 +-
 service-rpc/pom.xml|  2 +-
 service/pom.xml|  2 +-
 shims/0.23/pom.xml |  2 +-
 shims/aggregator/pom.xml   |  2 +-
 shims/common/pom.xml   |  2 +-
 shims/pom.xml  |  2 +-
 shims/scheduler/pom.xml|  2 +-
 spark-client/pom.xml   |  4 +-
 testutils/pom.xml  |  2 +-
 vector-code-gen/pom.xml|  2 +-
 56 files changed, 61 insertions(+), 115 deletions(-)

diff --git a/RELEASE_NOTES.txt b/RELEASE_NOTES.txt
index 51f3a6b..55b5360 100644
--- a/RELEASE_NOTES.txt
+++ b/RELEASE_NOTES.txt
@@ -1,67 +1,13 @@
 
-Release Notes - Hive - Version 2.3.2
-
-** Sub-task
-* [HIVE-16312] - Flaky test: TestHCatClient.testTransportFailure
-
-
-
-
+Release Notes - Hive - Version 2.3.4
 
 
 
 ** Bug
-* [HIVE-10378] - Hive Update statement set keyword work with lower case 
only and doesn't give any error if wrong column name specified in the set 
clause.
-* [HIVE-15761] - ObjectStore.getNextNotification could return an empty 
NotificationEventResponse causing TProtocolException 
-* [HIVE-16213] - ObjectStore can leak Queries when rollbackTransaction 
throws an exception
-* [HIVE-16487] - Serious Zookeeper exception is logged when a race 
condition happens
-* [HIVE-16646] - Alias in transform ... as clause shouldn't be case 
sensitive
-* [HIVE-16930] - HoS should verify the value of Kerberos principal and 
keytab file before adding them to spark-submit command parameters
-* [HIVE-16991] - HiveMetaStoreClient needs a 2-arg constructor for 
backwards compatibility
-* [HIVE-17008] - Fix boolean flag switchup in DropTableEvent
-* [HIVE-17150] - CREATE INDEX

[hive] 02/04: prepare for post 2.3.4 development

2019-04-05 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch branch-2.3
in repository https://gitbox.apache.org/repos/asf/hive.git

commit 432960e93326cd749014e068b4c1f94615bf9f24
Author: Owen O'Malley 
AuthorDate: Fri Apr 5 09:48:50 2019 -0700

prepare for post 2.3.4 development
---
 accumulo-handler/pom.xml | 2 +-
 beeline/pom.xml  | 2 +-
 cli/pom.xml  | 2 +-
 common/pom.xml   | 2 +-
 contrib/pom.xml  | 2 +-
 druid-handler/pom.xml| 2 +-
 hbase-handler/pom.xml| 2 +-
 hcatalog/core/pom.xml| 2 +-
 hcatalog/hcatalog-pig-adapter/pom.xml| 2 +-
 hcatalog/pom.xml | 2 +-
 hcatalog/server-extensions/pom.xml   | 2 +-
 hcatalog/streaming/pom.xml   | 2 +-
 hcatalog/webhcat/java-client/pom.xml | 2 +-
 hcatalog/webhcat/svr/pom.xml | 2 +-
 hplsql/pom.xml   | 2 +-
 itests/custom-serde/pom.xml  | 2 +-
 itests/custom-udfs/pom.xml   | 2 +-
 itests/custom-udfs/udf-classloader-udf1/pom.xml  | 2 +-
 itests/custom-udfs/udf-classloader-udf2/pom.xml  | 2 +-
 itests/custom-udfs/udf-classloader-util/pom.xml  | 2 +-
 itests/custom-udfs/udf-vectorized-badexample/pom.xml | 2 +-
 itests/hcatalog-unit/pom.xml | 2 +-
 itests/hive-blobstore/pom.xml| 2 +-
 itests/hive-jmh/pom.xml  | 2 +-
 itests/hive-minikdc/pom.xml  | 2 +-
 itests/hive-unit-hadoop2/pom.xml | 2 +-
 itests/hive-unit/pom.xml | 2 +-
 itests/pom.xml   | 2 +-
 itests/qtest-accumulo/pom.xml| 2 +-
 itests/qtest-spark/pom.xml   | 2 +-
 itests/qtest/pom.xml | 2 +-
 itests/test-serde/pom.xml| 2 +-
 itests/util/pom.xml  | 2 +-
 jdbc-handler/pom.xml | 2 +-
 jdbc/pom.xml | 2 +-
 llap-client/pom.xml  | 2 +-
 llap-common/pom.xml  | 2 +-
 llap-ext-client/pom.xml  | 2 +-
 llap-server/pom.xml  | 2 +-
 llap-tez/pom.xml | 2 +-
 metastore/pom.xml| 2 +-
 packaging/pom.xml| 2 +-
 pom.xml  | 2 +-
 ql/pom.xml   | 2 +-
 serde/pom.xml| 2 +-
 service-rpc/pom.xml  | 2 +-
 service/pom.xml  | 2 +-
 shims/0.23/pom.xml   | 2 +-
 shims/aggregator/pom.xml | 2 +-
 shims/common/pom.xml | 2 +-
 shims/pom.xml| 2 +-
 shims/scheduler/pom.xml  | 2 +-
 spark-client/pom.xml | 4 ++--
 testutils/pom.xml| 2 +-
 vector-code-gen/pom.xml  | 2 +-
 55 files changed, 56 insertions(+), 56 deletions(-)

diff --git a/accumulo-handler/pom.xml b/accumulo-handler/pom.xml
index 465ff27..f53ca77 100644
--- a/accumulo-handler/pom.xml
+++ b/accumulo-handler/pom.xml
@@ -19,7 +19,7 @@
   
 org.apache.hive
 hive
-2.3.4
+2.3.5-SNAPSHOT
 ../pom.xml
   
 
diff --git a/beeline/pom.xml b/beeline/pom.xml
index 6396574..71349f0 100644
--- a/beeline/pom.xml
+++ b/beeline/pom.xml
@@ -19,7 +19,7 @@
   
 org.apache.hive
 hive
-2.3.4
+2.3.5-SNAPSHOT
 ../pom.xml
   
 
diff --git a/cli/pom.xml b/cli/pom.xml
index 4d1db9d..aa1734f 100644
--- a/cli/pom.xml
+++ b/cli/pom.xml
@@ -19,7 +19,7 @@
   
 org.apache.hive
 hive
-2.3.4
+2.3.5-SNAPSHOT
 ../pom.xml
   
 
diff --git a/common/pom.xml b/common/pom.xml
index 43e7eda..b626cfc 100644
--- a/common/pom.xml
+++ b/common/pom.xml
@@ -19,7 +19,7 @@
   
 org.apache.hive
 hive
-2.3.4
+2.3.5-SNAPSHOT
 ../pom.xml
   
 
diff --git a/contrib/pom.xml b/contrib/pom.xml
index 53e9a32..e6a594d 100644
--- a/contrib/pom.xml
+++ b/contrib/pom.xml
@@ -19,7 +19,7 @@
   
 org.apache.hive
 hive
-2.3.4
+2.3.5-SNAPSHOT
 ../pom.xml
   
 
diff --git a/druid-handler/pom.xml b/druid-handler/pom.xml
index aab20f2..844adbb 100644
--- a/druid

svn commit: r26858 - in /release/hive: hive-storage-2.3.1/ hive-storage-2.6.0/

2018-05-11 Thread omalley
Author: omalley
Date: Fri May 11 22:27:57 2018
New Revision: 26858

Log:
Remove old releases.

Removed:
release/hive/hive-storage-2.3.1/
release/hive/hive-storage-2.6.0/



svn commit: r26857 - in /release/hive/hive-storage-2.6.1: ./ hive-storage-2.6.1.tar.gz hive-storage-2.6.1.tar.gz.asc hive-storage-2.6.1.tar.gz.sha256

2018-05-11 Thread omalley
Author: omalley
Date: Fri May 11 22:26:25 2018
New Revision: 26857

Log:
Add Hive Storage-API 2.6.1 release.

Added:
release/hive/hive-storage-2.6.1/
release/hive/hive-storage-2.6.1/hive-storage-2.6.1.tar.gz   (with props)
release/hive/hive-storage-2.6.1/hive-storage-2.6.1.tar.gz.asc   (with props)
release/hive/hive-storage-2.6.1/hive-storage-2.6.1.tar.gz.sha256

Added: release/hive/hive-storage-2.6.1/hive-storage-2.6.1.tar.gz
==
Binary file - no diff available.

Propchange: release/hive/hive-storage-2.6.1/hive-storage-2.6.1.tar.gz
--
svn:mime-type = application/x-gzip

Added: release/hive/hive-storage-2.6.1/hive-storage-2.6.1.tar.gz.asc
==
Binary file - no diff available.

Propchange: release/hive/hive-storage-2.6.1/hive-storage-2.6.1.tar.gz.asc
--
svn:mime-type = application/pgp-signature

Added: release/hive/hive-storage-2.6.1/hive-storage-2.6.1.tar.gz.sha256
==
--- release/hive/hive-storage-2.6.1/hive-storage-2.6.1.tar.gz.sha256 (added)
+++ release/hive/hive-storage-2.6.1/hive-storage-2.6.1.tar.gz.sha256 Fri May 11 
22:26:25 2018
@@ -0,0 +1 @@
+b19c689f16e7fc52ba23d9e048c4a650a6f5291d2c5d8bb06f2f3e5c16308267  
hive-storage-2.6.1-rc0.tar.gz

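(The .sha256 file above exists so that a downloaded tarball can be verified.
A minimal JDK-only sketch follows; the hard-coded file name is an assumption,
so point it at whichever artifact was actually fetched and compare the output
against the published digest.)

import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class VerifyReleaseSketch {
  public static void main(String[] args) throws Exception {
    byte[] data = Files.readAllBytes(Paths.get("hive-storage-2.6.1.tar.gz"));
    byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
    StringBuilder hex = new StringBuilder();
    for (byte b : digest) {
      hex.append(String.format("%02x", b)); // lower-case hex, as in the .sha256 file
    }
    System.out.println(hex);
  }
}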



[hive] Git Push Summary

2018-05-07 Thread omalley
Repository: hive
Updated Branches:
  refs/heads/storage-branch-2.6.1 [deleted] 9394867a9


svn commit: r26602 - in /release/hive/hive-storage-2.6.0: ./ hive-storage-2.6.0.tar.gz hive-storage-2.6.0.tar.gz.asc hive-storage-2.6.0.tar.gz.sha256

2018-04-30 Thread omalley
Author: omalley
Date: Mon Apr 30 16:10:54 2018
New Revision: 26602

Log:
Add Storage API 2.6.0

Added:
release/hive/hive-storage-2.6.0/
release/hive/hive-storage-2.6.0/hive-storage-2.6.0.tar.gz   (with props)
release/hive/hive-storage-2.6.0/hive-storage-2.6.0.tar.gz.asc   (with props)
release/hive/hive-storage-2.6.0/hive-storage-2.6.0.tar.gz.sha256

Added: release/hive/hive-storage-2.6.0/hive-storage-2.6.0.tar.gz
==
Binary file - no diff available.

Propchange: release/hive/hive-storage-2.6.0/hive-storage-2.6.0.tar.gz
--
svn:mime-type = application/x-gzip

Added: release/hive/hive-storage-2.6.0/hive-storage-2.6.0.tar.gz.asc
==
Binary file - no diff available.

Propchange: release/hive/hive-storage-2.6.0/hive-storage-2.6.0.tar.gz.asc
--
svn:mime-type = application/pgp-signature

Added: release/hive/hive-storage-2.6.0/hive-storage-2.6.0.tar.gz.sha256
==
--- release/hive/hive-storage-2.6.0/hive-storage-2.6.0.tar.gz.sha256 (added)
+++ release/hive/hive-storage-2.6.0/hive-storage-2.6.0.tar.gz.sha256 Mon Apr 30 
16:10:54 2018
@@ -0,0 +1 @@
+7e6ad0186489e9bfdb3bdfae360ada30817eff823408ceda5372997c34d65bcd  
hive-storage-2.6.0-rc0.tar.gz




hive git commit: Preparing for development after storage-api 2.6.0.

2018-04-30 Thread omalley
Repository: hive
Updated Branches:
  refs/heads/storage-branch-2.6 657dfefd5 -> 10dcff625


Preparing for development after storage-api 2.6.0.

Signed-off-by: Owen O'Malley 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/10dcff62
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/10dcff62
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/10dcff62

Branch: refs/heads/storage-branch-2.6
Commit: 10dcff625c2989f919c7803afd986f963e713063
Parents: 657dfef
Author: Owen O'Malley 
Authored: Mon Apr 30 08:50:09 2018 -0700
Committer: Owen O'Malley 
Committed: Mon Apr 30 08:50:09 2018 -0700

--
 pom.xml | 2 +-
 storage-api/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/10dcff62/pom.xml
--
diff --git a/pom.xml b/pom.xml
index ede38d8..6d5c0cc 100644
--- a/pom.xml
+++ b/pom.xml
@@ -163,7 +163,7 @@
 1.0.1
 1.7.10
 4.0.4
-2.6.0-SNAPSHOT
+2.6.1-SNAPSHOT
 0.9.1
 0.92.0-incubating
 2.2.0

http://git-wip-us.apache.org/repos/asf/hive/blob/10dcff62/storage-api/pom.xml
--
diff --git a/storage-api/pom.xml b/storage-api/pom.xml
index 89795bf..a40feff 100644
--- a/storage-api/pom.xml
+++ b/storage-api/pom.xml
@@ -25,7 +25,7 @@
 
   org.apache.hive
   hive-storage-api
-  2.6.0
+  2.6.1-SNAPSHOT
   jar
   Hive Storage API
 

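(The version choreography in these emails, pinning 2.6.0 for the release
commit and then moving to 2.6.1-SNAPSHOT for development, relies on standard
Maven ordering: a -SNAPSHOT version sorts before the release it leads up to.
A small sketch using Maven's own comparator, assuming maven-artifact is on
the classpath:)

import org.apache.maven.artifact.versioning.ComparableVersion;

public class VersionOrderSketch {
  public static void main(String[] args) {
    ComparableVersion released = new ComparableVersion("2.6.0");
    ComparableVersion nextDev  = new ComparableVersion("2.6.1-SNAPSHOT");
    ComparableVersion nextRel  = new ComparableVersion("2.6.1");
    System.out.println(released.compareTo(nextDev) < 0); // true
    System.out.println(nextDev.compareTo(nextRel) < 0);  // true: SNAPSHOT precedes its release
  }
}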


[hive] Git Push Summary

2018-04-30 Thread omalley
Repository: hive
Updated Tags:  refs/tags/storage-release-2.6.0-rc0 [deleted] 657dfefd5


[hive] Git Push Summary

2018-04-30 Thread omalley
Repository: hive
Updated Tags:  refs/tags/rel/storage-release-2.6.0 [created] af9d4b2d4


[hive] Git Push Summary

2018-04-26 Thread omalley
Repository: hive
Updated Tags:  refs/tags/storage-release-2.6.0-rc0 [created] 657dfefd5


hive git commit: Preparing for storage-api 2.6.0 release.

2018-04-26 Thread omalley
Repository: hive
Updated Branches:
  refs/heads/storage-branch-2.6 c055e8444 -> 657dfefd5


Preparing for storage-api 2.6.0 release.


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/657dfefd
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/657dfefd
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/657dfefd

Branch: refs/heads/storage-branch-2.6
Commit: 657dfefd5c56bbb781ff69e7c40747da3df31786
Parents: c055e84
Author: Owen O'Malley 
Authored: Thu Apr 26 08:17:21 2018 -0700
Committer: Owen O'Malley 
Committed: Thu Apr 26 08:17:21 2018 -0700

--
 storage-api/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/657dfefd/storage-api/pom.xml
--
diff --git a/storage-api/pom.xml b/storage-api/pom.xml
index d768f3f..89795bf 100644
--- a/storage-api/pom.xml
+++ b/storage-api/pom.xml
@@ -25,7 +25,7 @@
 
   org.apache.hive
   hive-storage-api
-  2.6.0-SNAPSHOT
+  2.6.0
   jar
   Hive Storage API
 



hive git commit: Preparing for storage-api 2.7 development

2018-04-26 Thread omalley
Repository: hive
Updated Branches:
  refs/heads/master 087ef7b63 -> 23d20e649


Preparing for storage-api 2.7 development


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/23d20e64
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/23d20e64
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/23d20e64

Branch: refs/heads/master
Commit: 23d20e649424bf10075590e51bc41e25bc4cc625
Parents: 087ef7b
Author: Owen O'Malley 
Authored: Thu Apr 26 08:08:09 2018 -0700
Committer: Owen O'Malley 
Committed: Thu Apr 26 08:08:09 2018 -0700

--
 pom.xml  | 2 +-
 standalone-metastore/pom.xml | 2 +-
 storage-api/pom.xml  | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/23d20e64/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 50f416e..afcf76e 100644
--- a/pom.xml
+++ b/pom.xml
@@ -193,7 +193,7 @@
 1.0.1
 1.7.10
 4.0.4
-2.6.0-SNAPSHOT
+2.7.0-SNAPSHOT
 0.9.1
 0.92.0-incubating
 2.2.0

http://git-wip-us.apache.org/repos/asf/hive/blob/23d20e64/standalone-metastore/pom.xml
--
diff --git a/standalone-metastore/pom.xml b/standalone-metastore/pom.xml
index 10b1bfa..c9eec9d 100644
--- a/standalone-metastore/pom.xml
+++ b/standalone-metastore/pom.xml
@@ -83,7 +83,7 @@
 1.4.3
 2.5.0
 1.3.0
-2.6.0-SNAPSHOT
+2.7.0-SNAPSHOT
 
 
 you-must-set-this-to-run-thrift

http://git-wip-us.apache.org/repos/asf/hive/blob/23d20e64/storage-api/pom.xml
--
diff --git a/storage-api/pom.xml b/storage-api/pom.xml
index d768f3f..9a03bb3 100644
--- a/storage-api/pom.xml
+++ b/storage-api/pom.xml
@@ -25,7 +25,7 @@
 
   org.apache.hive
   hive-storage-api
-  2.6.0-SNAPSHOT
+  2.7.0-SNAPSHOT
   jar
   Hive Storage API
 



[34/50] [abbrv] hive git commit: HIVE-19186 : Multi Table INSERT statements query has a flaw for partitioned table when INSERT INTO and INSERT OVERWRITE are used (Steve Yeom via Ashutosh Chauhan)

2018-04-26 Thread omalley
HIVE-19186 : Multi Table INSERT statements query has a flaw for partitioned 
table when INSERT INTO and INSERT OVERWRITE are used (Steve Yeom via Ashutosh 
Chauhan)

Signed-off-by: Ashutosh Chauhan 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/63923e72
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/63923e72
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/63923e72

Branch: refs/heads/storage-branch-2.6
Commit: 63923e72e8e07ddfcec60ed519b42144f612fc00
Parents: e909448
Author: Steve Yeom 
Authored: Thu Apr 12 13:19:00 2018 -0700
Committer: Ashutosh Chauhan 
Committed: Tue Apr 24 19:31:18 2018 -0700

--
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java  |   5 +-
 .../clientpositive/multi_insert_partitioned.q   |  56 ++
 .../multi_insert_partitioned.q.out  | 573 +++
 3 files changed, 632 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/63923e72/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
index 88b5ed8..a00f927 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
@@ -7363,10 +7363,11 @@ public class SemanticAnalyzer extends 
BaseSemanticAnalyzer {
 throw new SemanticException("Failed to allocate write Id", ex);
   }
   ltd = new LoadTableDesc(queryTmpdir, table_desc, dest_part.getSpec(), 
acidOp, writeId);
+  // For the current context for generating File Sink Operator, it is 
either INSERT INTO or INSERT OVERWRITE.
+  // So the next line works.
+  boolean isInsertInto = 
!qb.getParseInfo().isDestToOpTypeInsertOverwrite(dest);
   // For Acid table, Insert Overwrite shouldn't replace the table content. 
We keep the old
   // deltas and base and leave them up to the cleaner to clean up
-  boolean isInsertInto = qb.getParseInfo().isInsertIntoTable(
-  dest_tab.getDbName(), dest_tab.getTableName());
   LoadFileType loadType = (!isInsertInto && !destTableIsTransactional)
   ? LoadFileType.REPLACE_ALL : LoadFileType.KEEP_EXISTING;
   ltd.setLoadFileType(loadType);

http://git-wip-us.apache.org/repos/asf/hive/blob/63923e72/ql/src/test/queries/clientpositive/multi_insert_partitioned.q
--
diff --git a/ql/src/test/queries/clientpositive/multi_insert_partitioned.q 
b/ql/src/test/queries/clientpositive/multi_insert_partitioned.q
new file mode 100644
index 000..cd91c46
--- /dev/null
+++ b/ql/src/test/queries/clientpositive/multi_insert_partitioned.q
@@ -0,0 +1,56 @@
+set hive.stats.column.autogather=false;
+set hive.mapred.mode=nonstrict;
+set hive.explain.user=false;
+set hive.exec.dynamic.partition.mode=nonstrict;
+
+drop table intermediate;
+
+create table intermediate(key int) partitioned by (p int) stored as orc;
+insert into table intermediate partition(p='455') select distinct key from src 
where key >= 0 order by key desc limit 2;
+insert into table intermediate partition(p='456') select distinct key from src 
where key is not null order by key asc limit 2;
+insert into table intermediate partition(p='457') select distinct key from src 
where key >= 100 order by key asc limit 2;
+
+drop table multi_partitioned;
+
+create table multi_partitioned (key int, key2 int) partitioned by (p int);
+
+from intermediate
+insert into table multi_partitioned partition(p=1) select p, key
+insert into table multi_partitioned partition(p=2) select key, p;
+
+select * from multi_partitioned order by key, key2, p;
+desc formatted multi_partitioned;
+
+from intermediate
+insert overwrite table multi_partitioned partition(p=2) select p, key
+insert overwrite table multi_partitioned partition(p=1) select key, p;
+
+select * from multi_partitioned order by key, key2, p;
+desc formatted multi_partitioned;
+
+from intermediate
+insert into table multi_partitioned partition(p=2) select p, key
+insert overwrite table multi_partitioned partition(p=1) select key, p;
+
+select * from multi_partitioned order by key, key2, p;
+desc formatted multi_partitioned;
+
+from intermediate
+insert into table multi_partitioned partition(p) select p, key, p
+insert into table multi_partitioned partition(p=1) select key, p;
+
+select key, key2, p from multi_partitioned order by key, key2, p;
+desc formatted multi_partitioned;
+
+from intermediate
+insert into table multi_partitioned partition(p) select p, key, 1
+insert into table 

[44/50] [abbrv] hive git commit: HIVE-19271: TestMiniLlapLocalCliDriver default_constraint and check_constraint failing (Vineet Garg, reviewed by Ashutosh Chauhan)

2018-04-26 Thread omalley
HIVE-19271: TestMiniLlapLocalCliDriver default_constraint and check_constraint 
failing (Vineet Garg, reviewed by Ashutosh Chauhan)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/39ecb6a9
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/39ecb6a9
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/39ecb6a9

Branch: refs/heads/storage-branch-2.6
Commit: 39ecb6a989c96a0c68fac0d13cac5e274be98612
Parents: 0b6967e
Author: Vineet Garg 
Authored: Wed Apr 25 12:27:36 2018 -0700
Committer: Vineet Garg 
Committed: Wed Apr 25 12:27:36 2018 -0700

--
 .../clientpositive/llap/check_constraint.q.out  |  3 ++-
 .../llap/default_constraint.q.out   | 25 +---
 2 files changed, 18 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/39ecb6a9/ql/src/test/results/clientpositive/llap/check_constraint.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/check_constraint.q.out 
b/ql/src/test/results/clientpositive/llap/check_constraint.q.out
index ce427d1..f06dcd1 100644
--- a/ql/src/test/results/clientpositive/llap/check_constraint.q.out
+++ b/ql/src/test/results/clientpositive/llap/check_constraint.q.out
@@ -3551,7 +3551,8 @@ Retention:0
  A masked pattern was here 
 Table Type:MANAGED_TABLE
 Table Parameters:   
-   COLUMN_STATS_ACCURATE   {}  
+   numFiles1   
+   totalSize   5   
transactional   true
transactional_propertiesinsert_only 
  A masked pattern was here 

http://git-wip-us.apache.org/repos/asf/hive/blob/39ecb6a9/ql/src/test/results/clientpositive/llap/default_constraint.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/default_constraint.q.out 
b/ql/src/test/results/clientpositive/llap/default_constraint.q.out
index 95302da..dd8cc4f 100644
--- a/ql/src/test/results/clientpositive/llap/default_constraint.q.out
+++ b/ql/src/test/results/clientpositive/llap/default_constraint.q.out
@@ -1148,10 +1148,10 @@ STAGE PLANS:
   Select Operator
 expressions: UDFToInteger(UDFToDouble(4)) (type: int), 
UDFToBoolean('true') (type: boolean), UDFToInteger(5.67) (type: int), 
UDFToByte(45) (type: tinyint), UDFToFloat(45.4) (type: float), UDFToLong(567) 
(type: bigint), UDFToShort(88) (type: smallint), CAST( CURRENT_TIMESTAMP() AS 
varchar(50)) (type: varchar(50)), UDFToString(CAST( CURRENT_USER() AS 
varchar(50))) (type: string), CAST( '2016-01-03 12:26:34 America/Los_Angeles' 
AS timestamp with local time zone) (type: timestamp with local time zone), 
CAST( '2016-01-01 12:01:01' AS TIMESTAMP) (type: timestamp), CAST( 4.5 AS 
decimal(8,2)) (type: decimal(8,2)), UDFToDouble(5) (type: double), CAST( col1 
AS CHAR(2)) (type: char(2))
 outputColumnNames: _col0, _col1, _col2, _col3, _col4, 
_col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13
-Statistics: Num rows: 1 Data size: 523 Basic stats: 
COMPLETE Column stats: COMPLETE
+Statistics: Num rows: 1 Data size: 522 Basic stats: 
COMPLETE Column stats: COMPLETE
 File Output Operator
   compressed: false
-  Statistics: Num rows: 1 Data size: 523 Basic stats: 
COMPLETE Column stats: COMPLETE
+  Statistics: Num rows: 1 Data size: 522 Basic stats: 
COMPLETE Column stats: COMPLETE
   table:
   input format: 
org.apache.hadoop.mapred.TextInputFormat
   output format: 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
@@ -1490,7 +1490,8 @@ Retention:0
  A masked pattern was here 
 Table Type:MANAGED_TABLE
 Table Parameters:   
-   COLUMN_STATS_ACCURATE   {}  
+   numFiles1   
+   totalSize   1063
transactional   true
transactional_propertiesdefault 
  A masked pattern was here 
@@ -1657,8 +1658,9 @@ Retention:0
  A masked pattern was here 
 Table Type:MANAGED_TABLE
 Table Parameters:   
-   COLUMN_STATS_ACCURATE   {}  
  A masked pattern was here 
+   numFiles2   
+   totalSize  

[26/50] [abbrv] hive git commit: HIVE-19171: Persist runtime statistics in metastore (Zoltan Haindrich reviewed by Ashutosh Chauhan)

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/56c3a957/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
--
diff --git a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h 
b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
index 802d8e3..da66951 100644
--- a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
+++ b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
@@ -224,6 +224,8 @@ class ThriftHiveMetastoreIf : virtual public  
::facebook::fb303::FacebookService
   virtual void get_serde(SerDeInfo& _return, const GetSerdeRequest& rqst) = 0;
   virtual void get_lock_materialization_rebuild(LockResponse& _return, const 
std::string& dbName, const std::string& tableName, const int64_t txnId) = 0;
   virtual bool heartbeat_lock_materialization_rebuild(const std::string& 
dbName, const std::string& tableName, const int64_t txnId) = 0;
+  virtual void add_runtime_stats(const RuntimeStat& stat) = 0;
+  virtual void get_runtime_stats(std::vector<RuntimeStat> & _return, const 
GetRuntimeStatsRequest& rqst) = 0;
 };
 
 class ThriftHiveMetastoreIfFactory : virtual public  
::facebook::fb303::FacebookServiceIfFactory {
@@ -887,6 +889,12 @@ class ThriftHiveMetastoreNull : virtual public 
ThriftHiveMetastoreIf , virtual p
 bool _return = false;
 return _return;
   }
+  void add_runtime_stats(const RuntimeStat& /* stat */) {
+return;
+  }
+  void get_runtime_stats(std::vector<RuntimeStat> & /* _return */, const 
GetRuntimeStatsRequest& /* rqst */) {
+return;
+  }
 };
 
 typedef struct _ThriftHiveMetastore_getMetaConf_args__isset {
@@ -25644,6 +25652,222 @@ class 
ThriftHiveMetastore_heartbeat_lock_materialization_rebuild_presult {
 
 };
 
+typedef struct _ThriftHiveMetastore_add_runtime_stats_args__isset {
+  _ThriftHiveMetastore_add_runtime_stats_args__isset() : stat(false) {}
+  bool stat :1;
+} _ThriftHiveMetastore_add_runtime_stats_args__isset;
+
+class ThriftHiveMetastore_add_runtime_stats_args {
+ public:
+
+  ThriftHiveMetastore_add_runtime_stats_args(const 
ThriftHiveMetastore_add_runtime_stats_args&);
+  ThriftHiveMetastore_add_runtime_stats_args& operator=(const 
ThriftHiveMetastore_add_runtime_stats_args&);
+  ThriftHiveMetastore_add_runtime_stats_args() {
+  }
+
+  virtual ~ThriftHiveMetastore_add_runtime_stats_args() throw();
+  RuntimeStat stat;
+
+  _ThriftHiveMetastore_add_runtime_stats_args__isset __isset;
+
+  void __set_stat(const RuntimeStat& val);
+
+  bool operator == (const ThriftHiveMetastore_add_runtime_stats_args & rhs) 
const
+  {
+if (!(stat == rhs.stat))
+  return false;
+return true;
+  }
+  bool operator != (const ThriftHiveMetastore_add_runtime_stats_args &rhs) 
const {
+return !(*this == rhs);
+  }
+
+  bool operator < (const ThriftHiveMetastore_add_runtime_stats_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class ThriftHiveMetastore_add_runtime_stats_pargs {
+ public:
+
+
+  virtual ~ThriftHiveMetastore_add_runtime_stats_pargs() throw();
+  const RuntimeStat* stat;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _ThriftHiveMetastore_add_runtime_stats_result__isset {
+  _ThriftHiveMetastore_add_runtime_stats_result__isset() : o1(false) {}
+  bool o1 :1;
+} _ThriftHiveMetastore_add_runtime_stats_result__isset;
+
+class ThriftHiveMetastore_add_runtime_stats_result {
+ public:
+
+  ThriftHiveMetastore_add_runtime_stats_result(const 
ThriftHiveMetastore_add_runtime_stats_result&);
+  ThriftHiveMetastore_add_runtime_stats_result& operator=(const 
ThriftHiveMetastore_add_runtime_stats_result&);
+  ThriftHiveMetastore_add_runtime_stats_result() {
+  }
+
+  virtual ~ThriftHiveMetastore_add_runtime_stats_result() throw();
+  MetaException o1;
+
+  _ThriftHiveMetastore_add_runtime_stats_result__isset __isset;
+
+  void __set_o1(const MetaException& val);
+
+  bool operator == (const ThriftHiveMetastore_add_runtime_stats_result & rhs) 
const
+  {
+if (!(o1 == rhs.o1))
+  return false;
+return true;
+  }
+  bool operator != (const ThriftHiveMetastore_add_runtime_stats_result &rhs) 
const {
+return !(*this == rhs);
+  }
+
+  bool operator < (const ThriftHiveMetastore_add_runtime_stats_result & ) 
const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _ThriftHiveMetastore_add_runtime_stats_presult__isset {
+  _ThriftHiveMetastore_add_runtime_stats_presult__isset() : o1(false) {}
+  bool o1 :1;
+} _ThriftHiveMetastore_add_runtime_stats_presult__isset;
+
+class ThriftHiveMetastore_add_runtime_stats_presult {
+ public:
+
+
+  virtual ~ThriftHiveMetastore_add_runtime_stats_presult() throw();
+  MetaException o1;
+
+  

[32/50] [abbrv] hive git commit: HIVE-19232: results_cache_invalidation2 is failing (Jason Dere, reviewed by Vineet Garg)

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/e9094483/ql/src/test/results/clientpositive/results_cache_capacity.q.out
--
diff --git a/ql/src/test/results/clientpositive/results_cache_capacity.q.out 
b/ql/src/test/results/clientpositive/results_cache_capacity.q.out
deleted file mode 100644
index 3951cc2..0000000
--- a/ql/src/test/results/clientpositive/results_cache_capacity.q.out
+++ /dev/null
@@ -1,238 +0,0 @@
-PREHOOK: query: select key, count(*) from src where key = 0 group by key
-PREHOOK: type: QUERY
-PREHOOK: Input: default@src
- A masked pattern was here 
-POSTHOOK: query: select key, count(*) from src where key = 0 group by key
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@src
- A masked pattern was here 
-0  3
-test.comment=Q1 should be cached
-PREHOOK: query: explain
-select key, count(*) from src where key = 0 group by key
-PREHOOK: type: QUERY
-POSTHOOK: query: explain
-select key, count(*) from src where key = 0 group by key
-POSTHOOK: type: QUERY
-STAGE DEPENDENCIES:
-  Stage-0 is a root stage
-
-STAGE PLANS:
-  Stage: Stage-0
-Fetch Operator
-  limit: -1
-  Processor Tree:
-ListSink
-  Cached Query Result: true
-
-PREHOOK: query: select key, count(*) from src where key = 2 group by key
-PREHOOK: type: QUERY
-PREHOOK: Input: default@src
- A masked pattern was here 
-POSTHOOK: query: select key, count(*) from src where key = 2 group by key
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@src
- A masked pattern was here 
-2  1
-test.comment=Q2 should now be cached
-PREHOOK: query: explain
-select key, count(*) from src where key = 2 group by key
-PREHOOK: type: QUERY
-POSTHOOK: query: explain
-select key, count(*) from src where key = 2 group by key
-POSTHOOK: type: QUERY
-STAGE DEPENDENCIES:
-  Stage-0 is a root stage
-
-STAGE PLANS:
-  Stage: Stage-0
-Fetch Operator
-  limit: -1
-  Processor Tree:
-ListSink
-  Cached Query Result: true
-
-test.comment=Q1 should still be cached
-PREHOOK: query: explain
-select key, count(*) from src where key = 0 group by key
-PREHOOK: type: QUERY
-POSTHOOK: query: explain
-select key, count(*) from src where key = 0 group by key
-POSTHOOK: type: QUERY
-STAGE DEPENDENCIES:
-  Stage-0 is a root stage
-
-STAGE PLANS:
-  Stage: Stage-0
-Fetch Operator
-  limit: -1
-  Processor Tree:
-ListSink
-  Cached Query Result: true
-
-PREHOOK: query: select key, count(*) from src where key = 4 group by key
-PREHOOK: type: QUERY
-PREHOOK: Input: default@src
- A masked pattern was here 
-POSTHOOK: query: select key, count(*) from src where key = 4 group by key
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@src
- A masked pattern was here 
-4  1
-test.comment=Q3 should now be cached
-PREHOOK: query: explain
-select key, count(*) from src where key = 4 group by key
-PREHOOK: type: QUERY
-POSTHOOK: query: explain
-select key, count(*) from src where key = 4 group by key
-POSTHOOK: type: QUERY
-STAGE DEPENDENCIES:
-  Stage-0 is a root stage
-
-STAGE PLANS:
-  Stage: Stage-0
-Fetch Operator
-  limit: -1
-  Processor Tree:
-ListSink
-  Cached Query Result: true
-
-test.comment=Q1 should still be cached
-PREHOOK: query: explain
-select key, count(*) from src where key = 0 group by key
-PREHOOK: type: QUERY
-POSTHOOK: query: explain
-select key, count(*) from src where key = 0 group by key
-POSTHOOK: type: QUERY
-STAGE DEPENDENCIES:
-  Stage-0 is a root stage
-
-STAGE PLANS:
-  Stage: Stage-0
-Fetch Operator
-  limit: -1
-  Processor Tree:
-ListSink
-  Cached Query Result: true
-
-test.comment=Q2 should no longer be in the cache
-PREHOOK: query: explain
-select key, count(*) from src where key = 2 group by key
-PREHOOK: type: QUERY
-POSTHOOK: query: explain
-select key, count(*) from src where key = 2 group by key
-POSTHOOK: type: QUERY
-STAGE DEPENDENCIES:
-  Stage-1 is a root stage
-  Stage-0 depends on stages: Stage-1
-
-STAGE PLANS:
-  Stage: Stage-1
-Map Reduce
-  Map Operator Tree:
-  TableScan
-alias: src
-Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE 
Column stats: NONE
-Filter Operator
-  predicate: (UDFToDouble(key) = 2.0D) (type: boolean)
-  Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE 
Column stats: NONE
-  Group By Operator
-aggregations: count()
-keys: key (type: string)
-mode: hash
-outputColumnNames: _col0, _col1
-Statistics: Num rows: 250 Data size: 2656 Basic stats: 
COMPLETE Column stats: NONE
-Reduce Output Operator
-  key expressions: _col0 (type: string)
-  sort order: +
-  Map-reduce partition columns: _col0 (type: string)
-  

[07/50] [abbrv] hive git commit: HIVE-19171 : Persist runtime statistics in metastore (Zoltan Haindrich via Ashutosh Chauhan)

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/b3e2d8a0/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
--
diff --git 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
index 125d5a7..184ecb6 100644
--- 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
+++ 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
@@ -55,6 +55,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantLock;
 import java.util.regex.Pattern;
+import java.util.stream.Collectors;
 
 import javax.jdo.JDOCanRetryException;
 import javax.jdo.JDODataStoreException;
@@ -83,7 +84,6 @@ import org.apache.hadoop.hive.common.StatsSetupConst;
 import 
org.apache.hadoop.hive.metastore.MetaStoreDirectSql.SqlFilterForPushdown;
 import org.apache.hadoop.hive.metastore.api.AggrStats;
 import org.apache.hadoop.hive.metastore.api.AlreadyExistsException;
-import org.apache.hadoop.hive.metastore.api.BasicTxnInfo;
 import org.apache.hadoop.hive.metastore.api.Catalog;
 import org.apache.hadoop.hive.metastore.api.ColumnStatistics;
 import org.apache.hadoop.hive.metastore.api.ColumnStatisticsDesc;
@@ -124,6 +124,7 @@ import org.apache.hadoop.hive.metastore.api.ResourceType;
 import org.apache.hadoop.hive.metastore.api.ResourceUri;
 import org.apache.hadoop.hive.metastore.api.Role;
 import org.apache.hadoop.hive.metastore.api.RolePrincipalGrant;
+import org.apache.hadoop.hive.metastore.api.RuntimeStat;
 import org.apache.hadoop.hive.metastore.api.SQLCheckConstraint;
 import org.apache.hadoop.hive.metastore.api.SQLDefaultConstraint;
 import org.apache.hadoop.hive.metastore.api.SQLForeignKey;
@@ -186,6 +187,7 @@ import 
org.apache.hadoop.hive.metastore.model.MPartitionPrivilege;
 import org.apache.hadoop.hive.metastore.model.MResourceUri;
 import org.apache.hadoop.hive.metastore.model.MRole;
 import org.apache.hadoop.hive.metastore.model.MRoleMap;
+import org.apache.hadoop.hive.metastore.model.MRuntimeStat;
 import org.apache.hadoop.hive.metastore.model.MSchemaVersion;
 import org.apache.hadoop.hive.metastore.model.MSerDeInfo;
 import org.apache.hadoop.hive.metastore.model.MStorageDescriptor;
@@ -210,7 +212,6 @@ import org.apache.hadoop.hive.metastore.utils.FileUtils;
 import org.apache.hadoop.hive.metastore.utils.JavaUtils;
 import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils;
 import org.apache.hadoop.hive.metastore.utils.ObjectPair;
-import org.apache.thrift.TDeserializer;
 import org.apache.thrift.TException;
 import org.datanucleus.AbstractNucleusContext;
 import org.datanucleus.ClassLoaderResolver;
@@ -809,7 +810,9 @@ public class ObjectStore implements RawStore, Configurable {
   pm.makePersistent(mCat);
   committed = commitTransaction();
 } finally {
-  if (!committed) rollbackTransaction();
+  if (!committed) {
+rollbackTransaction();
+  }
 }
   }
 
@@ -832,7 +835,9 @@ public class ObjectStore implements RawStore, Configurable {
   pm.makePersistent(mCat);
   committed = commitTransaction();
 } finally {
-  if (!committed) rollbackTransaction();
+  if (!committed) {
+rollbackTransaction();
+  }
 }
   }
 
@@ -840,7 +845,9 @@ public class ObjectStore implements RawStore, Configurable {
   public Catalog getCatalog(String catalogName) throws NoSuchObjectException, 
MetaException {
 LOG.debug("Fetching catalog " + catalogName);
 MCatalog mCat = getMCatalog(catalogName);
-if (mCat == null) throw new NoSuchObjectException("No catalog " + 
catalogName);
+if (mCat == null) {
+  throw new NoSuchObjectException("No catalog " + catalogName);
+}
 return mCatToCat(mCat);
   }
 
@@ -874,11 +881,15 @@ public class ObjectStore implements RawStore, 
Configurable {
   openTransaction();
   MCatalog mCat = getMCatalog(catalogName);
   pm.retrieve(mCat);
-  if (mCat == null) throw new NoSuchObjectException("No catalog " + 
catalogName);
+  if (mCat == null) {
+throw new NoSuchObjectException("No catalog " + catalogName);
+  }
   pm.deletePersistent(mCat);
   committed = commitTransaction();
 } finally {
-  if (!committed) rollbackTransaction();
+  if (!committed) {
+rollbackTransaction();
+  }
 }
   }
 
@@ -903,14 +914,18 @@ public class ObjectStore implements RawStore, 
Configurable {
   private MCatalog catToMCat(Catalog cat) {
 MCatalog mCat = new MCatalog();
 mCat.setName(normalizeIdentifier(cat.getName()));
-if (cat.isSetDescription()) mCat.setDescription(cat.getDescription());
+if (cat.isSetDescription()) {
+  mCat.setDescription(cat.getDescription());
+}
 
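The brace-only hunks above all touch the same guarded-transaction idiom that ObjectStore wraps around every metastore write. A minimal sketch of that idiom, assuming it runs inside ObjectStore where openTransaction(), commitTransaction() and rollbackTransaction() are defined; the wrapper method itself is illustrative, not part of the commit:

    // Illustrative sketch of the pattern the hunks above reformat.
    // commitTransaction() returns false when an enclosing transaction has
    // already failed, so the finally block must roll back in that case too.
    private void persistCatalog(MCatalog mCat) {
      boolean committed = false;
      try {
        openTransaction();                 // begin (possibly nested) JDO txn
        pm.makePersistent(mCat);           // the actual write
        committed = commitTransaction();
      } finally {
        if (!committed) {
          rollbackTransaction();           // braces added by this commit
        }
      }
    }

The commit only adds braces around the single-statement if bodies; behavior is unchanged.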

[17/50] [abbrv] hive git commit: Revert "HIVE-19171 : Persist runtime statistics in metastore (Zoltan Haindrich via Ashutosh Chauhan)"

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/f0199500/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
--
diff --git a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h 
b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
index da66951..802d8e3 100644
--- a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
+++ b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
@@ -224,8 +224,6 @@ class ThriftHiveMetastoreIf : virtual public  
::facebook::fb303::FacebookService
   virtual void get_serde(SerDeInfo& _return, const GetSerdeRequest& rqst) = 0;
   virtual void get_lock_materialization_rebuild(LockResponse& _return, const 
std::string& dbName, const std::string& tableName, const int64_t txnId) = 0;
   virtual bool heartbeat_lock_materialization_rebuild(const std::string& 
dbName, const std::string& tableName, const int64_t txnId) = 0;
-  virtual void add_runtime_stats(const RuntimeStat& stat) = 0;
-  virtual void get_runtime_stats(std::vector<RuntimeStat> & _return, const 
GetRuntimeStatsRequest& rqst) = 0;
 };
 
 class ThriftHiveMetastoreIfFactory : virtual public  
::facebook::fb303::FacebookServiceIfFactory {
@@ -889,12 +887,6 @@ class ThriftHiveMetastoreNull : virtual public 
ThriftHiveMetastoreIf , virtual p
 bool _return = false;
 return _return;
   }
-  void add_runtime_stats(const RuntimeStat& /* stat */) {
-return;
-  }
-  void get_runtime_stats(std::vector<RuntimeStat> & /* _return */, const 
GetRuntimeStatsRequest& /* rqst */) {
-return;
-  }
 };
 
 typedef struct _ThriftHiveMetastore_getMetaConf_args__isset {
@@ -25652,222 +25644,6 @@ class 
ThriftHiveMetastore_heartbeat_lock_materialization_rebuild_presult {
 
 };
 
-typedef struct _ThriftHiveMetastore_add_runtime_stats_args__isset {
-  _ThriftHiveMetastore_add_runtime_stats_args__isset() : stat(false) {}
-  bool stat :1;
-} _ThriftHiveMetastore_add_runtime_stats_args__isset;
-
-class ThriftHiveMetastore_add_runtime_stats_args {
- public:
-
-  ThriftHiveMetastore_add_runtime_stats_args(const 
ThriftHiveMetastore_add_runtime_stats_args&);
-  ThriftHiveMetastore_add_runtime_stats_args& operator=(const 
ThriftHiveMetastore_add_runtime_stats_args&);
-  ThriftHiveMetastore_add_runtime_stats_args() {
-  }
-
-  virtual ~ThriftHiveMetastore_add_runtime_stats_args() throw();
-  RuntimeStat stat;
-
-  _ThriftHiveMetastore_add_runtime_stats_args__isset __isset;
-
-  void __set_stat(const RuntimeStat& val);
-
-  bool operator == (const ThriftHiveMetastore_add_runtime_stats_args & rhs) 
const
-  {
-if (!(stat == rhs.stat))
-  return false;
-return true;
-  }
-  bool operator != (const ThriftHiveMetastore_add_runtime_stats_args &rhs) 
const {
-return !(*this == rhs);
-  }
-
-  bool operator < (const ThriftHiveMetastore_add_runtime_stats_args & ) const;
-
-  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
-  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
-
-};
-
-
-class ThriftHiveMetastore_add_runtime_stats_pargs {
- public:
-
-
-  virtual ~ThriftHiveMetastore_add_runtime_stats_pargs() throw();
-  const RuntimeStat* stat;
-
-  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
-
-};
-
-typedef struct _ThriftHiveMetastore_add_runtime_stats_result__isset {
-  _ThriftHiveMetastore_add_runtime_stats_result__isset() : o1(false) {}
-  bool o1 :1;
-} _ThriftHiveMetastore_add_runtime_stats_result__isset;
-
-class ThriftHiveMetastore_add_runtime_stats_result {
- public:
-
-  ThriftHiveMetastore_add_runtime_stats_result(const 
ThriftHiveMetastore_add_runtime_stats_result&);
-  ThriftHiveMetastore_add_runtime_stats_result& operator=(const 
ThriftHiveMetastore_add_runtime_stats_result&);
-  ThriftHiveMetastore_add_runtime_stats_result() {
-  }
-
-  virtual ~ThriftHiveMetastore_add_runtime_stats_result() throw();
-  MetaException o1;
-
-  _ThriftHiveMetastore_add_runtime_stats_result__isset __isset;
-
-  void __set_o1(const MetaException& val);
-
-  bool operator == (const ThriftHiveMetastore_add_runtime_stats_result & rhs) 
const
-  {
-if (!(o1 == rhs.o1))
-  return false;
-return true;
-  }
-  bool operator != (const ThriftHiveMetastore_add_runtime_stats_result &rhs) 
const {
-return !(*this == rhs);
-  }
-
-  bool operator < (const ThriftHiveMetastore_add_runtime_stats_result & ) 
const;
-
-  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
-  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
-
-};
-
-typedef struct _ThriftHiveMetastore_add_runtime_stats_presult__isset {
-  _ThriftHiveMetastore_add_runtime_stats_presult__isset() : o1(false) {}
-  bool o1 :1;
-} _ThriftHiveMetastore_add_runtime_stats_presult__isset;
-
-class ThriftHiveMetastore_add_runtime_stats_presult {
- public:
-
-
-  virtual ~ThriftHiveMetastore_add_runtime_stats_presult() throw();
-  MetaException o1;
-
-  

[08/50] [abbrv] hive git commit: HIVE-19171 : Persist runtime statistics in metastore (Zoltan Haindrich via Ashutosh Chauhan)

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/b3e2d8a0/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php 
b/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
index 9c94942..1c1d58e 100644
--- 
a/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
+++ 
b/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
@@ -1528,6 +1528,17 @@ interface ThriftHiveMetastoreIf extends 
\FacebookServiceIf {
* @return bool
*/
   public function heartbeat_lock_materialization_rebuild($dbName, $tableName, 
$txnId);
+  /**
+   * @param \metastore\RuntimeStat $stat
+   * @throws \metastore\MetaException
+   */
+  public function add_runtime_stats(\metastore\RuntimeStat $stat);
+  /**
+   * @param \metastore\GetRuntimeStatsRequest $rqst
+   * @return \metastore\RuntimeStat[]
+   * @throws \metastore\MetaException
+   */
+  public function get_runtime_stats(\metastore\GetRuntimeStatsRequest $rqst);
 }
 
 class ThriftHiveMetastoreClient extends \FacebookServiceClient implements 
\metastore\ThriftHiveMetastoreIf {
@@ -13007,6 +13018,111 @@ class ThriftHiveMetastoreClient extends 
\FacebookServiceClient implements \metas
 throw new \Exception("heartbeat_lock_materialization_rebuild failed: 
unknown result");
   }
 
+  public function add_runtime_stats(\metastore\RuntimeStat $stat)
+  {
+$this->send_add_runtime_stats($stat);
+$this->recv_add_runtime_stats();
+  }
+
+  public function send_add_runtime_stats(\metastore\RuntimeStat $stat)
+  {
+$args = new \metastore\ThriftHiveMetastore_add_runtime_stats_args();
+$args->stat = $stat;
+$bin_accel = ($this->output_ instanceof TBinaryProtocolAccelerated) && 
function_exists('thrift_protocol_write_binary');
+if ($bin_accel)
+{
+  thrift_protocol_write_binary($this->output_, 'add_runtime_stats', 
TMessageType::CALL, $args, $this->seqid_, $this->output_->isStrictWrite());
+}
+else
+{
+  $this->output_->writeMessageBegin('add_runtime_stats', 
TMessageType::CALL, $this->seqid_);
+  $args->write($this->output_);
+  $this->output_->writeMessageEnd();
+  $this->output_->getTransport()->flush();
+}
+  }
+
+  public function recv_add_runtime_stats()
+  {
+$bin_accel = ($this->input_ instanceof TBinaryProtocolAccelerated) && 
function_exists('thrift_protocol_read_binary');
+if ($bin_accel) $result = thrift_protocol_read_binary($this->input_, 
'\metastore\ThriftHiveMetastore_add_runtime_stats_result', 
$this->input_->isStrictRead());
+else
+{
+  $rseqid = 0;
+  $fname = null;
+  $mtype = 0;
+
+  $this->input_->readMessageBegin($fname, $mtype, $rseqid);
+  if ($mtype == TMessageType::EXCEPTION) {
+$x = new TApplicationException();
+$x->read($this->input_);
+$this->input_->readMessageEnd();
+throw $x;
+  }
+  $result = new \metastore\ThriftHiveMetastore_add_runtime_stats_result();
+  $result->read($this->input_);
+  $this->input_->readMessageEnd();
+}
+if ($result->o1 !== null) {
+  throw $result->o1;
+}
+return;
+  }
+
+  public function get_runtime_stats(\metastore\GetRuntimeStatsRequest $rqst)
+  {
+$this->send_get_runtime_stats($rqst);
+return $this->recv_get_runtime_stats();
+  }
+
+  public function send_get_runtime_stats(\metastore\GetRuntimeStatsRequest 
$rqst)
+  {
+$args = new \metastore\ThriftHiveMetastore_get_runtime_stats_args();
+$args->rqst = $rqst;
+$bin_accel = ($this->output_ instanceof TBinaryProtocolAccelerated) && 
function_exists('thrift_protocol_write_binary');
+if ($bin_accel)
+{
+  thrift_protocol_write_binary($this->output_, 'get_runtime_stats', 
TMessageType::CALL, $args, $this->seqid_, $this->output_->isStrictWrite());
+}
+else
+{
+  $this->output_->writeMessageBegin('get_runtime_stats', 
TMessageType::CALL, $this->seqid_);
+  $args->write($this->output_);
+  $this->output_->writeMessageEnd();
+  $this->output_->getTransport()->flush();
+}
+  }
+
+  public function recv_get_runtime_stats()
+  {
+$bin_accel = ($this->input_ instanceof TBinaryProtocolAccelerated) && 
function_exists('thrift_protocol_read_binary');
+if ($bin_accel) $result = thrift_protocol_read_binary($this->input_, 
'\metastore\ThriftHiveMetastore_get_runtime_stats_result', 
$this->input_->isStrictRead());
+else
+{
+  $rseqid = 0;
+  $fname = null;
+  $mtype = 0;
+
+  $this->input_->readMessageBegin($fname, $mtype, $rseqid);
+  if ($mtype == TMessageType::EXCEPTION) {
+$x = new TApplicationException();
+$x->read($this->input_);
+$this->input_->readMessageEnd();
+throw $x;
+  }
+  $result = new \metastore\ThriftHiveMetastore_get_runtime_stats_result();
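
The generated PHP client above splits each RPC into a send_/recv_ pair, typically over the binary protocol; the Java client generated by the same commit has the same shape. A sketch of exercising the new endpoints through the generated Java client. The plain-socket, unsecured connection to localhost:9083 is an assumption for illustration; real callers would go through HiveMetaStoreClient, which adds retries and security on top.

    import java.util.List;
    import org.apache.hadoop.hive.metastore.api.GetRuntimeStatsRequest;
    import org.apache.hadoop.hive.metastore.api.RuntimeStat;
    import org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    public class RuntimeStatsCallSketch {
      public static void main(String[] args) throws Exception {
        // Assumed endpoint; 9083 is the conventional metastore port.
        TTransport transport = new TSocket("localhost", 9083);
        transport.open();
        ThriftHiveMetastore.Client client =
            new ThriftHiveMetastore.Client(new TBinaryProtocol(transport));

        // get_runtime_stats does in one call what the PHP send/recv pair
        // above does in two; add_runtime_stats(stat) is the write side.
        List<RuntimeStat> stats =
            client.get_runtime_stats(new GetRuntimeStatsRequest());
        System.out.println("persisted runtime stat entries: " + stats.size());
        transport.close();
      }
    }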

[12/50] [abbrv] hive git commit: HIVE-19171 : Persist runtime statistics in metastore (Zoltan Haindrich via Ashutosh Chauhan)

2018-04-26 Thread omalley
HIVE-19171 : Persist runtime statistics in metastore (Zoltan Haindrich via 
Ashutosh Chauhan)

Signed-off-by: Ashutosh Chauhan 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/b3e2d8a0
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/b3e2d8a0
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/b3e2d8a0

Branch: refs/heads/storage-branch-2.6
Commit: b3e2d8a05f57a91b12b8347b2763a296c3480d97
Parents: 4f67beb
Author: Zoltan Haindrich 
Authored: Mon Apr 23 13:36:11 2018 -0700
Committer: Ashutosh Chauhan 
Committed: Mon Apr 23 13:36:11 2018 -0700

--
 .../org/apache/hadoop/hive/conf/HiveConf.java   |6 +-
 .../listener/DummyRawStoreFailEvent.java|   19 +
 .../org/apache/hadoop/hive/ql/QTestUtil.java|2 +-
 .../upgrade/derby/056-HIVE-19171.derby.sql  |   10 +
 .../ql/optimizer/signature/OpSignature.java |   19 +-
 .../ql/optimizer/signature/OpTreeSignature.java |   24 +-
 .../signature/OpTreeSignatureFactory.java   |   12 +-
 .../ql/optimizer/signature/RuntimeStatsMap.java |   83 +
 .../signature/RuntimeStatsPersister.java|   54 +
 .../ql/optimizer/signature/SignatureUtils.java  |   22 +-
 .../hadoop/hive/ql/plan/FileSinkDesc.java   |7 +-
 .../hadoop/hive/ql/plan/HashTableSinkDesc.java  |6 +-
 .../apache/hadoop/hive/ql/plan/JoinDesc.java|6 +-
 .../apache/hadoop/hive/ql/plan/MapJoinDesc.java |6 +-
 .../hive/ql/plan/mapper/CachingStatsSource.java |7 +-
 .../hive/ql/plan/mapper/EmptyStatsSource.java   |2 +-
 .../ql/plan/mapper/MetastoreStatsConnector.java |  143 +
 .../hadoop/hive/ql/plan/mapper/PlanMapper.java  |  108 +-
 .../plan/mapper/SimpleRuntimeStatsSource.java   |   37 +-
 .../hive/ql/plan/mapper/StatsSources.java   |   86 +-
 .../hadoop/hive/ql/reexec/ReOptimizePlugin.java |   17 +-
 .../hadoop/hive/ql/stats/OperatorStats.java |   33 +-
 .../signature/TestRuntimeStatsPersistence.java  |  165 +
 .../ql/plan/mapping/TestCounterMapping.java |7 +-
 .../ql/plan/mapping/TestReOptimization.java |   85 +-
 .../apache/hive/service/server/HiveServer2.java |3 +
 .../gen/thrift/gen-cpp/ThriftHiveMetastore.cpp  | 5376 ++
 .../gen/thrift/gen-cpp/ThriftHiveMetastore.h|  259 +
 .../ThriftHiveMetastore_server.skeleton.cpp |   10 +
 .../gen/thrift/gen-cpp/hive_metastore_types.cpp |  376 +-
 .../gen/thrift/gen-cpp/hive_metastore_types.h   |   97 +
 .../metastore/api/GetRuntimeStatsRequest.java   |  283 +
 .../hadoop/hive/metastore/api/RuntimeStat.java  |  600 ++
 .../hive/metastore/api/ThriftHiveMetastore.java | 2584 +++--
 .../gen-php/metastore/ThriftHiveMetastore.php   |  481 ++
 .../src/gen/thrift/gen-php/metastore/Types.php  |  171 +
 .../hive_metastore/ThriftHiveMetastore-remote   |   14 +
 .../hive_metastore/ThriftHiveMetastore.py   |  409 ++
 .../gen/thrift/gen-py/hive_metastore/ttypes.py  |  141 +
 .../gen/thrift/gen-rb/hive_metastore_types.rb   |   37 +
 .../gen/thrift/gen-rb/thrift_hive_metastore.rb  |  119 +
 .../hadoop/hive/metastore/HiveMetaStore.java|  113 +-
 .../hive/metastore/HiveMetaStoreClient.java |  140 +-
 .../hadoop/hive/metastore/IMetaStoreClient.java |9 +
 .../hadoop/hive/metastore/ObjectStore.java  |  248 +-
 .../apache/hadoop/hive/metastore/RawStore.java  |   15 +-
 .../hive/metastore/RuntimeStatsCleanerTask.java |   67 +
 .../hive/metastore/cache/CachedStore.java   |   21 +-
 .../hive/metastore/conf/MetastoreConf.java  |   34 +-
 .../hive/metastore/model/MRuntimeStat.java  |   59 +
 .../src/main/resources/package.jdo  |   14 +
 .../main/sql/derby/hive-schema-3.0.0.derby.sql  |   10 +
 .../sql/derby/upgrade-2.3.0-to-3.0.0.derby.sql  |   10 +
 .../main/sql/mssql/hive-schema-3.0.0.mssql.sql  |9 +
 .../sql/mssql/upgrade-2.3.0-to-3.0.0.mssql.sql  |9 +
 .../main/sql/mysql/hive-schema-3.0.0.mysql.sql  |   10 +
 .../sql/mysql/upgrade-2.3.0-to-3.0.0.mysql.sql  |9 +
 .../sql/oracle/hive-schema-3.0.0.oracle.sql |   10 +
 .../oracle/upgrade-2.3.0-to-3.0.0.oracle.sql|9 +
 .../sql/postgres/hive-schema-3.0.0.postgres.sql |   11 +
 .../upgrade-2.3.0-to-3.0.0.postgres.sql |9 +
 .../src/main/thrift/hive_metastore.thrift   |   12 +
 .../DummyRawStoreControlledCommit.java  |   18 +
 .../DummyRawStoreForJdoConnection.java  |   17 +
 .../HiveMetaStoreClientPreCatalog.java  |   10 +
 .../hive/metastore/client/TestRuntimeStats.java |  100 +
 66 files changed, 9936 insertions(+), 2963 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/b3e2d8a0/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
--
diff --git 

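The listing above introduces the RuntimeStat struct itself (hive_metastore.thrift, RuntimeStat.java) along with RuntimeStatsPersister, which, per its name, handles serializing the per-operator statistics into the struct's opaque payload. A sketch of building one entry; the field names createTime, weight and payload follow this commit's Thrift change but should be checked against hive_metastore.thrift, and the size-based weight here is purely illustrative:

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.hive.metastore.api.RuntimeStat;

    public class RuntimeStatSketch {
      // The metastore treats payload as an opaque blob; Hive itself fills it
      // with serialized operator statistics (see RuntimeStatsPersister).
      public static RuntimeStat build(String serializedStats) {
        RuntimeStat stat = new RuntimeStat();
        stat.setCreateTime((int) (System.currentTimeMillis() / 1000));
        stat.setWeight(serializedStats.length());   // illustrative weight
        stat.setPayload(serializedStats.getBytes(StandardCharsets.UTF_8));
        return stat;
      }
    }

RuntimeStatsCleanerTask, also in the listing, is presumably what evicts old entries later, which would be why each entry carries a creation time and a weight.
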
[45/50] [abbrv] hive git commit: HIVE-19252: TestJdbcWithMiniKdcCookie.testCookieNegative is failing consistently (Vaibhav Gumashta reviewed by Daniel Dai)

2018-04-26 Thread omalley
HIVE-19252: TestJdbcWithMiniKdcCookie.testCookieNegative is failing 
consistently (Vaibhav Gumashta reviewed by Daniel Dai)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/29a86906
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/29a86906
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/29a86906

Branch: refs/heads/storage-branch-2.6
Commit: 29a86906d35c76223f187e000e4793273d514a33
Parents: 39ecb6a
Author: Vaibhav Gumashta 
Authored: Wed Apr 25 12:53:14 2018 -0700
Committer: Vaibhav Gumashta 
Committed: Wed Apr 25 12:53:14 2018 -0700

--
 .../java/org/apache/hive/minikdc/TestJdbcWithMiniKdcCookie.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/29a86906/itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestJdbcWithMiniKdcCookie.java
--
diff --git 
a/itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestJdbcWithMiniKdcCookie.java
 
b/itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestJdbcWithMiniKdcCookie.java
index 9cad3ea..2fa2a87 100644
--- 
a/itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestJdbcWithMiniKdcCookie.java
+++ 
b/itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestJdbcWithMiniKdcCookie.java
@@ -109,7 +109,7 @@ public class TestJdbcWithMiniKdcCookie {
   // login failure.
   getConnection(HIVE_NON_EXISTENT_USER);
 } catch (IOException e) {
-  Assert.assertTrue(e.getMessage().contains("Login failure"));
+  
Assert.assertTrue(e.getMessage().contains("javax.security.auth.login.LoginException"));
 }
   }
 



[40/50] [abbrv] hive git commit: HIVE-19133 : HS2 WebUI phase-wise performance metrics not showing correctly (Bharathkrishna Guruvayoor Murali reviewed by Zoltan Haindrich, Vihang Karajgaonkar)

2018-04-26 Thread omalley
HIVE-19133 : HS2 WebUI phase-wise performance metrics not showing correctly 
(Bharathkrishna Guruvayoor Murali reviewed by Zoltan Haindrich, Vihang 
Karajgaonkar)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/e89f98c4
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/e89f98c4
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/e89f98c4

Branch: refs/heads/storage-branch-2.6
Commit: e89f98c448d1c95bd92275575f2caa02537e8803
Parents: da10aab
Author: Bharathkrishna Guruvayoor Murali 
Authored: Wed Apr 25 10:47:42 2018 -0700
Committer: Vihang Karajgaonkar 
Committed: Wed Apr 25 10:47:48 2018 -0700

--
 .../apache/hadoop/hive/ql/log/PerfLogger.java   |  2 --
 .../hive/jdbc/miniHS2/TestHs2Metrics.java   |  1 -
 .../TestOperationLoggingAPIWithMr.java  |  1 -
 .../TestOperationLoggingAPIWithTez.java |  1 -
 .../service/cli/session/TestQueryDisplay.java   |  7 +++
 .../java/org/apache/hadoop/hive/ql/Driver.java  | 22 
 .../service/cli/operation/SQLOperation.java | 11 +-
 7 files changed, 21 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/e89f98c4/common/src/java/org/apache/hadoop/hive/ql/log/PerfLogger.java
--
diff --git a/common/src/java/org/apache/hadoop/hive/ql/log/PerfLogger.java 
b/common/src/java/org/apache/hadoop/hive/ql/log/PerfLogger.java
index c1e1b7f..3d6315c 100644
--- a/common/src/java/org/apache/hadoop/hive/ql/log/PerfLogger.java
+++ b/common/src/java/org/apache/hadoop/hive/ql/log/PerfLogger.java
@@ -51,14 +51,12 @@ public class PerfLogger {
   public static final String SERIALIZE_PLAN = "serializePlan";
   public static final String DESERIALIZE_PLAN = "deserializePlan";
   public static final String CLONE_PLAN = "clonePlan";
-  public static final String TASK = "task.";
   public static final String RELEASE_LOCKS = "releaseLocks";
   public static final String PRUNE_LISTING = "prune-listing";
   public static final String PARTITION_RETRIEVING = "partition-retrieving";
   public static final String PRE_HOOK = "PreHook.";
   public static final String POST_HOOK = "PostHook.";
   public static final String FAILURE_HOOK = "FailureHook.";
-  public static final String DRIVER_RUN = "Driver.run";
   public static final String TEZ_COMPILER = "TezCompiler";
   public static final String TEZ_SUBMIT_TO_RUNNING = "TezSubmitToRunningDag";
   public static final String TEZ_BUILD_DAG = "TezBuildDag";

http://git-wip-us.apache.org/repos/asf/hive/blob/e89f98c4/itests/hive-unit/src/test/java/org/apache/hive/jdbc/miniHS2/TestHs2Metrics.java
--
diff --git 
a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/miniHS2/TestHs2Metrics.java
 
b/itests/hive-unit/src/test/java/org/apache/hive/jdbc/miniHS2/TestHs2Metrics.java
index 0ec23e1..9686445 100644
--- 
a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/miniHS2/TestHs2Metrics.java
+++ 
b/itests/hive-unit/src/test/java/org/apache/hive/jdbc/miniHS2/TestHs2Metrics.java
@@ -109,7 +109,6 @@ public class TestHs2Metrics {
 MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.TIMER, 
"api_hs2_sql_operation_PENDING", 1);
 MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.TIMER, 
"api_hs2_sql_operation_RUNNING", 1);
 MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.COUNTER, 
"hs2_completed_sql_operation_FINISHED", 1);
-MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.TIMER, 
"api_Driver.run", 1);
 
 //but there should be no more active calls.
 MetricsTestUtils.verifyMetricsJson(json, MetricsTestUtils.COUNTER, 
"active_calls_api_semanticAnalyze", 0);

http://git-wip-us.apache.org/repos/asf/hive/blob/e89f98c4/itests/hive-unit/src/test/java/org/apache/hive/service/cli/operation/TestOperationLoggingAPIWithMr.java
--
diff --git 
a/itests/hive-unit/src/test/java/org/apache/hive/service/cli/operation/TestOperationLoggingAPIWithMr.java
 
b/itests/hive-unit/src/test/java/org/apache/hive/service/cli/operation/TestOperationLoggingAPIWithMr.java
index a6aa846..c7dade3 100644
--- 
a/itests/hive-unit/src/test/java/org/apache/hive/service/cli/operation/TestOperationLoggingAPIWithMr.java
+++ 
b/itests/hive-unit/src/test/java/org/apache/hive/service/cli/operation/TestOperationLoggingAPIWithMr.java
@@ -59,7 +59,6 @@ public class TestOperationLoggingAPIWithMr extends 
OperationLoggingAPITestBase {
 expectedLogsPerformance = new String[]{
   "",
   "",
-  "",
   ""
 };
 hiveConf = new HiveConf();

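For context on the removals above: TASK and DRIVER_RUN were keys passed to PerfLogger's begin/end bookkeeping, and the HS2 WebUI builds its phase-wise view from those begin/end pairs. A sketch of how such a key is used, assuming the PerfLogBegin/PerfLogEnd names of Hive's PerfLogger at the time of this commit:

    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.ql.log.PerfLogger;

    public class PerfLoggerSketch {
      private static final String CLASS_NAME = PerfLoggerSketch.class.getName();

      public static void timedCompile(HiveConf conf) {
        // Thread-local logger; 'false' keeps any existing instance.
        PerfLogger perfLogger = PerfLogger.getPerfLogger(conf, false);
        perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.COMPILE);
        // ... the work being measured ...
        perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.COMPILE);
      }
    }

Dropping the "Driver.run" and "task." keys means those synthetic phases no longer appear among the per-query timings, which is why the test expectations above drop the corresponding entries.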

[31/50] [abbrv] hive git commit: HIVE-19232: results_cache_invalidation2 is failing (Jason Dere, reviewed by Vineet Garg)

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/e9094483/ql/src/test/results/clientpositive/results_cache_transactional.q.out
--
diff --git 
a/ql/src/test/results/clientpositive/results_cache_transactional.q.out 
b/ql/src/test/results/clientpositive/results_cache_transactional.q.out
deleted file mode 100644
index f2fac38..000
--- a/ql/src/test/results/clientpositive/results_cache_transactional.q.out
+++ /dev/null
@@ -1,583 +0,0 @@
-PREHOOK: query: create table tab1 (key string, value string) stored as orc 
tblproperties ('transactional'='true')
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@tab1
-POSTHOOK: query: create table tab1 (key string, value string) stored as orc 
tblproperties ('transactional'='true')
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@tab1
-PREHOOK: query: create table tab2 (key string, value string) stored as orc 
tblproperties ('transactional'='true')
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@tab2
-POSTHOOK: query: create table tab2 (key string, value string) stored as orc 
tblproperties ('transactional'='true')
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@tab2
-PREHOOK: query: insert into tab1 select * from default.src
-PREHOOK: type: QUERY
-PREHOOK: Input: default@src
-PREHOOK: Output: default@tab1
-POSTHOOK: query: insert into tab1 select * from default.src
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@src
-POSTHOOK: Output: default@tab1
-POSTHOOK: Lineage: tab1.key SIMPLE [(src)src.FieldSchema(name:key, 
type:string, comment:default), ]
-POSTHOOK: Lineage: tab1.value SIMPLE [(src)src.FieldSchema(name:value, 
type:string, comment:default), ]
-PREHOOK: query: insert into tab2 select * from default.src
-PREHOOK: type: QUERY
-PREHOOK: Input: default@src
-PREHOOK: Output: default@tab2
-POSTHOOK: query: insert into tab2 select * from default.src
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@src
-POSTHOOK: Output: default@tab2
-POSTHOOK: Lineage: tab2.key SIMPLE [(src)src.FieldSchema(name:key, 
type:string, comment:default), ]
-POSTHOOK: Lineage: tab2.value SIMPLE [(src)src.FieldSchema(name:value, 
type:string, comment:default), ]
-PREHOOK: query: explain
-select max(key) from tab1
-PREHOOK: type: QUERY
-POSTHOOK: query: explain
-select max(key) from tab1
-POSTHOOK: type: QUERY
-STAGE DEPENDENCIES:
-  Stage-1 is a root stage
-  Stage-0 depends on stages: Stage-1
-
-STAGE PLANS:
-  Stage: Stage-1
-Map Reduce
-  Map Operator Tree:
-  TableScan
-alias: tab1
-Statistics: Num rows: 91 Data size: 35030 Basic stats: COMPLETE 
Column stats: NONE
-Select Operator
-  expressions: key (type: string)
-  outputColumnNames: key
-  Statistics: Num rows: 91 Data size: 35030 Basic stats: COMPLETE 
Column stats: NONE
-  Group By Operator
-aggregations: max(key)
-mode: hash
-outputColumnNames: _col0
-Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE 
Column stats: NONE
-Reduce Output Operator
-  sort order: 
-  Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE 
Column stats: NONE
-  value expressions: _col0 (type: string)
-  Reduce Operator Tree:
-Group By Operator
-  aggregations: max(VALUE._col0)
-  mode: mergepartial
-  outputColumnNames: _col0
-  Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE Column 
stats: NONE
-  File Output Operator
-compressed: false
-Statistics: Num rows: 1 Data size: 184 Basic stats: COMPLETE 
Column stats: NONE
-table:
-input format: org.apache.hadoop.mapred.SequenceFileInputFormat
-output format: 
org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
-serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
-
-  Stage: Stage-0
-Fetch Operator
-  limit: -1
-  Processor Tree:
-ListSink
-
-PREHOOK: query: select max(key) from tab1
-PREHOOK: type: QUERY
-PREHOOK: Input: default@tab1
- A masked pattern was here 
-POSTHOOK: query: select max(key) from tab1
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@tab1
- A masked pattern was here 
-98
-test.comment="Query on transactional table should use cache"
-PREHOOK: query: explain
-select max(key) from tab1
-PREHOOK: type: QUERY
-POSTHOOK: query: explain
-select max(key) from tab1
-POSTHOOK: type: QUERY
-STAGE DEPENDENCIES:
-  Stage-0 is a root stage
-
-STAGE PLANS:
-  Stage: Stage-0
-Fetch Operator
-  limit: -1
-  Processor Tree:
-ListSink
-  Cached Query Result: true
-
-PREHOOK: query: select max(key) from tab1
-PREHOOK: 

[14/50] [abbrv] hive git commit: Revert "HIVE-19171 : Persist runtime statistics in metastore (Zoltan Haindrich via Ashutosh Chauhan)"

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/f0199500/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
--
diff --git 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
index 184ecb6..125d5a7 100644
--- 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
+++ 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
@@ -55,7 +55,6 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantLock;
 import java.util.regex.Pattern;
-import java.util.stream.Collectors;
 
 import javax.jdo.JDOCanRetryException;
 import javax.jdo.JDODataStoreException;
@@ -84,6 +83,7 @@ import org.apache.hadoop.hive.common.StatsSetupConst;
 import 
org.apache.hadoop.hive.metastore.MetaStoreDirectSql.SqlFilterForPushdown;
 import org.apache.hadoop.hive.metastore.api.AggrStats;
 import org.apache.hadoop.hive.metastore.api.AlreadyExistsException;
+import org.apache.hadoop.hive.metastore.api.BasicTxnInfo;
 import org.apache.hadoop.hive.metastore.api.Catalog;
 import org.apache.hadoop.hive.metastore.api.ColumnStatistics;
 import org.apache.hadoop.hive.metastore.api.ColumnStatisticsDesc;
@@ -124,7 +124,6 @@ import org.apache.hadoop.hive.metastore.api.ResourceType;
 import org.apache.hadoop.hive.metastore.api.ResourceUri;
 import org.apache.hadoop.hive.metastore.api.Role;
 import org.apache.hadoop.hive.metastore.api.RolePrincipalGrant;
-import org.apache.hadoop.hive.metastore.api.RuntimeStat;
 import org.apache.hadoop.hive.metastore.api.SQLCheckConstraint;
 import org.apache.hadoop.hive.metastore.api.SQLDefaultConstraint;
 import org.apache.hadoop.hive.metastore.api.SQLForeignKey;
@@ -187,7 +186,6 @@ import 
org.apache.hadoop.hive.metastore.model.MPartitionPrivilege;
 import org.apache.hadoop.hive.metastore.model.MResourceUri;
 import org.apache.hadoop.hive.metastore.model.MRole;
 import org.apache.hadoop.hive.metastore.model.MRoleMap;
-import org.apache.hadoop.hive.metastore.model.MRuntimeStat;
 import org.apache.hadoop.hive.metastore.model.MSchemaVersion;
 import org.apache.hadoop.hive.metastore.model.MSerDeInfo;
 import org.apache.hadoop.hive.metastore.model.MStorageDescriptor;
@@ -212,6 +210,7 @@ import org.apache.hadoop.hive.metastore.utils.FileUtils;
 import org.apache.hadoop.hive.metastore.utils.JavaUtils;
 import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils;
 import org.apache.hadoop.hive.metastore.utils.ObjectPair;
+import org.apache.thrift.TDeserializer;
 import org.apache.thrift.TException;
 import org.datanucleus.AbstractNucleusContext;
 import org.datanucleus.ClassLoaderResolver;
@@ -810,9 +809,7 @@ public class ObjectStore implements RawStore, Configurable {
   pm.makePersistent(mCat);
   committed = commitTransaction();
 } finally {
-  if (!committed) {
-rollbackTransaction();
-  }
+  if (!committed) rollbackTransaction();
 }
   }
 
@@ -835,9 +832,7 @@ public class ObjectStore implements RawStore, Configurable {
   pm.makePersistent(mCat);
   committed = commitTransaction();
 } finally {
-  if (!committed) {
-rollbackTransaction();
-  }
+  if (!committed) rollbackTransaction();
 }
   }
 
@@ -845,9 +840,7 @@ public class ObjectStore implements RawStore, Configurable {
   public Catalog getCatalog(String catalogName) throws NoSuchObjectException, 
MetaException {
 LOG.debug("Fetching catalog " + catalogName);
 MCatalog mCat = getMCatalog(catalogName);
-if (mCat == null) {
-  throw new NoSuchObjectException("No catalog " + catalogName);
-}
+if (mCat == null) throw new NoSuchObjectException("No catalog " + 
catalogName);
 return mCatToCat(mCat);
   }
 
@@ -881,15 +874,11 @@ public class ObjectStore implements RawStore, 
Configurable {
   openTransaction();
   MCatalog mCat = getMCatalog(catalogName);
   pm.retrieve(mCat);
-  if (mCat == null) {
-throw new NoSuchObjectException("No catalog " + catalogName);
-  }
+  if (mCat == null) throw new NoSuchObjectException("No catalog " + 
catalogName);
   pm.deletePersistent(mCat);
   committed = commitTransaction();
 } finally {
-  if (!committed) {
-rollbackTransaction();
-  }
+  if (!committed) rollbackTransaction();
 }
   }
 
@@ -914,18 +903,14 @@ public class ObjectStore implements RawStore, 
Configurable {
   private MCatalog catToMCat(Catalog cat) {
 MCatalog mCat = new MCatalog();
 mCat.setName(normalizeIdentifier(cat.getName()));
-if (cat.isSetDescription()) {
-  mCat.setDescription(cat.getDescription());
-}
+if (cat.isSetDescription()) mCat.setDescription(cat.getDescription());
 

[22/50] [abbrv] hive git commit: HIVE-19275: Vectorization: Defer Wrong Results / Execution Failures when Vectorization is turned on (Matt McCline, reviewed by Vihang Karajgaonkar)

2018-04-26 Thread omalley
HIVE-19275: Vectorization: Defer Wrong Results / Execution Failures when 
Vectorization is turned on (Matt McCline, reviewed by Vihang Karajgaonkar)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/f552e745
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/f552e745
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/f552e745

Branch: refs/heads/storage-branch-2.6
Commit: f552e745bd21012a081f1a7e26a7e299f37e24d2
Parents: 211baae
Author: Matt McCline 
Authored: Mon Apr 23 23:05:36 2018 -0500
Committer: Matt McCline 
Committed: Mon Apr 23 23:05:36 2018 -0500

--
 ql/src/test/queries/clientpositive/auto_join_without_localtask.q   | 1 +
 ql/src/test/queries/clientpositive/avro_decimal_native.q   | 2 ++
 ql/src/test/queries/clientpositive/bucket_map_join_spark1.q| 1 +
 ql/src/test/queries/clientpositive/bucket_map_join_spark2.q| 1 +
 ql/src/test/queries/clientpositive/bucket_map_join_spark3.q| 1 +
 ql/src/test/queries/clientpositive/bucket_map_join_spark4.q| 1 +
 ql/src/test/queries/clientpositive/bucketmapjoin1.q| 1 +
 ql/src/test/queries/clientpositive/bucketmapjoin10.q   | 1 +
 ql/src/test/queries/clientpositive/bucketmapjoin11.q   | 1 +
 ql/src/test/queries/clientpositive/bucketmapjoin12.q   | 1 +
 ql/src/test/queries/clientpositive/bucketmapjoin13.q   | 1 +
 ql/src/test/queries/clientpositive/bucketmapjoin2.q| 1 +
 ql/src/test/queries/clientpositive/bucketmapjoin3.q| 1 +
 ql/src/test/queries/clientpositive/bucketmapjoin4.q| 1 +
 ql/src/test/queries/clientpositive/bucketmapjoin5.q| 1 +
 ql/src/test/queries/clientpositive/bucketmapjoin6.q| 1 +
 ql/src/test/queries/clientpositive/bucketmapjoin7.q| 1 +
 ql/src/test/queries/clientpositive/bucketmapjoin8.q| 1 +
 ql/src/test/queries/clientpositive/bucketmapjoin9.q| 2 ++
 ql/src/test/queries/clientpositive/bucketsortoptimize_insert_1.q   | 1 +
 ql/src/test/queries/clientpositive/bucketsortoptimize_insert_2.q   | 1 +
 ql/src/test/queries/clientpositive/bucketsortoptimize_insert_3.q   | 1 +
 ql/src/test/queries/clientpositive/bucketsortoptimize_insert_4.q   | 1 +
 ql/src/test/queries/clientpositive/bucketsortoptimize_insert_5.q   | 1 +
 ql/src/test/queries/clientpositive/bucketsortoptimize_insert_6.q   | 1 +
 ql/src/test/queries/clientpositive/bucketsortoptimize_insert_7.q   | 1 +
 ql/src/test/queries/clientpositive/bucketsortoptimize_insert_8.q   | 1 +
 ql/src/test/queries/clientpositive/decimal_join.q  | 1 +
 ql/src/test/queries/clientpositive/druidmini_dynamic_partition.q   | 1 +
 ql/src/test/queries/clientpositive/dynamic_partition_insert.q  | 1 +
 ql/src/test/queries/clientpositive/dynamic_partition_pruning.q | 1 +
 ql/src/test/queries/clientpositive/dynamic_partition_pruning_2.q   | 1 +
 .../test/queries/clientpositive/dynamic_partition_skip_default.q   | 1 +
 ql/src/test/queries/clientpositive/dynamic_semijoin_user_level.q   | 1 +
 ql/src/test/queries/clientpositive/dynpart_sort_optimization.q | 1 +
 ql/src/test/queries/clientpositive/explainanalyze_1.q  | 1 +
 ql/src/test/queries/clientpositive/explainanalyze_2.q  | 1 +
 ql/src/test/queries/clientpositive/explainanalyze_3.q  | 1 +
 ql/src/test/queries/clientpositive/explainanalyze_4.q  | 1 +
 ql/src/test/queries/clientpositive/explainanalyze_5.q  | 1 +
 ql/src/test/queries/clientpositive/insert_acid_dynamic_partition.q | 1 +
 .../queries/clientpositive/insert_values_dynamic_partitioned.q | 1 +
 ql/src/test/queries/clientpositive/join0.q | 1 +
 ql/src/test/queries/clientpositive/lineage1.q  | 1 +
 ql/src/test/queries/clientpositive/lineage2.q  | 1 +
 ql/src/test/queries/clientpositive/lineage3.q  | 1 +
 ql/src/test/queries/clientpositive/mapjoin1.q  | 1 +
 ql/src/test/queries/clientpositive/mapjoin46.q | 1 +
 ql/src/test/queries/clientpositive/mapjoin_addjar.q| 2 +-
 ql/src/test/queries/clientpositive/mapjoin_decimal.q   | 1 +
 ql/src/test/queries/clientpositive/merge_dynamic_partition.q   | 1 +
 ql/src/test/queries/clientpositive/merge_dynamic_partition2.q  | 1 +
 ql/src/test/queries/clientpositive/merge_dynamic_partition3.q  | 1 +
 ql/src/test/queries/clientpositive/merge_dynamic_partition4.q  | 1 +
 ql/src/test/queries/clientpositive/merge_dynamic_partition5.q  | 1 +
 ql/src/test/queries/clientpositive/multi_count_distinct.q  | 1 +
 ql/src/test/queries/clientpositive/nullgroup.q | 1 +
 

[01/50] [abbrv] hive git commit: HIVE-19104: When test MetaStore is started with retry the instances should be independent (Peter Vary, reviewed by Sahil Takiar) [Forced Update!]

2018-04-26 Thread omalley
Repository: hive
Updated Branches:
  refs/heads/storage-branch-2.6 a60871fd7 -> c055e8444 (forced update)


HIVE-19104: When test MetaStore is started with retry the instances should be 
independent (Peter Vary, reviewed by Sahil Takiar)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/5e0480e3
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/5e0480e3
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/5e0480e3

Branch: refs/heads/storage-branch-2.6
Commit: 5e0480e35f6176ff38ecffea64369287b87bd378
Parents: a1524f7
Author: Peter Vary 
Authored: Mon Apr 23 10:10:39 2018 +0200
Committer: Peter Vary 
Committed: Mon Apr 23 10:10:39 2018 +0200

--
 .../apache/hive/hcatalog/cli/TestPermsGrp.java  | 13 --
 .../mapreduce/TestHCatMultiOutputFormat.java| 18 
 .../mapreduce/TestHCatPartitionPublish.java | 15 ---
 .../hadoop/hive/ql/parse/WarehouseInstance.java |  2 +-
 .../org/apache/hive/jdbc/miniHS2/MiniHS2.java   |  7 +--
 .../hive/metastore/conf/MetastoreConf.java  |  4 +-
 .../hive/metastore/MetaStoreTestUtils.java  | 45 ++--
 .../hive/metastore/TestHiveMetaStore.java   | 13 +++---
 .../hive/metastore/TestRemoteHiveMetaStore.java |  1 -
 9 files changed, 82 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/5e0480e3/hcatalog/core/src/test/java/org/apache/hive/hcatalog/cli/TestPermsGrp.java
--
diff --git 
a/hcatalog/core/src/test/java/org/apache/hive/hcatalog/cli/TestPermsGrp.java 
b/hcatalog/core/src/test/java/org/apache/hive/hcatalog/cli/TestPermsGrp.java
index 8a2151c..3c9d89a 100644
--- a/hcatalog/core/src/test/java/org/apache/hive/hcatalog/cli/TestPermsGrp.java
+++ b/hcatalog/core/src/test/java/org/apache/hive/hcatalog/cli/TestPermsGrp.java
@@ -43,6 +43,7 @@ import org.apache.hadoop.hive.metastore.api.SerDeInfo;
 import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
 import org.apache.hadoop.hive.metastore.api.Table;
 import org.apache.hadoop.hive.metastore.api.Type;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
 import org.apache.hadoop.hive.ql.io.HiveInputFormat;
 import org.apache.hadoop.hive.ql.io.HiveOutputFormat;
 import org.apache.hadoop.hive.ql.metadata.Hive;
@@ -62,7 +63,6 @@ import junit.framework.TestCase;
 public class TestPermsGrp extends TestCase {
 
   private boolean isServerRunning = false;
-  private int msPort;
   private HiveConf hcatConf;
   private Warehouse clientWH;
   private HiveMetaStoreClient msc;
@@ -80,7 +80,8 @@ public class TestPermsGrp extends TestCase {
   return;
 }
 
-msPort = MetaStoreTestUtils.startMetaStoreWithRetry();
+hcatConf = new HiveConf(this.getClass());
+MetaStoreTestUtils.startMetaStoreWithRetry(hcatConf);
 
 isServerRunning = true;
 
@@ -88,8 +89,6 @@ public class TestPermsGrp extends TestCase {
 System.setSecurityManager(new NoExitSecurityManager());
 Policy.setPolicy(new DerbyPolicy());
 
-hcatConf = new HiveConf(this.getClass());
-hcatConf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://127.0.0.1:" + 
msPort);
 hcatConf.setIntVar(HiveConf.ConfVars.METASTORETHRIFTCONNECTIONRETRIES, 3);
 hcatConf.setIntVar(HiveConf.ConfVars.METASTORETHRIFTFAILURERETRIES, 3);
 
@@ -102,6 +101,12 @@ public class TestPermsGrp extends TestCase {
 msc = new HiveMetaStoreClient(hcatConf);
 System.setProperty(HiveConf.ConfVars.PREEXECHOOKS.varname, " ");
 System.setProperty(HiveConf.ConfVars.POSTEXECHOOKS.varname, " ");
+System.setProperty(HiveConf.ConfVars.METASTOREWAREHOUSE.varname,
+MetastoreConf.getVar(hcatConf, MetastoreConf.ConfVars.WAREHOUSE));
+System.setProperty(HiveConf.ConfVars.METASTORECONNECTURLKEY.varname,
+MetastoreConf.getVar(hcatConf, 
MetastoreConf.ConfVars.CONNECT_URL_KEY));
+System.setProperty(HiveConf.ConfVars.METASTOREURIS.varname,
+MetastoreConf.getVar(hcatConf, MetastoreConf.ConfVars.THRIFT_URIS));
   }
 
   public void testCustomPerms() throws Exception {

http://git-wip-us.apache.org/repos/asf/hive/blob/5e0480e3/hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/TestHCatMultiOutputFormat.java
--
diff --git 
a/hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/TestHCatMultiOutputFormat.java
 
b/hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/TestHCatMultiOutputFormat.java
index d9de10e..8a8a326 100644
--- 
a/hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/TestHCatMultiOutputFormat.java
+++ 
b/hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/TestHCatMultiOutputFormat.java
@@ -90,9 +90,6 @@ public class 

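The hunks above change the test-setup contract: startMetaStoreWithRetry no longer hands back a port for each test to stitch into METASTOREURIS by hand; it takes the configuration and fills in the Thrift URI, warehouse directory and a per-instance Derby connection URL itself, which is what keeps retried instances independent. A sketch of the resulting pattern, with the call shape taken from the diff above:

    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
    import org.apache.hadoop.hive.metastore.MetaStoreTestUtils;

    public class MiniMetaStoreSetupSketch {
      public static HiveMetaStoreClient startAndConnect() throws Exception {
        HiveConf conf = new HiveConf(MiniMetaStoreSetupSketch.class);
        MetaStoreTestUtils.startMetaStoreWithRetry(conf);  // mutates conf
        return new HiveMetaStoreClient(conf);              // reads the URI
      }
    }
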
[48/50] [abbrv] hive git commit: HIVE-18986: Table rename will run into java.lang.StackOverflowError in DataNucleus if the table contains a large number of columns (Aihua Xu, reviewed by Yongzhi Chen)

2018-04-26 Thread omalley
HIVE-18986: Table rename will run into java.lang.StackOverflowError in 
DataNucleus if the table contains a large number of columns (Aihua Xu, 
reviewed by Yongzhi Chen)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/f30efbeb
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/f30efbeb
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/f30efbeb

Branch: refs/heads/storage-branch-2.6
Commit: f30efbebf2ff85c55a5d9e3e2f86e0a51341df78
Parents: 11b0d85
Author: Aihua Xu 
Authored: Wed Apr 18 17:05:08 2018 -0700
Committer: Aihua Xu 
Committed: Wed Apr 25 16:10:30 2018 -0700

--
 .../queries/clientpositive/alter_rename_table.q | 12 ++-
 .../clientpositive/alter_rename_table.q.out | 88 
 .../apache/hadoop/hive/metastore/Batchable.java | 86 +++
 .../hive/metastore/MetaStoreDirectSql.java  | 61 ++
 .../hadoop/hive/metastore/ObjectStore.java  | 45 ++
 .../hive/metastore/conf/MetastoreConf.java  |  5 ++
 6 files changed, 227 insertions(+), 70 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/f30efbeb/ql/src/test/queries/clientpositive/alter_rename_table.q
--
diff --git a/ql/src/test/queries/clientpositive/alter_rename_table.q 
b/ql/src/test/queries/clientpositive/alter_rename_table.q
index 53fb230..bcf6ad5 100644
--- a/ql/src/test/queries/clientpositive/alter_rename_table.q
+++ b/ql/src/test/queries/clientpositive/alter_rename_table.q
@@ -36,4 +36,14 @@ create table source.src1 like default.src;
 load data local inpath '../../data/files/kv1.txt' overwrite into table 
source.src;
 
 ALTER TABLE source.src RENAME TO target.src1;
-select * from target.src1 tablesample (10 rows);
\ No newline at end of file
+select * from target.src1 tablesample (10 rows);
+
+set metastore.rawstore.batch.size=1;
+set metastore.try.direct.sql=false;
+
+create table source.src2 like default.src;
+load data local inpath '../../data/files/kv1.txt' overwrite into table 
source.src2;
+ANALYZE TABlE source.src2 COMPUTE STATISTICS FOR COLUMNS;
+ALTER TABLE source.src2 RENAME TO target.src3;
+DESC FORMATTED target.src3;
+select * from target.src3 tablesample (10 rows);

http://git-wip-us.apache.org/repos/asf/hive/blob/f30efbeb/ql/src/test/results/clientpositive/alter_rename_table.q.out
--
diff --git a/ql/src/test/results/clientpositive/alter_rename_table.q.out 
b/ql/src/test/results/clientpositive/alter_rename_table.q.out
index 732d8a2..9ac8fd2 100644
--- a/ql/src/test/results/clientpositive/alter_rename_table.q.out
+++ b/ql/src/test/results/clientpositive/alter_rename_table.q.out
@@ -261,3 +261,91 @@ POSTHOOK: Input: target@src1
 278val_278
 98 val_98
 484val_484
+PREHOOK: query: create table source.src2 like default.src
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:source
+PREHOOK: Output: source@src2
+POSTHOOK: query: create table source.src2 like default.src
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:source
+POSTHOOK: Output: source@src2
+PREHOOK: query: load data local inpath '../../data/files/kv1.txt' overwrite 
into table source.src2
+PREHOOK: type: LOAD
+ A masked pattern was here 
+PREHOOK: Output: source@src2
+POSTHOOK: query: load data local inpath '../../data/files/kv1.txt' overwrite 
into table source.src2
+POSTHOOK: type: LOAD
+ A masked pattern was here 
+POSTHOOK: Output: source@src2
+PREHOOK: query: ANALYZE TABlE source.src2 COMPUTE STATISTICS FOR COLUMNS
+PREHOOK: type: QUERY
+PREHOOK: Input: source@src2
+ A masked pattern was here 
+PREHOOK: Output: source@src2
+POSTHOOK: query: ANALYZE TABlE source.src2 COMPUTE STATISTICS FOR COLUMNS
+POSTHOOK: type: QUERY
+POSTHOOK: Input: source@src2
+ A masked pattern was here 
+POSTHOOK: Output: source@src2
+PREHOOK: query: ALTER TABLE source.src2 RENAME TO target.src3
+PREHOOK: type: ALTERTABLE_RENAME
+PREHOOK: Input: source@src2
+PREHOOK: Output: source@src2
+POSTHOOK: query: ALTER TABLE source.src2 RENAME TO target.src3
+POSTHOOK: type: ALTERTABLE_RENAME
+POSTHOOK: Input: source@src2
+POSTHOOK: Output: source@src2
+POSTHOOK: Output: target@src3
+PREHOOK: query: DESC FORMATTED target.src3
+PREHOOK: type: DESCTABLE
+PREHOOK: Input: target@src3
+POSTHOOK: query: DESC FORMATTED target.src3
+POSTHOOK: type: DESCTABLE
+POSTHOOK: Input: target@src3
+# col_name data_type   comment 
+keystring  default 
+value  string  default 
+
+# Detailed Table Information
+Database:  target   
+ 

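The substantive piece above is Batchable together with the new metastore.rawstore.batch.size knob: instead of feeding every column-descriptor id into one giant DataNucleus expression during the rename, which is what overflowed the stack, the store operation now runs over bounded slices. Batchable's exact API is an assumption here; this generic sketch only shows the shape:

    import java.util.ArrayList;
    import java.util.List;

    public final class BatchRunnerSketch {
      public interface BatchOp<I, R> {
        List<R> run(List<I> slice) throws Exception;
      }

      // Run op over input in slices of at most batchSize elements, so the
      // generated JDO/SQL expression stays small. batchSize <= 0 disables
      // batching, mirroring how such knobs are usually interpreted.
      public static <I, R> List<R> runBatched(
          int batchSize, List<I> input, BatchOp<I, R> op) throws Exception {
        if (batchSize <= 0 || input.size() <= batchSize) {
          return op.run(input);
        }
        List<R> results = new ArrayList<>();
        for (int from = 0; from < input.size(); from += batchSize) {
          int to = Math.min(from + batchSize, input.size());
          results.addAll(op.run(input.subList(from, to)));
        }
        return results;
      }
    }

The q-file change above sets metastore.rawstore.batch.size=1, forcing one id per slice so the test exercises the batched path.
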
[16/50] [abbrv] hive git commit: Revert "HIVE-19171 : Persist runtime statistics in metastore (Zoltan Haindrich via Ashutosh Chauhan)"

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/f0199500/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 
b/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
index e2f0e82..a354f27 100644
--- 
a/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
+++ 
b/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
@@ -446,10 +446,6 @@ import org.slf4j.LoggerFactory;
 
 public boolean heartbeat_lock_materialization_rebuild(String dbName, 
String tableName, long txnId) throws org.apache.thrift.TException;
 
-public void add_runtime_stats(RuntimeStat stat) throws MetaException, 
org.apache.thrift.TException;
-
-public List<RuntimeStat> get_runtime_stats(GetRuntimeStatsRequest rqst) 
throws MetaException, org.apache.thrift.TException;
-
   }
 
   @org.apache.hadoop.classification.InterfaceAudience.Public 
@org.apache.hadoop.classification.InterfaceStability.Stable public interface 
AsyncIface extends com.facebook.fb303.FacebookService .AsyncIface {
@@ -858,10 +854,6 @@ import org.slf4j.LoggerFactory;
 
 public void heartbeat_lock_materialization_rebuild(String dbName, String 
tableName, long txnId, org.apache.thrift.async.AsyncMethodCallback 
resultHandler) throws org.apache.thrift.TException;
 
-public void add_runtime_stats(RuntimeStat stat, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
-
-public void get_runtime_stats(GetRuntimeStatsRequest rqst, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
-
   }
 
   @org.apache.hadoop.classification.InterfaceAudience.Public 
@org.apache.hadoop.classification.InterfaceStability.Stable public static class 
Client extends com.facebook.fb303.FacebookService.Client implements Iface {
@@ -6700,55 +6692,6 @@ import org.slf4j.LoggerFactory;
   throw new 
org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT,
 "heartbeat_lock_materialization_rebuild failed: unknown result");
 }
 
-public void add_runtime_stats(RuntimeStat stat) throws MetaException, 
org.apache.thrift.TException
-{
-  send_add_runtime_stats(stat);
-  recv_add_runtime_stats();
-}
-
-public void send_add_runtime_stats(RuntimeStat stat) throws 
org.apache.thrift.TException
-{
-  add_runtime_stats_args args = new add_runtime_stats_args();
-  args.setStat(stat);
-  sendBase("add_runtime_stats", args);
-}
-
-public void recv_add_runtime_stats() throws MetaException, 
org.apache.thrift.TException
-{
-  add_runtime_stats_result result = new add_runtime_stats_result();
-  receiveBase(result, "add_runtime_stats");
-  if (result.o1 != null) {
-throw result.o1;
-  }
-  return;
-}
-
-public List<RuntimeStat> get_runtime_stats(GetRuntimeStatsRequest rqst) 
throws MetaException, org.apache.thrift.TException
-{
-  send_get_runtime_stats(rqst);
-  return recv_get_runtime_stats();
-}
-
-public void send_get_runtime_stats(GetRuntimeStatsRequest rqst) throws 
org.apache.thrift.TException
-{
-  get_runtime_stats_args args = new get_runtime_stats_args();
-  args.setRqst(rqst);
-  sendBase("get_runtime_stats", args);
-}
-
-public List<RuntimeStat> recv_get_runtime_stats() throws MetaException, 
org.apache.thrift.TException
-{
-  get_runtime_stats_result result = new get_runtime_stats_result();
-  receiveBase(result, "get_runtime_stats");
-  if (result.isSetSuccess()) {
-return result.success;
-  }
-  if (result.o1 != null) {
-throw result.o1;
-  }
-  throw new 
org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT,
 "get_runtime_stats failed: unknown result");
-}
-
   }
   @org.apache.hadoop.classification.InterfaceAudience.Public 
@org.apache.hadoop.classification.InterfaceStability.Stable public static class 
AsyncClient extends com.facebook.fb303.FacebookService.AsyncClient implements 
AsyncIface {
 @org.apache.hadoop.classification.InterfaceAudience.Public 
@org.apache.hadoop.classification.InterfaceStability.Stable public static class 
Factory implements org.apache.thrift.async.TAsyncClientFactory<AsyncClient> {
@@ -13717,70 +13660,6 @@ import org.slf4j.LoggerFactory;
   }
 }
 
-public void add_runtime_stats(RuntimeStat stat, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException {
-  checkReady();
-  add_runtime_stats_call method_call = new add_runtime_stats_call(stat, 
resultHandler, this, 

[02/50] [abbrv] hive git commit: HIVE-19131: DecimalColumnStatsMergerTest comparison review (Laszlo Bodor via Zoltan Haindrich)

2018-04-26 Thread omalley
HIVE-19131: DecimalColumnStatsMergerTest comparison review (Laszlo Bodor via 
Zoltan Haindrich)

Signed-off-by: Zoltan Haindrich 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/334c8cae
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/334c8cae
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/334c8cae

Branch: refs/heads/storage-branch-2.6
Commit: 334c8cae4af94be8157a3b9fd8506c8aee98ab50
Parents: 5e0480e
Author: Laszlo Bodor 
Authored: Mon Apr 23 13:14:51 2018 +0200
Committer: Zoltan Haindrich 
Committed: Mon Apr 23 13:14:51 2018 +0200

--
 .../hive/ql/exec/ColumnStatsUpdateTask.java |   5 +-
 .../ql/stats/ColumnStatisticsObjTranslator.java |   3 +-
 .../stats_analyze_decimal_compare.q |   4 +
 .../stats_analyze_decimal_compare.q.out |  45 +
 .../gen/thrift/gen-cpp/hive_metastore_types.cpp |  40 ++---
 .../gen/thrift/gen-cpp/hive_metastore_types.h   |  12 +-
 .../hadoop/hive/metastore/api/Decimal.java  | 170 +--
 .../src/gen/thrift/gen-php/metastore/Types.php  |  34 ++--
 .../gen/thrift/gen-py/hive_metastore/ttypes.py  |  24 +--
 .../gen/thrift/gen-rb/hive_metastore_types.rb   |   8 +-
 .../hive/metastore/StatObjectConverter.java |  38 ++---
 .../hive/metastore/api/utils/DecimalUtils.java  |  49 ++
 .../aggr/DecimalColumnStatsAggregator.java  |   5 +-
 .../src/main/thrift/hive_metastore.thrift   |   4 +-
 .../merge/DecimalColumnStatsMergerTest.java |  23 +--
 15 files changed, 275 insertions(+), 189 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/334c8cae/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
index a7465a7..207b66f 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
@@ -36,6 +36,7 @@ import org.apache.hadoop.hive.metastore.api.Date;
 import org.apache.hadoop.hive.metastore.api.Decimal;
 import org.apache.hadoop.hive.metastore.api.MetaException;
 import org.apache.hadoop.hive.metastore.api.SetPartitionsStatsRequest;
+import org.apache.hadoop.hive.metastore.api.utils.DecimalUtils;
 import 
org.apache.hadoop.hive.metastore.columnstats.cache.DateColumnStatsDataInspector;
 import 
org.apache.hadoop.hive.metastore.columnstats.cache.DecimalColumnStatsDataInspector;
 import 
org.apache.hadoop.hive.metastore.columnstats.cache.DoubleColumnStatsDataInspector;
@@ -226,11 +227,11 @@ public class ColumnStatsUpdateTask extends 
Task {
   decimalStats.setNumDVs(Long.parseLong(value));
 } else if (fName.equals("lowValue")) {
   BigDecimal d = new BigDecimal(value);
-  decimalStats.setLowValue(new Decimal(ByteBuffer.wrap(d
+  decimalStats.setLowValue(DecimalUtils.getDecimal(ByteBuffer.wrap(d
   .unscaledValue().toByteArray()), (short) d.scale()));
 } else if (fName.equals("highValue")) {
   BigDecimal d = new BigDecimal(value);
-  decimalStats.setHighValue(new Decimal(ByteBuffer.wrap(d
+  decimalStats.setHighValue(DecimalUtils.getDecimal(ByteBuffer.wrap(d
   .unscaledValue().toByteArray()), (short) d.scale()));
 } else {
   throw new SemanticException("Unknown stat");

http://git-wip-us.apache.org/repos/asf/hive/blob/334c8cae/ql/src/java/org/apache/hadoop/hive/ql/stats/ColumnStatisticsObjTranslator.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/stats/ColumnStatisticsObjTranslator.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/stats/ColumnStatisticsObjTranslator.java
index 08cda4a..607545d 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/stats/ColumnStatisticsObjTranslator.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/stats/ColumnStatisticsObjTranslator.java
@@ -28,6 +28,7 @@ import 
org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
 import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
 import org.apache.hadoop.hive.metastore.api.Date;
 import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.utils.DecimalUtils;
 import 
org.apache.hadoop.hive.metastore.columnstats.cache.DateColumnStatsDataInspector;
 import 
org.apache.hadoop.hive.metastore.columnstats.cache.DecimalColumnStatsDataInspector;
 import 
org.apache.hadoop.hive.metastore.columnstats.cache.DoubleColumnStatsDataInspector;
@@ -130,7 +131,7 @@ public class ColumnStatisticsObjTranslator {
   }
 
   

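The substantive change above is that construction of a metastore Decimal from its (unscaled bytes, scale) pair now goes through the new DecimalUtils.getDecimal helper instead of the raw Decimal constructor. A round-trip sketch of that encoding in plain JDK types; DecimalUtils is the helper this commit adds, and the rebuild direction below is standard java.math:

    import java.math.BigDecimal;
    import java.math.BigInteger;
    import java.nio.ByteBuffer;

    public class DecimalRoundTripSketch {
      public static void main(String[] args) {
        BigDecimal original = new BigDecimal("123.4500");

        // Forward direction, as in the diff above:
        // DecimalUtils.getDecimal(ByteBuffer.wrap(...), (short) scale)
        ByteBuffer unscaled =
            ByteBuffer.wrap(original.unscaledValue().toByteArray());
        short scale = (short) original.scale();

        // Rebuild: two's-complement big-endian bytes plus the scale.
        BigDecimal rebuilt =
            new BigDecimal(new BigInteger(unscaled.array()), scale);
        System.out.println(original.equals(rebuilt));  // prints true
      }
    }
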
[10/50] [abbrv] hive git commit: HIVE-19171 : Persist runtime statistics in metastore (Zoltan Haindrich via Ashutosh Chauhan)

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/b3e2d8a0/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
--
diff --git a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h 
b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
index 802d8e3..da66951 100644
--- a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
+++ b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
@@ -224,6 +224,8 @@ class ThriftHiveMetastoreIf : virtual public  
::facebook::fb303::FacebookService
   virtual void get_serde(SerDeInfo& _return, const GetSerdeRequest& rqst) = 0;
   virtual void get_lock_materialization_rebuild(LockResponse& _return, const 
std::string& dbName, const std::string& tableName, const int64_t txnId) = 0;
   virtual bool heartbeat_lock_materialization_rebuild(const std::string& 
dbName, const std::string& tableName, const int64_t txnId) = 0;
+  virtual void add_runtime_stats(const RuntimeStat& stat) = 0;
+  virtual void get_runtime_stats(std::vector<RuntimeStat> & _return, const 
GetRuntimeStatsRequest& rqst) = 0;
 };
 
 class ThriftHiveMetastoreIfFactory : virtual public  
::facebook::fb303::FacebookServiceIfFactory {
@@ -887,6 +889,12 @@ class ThriftHiveMetastoreNull : virtual public 
ThriftHiveMetastoreIf , virtual p
 bool _return = false;
 return _return;
   }
+  void add_runtime_stats(const RuntimeStat& /* stat */) {
+return;
+  }
+  void get_runtime_stats(std::vector<RuntimeStat> & /* _return */, const 
GetRuntimeStatsRequest& /* rqst */) {
+return;
+  }
 };
 
 typedef struct _ThriftHiveMetastore_getMetaConf_args__isset {
@@ -25644,6 +25652,222 @@ class 
ThriftHiveMetastore_heartbeat_lock_materialization_rebuild_presult {
 
 };
 
+typedef struct _ThriftHiveMetastore_add_runtime_stats_args__isset {
+  _ThriftHiveMetastore_add_runtime_stats_args__isset() : stat(false) {}
+  bool stat :1;
+} _ThriftHiveMetastore_add_runtime_stats_args__isset;
+
+class ThriftHiveMetastore_add_runtime_stats_args {
+ public:
+
+  ThriftHiveMetastore_add_runtime_stats_args(const 
ThriftHiveMetastore_add_runtime_stats_args&);
+  ThriftHiveMetastore_add_runtime_stats_args& operator=(const 
ThriftHiveMetastore_add_runtime_stats_args&);
+  ThriftHiveMetastore_add_runtime_stats_args() {
+  }
+
+  virtual ~ThriftHiveMetastore_add_runtime_stats_args() throw();
+  RuntimeStat stat;
+
+  _ThriftHiveMetastore_add_runtime_stats_args__isset __isset;
+
+  void __set_stat(const RuntimeStat& val);
+
+  bool operator == (const ThriftHiveMetastore_add_runtime_stats_args & rhs) 
const
+  {
+if (!(stat == rhs.stat))
+  return false;
+return true;
+  }
+  bool operator != (const ThriftHiveMetastore_add_runtime_stats_args &rhs) const {
+return !(*this == rhs);
+  }
+
+  bool operator < (const ThriftHiveMetastore_add_runtime_stats_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class ThriftHiveMetastore_add_runtime_stats_pargs {
+ public:
+
+
+  virtual ~ThriftHiveMetastore_add_runtime_stats_pargs() throw();
+  const RuntimeStat* stat;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _ThriftHiveMetastore_add_runtime_stats_result__isset {
+  _ThriftHiveMetastore_add_runtime_stats_result__isset() : o1(false) {}
+  bool o1 :1;
+} _ThriftHiveMetastore_add_runtime_stats_result__isset;
+
+class ThriftHiveMetastore_add_runtime_stats_result {
+ public:
+
+  ThriftHiveMetastore_add_runtime_stats_result(const 
ThriftHiveMetastore_add_runtime_stats_result&);
+  ThriftHiveMetastore_add_runtime_stats_result& operator=(const 
ThriftHiveMetastore_add_runtime_stats_result&);
+  ThriftHiveMetastore_add_runtime_stats_result() {
+  }
+
+  virtual ~ThriftHiveMetastore_add_runtime_stats_result() throw();
+  MetaException o1;
+
+  _ThriftHiveMetastore_add_runtime_stats_result__isset __isset;
+
+  void __set_o1(const MetaException& val);
+
+  bool operator == (const ThriftHiveMetastore_add_runtime_stats_result & rhs) 
const
+  {
+if (!(o1 == rhs.o1))
+  return false;
+return true;
+  }
+  bool operator != (const ThriftHiveMetastore_add_runtime_stats_result &rhs) const {
+return !(*this == rhs);
+  }
+
+  bool operator < (const ThriftHiveMetastore_add_runtime_stats_result & ) 
const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _ThriftHiveMetastore_add_runtime_stats_presult__isset {
+  _ThriftHiveMetastore_add_runtime_stats_presult__isset() : o1(false) {}
+  bool o1 :1;
+} _ThriftHiveMetastore_add_runtime_stats_presult__isset;
+
+class ThriftHiveMetastore_add_runtime_stats_presult {
+ public:
+
+
+  virtual ~ThriftHiveMetastore_add_runtime_stats_presult() throw();
+  MetaException o1;
+
+  

[38/50] [abbrv] hive git commit: HIVE-18423: Support pushing computation from the optimizer for JDBC storage handler tables (Jonathan Doron, reviewed by Jesus Camacho Rodriguez)

2018-04-26 Thread omalley
HIVE-18423: Support pushing computation from the optimizer for JDBC storage 
handler tables (Jonathan Doron, reviewed by Jesus Camacho Rodriguez)

Close apache/hive#288


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/10699bf1
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/10699bf1
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/10699bf1

Branch: refs/heads/storage-branch-2.6
Commit: 10699bf1498b677a852c0faa1279d3c904151b73
Parents: 391ff7e
Author: Jonathan Doron 
Authored: Wed Apr 25 07:17:56 2018 -0700
Committer: Jesus Camacho Rodriguez 
Committed: Wed Apr 25 07:19:24 2018 -0700

--
 .../org/apache/hadoop/hive/conf/Constants.java  |   6 +
 .../hive/storage/jdbc/JdbcInputFormat.java  |   2 +-
 .../hive/storage/jdbc/JdbcRecordReader.java |   6 +-
 .../org/apache/hive/storage/jdbc/JdbcSerDe.java |  37 ++--
 .../hive/storage/jdbc/JdbcStorageHandler.java   |   6 +
 .../hive/storage/jdbc/conf/DatabaseType.java|   3 +-
 .../storage/jdbc/conf/JdbcStorageConfig.java|   3 +-
 .../jdbc/conf/JdbcStorageConfigManager.java |  13 +-
 .../hive/storage/jdbc/dao/DatabaseAccessor.java |   2 +-
 .../jdbc/dao/DatabaseAccessorFactory.java   |   3 +
 .../jdbc/dao/GenericJdbcDatabaseAccessor.java   |  74 ++-
 .../jdbc/dao/JethroDatabaseAccessor.java|  50 +
 .../org/apache/hadoop/hive/ql/exec/DDLTask.java |   1 +
 .../metadata/HiveMaterializedViewsRegistry.java |  18 +-
 .../functions/HiveSqlCountAggFunction.java  |   6 +
 .../reloperators/jdbc/HiveJdbcConverter.java| 107 ++
 .../reloperators/jdbc/JdbcHiveTableScan.java|  58 ++
 .../calcite/rules/HiveRelColumnsAlignment.java  |   4 +
 .../rules/jdbc/JDBCAbstractSplitFilterRule.java | 208 +++
 .../rules/jdbc/JDBCAggregationPushDownRule.java |  94 +
 .../rules/jdbc/JDBCExtractJoinFilterRule.java   |  67 ++
 .../calcite/rules/jdbc/JDBCFilterJoinRule.java  |  71 +++
 .../rules/jdbc/JDBCFilterPushDownRule.java  |  78 +++
 .../rules/jdbc/JDBCJoinPushDownRule.java|  99 +
 .../rules/jdbc/JDBCProjectPushDownRule.java |  81 
 .../rules/jdbc/JDBCRexCallValidator.java|  90 
 .../rules/jdbc/JDBCSortPushDownRule.java|  84 
 .../rules/jdbc/JDBCUnionPushDownRule.java   |  88 
 .../calcite/rules/jdbc/package-info.java|  22 ++
 .../calcite/translator/ASTBuilder.java  |  33 ++-
 .../calcite/translator/ASTConverter.java|  18 +-
 .../hadoop/hive/ql/parse/CalcitePlanner.java| 170 ++-
 .../test/queries/clientpositive/jdbc_handler.q  |  40 
 .../clientpositive/llap/jdbc_handler.q.out  | 116 +++
 34 files changed, 1675 insertions(+), 83 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/10699bf1/common/src/java/org/apache/hadoop/hive/conf/Constants.java
--
diff --git a/common/src/java/org/apache/hadoop/hive/conf/Constants.java 
b/common/src/java/org/apache/hadoop/hive/conf/Constants.java
index ff9eb59..3d79eec 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/Constants.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/Constants.java
@@ -48,6 +48,7 @@ public class Constants {
   public static final String DRUID_SEGMENT_VERSION = "druid.segment.version";
   public static final String DRUID_JOB_WORKING_DIRECTORY = 
"druid.job.workingDirectory";
 
+
   public static final String KAFKA_TOPIC = "kafka.topic";
   public static final String KAFKA_BOOTSTRAP_SERVERS = 
"kafka.bootstrap.servers";
 
@@ -55,6 +56,11 @@ public class Constants {
   /* Kafka Ingestion state - valid values - START/STOP/RESET */
   public static final String DRUID_KAFKA_INGESTION = "druid.kafka.ingestion";
 
+  public static final String HIVE_JDBC_QUERY = "hive.sql.generated.query";
+  public static final String JDBC_QUERY = "hive.sql.query";
+  public static final String JDBC_HIVE_STORAGE_HANDLER_ID =
+  "org.apache.hive.storage.jdbc.JdbcStorageHandler";
+
   public static final String HIVE_SERVER2_JOB_CREDSTORE_PASSWORD_ENVVAR = 
"HIVE_JOB_CREDSTORE_PASSWORD";
   public static final String HADOOP_CREDENTIAL_PASSWORD_ENVVAR = 
"HADOOP_CREDSTORE_PASSWORD";
   public static final String HADOOP_CREDENTIAL_PROVIDER_PATH_CONFIG = 
"hadoop.security.credential.provider.path";

http://git-wip-us.apache.org/repos/asf/hive/blob/10699bf1/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcInputFormat.java
--
diff --git 
a/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcInputFormat.java 
b/jdbc-handler/src/main/java/org/apache/hive/storage/jdbc/JdbcInputFormat.java
index 

[39/50] [abbrv] hive git commit: HIVE-19274: Add an OpTreeSignature persistence checker hook (Zoltan Haindrich reviewed by Ashutosh Chauhan)

2018-04-26 Thread omalley
HIVE-19274: Add an OpTreeSignature persistence checker hook (Zoltan Haindrich 
reviewed by Ashutosh Chauhan)

Signed-off-by: Zoltan Haindrich 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/da10aabe
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/da10aabe
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/da10aabe

Branch: refs/heads/storage-branch-2.6
Commit: da10aabe56edf8fbb26d89d64bedcc4afa84a305
Parents: 10699bf
Author: Zoltan Haindrich 
Authored: Wed Apr 25 17:41:57 2018 +0200
Committer: Zoltan Haindrich 
Committed: Wed Apr 25 17:41:57 2018 +0200

--
 data/conf/llap/hive-site.xml|  2 +-
 data/conf/perf-reg/tez/hive-site.xml|  2 +-
 .../RuntimeStatsPersistenceCheckerHook.java | 71 
 .../ql/optimizer/signature/SignatureUtils.java  |  2 +-
 .../apache/hadoop/hive/ql/plan/JoinDesc.java|  6 +-
 5 files changed, 77 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/da10aabe/data/conf/llap/hive-site.xml
--
diff --git a/data/conf/llap/hive-site.xml b/data/conf/llap/hive-site.xml
index 990b473..1507a56 100644
--- a/data/conf/llap/hive-site.xml
+++ b/data/conf/llap/hive-site.xml
@@ -163,7 +163,7 @@
 
 
 <property>
   <name>hive.exec.post.hooks</name>
-  <value>org.apache.hadoop.hive.ql.hooks.PostExecutePrinter</value>
+  <value>org.apache.hadoop.hive.ql.hooks.PostExecutePrinter, org.apache.hadoop.hive.ql.hooks.RuntimeStatsPersistenceCheckerHook</value>
   <description>Post Execute Hook for Tests</description>
 </property>
 

http://git-wip-us.apache.org/repos/asf/hive/blob/da10aabe/data/conf/perf-reg/tez/hive-site.xml
--
diff --git a/data/conf/perf-reg/tez/hive-site.xml 
b/data/conf/perf-reg/tez/hive-site.xml
index e11f8f8..78a5481 100644
--- a/data/conf/perf-reg/tez/hive-site.xml
+++ b/data/conf/perf-reg/tez/hive-site.xml
@@ -162,7 +162,7 @@
 
 
 <property>
   <name>hive.exec.post.hooks</name>
-  <value>org.apache.hadoop.hive.ql.hooks.PostExecutePrinter</value>
+  <value>org.apache.hadoop.hive.ql.hooks.PostExecutePrinter, org.apache.hadoop.hive.ql.hooks.RuntimeStatsPersistenceCheckerHook</value>
   <description>Post Execute Hook for Tests</description>
 </property>
 

http://git-wip-us.apache.org/repos/asf/hive/blob/da10aabe/ql/src/java/org/apache/hadoop/hive/ql/hooks/RuntimeStatsPersistenceCheckerHook.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/hooks/RuntimeStatsPersistenceCheckerHook.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/hooks/RuntimeStatsPersistenceCheckerHook.java
new file mode 100644
index 000..b0bdad3
--- /dev/null
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/hooks/RuntimeStatsPersistenceCheckerHook.java
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.hooks;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.hive.ql.optimizer.signature.OpTreeSignature;
+import org.apache.hadoop.hive.ql.optimizer.signature.RuntimeStatsPersister;
+import org.apache.hadoop.hive.ql.plan.mapper.PlanMapper;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * This hook adds a persistence loop-back to ensure that runtime statistics can be used.
+ */
+public class RuntimeStatsPersistenceCheckerHook implements 
ExecuteWithHookContext {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(RuntimeStatsPersistenceCheckerHook.class);
+
+  @Override
+  public void run(HookContext hookContext) throws Exception {
+
+PlanMapper pm = ((PrivateHookContext) 
hookContext).getContext().getPlanMapper();
+
+List sigs = pm.getAll(OpTreeSignature.class);
+
+for (OpTreeSignature sig : sigs) {
+  try {
+OpTreeSignature sig2 = persistenceLoop(sig, OpTreeSignature.class);
+sig.getSig().proveEquals(sig2.getSig());
+  } catch (Exception e) {
+throw new RuntimeException("while checking the signature of: " + 
sig.getSig(), e);
+  }
+}
+  
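The digest truncates the hook here, mid-class. A plausible shape for the missing persistenceLoop helper, assuming RuntimeStatsPersister exposes symmetric encode/decode methods -- this is a reconstruction for context, not the committed code:

    // Hypothetical reconstruction: serialize the signature and read it back,
    // so proveEquals() above exercises a full persistence round trip.
    private <T> T persistenceLoop(T input, Class<T> clazz) throws IOException {
      String stored = RuntimeStatsPersister.INSTANCE.encode(input); // assumed API
      return RuntimeStatsPersister.INSTANCE.decode(stored, clazz);  // assumed API
    }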

[11/50] [abbrv] hive git commit: HIVE-19171 : Persist runtime statistics in metastore (Zoltan Haindrich via Ashutosh Chauhan)

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/b3e2d8a0/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp 
b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
index dfa13a0..4787703 100644
--- a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
+++ b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
@@ -2107,14 +2107,14 @@ uint32_t 
ThriftHiveMetastore_get_databases_result::read(::apache::thrift::protoc
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 this->success.clear();
-uint32_t _size1187;
-::apache::thrift::protocol::TType _etype1190;
-xfer += iprot->readListBegin(_etype1190, _size1187);
-this->success.resize(_size1187);
-uint32_t _i1191;
-for (_i1191 = 0; _i1191 < _size1187; ++_i1191)
+uint32_t _size1191;
+::apache::thrift::protocol::TType _etype1194;
+xfer += iprot->readListBegin(_etype1194, _size1191);
+this->success.resize(_size1191);
+uint32_t _i1195;
+for (_i1195 = 0; _i1195 < _size1191; ++_i1195)
 {
-  xfer += iprot->readString(this->success[_i1191]);
+  xfer += iprot->readString(this->success[_i1195]);
 }
 xfer += iprot->readListEnd();
   }
@@ -2153,10 +2153,10 @@ uint32_t 
ThriftHiveMetastore_get_databases_result::write(::apache::thrift::proto
 xfer += oprot->writeFieldBegin("success", 
::apache::thrift::protocol::T_LIST, 0);
 {
  xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-  std::vector<std::string> ::const_iterator _iter1192;
-  for (_iter1192 = this->success.begin(); _iter1192 != this->success.end(); ++_iter1192)
+  std::vector<std::string> ::const_iterator _iter1196;
+  for (_iter1196 = this->success.begin(); _iter1196 != this->success.end(); ++_iter1196)
   {
-xfer += oprot->writeString((*_iter1192));
+xfer += oprot->writeString((*_iter1196));
   }
   xfer += oprot->writeListEnd();
 }
@@ -2201,14 +2201,14 @@ uint32_t 
ThriftHiveMetastore_get_databases_presult::read(::apache::thrift::proto
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 (*(this->success)).clear();
-uint32_t _size1193;
-::apache::thrift::protocol::TType _etype1196;
-xfer += iprot->readListBegin(_etype1196, _size1193);
-(*(this->success)).resize(_size1193);
-uint32_t _i1197;
-for (_i1197 = 0; _i1197 < _size1193; ++_i1197)
+uint32_t _size1197;
+::apache::thrift::protocol::TType _etype1200;
+xfer += iprot->readListBegin(_etype1200, _size1197);
+(*(this->success)).resize(_size1197);
+uint32_t _i1201;
+for (_i1201 = 0; _i1201 < _size1197; ++_i1201)
 {
-  xfer += iprot->readString((*(this->success))[_i1197]);
+  xfer += iprot->readString((*(this->success))[_i1201]);
 }
 xfer += iprot->readListEnd();
   }
@@ -2325,14 +2325,14 @@ uint32_t 
ThriftHiveMetastore_get_all_databases_result::read(::apache::thrift::pr
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 this->success.clear();
-uint32_t _size1198;
-::apache::thrift::protocol::TType _etype1201;
-xfer += iprot->readListBegin(_etype1201, _size1198);
-this->success.resize(_size1198);
-uint32_t _i1202;
-for (_i1202 = 0; _i1202 < _size1198; ++_i1202)
+uint32_t _size1202;
+::apache::thrift::protocol::TType _etype1205;
+xfer += iprot->readListBegin(_etype1205, _size1202);
+this->success.resize(_size1202);
+uint32_t _i1206;
+for (_i1206 = 0; _i1206 < _size1202; ++_i1206)
 {
-  xfer += iprot->readString(this->success[_i1202]);
+  xfer += iprot->readString(this->success[_i1206]);
 }
 xfer += iprot->readListEnd();
   }
@@ -2371,10 +2371,10 @@ uint32_t 
ThriftHiveMetastore_get_all_databases_result::write(::apache::thrift::p
 xfer += oprot->writeFieldBegin("success", 
::apache::thrift::protocol::T_LIST, 0);
 {
  xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-  std::vector<std::string> ::const_iterator _iter1203;
-  for (_iter1203 = this->success.begin(); _iter1203 != this->success.end(); ++_iter1203)
+  std::vector<std::string> ::const_iterator _iter1207;
+  for (_iter1207 = this->success.begin(); _iter1207 != this->success.end(); ++_iter1207)
   {
-xfer += 

[37/50] [abbrv] hive git commit: HIVE-18423: Support pushing computation from the optimizer for JDBC storage handler tables (Jonathan Doron, reviewed by Jesus Camacho Rodriguez)

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/10699bf1/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
index 062df06..0bc9d23 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
@@ -43,6 +43,7 @@ import java.util.Set;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import javax.sql.DataSource;
 import com.google.common.collect.Iterables;
 import org.antlr.runtime.ClassicToken;
 import org.antlr.runtime.CommonToken;
@@ -52,6 +53,9 @@ import org.antlr.runtime.tree.TreeVisitorAction;
 import org.apache.calcite.adapter.druid.DruidQuery;
 import org.apache.calcite.adapter.druid.DruidSchema;
 import org.apache.calcite.adapter.druid.DruidTable;
+import org.apache.calcite.adapter.jdbc.JdbcConvention;
+import org.apache.calcite.adapter.jdbc.JdbcSchema;
+import org.apache.calcite.adapter.jdbc.JdbcTable;
 import org.apache.calcite.config.CalciteConnectionConfig;
 import org.apache.calcite.config.CalciteConnectionConfigImpl;
 import org.apache.calcite.config.CalciteConnectionProperty;
@@ -106,6 +110,8 @@ import org.apache.calcite.rex.RexWindowBound;
 import org.apache.calcite.schema.SchemaPlus;
 import org.apache.calcite.sql.SqlAggFunction;
 import org.apache.calcite.sql.SqlCall;
+import org.apache.calcite.sql.SqlDialect;
+import org.apache.calcite.sql.SqlDialectFactoryImpl;
 import org.apache.calcite.sql.SqlExplainLevel;
 import org.apache.calcite.sql.SqlKind;
 import org.apache.calcite.sql.SqlLiteral;
@@ -171,6 +177,7 @@ import 
org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveExcept;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveFilter;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveGroupingID;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveIntersect;
+import 
org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.jdbc.HiveJdbcConverter;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveJoin;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveRelNode;
@@ -179,6 +186,7 @@ import 
org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveSortLimit;
 import 
org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveTableFunctionScan;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveTableScan;
 import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveUnion;
+import 
org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.jdbc.JdbcHiveTableScan;
 import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveAggregateJoinTransposeRule;
 import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveAggregateProjectMergeRule;
 import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveAggregatePullUpConstantsRule;
@@ -226,10 +234,22 @@ import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveSubQueryRemoveRule;
 import org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveUnionMergeRule;
 import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveUnionPullUpConstantsRule;
 import org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveWindowingFixRule;
+
+import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.jdbc.JDBCAggregationPushDownRule;
+import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.jdbc.JDBCFilterJoinRule;
+import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.jdbc.JDBCFilterPushDownRule;
+import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.jdbc.JDBCExtractJoinFilterRule;
+import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.jdbc.JDBCJoinPushDownRule;
+import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.jdbc.JDBCProjectPushDownRule;
+import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.jdbc.JDBCSortPushDownRule;
+import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.jdbc.JDBCUnionPushDownRule;
+import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.jdbc.JDBCAbstractSplitFilterRule;
+
 import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.views.HiveAggregateIncrementalRewritingRule;
 import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.views.HiveMaterializedViewRule;
 import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.views.HiveNoAggregateIncrementalRewritingRule;
 import 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.views.MaterializedViewRewritingRelVisitor;
+
 import org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTBuilder;
 import org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter;
 import org.apache.hadoop.hive.ql.optimizer.calcite.translator.HiveOpConverter;
@@ -1830,6 +1850,17 @@ public class 

[29/50] [abbrv] hive git commit: HIVE-19137 - orcfiledump doesn't print hive.acid.version value (Igor Kryvenko via Eugene Koifman)

2018-04-26 Thread omalley
HIVE-19137 - orcfiledump doesn't print hive.acid.version value (Igor Kryvenko 
via Eugene Koifman)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/6934bb96
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/6934bb96
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/6934bb96

Branch: refs/heads/storage-branch-2.6
Commit: 6934bb96704045d1f35213420cc1cfc984080797
Parents: 56c3a95
Author: Igor Kryvenko 
Authored: Tue Apr 24 11:22:46 2018 -0700
Committer: Eugene Koifman 
Committed: Tue Apr 24 11:22:46 2018 -0700

--
 .../org/apache/hadoop/hive/ql/io/AcidUtils.java | 28 +++-
 .../hive/ql/io/orc/TestInputOutputFormat.java   |  4 +--
 2 files changed, 17 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/6934bb96/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java 
b/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
index 212e0a6..fd84978 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
@@ -27,8 +27,6 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathFilter;
 import org.apache.hadoop.hive.common.HiveStatsUtils;
-import org.apache.hadoop.hive.common.JavaUtils;
-import org.apache.hadoop.hive.common.ValidTxnList;
 import org.apache.hadoop.hive.common.ValidTxnWriteIdList;
 import org.apache.hadoop.hive.common.ValidWriteIdList;
 import org.apache.hadoop.hive.conf.HiveConf;
@@ -43,15 +41,12 @@ import org.apache.hadoop.hive.ql.io.orc.OrcInputFormat;
 import org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater;
 import org.apache.hadoop.hive.ql.io.orc.Reader;
 import org.apache.hadoop.hive.ql.io.orc.Writer;
-import org.apache.hadoop.hive.ql.lockmgr.HiveTxnManager;
-import org.apache.hadoop.hive.ql.lockmgr.LockException;
 import org.apache.hadoop.hive.ql.metadata.Table;
 import org.apache.hadoop.hive.ql.plan.CreateTableDesc;
 import org.apache.hadoop.hive.ql.plan.TableScanDesc;
 import org.apache.hadoop.hive.shims.HadoopShims;
 import org.apache.hadoop.hive.shims.HadoopShims.HdfsFileStatusWithId;
 import org.apache.hadoop.hive.shims.ShimLoader;
-import org.apache.hadoop.mapred.JobConf;
 import org.apache.hive.common.util.Ref;
 import org.apache.orc.FileFormatException;
 import org.apache.orc.impl.OrcAcidUtils;
@@ -61,7 +56,7 @@ import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.io.Serializable;
-import java.nio.ByteBuffer;
+import java.nio.charset.Charset;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.HashMap;
@@ -1705,6 +1700,7 @@ public class AcidUtils {
   public static final class OrcAcidVersion {
 private static final String ACID_VERSION_KEY = "hive.acid.version";
 private static final String ACID_FORMAT = "_orc_acid_version";
+private static final Charset UTF8 = Charset.forName("UTF-8");
 public static final int ORC_ACID_VERSION_DEFAULT = 0;
 /**
  * 2 is the version of Acid released in Hive 3.0.
@@ -1716,9 +1712,7 @@ public class AcidUtils {
  */
 public static void setAcidVersionInDataFile(Writer writer) {
   //so that we know which version wrote the file
-  ByteBuffer bf = ByteBuffer.allocate(4).putInt(ORC_ACID_VERSION);
-  bf.rewind(); //don't ask - some ByteBuffer weirdness; w/o this, an empty buffer is written
-  writer.addUserMetadata(ACID_VERSION_KEY, bf);
+  writer.addUserMetadata(ACID_VERSION_KEY, 
UTF8.encode(String.valueOf(ORC_ACID_VERSION)));
 }
 /**
  * This is smart enough to handle streaming ingest where there could be a
@@ -1735,8 +1729,10 @@ public class AcidUtils {
   .filesystem(fs)
   //make sure to check for side file in case streaming ingest died
   .maxLength(getLogicalLength(fs, fileStatus)));
-  if(orcReader.hasMetadataValue(ACID_VERSION_KEY)) {
-return orcReader.getMetadataValue(ACID_VERSION_KEY).getInt();
+  if (orcReader.hasMetadataValue(ACID_VERSION_KEY)) {
+char[] versionChar = 
UTF8.decode(orcReader.getMetadataValue(ACID_VERSION_KEY)).array();
+String version = new String(versionChar);
+return Integer.valueOf(version);
   }
   return ORC_ACID_VERSION_DEFAULT;
 }
@@ -1748,7 +1744,7 @@ public class AcidUtils {
   Path formatFile = getVersionFilePath(deltaOrBaseDir);
   if(!fs.exists(formatFile)) {
 try (FSDataOutputStream strm = fs.create(formatFile, false)) {
-  strm.writeInt(ORC_ACID_VERSION);
+  
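Net effect of the hunks above: the ACID version is now stored as a UTF-8 string rather than a raw 4-byte int, on both the ORC metadata path and the side-file path. A self-contained round-trip sketch of the new encoding, using only the JDK:

    import java.nio.ByteBuffer;
    import java.nio.charset.Charset;

    public class AcidVersionRoundTripSketch {
      public static void main(String[] args) {
        Charset utf8 = Charset.forName("UTF-8");
        ByteBuffer encoded = utf8.encode(String.valueOf(2));       // write side
        String decoded = new String(utf8.decode(encoded).array()); // read side
        System.out.println(Integer.valueOf(decoded));              // prints 2
      }
    }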

[49/50] [abbrv] hive git commit: HIVE-19233 : Add utility for acid 1.0 to 2.0 migration (Eugene Koifman via Ashutosh Chauhan)

2018-04-26 Thread omalley
HIVE-19233 : Add utility for acid 1.0 to 2.0 migration (Eugene Koifman via 
Ashutosh Chauhan)

Signed-off-by: Ashutosh Chauhan 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/087ef7b6
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/087ef7b6
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/087ef7b6

Branch: refs/heads/storage-branch-2.6
Commit: 087ef7b638f8a2d803287c2a11d29df5c3393f80
Parents: f30efbe
Author: Eugene Koifman 
Authored: Wed Apr 25 21:31:39 2018 -0700
Committer: Ashutosh Chauhan 
Committed: Wed Apr 25 21:31:39 2018 -0700

--
 pom.xml |   2 +-
 .../org/apache/hadoop/hive/ql/TestTxnExIm.java  |  37 ++
 standalone-metastore/pom.xml|  16 +
 .../hive/metastore/tools/HiveMetaTool.java  | 392 ++-
 4 files changed, 445 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/087ef7b6/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 21ce5cb..50f416e 100644
--- a/pom.xml
+++ b/pom.xml
@@ -106,7 +106,7 @@
 2.4
 2.4
 3.1.0
-2.20.1
+2.21.0
 2.4
 2.8
 2.9

http://git-wip-us.apache.org/repos/asf/hive/blob/087ef7b6/ql/src/test/org/apache/hadoop/hive/ql/TestTxnExIm.java
--
diff --git a/ql/src/test/org/apache/hadoop/hive/ql/TestTxnExIm.java 
b/ql/src/test/org/apache/hadoop/hive/ql/TestTxnExIm.java
index 0e53697..6daac1b 100644
--- a/ql/src/test/org/apache/hadoop/hive/ql/TestTxnExIm.java
+++ b/ql/src/test/org/apache/hadoop/hive/ql/TestTxnExIm.java
@@ -19,6 +19,7 @@ package org.apache.hadoop.hive.ql;
 
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.metastore.tools.HiveMetaTool;
 import org.junit.Assert;
 import org.junit.Ignore;
 import org.junit.Test;
@@ -535,4 +536,40 @@ 
target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1521148657811/
 TestTxnCommands2.stringifyValues(data), rs);
 
   }
+  @Test
+  public void testUpgrade() throws Exception {
+int[][] data = {{1,2}, {3, 4}, {5, 6}};
+int[][] dataPart = {{1, 2, 10}, {3, 4, 11}, {5, 6, 12}};
+runStatementOnDriver("drop table if exists TAcid");
+runStatementOnDriver("drop table if exists TAcidPart");
+runStatementOnDriver("drop table if exists TFlat");
+runStatementOnDriver("drop table if exists TFlatText");
+runStatementOnDriver("create table TAcid (a int, b int) stored as orc");
+runStatementOnDriver("create table TAcidPart (a int, b int) partitioned by 
(p int) stored" +
+" as orc");
+runStatementOnDriver("create table TFlat (a int, b int) stored as orc 
tblproperties('transactional'='false')");
+runStatementOnDriver("create table TFlatText (a int, b int) stored as 
textfile tblproperties('transactional'='false')");
+
+
+//this needs major compaction
+runStatementOnDriver("insert into TAcid" + 
TestTxnCommands2.makeValuesClause(data));
+runStatementOnDriver("update TAcid set a = 1 where b = 2");
+
+//this table needs to be converted to Acid
+runStatementOnDriver("insert into TFlat" + 
TestTxnCommands2.makeValuesClause(data));
+
+//this table needs to be converted to MM
+runStatementOnDriver("insert into TFlatText" + 
TestTxnCommands2.makeValuesClause(data));
+
+//p=10 needs major compaction
+runStatementOnDriver("insert into TAcidPart" + 
TestTxnCommands2.makeValuesClause(dataPart));
+runStatementOnDriver("update TAcidPart set a = 1 where b = 2 and p = 10");
+
+//todo: add partitioned table that needs conversion to MM/Acid
+
+//todo: rename files case
+String[] args = new String[1];
+args[0] = new String("-prepareAcidUpgrade");
+HiveMetaTool.main(args);
+  }
 }
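Outside the test harness, the same pass would presumably be driven through the tool's entry point directly; a sketch mirroring the invocation above (only the -prepareAcidUpgrade flag is exercised here):

    import org.apache.hadoop.hive.metastore.tools.HiveMetaTool;

    public class PrepareAcidUpgradeSketch {
      public static void main(String[] args) {
        // Same call the test makes, without the redundant new String(...).
        HiveMetaTool.main(new String[] {"-prepareAcidUpgrade"});
      }
    }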

http://git-wip-us.apache.org/repos/asf/hive/blob/087ef7b6/standalone-metastore/pom.xml
--
diff --git a/standalone-metastore/pom.xml b/standalone-metastore/pom.xml
index c340fe2..10b1bfa 100644
--- a/standalone-metastore/pom.xml
+++ b/standalone-metastore/pom.xml
@@ -80,6 +80,7 @@
 0.9.3
 2.8.2
 1.10.19
+    <orc.version>1.4.3</orc.version>
 2.5.0
 1.3.0
 2.6.0-SNAPSHOT
@@ -93,6 +94,21 @@
 
   
 
+    <dependency>
+      <groupId>org.apache.orc</groupId>
+      <artifactId>orc-core</artifactId>
+      <version>${orc.version}</version>
+      <exclusions>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-common</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>hive-storage-api</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
       <groupId>com.fasterxml.jackson.core</groupId>
       <artifactId>jackson-databind</artifactId>
       <version>${jackson.version}</version>


[27/50] [abbrv] hive git commit: HIVE-19171: Persist runtime statistics in metastore (Zoltan Haindrich reviewed by Ashutosh Chauhan)

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/56c3a957/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp 
b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
index dfa13a0..4787703 100644
--- a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
+++ b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
@@ -2107,14 +2107,14 @@ uint32_t 
ThriftHiveMetastore_get_databases_result::read(::apache::thrift::protoc
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 this->success.clear();
-uint32_t _size1187;
-::apache::thrift::protocol::TType _etype1190;
-xfer += iprot->readListBegin(_etype1190, _size1187);
-this->success.resize(_size1187);
-uint32_t _i1191;
-for (_i1191 = 0; _i1191 < _size1187; ++_i1191)
+uint32_t _size1191;
+::apache::thrift::protocol::TType _etype1194;
+xfer += iprot->readListBegin(_etype1194, _size1191);
+this->success.resize(_size1191);
+uint32_t _i1195;
+for (_i1195 = 0; _i1195 < _size1191; ++_i1195)
 {
-  xfer += iprot->readString(this->success[_i1191]);
+  xfer += iprot->readString(this->success[_i1195]);
 }
 xfer += iprot->readListEnd();
   }
@@ -2153,10 +2153,10 @@ uint32_t 
ThriftHiveMetastore_get_databases_result::write(::apache::thrift::proto
 xfer += oprot->writeFieldBegin("success", 
::apache::thrift::protocol::T_LIST, 0);
 {
  xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-  std::vector<std::string> ::const_iterator _iter1192;
-  for (_iter1192 = this->success.begin(); _iter1192 != this->success.end(); ++_iter1192)
+  std::vector<std::string> ::const_iterator _iter1196;
+  for (_iter1196 = this->success.begin(); _iter1196 != this->success.end(); ++_iter1196)
   {
-xfer += oprot->writeString((*_iter1192));
+xfer += oprot->writeString((*_iter1196));
   }
   xfer += oprot->writeListEnd();
 }
@@ -2201,14 +2201,14 @@ uint32_t 
ThriftHiveMetastore_get_databases_presult::read(::apache::thrift::proto
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 (*(this->success)).clear();
-uint32_t _size1193;
-::apache::thrift::protocol::TType _etype1196;
-xfer += iprot->readListBegin(_etype1196, _size1193);
-(*(this->success)).resize(_size1193);
-uint32_t _i1197;
-for (_i1197 = 0; _i1197 < _size1193; ++_i1197)
+uint32_t _size1197;
+::apache::thrift::protocol::TType _etype1200;
+xfer += iprot->readListBegin(_etype1200, _size1197);
+(*(this->success)).resize(_size1197);
+uint32_t _i1201;
+for (_i1201 = 0; _i1201 < _size1197; ++_i1201)
 {
-  xfer += iprot->readString((*(this->success))[_i1197]);
+  xfer += iprot->readString((*(this->success))[_i1201]);
 }
 xfer += iprot->readListEnd();
   }
@@ -2325,14 +2325,14 @@ uint32_t 
ThriftHiveMetastore_get_all_databases_result::read(::apache::thrift::pr
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 this->success.clear();
-uint32_t _size1198;
-::apache::thrift::protocol::TType _etype1201;
-xfer += iprot->readListBegin(_etype1201, _size1198);
-this->success.resize(_size1198);
-uint32_t _i1202;
-for (_i1202 = 0; _i1202 < _size1198; ++_i1202)
+uint32_t _size1202;
+::apache::thrift::protocol::TType _etype1205;
+xfer += iprot->readListBegin(_etype1205, _size1202);
+this->success.resize(_size1202);
+uint32_t _i1206;
+for (_i1206 = 0; _i1206 < _size1202; ++_i1206)
 {
-  xfer += iprot->readString(this->success[_i1202]);
+  xfer += iprot->readString(this->success[_i1206]);
 }
 xfer += iprot->readListEnd();
   }
@@ -2371,10 +2371,10 @@ uint32_t 
ThriftHiveMetastore_get_all_databases_result::write(::apache::thrift::p
 xfer += oprot->writeFieldBegin("success", 
::apache::thrift::protocol::T_LIST, 0);
 {
  xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-  std::vector<std::string> ::const_iterator _iter1203;
-  for (_iter1203 = this->success.begin(); _iter1203 != this->success.end(); ++_iter1203)
+  std::vector<std::string> ::const_iterator _iter1207;
+  for (_iter1207 = this->success.begin(); _iter1207 != this->success.end(); ++_iter1207)
   {
-xfer += 

[20/50] [abbrv] hive git commit: HIVE-19265 : Potential NPE and hiding actual exception in Hive#copyFiles (Igor Kryvenko via Ashutosh Chauhan)

2018-04-26 Thread omalley
HIVE-19265 : Potential NPE and hiding actual exception in Hive#copyFiles (Igor 
Kryvenko via Ashutosh Chauhan)

Signed-off-by: Ashutosh Chauhan 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/cce0e377
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/cce0e377
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/cce0e377

Branch: refs/heads/storage-branch-2.6
Commit: cce0e3777d178e68f38a0c9335d44a12fff42a6b
Parents: f019950
Author: Igor Kryvenko 
Authored: Mon Apr 23 18:52:27 2018 -0700
Committer: Ashutosh Chauhan 
Committed: Mon Apr 23 18:52:27 2018 -0700

--
 ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/cce0e377/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 
b/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
index 69d42e3..4661881 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
@@ -3291,7 +3291,9 @@ private void constructOneLBLocationMap(FileStatus fSta,
 try {
   files = srcFs.listStatus(src.getPath(), 
FileUtils.HIDDEN_FILES_PATH_FILTER);
 } catch (IOException e) {
-  pool.shutdownNow();
+  if (null != pool) {
+pool.shutdownNow();
+  }
   throw new HiveException(e);
 }
   } else {



[25/50] [abbrv] hive git commit: HIVE-19171: Persist runtime statistics in metastore (Zoltan Haindrich reviewed by Ashutosh Chauhan)

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/56c3a957/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 
b/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
index a354f27..e2f0e82 100644
--- 
a/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
+++ 
b/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
@@ -446,6 +446,10 @@ import org.slf4j.LoggerFactory;
 
 public boolean heartbeat_lock_materialization_rebuild(String dbName, 
String tableName, long txnId) throws org.apache.thrift.TException;
 
+public void add_runtime_stats(RuntimeStat stat) throws MetaException, 
org.apache.thrift.TException;
+
+    public List<RuntimeStat> get_runtime_stats(GetRuntimeStatsRequest rqst) throws MetaException, org.apache.thrift.TException;
+
   }
 
   @org.apache.hadoop.classification.InterfaceAudience.Public 
@org.apache.hadoop.classification.InterfaceStability.Stable public interface 
AsyncIface extends com.facebook.fb303.FacebookService .AsyncIface {
@@ -854,6 +858,10 @@ import org.slf4j.LoggerFactory;
 
 public void heartbeat_lock_materialization_rebuild(String dbName, String 
tableName, long txnId, org.apache.thrift.async.AsyncMethodCallback 
resultHandler) throws org.apache.thrift.TException;
 
+public void add_runtime_stats(RuntimeStat stat, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
+
+public void get_runtime_stats(GetRuntimeStatsRequest rqst, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
+
   }
 
   @org.apache.hadoop.classification.InterfaceAudience.Public 
@org.apache.hadoop.classification.InterfaceStability.Stable public static class 
Client extends com.facebook.fb303.FacebookService.Client implements Iface {
@@ -6692,6 +6700,55 @@ import org.slf4j.LoggerFactory;
   throw new 
org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT,
 "heartbeat_lock_materialization_rebuild failed: unknown result");
 }
 
+public void add_runtime_stats(RuntimeStat stat) throws MetaException, 
org.apache.thrift.TException
+{
+  send_add_runtime_stats(stat);
+  recv_add_runtime_stats();
+}
+
+public void send_add_runtime_stats(RuntimeStat stat) throws 
org.apache.thrift.TException
+{
+  add_runtime_stats_args args = new add_runtime_stats_args();
+  args.setStat(stat);
+  sendBase("add_runtime_stats", args);
+}
+
+public void recv_add_runtime_stats() throws MetaException, 
org.apache.thrift.TException
+{
+  add_runtime_stats_result result = new add_runtime_stats_result();
+  receiveBase(result, "add_runtime_stats");
+  if (result.o1 != null) {
+throw result.o1;
+  }
+  return;
+}
+
+    public List<RuntimeStat> get_runtime_stats(GetRuntimeStatsRequest rqst) throws MetaException, org.apache.thrift.TException
+{
+  send_get_runtime_stats(rqst);
+  return recv_get_runtime_stats();
+}
+
+public void send_get_runtime_stats(GetRuntimeStatsRequest rqst) throws 
org.apache.thrift.TException
+{
+  get_runtime_stats_args args = new get_runtime_stats_args();
+  args.setRqst(rqst);
+  sendBase("get_runtime_stats", args);
+}
+
+    public List<RuntimeStat> recv_get_runtime_stats() throws MetaException, org.apache.thrift.TException
+{
+  get_runtime_stats_result result = new get_runtime_stats_result();
+  receiveBase(result, "get_runtime_stats");
+  if (result.isSetSuccess()) {
+return result.success;
+  }
+  if (result.o1 != null) {
+throw result.o1;
+  }
+  throw new 
org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT,
 "get_runtime_stats failed: unknown result");
+}
+
   }
   @org.apache.hadoop.classification.InterfaceAudience.Public 
@org.apache.hadoop.classification.InterfaceStability.Stable public static class 
AsyncClient extends com.facebook.fb303.FacebookService.AsyncClient implements 
AsyncIface {
 @org.apache.hadoop.classification.InterfaceAudience.Public 
@org.apache.hadoop.classification.InterfaceStability.Stable public static class 
Factory implements org.apache.thrift.async.TAsyncClientFactory<AsyncClient> {
@@ -13660,6 +13717,70 @@ import org.slf4j.LoggerFactory;
   }
 }
 
+public void add_runtime_stats(RuntimeStat stat, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException {
+  checkReady();
+  add_runtime_stats_call method_call = new add_runtime_stats_call(stat, 
resultHandler, this, 

[23/50] [abbrv] hive git commit: HIVE-19171: Persist runtime statistics in metastore (Zoltan Haindrich reviewed by Ashutosh Chauhan)

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/56c3a957/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
--
diff --git 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
index 125d5a7..184ecb6 100644
--- 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
+++ 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
@@ -55,6 +55,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantLock;
 import java.util.regex.Pattern;
+import java.util.stream.Collectors;
 
 import javax.jdo.JDOCanRetryException;
 import javax.jdo.JDODataStoreException;
@@ -83,7 +84,6 @@ import org.apache.hadoop.hive.common.StatsSetupConst;
 import 
org.apache.hadoop.hive.metastore.MetaStoreDirectSql.SqlFilterForPushdown;
 import org.apache.hadoop.hive.metastore.api.AggrStats;
 import org.apache.hadoop.hive.metastore.api.AlreadyExistsException;
-import org.apache.hadoop.hive.metastore.api.BasicTxnInfo;
 import org.apache.hadoop.hive.metastore.api.Catalog;
 import org.apache.hadoop.hive.metastore.api.ColumnStatistics;
 import org.apache.hadoop.hive.metastore.api.ColumnStatisticsDesc;
@@ -124,6 +124,7 @@ import org.apache.hadoop.hive.metastore.api.ResourceType;
 import org.apache.hadoop.hive.metastore.api.ResourceUri;
 import org.apache.hadoop.hive.metastore.api.Role;
 import org.apache.hadoop.hive.metastore.api.RolePrincipalGrant;
+import org.apache.hadoop.hive.metastore.api.RuntimeStat;
 import org.apache.hadoop.hive.metastore.api.SQLCheckConstraint;
 import org.apache.hadoop.hive.metastore.api.SQLDefaultConstraint;
 import org.apache.hadoop.hive.metastore.api.SQLForeignKey;
@@ -186,6 +187,7 @@ import 
org.apache.hadoop.hive.metastore.model.MPartitionPrivilege;
 import org.apache.hadoop.hive.metastore.model.MResourceUri;
 import org.apache.hadoop.hive.metastore.model.MRole;
 import org.apache.hadoop.hive.metastore.model.MRoleMap;
+import org.apache.hadoop.hive.metastore.model.MRuntimeStat;
 import org.apache.hadoop.hive.metastore.model.MSchemaVersion;
 import org.apache.hadoop.hive.metastore.model.MSerDeInfo;
 import org.apache.hadoop.hive.metastore.model.MStorageDescriptor;
@@ -210,7 +212,6 @@ import org.apache.hadoop.hive.metastore.utils.FileUtils;
 import org.apache.hadoop.hive.metastore.utils.JavaUtils;
 import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils;
 import org.apache.hadoop.hive.metastore.utils.ObjectPair;
-import org.apache.thrift.TDeserializer;
 import org.apache.thrift.TException;
 import org.datanucleus.AbstractNucleusContext;
 import org.datanucleus.ClassLoaderResolver;
@@ -809,7 +810,9 @@ public class ObjectStore implements RawStore, Configurable {
   pm.makePersistent(mCat);
   committed = commitTransaction();
 } finally {
-  if (!committed) rollbackTransaction();
+  if (!committed) {
+rollbackTransaction();
+  }
 }
   }
 
@@ -832,7 +835,9 @@ public class ObjectStore implements RawStore, Configurable {
   pm.makePersistent(mCat);
   committed = commitTransaction();
 } finally {
-  if (!committed) rollbackTransaction();
+  if (!committed) {
+rollbackTransaction();
+  }
 }
   }
 
@@ -840,7 +845,9 @@ public class ObjectStore implements RawStore, Configurable {
   public Catalog getCatalog(String catalogName) throws NoSuchObjectException, 
MetaException {
 LOG.debug("Fetching catalog " + catalogName);
 MCatalog mCat = getMCatalog(catalogName);
-if (mCat == null) throw new NoSuchObjectException("No catalog " + 
catalogName);
+if (mCat == null) {
+  throw new NoSuchObjectException("No catalog " + catalogName);
+}
 return mCatToCat(mCat);
   }
 
@@ -874,11 +881,15 @@ public class ObjectStore implements RawStore, 
Configurable {
   openTransaction();
   MCatalog mCat = getMCatalog(catalogName);
   pm.retrieve(mCat);
-  if (mCat == null) throw new NoSuchObjectException("No catalog " + 
catalogName);
+  if (mCat == null) {
+throw new NoSuchObjectException("No catalog " + catalogName);
+  }
   pm.deletePersistent(mCat);
   committed = commitTransaction();
 } finally {
-  if (!committed) rollbackTransaction();
+  if (!committed) {
+rollbackTransaction();
+  }
 }
   }
 
@@ -903,14 +914,18 @@ public class ObjectStore implements RawStore, 
Configurable {
   private MCatalog catToMCat(Catalog cat) {
 MCatalog mCat = new MCatalog();
 mCat.setName(normalizeIdentifier(cat.getName()));
-if (cat.isSetDescription()) mCat.setDescription(cat.getDescription());
+if (cat.isSetDescription()) {
+  mCat.setDescription(cat.getDescription());
+}
 

[42/50] [abbrv] hive git commit: HIVE-19280 : Invalid error messages for UPDATE/DELETE on insert-only transactional tables (Steve Yeom, reviewed by Sergey Shelukhin)

2018-04-26 Thread omalley
HIVE-19280 : Invalid error messages for UPDATE/DELETE on insert-only 
transactional tables (Steve Yeom, reviewed by Sergey Shelukhin)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/dd343d5f
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/dd343d5f
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/dd343d5f

Branch: refs/heads/storage-branch-2.6
Commit: dd343d5f0760c8ffe25aabd2fbf4c36d4081da97
Parents: 6607f8f
Author: sergey 
Authored: Wed Apr 25 11:19:18 2018 -0700
Committer: sergey 
Committed: Wed Apr 25 11:24:44 2018 -0700

--
 .../org/apache/hadoop/hive/ql/ErrorMsg.java |  2 +
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java  | 16 +++--
 ql/src/test/queries/clientnegative/mm_delete.q  | 17 +
 ql/src/test/queries/clientnegative/mm_update.q  | 15 +
 .../test/results/clientnegative/mm_delete.q.out | 68 
 .../test/results/clientnegative/mm_update.q.out | 58 +
 6 files changed, 170 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/dd343d5f/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java 
b/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
index fde16f8..7d33fa3 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
@@ -460,6 +460,8 @@ public enum ErrorMsg {
   LOAD_DATA_ACID_FILE(10413,
   "\"{0}\" was created created by Acid write - it cannot be loaded into 
anther Acid table",
   true),
+  ACID_OP_ON_INSERTONLYTRAN_TABLE(10414, "Attempt to do update or delete on 
table {0} that is " +
+"insert-only transactional", true),
 
 
   //== 2 range starts here 
//

http://git-wip-us.apache.org/repos/asf/hive/blob/dd343d5f/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 
b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
index a00f927..1dccf96 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
@@ -2236,12 +2236,16 @@ public class SemanticAnalyzer extends 
BaseSemanticAnalyzer {
 "Inconsistent data structure detected: we are writing to " + 
ts.tableHandle  + " in " +
 name + " but it's not in isInsertIntoTable() or 
getInsertOverwriteTables()";
 // Disallow update and delete on non-acid tables
-boolean isAcid = AcidUtils.isFullAcidTable(ts.tableHandle);
-if ((updating(name) || deleting(name)) && !isAcid) {
-  // Whether we are using an acid compliant transaction manager has 
already been caught in
-  // UpdateDeleteSemanticAnalyzer, so if we are updating or deleting 
and getting nonAcid
-  // here, it means the table itself doesn't support it.
-  throw new SemanticException(ErrorMsg.ACID_OP_ON_NONACID_TABLE, 
ts.tableName);
+boolean isFullAcid = AcidUtils.isFullAcidTable(ts.tableHandle);
+if ((updating(name) || deleting(name)) && !isFullAcid) {
+  if (!AcidUtils.isInsertOnlyTable(ts.tableHandle)) {
+// Whether we are using an acid compliant transaction manager has 
already been caught in
+// UpdateDeleteSemanticAnalyzer, so if we are updating or deleting 
and getting nonAcid
+// here, it means the table itself doesn't support it.
+throw new SemanticException(ErrorMsg.ACID_OP_ON_NONACID_TABLE, 
ts.tableName);
+  } else {
+throw new 
SemanticException(ErrorMsg.ACID_OP_ON_INSERTONLYTRAN_TABLE, ts.tableName);
+  }
 }
 // TableSpec ts is got from the query (user specified),
 // which means the user didn't specify partitions in their query,

http://git-wip-us.apache.org/repos/asf/hive/blob/dd343d5f/ql/src/test/queries/clientnegative/mm_delete.q
--
diff --git a/ql/src/test/queries/clientnegative/mm_delete.q 
b/ql/src/test/queries/clientnegative/mm_delete.q
new file mode 100644
index 000..f0e92dc
--- /dev/null
+++ b/ql/src/test/queries/clientnegative/mm_delete.q
@@ -0,0 +1,17 @@
+--! qt:dataset:srcpart
+set hive.mapred.mode=nonstrict;
+set hive.exec.dynamic.partition.mode=nonstrict;
+set hive.explain.user=false;
+set hive.support.concurrency=true;
+set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
+
+drop table if exists mm_srcpart;
+CREATE TABLE 

[35/50] [abbrv] hive git commit: HIVE-19247 : StatsOptimizer: Missing stats fast-path for Date (Gopal V via Ashutosh Chauhan)

2018-04-26 Thread omalley
HIVE-19247 : StatsOptimizer: Missing stats fast-path for Date (Gopal V via 
Ashutosh Chauhan)

Signed-off-by: Ashutosh Chauhan 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/34ced306
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/34ced306
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/34ced306

Branch: refs/heads/storage-branch-2.6
Commit: 34ced3062f0b5083049cf1c94aa6d5335ee923c7
Parents: 63923e7
Author: Gopal V 
Authored: Tue Apr 24 21:51:22 2018 -0700
Committer: Ashutosh Chauhan 
Committed: Tue Apr 24 21:51:22 2018 -0700

--
 .../test/resources/testconfiguration.properties |  3 +-
 .../hive/ql/optimizer/StatsOptimizer.java   | 97 ++--
 ql/src/test/queries/clientpositive/stats_date.q | 18 
 .../clientpositive/llap/stats_date.q.out| 80 
 4 files changed, 189 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/34ced306/itests/src/test/resources/testconfiguration.properties
--
diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index f32b431..2c1a76d 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -877,7 +877,8 @@ minillaplocal.query.files=\
   unionDistinct_3.q,\
   vectorized_join46.q,\
   vectorized_multi_output_select.q,\
-  partialdhj.q
+  partialdhj.q,\
+  stats_date.q
 
 encrypted.query.files=encryption_join_unencrypted_tbl.q,\
   encryption_insert_partition_static.q,\

http://git-wip-us.apache.org/repos/asf/hive/blob/34ced306/ql/src/java/org/apache/hadoop/hive/ql/optimizer/StatsOptimizer.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/StatsOptimizer.java 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/StatsOptimizer.java
index d26a48b..a574372 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/StatsOptimizer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/StatsOptimizer.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hive.ql.optimizer;
 
+import java.sql.Date;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.HashMap;
@@ -30,6 +31,7 @@ import org.apache.hadoop.hive.common.StatsSetupConst;
 import org.apache.hadoop.hive.common.type.HiveDecimal;
 import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
 import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
 import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
 import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
 import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils;
@@ -72,6 +74,8 @@ import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFResolver;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum;
 import org.apache.hadoop.hive.serde.serdeConstants;
+import org.apache.hadoop.hive.serde2.io.DateWritable;
+import org.apache.hadoop.hive.serde2.io.TimestampWritable;
 import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
 import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
 import 
org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector.PrimitiveCategory;
@@ -146,11 +150,12 @@ public class StatsOptimizer extends Transform {
 }
 
 enum StatType{
-  Integeral,
+  Integer,
   Double,
   String,
   Boolean,
   Binary,
+  Date,
   Unsupported
 }
 
@@ -163,7 +168,6 @@ public class StatsOptimizer extends Transform {
   Object cast(long longValue) { return (short)longValue; } },
   TINYINT { @Override
   Object cast(long longValue) { return (byte)longValue; } };
-
   abstract Object cast(long longValue);
 }
 
@@ -175,6 +179,13 @@ public class StatsOptimizer extends Transform {
 
   abstract Object cast(double doubleValue);
 }
+
+enum DateSubType {
+  DAYS {@Override
+Object cast(long longValue) { return (new 
DateWritable((int)longValue)).get();}
+  };
+  abstract Object cast(long longValue);
+}
 
 enum GbyKeyType {
   NULL, CONSTANT, OTHER
@@ -182,7 +193,7 @@ public class StatsOptimizer extends Transform {
 
 private StatType getType(String origType) {
   if (serdeConstants.IntegralTypes.contains(origType)) {
-return StatType.Integeral;
+return StatType.Integer;
   } else if (origType.equals(serdeConstants.DOUBLE_TYPE_NAME) ||
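The new DateSubType.DAYS cast is the heart of the fast path: DATE column statistics store min/max as days since the epoch, and DateWritable maps that integer back to java.sql.Date. A small sketch (the stat value is illustrative):

    import java.sql.Date;
    import org.apache.hadoop.hive.serde2.io.DateWritable;

    public class DateStatCastSketch {
      public static void main(String[] args) {
        long daysSinceEpoch = 17650; // e.g. a lowValue/highValue from the metastore
        Date d = new DateWritable((int) daysSinceEpoch).get();
        System.out.println(d); // a date in late April 2018
      }
    }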
   

[21/50] [abbrv] hive git commit: HIVE-19263 : Improve ugly exception handling in HiveMetaStore (Igor Kryvenko via Vihang K)

2018-04-26 Thread omalley
HIVE-19263 : Improve ugly exception handling in HiveMetaStore (Igor Kryvenko 
via Vihang K)

Signed-off-by: Ashutosh Chauhan 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/211baae4
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/211baae4
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/211baae4

Branch: refs/heads/storage-branch-2.6
Commit: 211baae45f76722f354a4c29edb83de90c549f0e
Parents: cce0e37
Author: Igor Kryvenko 
Authored: Mon Apr 23 19:18:18 2018 -0700
Committer: Ashutosh Chauhan 
Committed: Mon Apr 23 19:18:18 2018 -0700

--
 .../hadoop/hive/metastore/HiveMetaStore.java| 493 +++
 1 file changed, 186 insertions(+), 307 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/211baae4/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
--
diff --git 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
index cd50e1b..f3c2d8b 100644
--- 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
+++ 
b/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
@@ -1257,17 +1257,12 @@ public class HiveMetaStore extends ThriftHiveMetastore {
 }
 create_database_core(getMS(), db);
 success = true;
+  } catch (MetaException | InvalidObjectException | AlreadyExistsException 
e) {
+ex = e;
+throw e;
   } catch (Exception e) {
 ex = e;
-if (e instanceof MetaException) {
-  throw (MetaException) e;
-} else if (e instanceof InvalidObjectException) {
-  throw (InvalidObjectException) e;
-} else if (e instanceof AlreadyExistsException) {
-  throw (AlreadyExistsException) e;
-} else {
-  throw newMetaException(e);
-}
+throw newMetaException(e);
   } finally {
 endFunction("create_database", success, ex);
   }
@@ -1591,13 +1586,12 @@ public class HiveMetaStore extends ThriftHiveMetastore {
 } else {
   ret = getMS().getDatabases(parsedDbNamed[CAT_NAME], 
parsedDbNamed[DB_NAME]);
 }
+  } catch (MetaException e) {
+ex = e;
+throw e;
   } catch (Exception e) {
 ex = e;
-if (e instanceof MetaException) {
-  throw (MetaException) e;
-} else {
-  throw newMetaException(e);
-}
+throw newMetaException(e);
   } finally {
 endFunction("get_databases", ret != null, ex);
   }
@@ -1639,17 +1633,12 @@ public class HiveMetaStore extends ThriftHiveMetastore {
   try {
 create_type_core(getMS(), type);
 success = true;
+  } catch (MetaException | InvalidObjectException | AlreadyExistsException 
e) {
+ex = e;
+throw e;
   } catch (Exception e) {
 ex = e;
-if (e instanceof MetaException) {
-  throw (MetaException) e;
-} else if (e instanceof InvalidObjectException) {
-  throw (InvalidObjectException) e;
-} else if (e instanceof AlreadyExistsException) {
-  throw (AlreadyExistsException) e;
-} else {
-  throw newMetaException(e);
-}
+throw newMetaException(e);
   } finally {
 endFunction("create_type", success, ex);
   }
@@ -1944,17 +1933,12 @@ public class HiveMetaStore extends ThriftHiveMetastore {
 LOG.warn("create_table_with_environment_context got ", e);
 ex = e;
 throw new InvalidObjectException(e.getMessage());
+  } catch (MetaException | InvalidObjectException | AlreadyExistsException 
e) {
+ex = e;
+throw e;
   } catch (Exception e) {
 ex = e;
-if (e instanceof MetaException) {
-  throw (MetaException) e;
-} else if (e instanceof InvalidObjectException) {
-  throw (InvalidObjectException) e;
-} else if (e instanceof AlreadyExistsException) {
-  throw (AlreadyExistsException) e;
-} else {
-  throw newMetaException(e);
-}
+throw newMetaException(e);
   } finally {
 endFunction("create_table", success, ex, tbl.getTableName());
   }
@@ -1978,17 +1962,12 @@ public class HiveMetaStore extends ThriftHiveMetastore {
   } catch (NoSuchObjectException e) {
 ex = e;
 throw new InvalidObjectException(e.getMessage());
+  } catch (MetaException | InvalidObjectException | AlreadyExistsException 
e) {
+ex = e;
+throw e;
   } catch 

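A reduced, self-contained sketch of the multi-catch cleanup applied
throughout HiveMetaStore above: one catch clause records the failure for the
audit bookkeeping and rethrows with precise types, and a trailing catch wraps
anything unexpected. The exception classes here are stand-ins, not Hive's
real metastore types.

public class MultiCatchSketch {
  static class MetaException extends Exception {}
  static class InvalidObjectException extends Exception {}

  // Stand-in for create_database_core: may fail with either checked exception.
  static void createCore(int mode) throws MetaException, InvalidObjectException {
    if (mode == 1) throw new MetaException();
    if (mode == 2) throw new InvalidObjectException();
  }

  static void createDatabase(int mode) throws MetaException, InvalidObjectException {
    Exception ex = null;
    boolean success = false;
    try {
      createCore(mode);
      success = true;
    } catch (MetaException | InvalidObjectException e) {
      ex = e;  // keep the cause for the finally-block bookkeeping
      throw e; // Java 7 precise rethrow preserves the declared types
    } catch (Exception e) {
      ex = e;
      throw new MetaException(); // wrap the unexpected, as newMetaException does
    } finally {
      System.out.println("endFunction(create_database, success=" + success
          + ", error=" + (ex != null) + ")");
    }
  }

  public static void main(String[] args) throws Exception {
    createDatabase(0); // succeeds; try mode 1 or 2 to see the rethrow paths
  }
}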
[36/50] [abbrv] hive git commit: HIVE-17193: HoS: don't combine map works that are targets of different DPPs (Rui reviewed by Sahil)

2018-04-26 Thread omalley
HIVE-17193: HoS: don't combine map works that are targets of different DPPs 
(Rui reviewed by Sahil)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/391ff7e2
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/391ff7e2
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/391ff7e2

Branch: refs/heads/storage-branch-2.6
Commit: 391ff7e22e98be6b1ebd7d66206b4403f47150ec
Parents: 34ced30
Author: Rui Li 
Authored: Wed Apr 25 15:47:07 2018 +0800
Committer: Rui Li 
Committed: Wed Apr 25 15:47:07 2018 +0800

--
 .../test/resources/testconfiguration.properties |   1 +
 .../exec/spark/SparkDynamicPartitionPruner.java |   3 +-
 .../spark/CombineEquivalentWorkResolver.java| 165 ++
 .../spark/SparkPartitionPruningSinkDesc.java|  15 +
 .../hive/ql/parse/spark/GenSparkUtils.java  |   3 +-
 .../spark_dynamic_partition_pruning_7.q |  22 ++
 .../spark_dynamic_partition_pruning_7.q.out | 329 +++
 7 files changed, 464 insertions(+), 74 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/391ff7e2/itests/src/test/resources/testconfiguration.properties
--
diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index 2c1a76d..1a34659 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -1579,6 +1579,7 @@ 
miniSparkOnYarn.only.query.files=spark_combine_equivalent_work.q,\
   spark_dynamic_partition_pruning_4.q,\
   spark_dynamic_partition_pruning_5.q,\
   spark_dynamic_partition_pruning_6.q,\
+  spark_dynamic_partition_pruning_7.q,\
   spark_dynamic_partition_pruning_mapjoin_only.q,\
   spark_constprog_dpp.q,\
   spark_dynamic_partition_pruning_recursive_mapjoin.q,\

http://git-wip-us.apache.org/repos/asf/hive/blob/391ff7e2/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkDynamicPartitionPruner.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkDynamicPartitionPruner.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkDynamicPartitionPruner.java
index ed889fa..240fa09 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkDynamicPartitionPruner.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkDynamicPartitionPruner.java
@@ -32,6 +32,7 @@ import java.util.Set;
 
 import com.clearspring.analytics.util.Preconditions;
 import javolution.testing.AssertionException;
+import org.apache.hadoop.hive.ql.optimizer.spark.SparkPartitionPruningSinkDesc;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.apache.hadoop.fs.FileStatus;
@@ -174,7 +175,7 @@ public class SparkDynamicPartitionPruner {
   throws HiveException {
 Set values = info.values;
 // strip the column name of the targetId
-String columnName = info.columnName.substring(info.columnName.indexOf(':') 
+ 1);
+String columnName = 
SparkPartitionPruningSinkDesc.stripOffTargetId(info.columnName);
 
 ObjectInspector oi =
 
PrimitiveObjectInspectorFactory.getPrimitiveWritableObjectInspector(TypeInfoFactory

http://git-wip-us.apache.org/repos/asf/hive/blob/391ff7e2/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/CombineEquivalentWorkResolver.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/CombineEquivalentWorkResolver.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/CombineEquivalentWorkResolver.java
index 74f0368..c681c74 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/CombineEquivalentWorkResolver.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/CombineEquivalentWorkResolver.java
@@ -18,25 +18,30 @@
 
 package org.apache.hadoop.hive.ql.optimizer.spark;
 
-import java.io.Serializable;
 import java.util.ArrayList;
 import java.util.Comparator;
 import java.util.HashMap;
+import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.Stack;
+import java.util.stream.Collectors;
 
+import com.google.common.base.Preconditions;
 import com.google.common.collect.Maps;
 import com.google.common.collect.Sets;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hive.ql.exec.OperatorUtils;
-import org.apache.hadoop.hive.ql.exec.Task;
 import org.apache.hadoop.hive.ql.exec.spark.SparkUtilities;
+import org.apache.hadoop.hive.ql.lib.GraphWalker;
+import org.apache.hadoop.hive.ql.lib.PreOrderWalker;
 import 

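For illustration, a hypothetical re-implementation of the one-liner the
pruner now delegates to SparkPartitionPruningSinkDesc.stripOffTargetId:
column names arrive prefixed with the target work's id, and pruning needs
the bare column name.

public class StripTargetIdSketch {
  // Hypothetical helper mirroring the substring logic shown in the diff.
  static String stripOffTargetId(String columnName) {
    // "<targetId>:<columnName>" -> "<columnName>"; indexOf returns -1 when
    // there is no prefix, so substring(0) leaves the name untouched.
    return columnName.substring(columnName.indexOf(':') + 1);
  }

  public static void main(String[] args) {
    System.out.println(stripOffTargetId("Map 1:part_col")); // part_col
    System.out.println(stripOffTargetId("part_col"));       // unchanged
  }
}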
[50/50] [abbrv] hive git commit: Prepare for storage-api 2.6 branch.

2018-04-26 Thread omalley
Prepare for storage-api 2.6 branch.


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/c055e844
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/c055e844
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/c055e844

Branch: refs/heads/storage-branch-2.6
Commit: c055e8444d45e2c421958a626ba49990fc348da6
Parents: 087ef7b
Author: Owen O'Malley 
Authored: Thu Apr 26 07:55:13 2018 -0700
Committer: Owen O'Malley 
Committed: Thu Apr 26 07:57:47 2018 -0700

--
 pom.xml | 30 --
 1 file changed, 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/c055e844/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 50f416e..ede38d8 100644
--- a/pom.xml
+++ b/pom.xml
@@ -32,36 +32,6 @@
 
   
 storage-api
-accumulo-handler
-vector-code-gen
-beeline
-classification
-cli
-common
-contrib
-druid-handler
-hbase-handler
-jdbc-handler
-hcatalog
-hplsql
-jdbc
-metastore
-ql
-serde
-service-rpc
-service
-streaming
-llap-common
-llap-client
-llap-ext-client
-llap-tez
-llap-server
-shims
-spark-client
-kryo-registrator
-testutils
-packaging
-standalone-metastore
   
 
   



[30/50] [abbrv] hive git commit: HIVE-19260 - Streaming Ingest API doesn't normalize db.table names (Eugene Koifman, reviewed by Prasanth Jayachandran)

2018-04-26 Thread omalley
HIVE-19260 - Streaming Ingest API doesn't normalize db.table names (Eugene 
Koifman, reviewed by Prasanth Jayachandran)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/36ef2741
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/36ef2741
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/36ef2741

Branch: refs/heads/storage-branch-2.6
Commit: 36ef27418f68d5e1f6c2f718a953cc605a5e836f
Parents: 6934bb9
Author: Eugene Koifman 
Authored: Tue Apr 24 11:31:28 2018 -0700
Committer: Eugene Koifman 
Committed: Tue Apr 24 11:31:28 2018 -0700

--
 .../hive/hcatalog/streaming/HiveEndPoint.java   |   4 +-
 .../hive/hcatalog/streaming/TestStreaming.java  |  11 +-
 .../hadoop/hive/ql/txn/compactor/Cleaner.java   | 119 +--
 .../hadoop/hive/ql/txn/compactor/Initiator.java |   2 +-
 .../hadoop/hive/metastore/txn/TxnHandler.java   |  22 ++--
 .../apache/hive/streaming/TestStreaming.java|   9 +-
 6 files changed, 139 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/36ef2741/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/HiveEndPoint.java
--
diff --git 
a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/HiveEndPoint.java
 
b/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/HiveEndPoint.java
index 6d248ea..8582e9a 100644
--- 
a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/HiveEndPoint.java
+++ 
b/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/HiveEndPoint.java
@@ -88,13 +88,13 @@ public class HiveEndPoint {
 if (database==null) {
   throw new IllegalArgumentException("Database cannot be null for 
HiveEndPoint");
 }
-this.database = database;
-this.table = table;
+this.database = database.toLowerCase();
 if (table==null) {
   throw new IllegalArgumentException("Table cannot be null for 
HiveEndPoint");
 }
 this.partitionVals = partitionVals==null ? new ArrayList<String>()
  : new ArrayList<String>( partitionVals );
+this.table = table.toLowerCase();
   }
 
 

http://git-wip-us.apache.org/repos/asf/hive/blob/36ef2741/hcatalog/streaming/src/test/org/apache/hive/hcatalog/streaming/TestStreaming.java
--
diff --git 
a/hcatalog/streaming/src/test/org/apache/hive/hcatalog/streaming/TestStreaming.java
 
b/hcatalog/streaming/src/test/org/apache/hive/hcatalog/streaming/TestStreaming.java
index 3733e3d..fe2b1c1 100644
--- 
a/hcatalog/streaming/src/test/org/apache/hive/hcatalog/streaming/TestStreaming.java
+++ 
b/hcatalog/streaming/src/test/org/apache/hive/hcatalog/streaming/TestStreaming.java
@@ -67,6 +67,8 @@ import org.apache.hadoop.hive.metastore.api.TxnState;
 import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
 import org.apache.hadoop.hive.metastore.txn.AcidHouseKeeperService;
 import org.apache.hadoop.hive.metastore.txn.TxnDbUtil;
+import org.apache.hadoop.hive.metastore.txn.TxnStore;
+import org.apache.hadoop.hive.metastore.txn.TxnUtils;
 import org.apache.hadoop.hive.ql.DriverFactory;
 import org.apache.hadoop.hive.ql.IDriver;
 import org.apache.hadoop.hive.ql.io.AcidUtils;
@@ -353,10 +355,10 @@ public class TestStreaming {
 //todo: why does it need transactional_properties?
 queryTable(driver, "create table default.streamingnobuckets (a string, b 
string) stored as orc TBLPROPERTIES('transactional'='true', 
'transactional_properties'='default')");
 queryTable(driver, "insert into default.streamingnobuckets 
values('foo','bar')");
-List<String> rs = queryTable(driver, "select * from 
default.streamingnobuckets");
+List<String> rs = queryTable(driver, "select * from 
default.streamingNoBuckets");
 Assert.assertEquals(1, rs.size());
 Assert.assertEquals("foo\tbar", rs.get(0));
-HiveEndPoint endPt = new HiveEndPoint(metaStoreURI, "default", 
"streamingnobuckets", null);
+HiveEndPoint endPt = new HiveEndPoint(metaStoreURI, "Default", 
"StreamingNoBuckets", null);
 String[] colNames1 = new String[] { "a", "b" };
 StreamingConnection connection = endPt.newConnection(false, "UT_" + 
Thread.currentThread().getName());
 DelimitedInputWriter wr = new DelimitedInputWriter(colNames1,",",  endPt, 
connection);
@@ -365,6 +367,11 @@ public class TestStreaming {
 txnBatch.beginNextTransaction();
 txnBatch.write("a1,b2".getBytes());
 txnBatch.write("a3,b4".getBytes());
+TxnStore txnHandler = TxnUtils.getTxnStore(conf);
+ShowLocksResponse resp = txnHandler.showLocks(new ShowLocksRequest());
+Assert.assertEquals(resp.getLocksSize(), 1);
+Assert.assertEquals("streamingnobuckets", 

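A minimal sketch of the normalization HiveEndPoint now performs, assuming
only that Hive treats database and table identifiers case-insensitively:
fold both to lower case once, at the API boundary, so lock and compaction
entries all see one canonical spelling. The class below is a stand-in, not
the real HiveEndPoint.

import java.util.Objects;

public class EndPointSketch {
  final String database;
  final String table;

  EndPointSketch(String database, String table) {
    this.database = Objects.requireNonNull(database, "database").toLowerCase();
    this.table = Objects.requireNonNull(table, "table").toLowerCase();
  }

  public static void main(String[] args) {
    EndPointSketch ep = new EndPointSketch("Default", "StreamingNoBuckets");
    System.out.println(ep.database + "." + ep.table); // default.streamingnobuckets
  }
}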
[33/50] [abbrv] hive git commit: HIVE-19232: results_cache_invalidation2 is failing (Jason Dere, reviewed by Vineet Garg)

2018-04-26 Thread omalley
HIVE-19232: results_cache_invalidation2 is failing (Jason Dere, reviewed by 
Vineet Garg)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/e9094483
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/e9094483
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/e9094483

Branch: refs/heads/storage-branch-2.6
Commit: e909448389c60a8bd13866cbfed98b083be52ba3
Parents: 36ef274
Author: Jason Dere 
Authored: Tue Apr 24 15:51:02 2018 -0700
Committer: Jason Dere 
Committed: Tue Apr 24 15:51:02 2018 -0700

--
 .../test/resources/testconfiguration.properties |  16 +-
 .../results_cache_invalidation2.q   |   3 +-
 .../clientpositive/llap/results_cache_2.q.out   | 180 +
 .../llap/results_cache_capacity.q.out   | 258 +++
 .../llap/results_cache_lifetime.q.out   | 117 +++
 .../llap/results_cache_quoted_identifiers.q.out | 124 +++
 .../llap/results_cache_temptable.q.out  | 323 
 .../llap/results_cache_with_masking.q.out   | 116 +++
 .../clientpositive/results_cache_1.q.out| 579 --
 .../clientpositive/results_cache_2.q.out| 176 -
 .../clientpositive/results_cache_capacity.q.out | 238 --
 .../results_cache_empty_result.q.out|  91 ---
 .../results_cache_invalidation.q.out| 748 ---
 .../results_cache_invalidation2.q.out   | 373 -
 .../clientpositive/results_cache_lifetime.q.out | 112 ---
 .../results_cache_quoted_identifiers.q.out  | 116 ---
 .../results_cache_temptable.q.out   | 293 
 .../results_cache_transactional.q.out   | 583 ---
 .../results_cache_with_masking.q.out| 106 ---
 19 files changed, 1130 insertions(+), 3422 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/e9094483/itests/src/test/resources/testconfiguration.properties
--
diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index e43c7d4..f32b431 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -216,11 +216,6 @@ minillaplocal.shared.query.files=alter_merge_2_orc.q,\
   ptf.q,\
   ptf_matchpath.q,\
   ptf_streaming.q,\
-  results_cache_1.q,\
-  results_cache_empty_result.q,\
-  results_cache_invalidation.q,\
-  results_cache_transactional.q,\
-  results_cache_invalidation2.q,\
   sample1.q,\
   selectDistinctStar.q,\
   select_dummy_source.q,\
@@ -602,6 +597,17 @@ minillaplocal.query.files=\
   ptf_streaming.q,\
   quotedid_smb.q,\
   resourceplan.q,\
+  results_cache_1.q,\
+  results_cache_2.q,\
+  results_cache_capacity.q,\
+  results_cache_empty_result.q,\
+  results_cache_invalidation.q,\
+  results_cache_invalidation2.q,\
+  results_cache_lifetime.q,\
+  results_cache_quoted_identifiers.q,\
+  results_cache_temptable.q,\
+  results_cache_transactional.q,\
+  results_cache_with_masking.q,\
   sample10.q,\
   schema_evol_orc_acid_part_llap_io.q,\
   schema_evol_orc_acid_part.q,\

http://git-wip-us.apache.org/repos/asf/hive/blob/e9094483/ql/src/test/queries/clientpositive/results_cache_invalidation2.q
--
diff --git a/ql/src/test/queries/clientpositive/results_cache_invalidation2.q 
b/ql/src/test/queries/clientpositive/results_cache_invalidation2.q
index ceee78f..b360c85 100644
--- a/ql/src/test/queries/clientpositive/results_cache_invalidation2.q
+++ b/ql/src/test/queries/clientpositive/results_cache_invalidation2.q
@@ -2,8 +2,7 @@
 set 
hive.metastore.event.listeners=org.apache.hive.hcatalog.listener.DbNotificationListener;
 
 set hive.query.results.cache.enabled=true;
--- Enable this setting when HIVE-18609 is in
---set hive.query.results.cache.nontransactional.tables.enabled=true;
+set hive.query.results.cache.nontransactional.tables.enabled=true;
 
 set hive.fetch.task.conversion=more;
 -- Start polling the notification events

http://git-wip-us.apache.org/repos/asf/hive/blob/e9094483/ql/src/test/results/clientpositive/llap/results_cache_2.q.out
--
diff --git a/ql/src/test/results/clientpositive/llap/results_cache_2.q.out 
b/ql/src/test/results/clientpositive/llap/results_cache_2.q.out
new file mode 100644
index 000..0971fc7
--- /dev/null
+++ b/ql/src/test/results/clientpositive/llap/results_cache_2.q.out
@@ -0,0 +1,180 @@
+PREHOOK: query: explain
+select key, value from src where key=0
+PREHOOK: type: QUERY
+POSTHOOK: query: explain
+select key, value from src where key=0
+POSTHOOK: type: QUERY
+STAGE DEPENDENCIES:
+  Stage-0 

[41/50] [abbrv] hive git commit: HIVE-19281 : incorrect protocol name for LLAP AM plugin (Sergey Shelukhin, reviewed by Jason Dere)

2018-04-26 Thread omalley
HIVE-19281 : incorrect protocol name for LLAP AM plugin (Sergey Shelukhin, 
reviewed by Jason Dere)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/6607f8f5
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/6607f8f5
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/6607f8f5

Branch: refs/heads/storage-branch-2.6
Commit: 6607f8f51849921eb3796c18a58d3f6621a1ab2d
Parents: e89f98c4
Author: sergey 
Authored: Wed Apr 25 11:11:22 2018 -0700
Committer: sergey 
Committed: Wed Apr 25 11:11:22 2018 -0700

--
 .../org/apache/hadoop/hive/llap/protocol/LlapPluginProtocolPB.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/6607f8f5/llap-common/src/java/org/apache/hadoop/hive/llap/protocol/LlapPluginProtocolPB.java
--
diff --git 
a/llap-common/src/java/org/apache/hadoop/hive/llap/protocol/LlapPluginProtocolPB.java
 
b/llap-common/src/java/org/apache/hadoop/hive/llap/protocol/LlapPluginProtocolPB.java
index 0d04b14..5fcffc1 100644
--- 
a/llap-common/src/java/org/apache/hadoop/hive/llap/protocol/LlapPluginProtocolPB.java
+++ 
b/llap-common/src/java/org/apache/hadoop/hive/llap/protocol/LlapPluginProtocolPB.java
@@ -20,7 +20,7 @@ import org.apache.hadoop.security.token.TokenInfo;
 import org.apache.hadoop.hive.llap.plugin.rpc.LlapPluginProtocolProtos;
 import org.apache.tez.runtime.common.security.JobTokenSelector;
 
-@ProtocolInfo(protocolName = 
"org.apache.hadoop.hive.llap.protocol.LlapPluginProtocolBlockingPB", 
protocolVersion = 1)
+@ProtocolInfo(protocolName = 
"org.apache.hadoop.hive.llap.protocol.LlapPluginProtocolPB", protocolVersion = 
1)
 @TokenInfo(JobTokenSelector.class)
 @InterfaceAudience.Private
 public interface LlapPluginProtocolPB extends 
LlapPluginProtocolProtos.LlapPluginProtocol.BlockingInterface {

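The class of bug fixed above: Hadoop RPC pairs client and server stubs by
the protocolName string, which conventionally equals the annotated
interface's fully qualified name. A small sketch with hypothetical names,
assuming only hadoop-common on the classpath:

import org.apache.hadoop.ipc.ProtocolInfo;

public class ProtocolNameSketch {
  // In real code the string matches the interface's fully qualified name;
  // the name and interface here are purely illustrative.
  @ProtocolInfo(protocolName = "com.example.rpc.PingProtocolPB", protocolVersion = 1)
  interface PingProtocolPB { }

  public static void main(String[] args) {
    ProtocolInfo info = PingProtocolPB.class.getAnnotation(ProtocolInfo.class);
    // A mismatch here is what produced client-side "unknown protocol" failures.
    System.out.println(info.protocolName() + " v" + info.protocolVersion());
  }
}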


[15/50] [abbrv] hive git commit: Revert "HIVE-19171 : Persist runtime statistics in metastore (Zoltan Haindrich via Ashutosh Chauhan)"

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/f0199500/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php 
b/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
index 1c1d58e..9c94942 100644
--- 
a/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
+++ 
b/standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
@@ -1528,17 +1528,6 @@ interface ThriftHiveMetastoreIf extends 
\FacebookServiceIf {
* @return bool
*/
   public function heartbeat_lock_materialization_rebuild($dbName, $tableName, 
$txnId);
-  /**
-   * @param \metastore\RuntimeStat $stat
-   * @throws \metastore\MetaException
-   */
-  public function add_runtime_stats(\metastore\RuntimeStat $stat);
-  /**
-   * @param \metastore\GetRuntimeStatsRequest $rqst
-   * @return \metastore\RuntimeStat[]
-   * @throws \metastore\MetaException
-   */
-  public function get_runtime_stats(\metastore\GetRuntimeStatsRequest $rqst);
 }
 
 class ThriftHiveMetastoreClient extends \FacebookServiceClient implements 
\metastore\ThriftHiveMetastoreIf {
@@ -13018,111 +13007,6 @@ class ThriftHiveMetastoreClient extends 
\FacebookServiceClient implements \metas
 throw new \Exception("heartbeat_lock_materialization_rebuild failed: 
unknown result");
   }
 
-  public function add_runtime_stats(\metastore\RuntimeStat $stat)
-  {
-$this->send_add_runtime_stats($stat);
-$this->recv_add_runtime_stats();
-  }
-
-  public function send_add_runtime_stats(\metastore\RuntimeStat $stat)
-  {
-$args = new \metastore\ThriftHiveMetastore_add_runtime_stats_args();
-$args->stat = $stat;
-$bin_accel = ($this->output_ instanceof TBinaryProtocolAccelerated) && 
function_exists('thrift_protocol_write_binary');
-if ($bin_accel)
-{
-  thrift_protocol_write_binary($this->output_, 'add_runtime_stats', 
TMessageType::CALL, $args, $this->seqid_, $this->output_->isStrictWrite());
-}
-else
-{
-  $this->output_->writeMessageBegin('add_runtime_stats', 
TMessageType::CALL, $this->seqid_);
-  $args->write($this->output_);
-  $this->output_->writeMessageEnd();
-  $this->output_->getTransport()->flush();
-}
-  }
-
-  public function recv_add_runtime_stats()
-  {
-$bin_accel = ($this->input_ instanceof TBinaryProtocolAccelerated) && 
function_exists('thrift_protocol_read_binary');
-if ($bin_accel) $result = thrift_protocol_read_binary($this->input_, 
'\metastore\ThriftHiveMetastore_add_runtime_stats_result', 
$this->input_->isStrictRead());
-else
-{
-  $rseqid = 0;
-  $fname = null;
-  $mtype = 0;
-
-  $this->input_->readMessageBegin($fname, $mtype, $rseqid);
-  if ($mtype == TMessageType::EXCEPTION) {
-$x = new TApplicationException();
-$x->read($this->input_);
-$this->input_->readMessageEnd();
-throw $x;
-  }
-  $result = new \metastore\ThriftHiveMetastore_add_runtime_stats_result();
-  $result->read($this->input_);
-  $this->input_->readMessageEnd();
-}
-if ($result->o1 !== null) {
-  throw $result->o1;
-}
-return;
-  }
-
-  public function get_runtime_stats(\metastore\GetRuntimeStatsRequest $rqst)
-  {
-$this->send_get_runtime_stats($rqst);
-return $this->recv_get_runtime_stats();
-  }
-
-  public function send_get_runtime_stats(\metastore\GetRuntimeStatsRequest 
$rqst)
-  {
-$args = new \metastore\ThriftHiveMetastore_get_runtime_stats_args();
-$args->rqst = $rqst;
-$bin_accel = ($this->output_ instanceof TBinaryProtocolAccelerated) && 
function_exists('thrift_protocol_write_binary');
-if ($bin_accel)
-{
-  thrift_protocol_write_binary($this->output_, 'get_runtime_stats', 
TMessageType::CALL, $args, $this->seqid_, $this->output_->isStrictWrite());
-}
-else
-{
-  $this->output_->writeMessageBegin('get_runtime_stats', 
TMessageType::CALL, $this->seqid_);
-  $args->write($this->output_);
-  $this->output_->writeMessageEnd();
-  $this->output_->getTransport()->flush();
-}
-  }
-
-  public function recv_get_runtime_stats()
-  {
-$bin_accel = ($this->input_ instanceof TBinaryProtocolAccelerated) && 
function_exists('thrift_protocol_read_binary');
-if ($bin_accel) $result = thrift_protocol_read_binary($this->input_, 
'\metastore\ThriftHiveMetastore_get_runtime_stats_result', 
$this->input_->isStrictRead());
-else
-{
-  $rseqid = 0;
-  $fname = null;
-  $mtype = 0;
-
-  $this->input_->readMessageBegin($fname, $mtype, $rseqid);
-  if ($mtype == TMessageType::EXCEPTION) {
-$x = new TApplicationException();
-$x->read($this->input_);
-$this->input_->readMessageEnd();
-throw $x;
-  }
-  $result = new 

[47/50] [abbrv] hive git commit: HIVE-19204: Detailed errors from some tasks are not displayed to the client because the tasks don't set exception when they fail (Aihua Xu, reviewed by Sahil Takiar)

2018-04-26 Thread omalley
HIVE-19204: Detailed errors from some tasks are not displayed to the client 
because the tasks don't set exception when they fail (Aihua Xu, reviewed by 
Sahil Takiar)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/11b0d857
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/11b0d857
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/11b0d857

Branch: refs/heads/storage-branch-2.6
Commit: 11b0d85786cd58469d5662c3027e9389cff07710
Parents: f94ae7f
Author: Aihua Xu 
Authored: Mon Apr 16 10:36:02 2018 -0700
Committer: Aihua Xu 
Committed: Wed Apr 25 16:09:42 2018 -0700

--
 .../java/org/apache/hadoop/hive/ql/Driver.java  |  6 -
 .../hive/ql/exec/ColumnStatsUpdateTask.java |  1 +
 .../hive/ql/exec/ExplainSQRewriteTask.java  |  8 +++---
 .../apache/hadoop/hive/ql/exec/ExplainTask.java |  5 ++--
 .../hive/ql/exec/MaterializedViewTask.java  |  1 +
 .../hadoop/hive/ql/exec/ReplCopyTask.java   |  4 +--
 .../apache/hadoop/hive/ql/exec/StatsTask.java   |  1 +
 .../hadoop/hive/ql/exec/mr/ExecDriver.java  |  4 +--
 .../io/rcfile/truncate/ColumnTruncateTask.java  | 26 +++-
 .../ql/reexec/ReExecutionOverlayPlugin.java |  2 +-
 10 files changed, 29 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/11b0d857/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java 
b/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
index 4e8dbe2..f83bdaf 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
@@ -2389,7 +2389,11 @@ public class Driver implements IDriver {
 if(downstreamError != null) {
   //here we assume that upstream code may have parametrized the msg from 
ErrorMsg
   //so we want to keep it
-  errorMessage += ". " + downstreamError.getMessage();
+  if (downstreamError.getMessage() != null) {
+errorMessage += ". " + downstreamError.getMessage();
+  } else {
+errorMessage += ". " + 
org.apache.hadoop.util.StringUtils.stringifyException(downstreamError);
+  }
 }
 else {
   ErrorMsg em = ErrorMsg.getErrorMsg(exitVal);

http://git-wip-us.apache.org/repos/asf/hive/blob/11b0d857/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
index 207b66f..a53ff5a 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsUpdateTask.java
@@ -300,6 +300,7 @@ public class ColumnStatsUpdateTask extends 
Task<ColumnStatsUpdateWork> {
   Hive db = getHive();
   return persistColumnStats(db);
 } catch (Exception e) {
+  setException(e);
   LOG.info("Failed to persist stats in metastore", e);
 }
 return 1;

http://git-wip-us.apache.org/repos/asf/hive/blob/11b0d857/ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainSQRewriteTask.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainSQRewriteTask.java 
b/ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainSQRewriteTask.java
index 80d54bf..1f9e9aa 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainSQRewriteTask.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainSQRewriteTask.java
@@ -38,11 +38,13 @@ import org.apache.hadoop.hive.ql.parse.SubQueryDiagnostic;
 import org.apache.hadoop.hive.ql.plan.ExplainSQRewriteWork;
 import org.apache.hadoop.hive.ql.plan.api.StageType;
 import org.apache.hadoop.io.IOUtils;
-import org.apache.hadoop.util.StringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 
 public class ExplainSQRewriteTask extends Task<ExplainSQRewriteWork> 
implements Serializable {
   private static final long serialVersionUID = 1L;
+  private final Logger LOG = 
LoggerFactory.getLogger(this.getClass().getName());
 
   @Override
   public StageType getType() {
@@ -76,8 +78,8 @@ public class ExplainSQRewriteTask extends 
Task<ExplainSQRewriteWork> implements
   return (0);
 }
 catch (Exception e) {
-  console.printError("Failed with exception " + e.getMessage(),
-  "\n" + StringUtils.stringifyException(e));
+  setException(e);
+  LOG.error(org.apache.hadoop.util.StringUtils.stringifyException(e));
   return (1);
 }
 finally {

http://git-wip-us.apache.org/repos/asf/hive/blob/11b0d857/ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java

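A reduced model of the fix: a failing task must call setException(e) in
addition to logging, or the driver can only report a bare return code to the
client. The class and method names below are stand-ins for Hive's Task
hierarchy, not the real API.

public class TaskSketch {
  private Exception exception;

  protected void setException(Exception e) { this.exception = e; }
  public Exception getException() { return exception; }

  public int execute() {
    try {
      throw new IllegalStateException("stats persistence failed"); // simulated failure
    } catch (Exception e) {
      setException(e); // without this, the client sees only "return code 1"
      System.err.println("Failed to persist stats: " + e.getMessage());
      return 1;
    }
  }

  public static void main(String[] args) {
    TaskSketch t = new TaskSketch();
    System.out.println("rc=" + t.execute() + ", cause=" + t.getException());
  }
}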
[05/50] [abbrv] hive git commit: HIVE-17647 : DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions (Sergey Shelukhin, reviewed by Eugene Koifman)

2018-04-26 Thread omalley
HIVE-17647 : DDLTask.generateAddMmTasks(Table tbl) and other random code should 
not start transactions (Sergey Shelukhin, reviewed by Eugene Koifman)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/62244019
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/62244019
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/62244019

Branch: refs/heads/storage-branch-2.6
Commit: 622440199c2207616356c03d9bf6eb94e8f6bd99
Parents: bdb0457
Author: sergey 
Authored: Mon Apr 23 11:07:40 2018 -0700
Committer: sergey 
Committed: Mon Apr 23 11:07:40 2018 -0700

--
 .../test/resources/testconfiguration.properties |   2 +-
 .../java/org/apache/hadoop/hive/ql/Driver.java  |  23 +-
 .../org/apache/hadoop/hive/ql/QueryPlan.java|  17 +-
 .../org/apache/hadoop/hive/ql/exec/DDLTask.java |  31 +-
 .../hadoop/hive/ql/exec/ImportCommitTask.java   |  62 
 .../hadoop/hive/ql/exec/ImportCommitWork.java   |  54 
 .../apache/hadoop/hive/ql/exec/TaskFactory.java |   2 -
 .../org/apache/hadoop/hive/ql/io/AcidUtils.java |   7 +-
 .../apache/hadoop/hive/ql/metadata/Hive.java|   2 +-
 .../hive/ql/parse/BaseSemanticAnalyzer.java |   5 +
 .../hive/ql/parse/DDLSemanticAnalyzer.java  |  18 +-
 .../hive/ql/parse/ImportSemanticAnalyzer.java   |  37 +--
 .../ql/parse/ReplicationSemanticAnalyzer.java   |   4 +-
 .../hadoop/hive/ql/plan/AlterTableDesc.java | 109 ---
 .../org/apache/hadoop/hive/ql/plan/DDLDesc.java |   6 +
 .../hive/ql/lockmgr/TestDbTxnManager2.java  |  18 ++
 .../results/clientpositive/mm_conversions.q.out | 309 ---
 17 files changed, 161 insertions(+), 545 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/62244019/itests/src/test/resources/testconfiguration.properties
--
diff --git a/itests/src/test/resources/testconfiguration.properties 
b/itests/src/test/resources/testconfiguration.properties
index 3aaa68b..ed161da 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -192,7 +192,6 @@ minillaplocal.shared.query.files=alter_merge_2_orc.q,\
   metadata_only_queries.q,\
   metadata_only_queries_with_filters.q,\
   metadataonly1.q,\
-  mm_conversions.q,\
   mrr.q,\
   nonmr_fetch_threshold.q,\
   optimize_nullscan.q,\
@@ -581,6 +580,7 @@ minillaplocal.query.files=\
   mapjoin_hint.q,\
   mapjoin_emit_interval.q,\
   mergejoin_3way.q,\
+  mm_conversions.q,\
   mm_exim.q,\
   mrr.q,\
   multiMapJoin1.q,\

http://git-wip-us.apache.org/repos/asf/hive/blob/62244019/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
--
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java 
b/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
index 9cb2ff1..a35a215 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
@@ -113,6 +113,7 @@ import org.apache.hadoop.hive.ql.parse.ParseUtils;
 import org.apache.hadoop.hive.ql.parse.PrunedPartitionList;
 import org.apache.hadoop.hive.ql.parse.SemanticAnalyzer;
 import org.apache.hadoop.hive.ql.parse.SemanticAnalyzerFactory;
+import org.apache.hadoop.hive.ql.plan.DDLDesc.DDLDescWithWriteId;
 import org.apache.hadoop.hive.ql.plan.FileSinkDesc;
 import org.apache.hadoop.hive.ql.plan.HiveOperation;
 import org.apache.hadoop.hive.ql.plan.TableDesc;
@@ -1415,8 +1416,9 @@ public class Driver implements IDriver {
   if(userFromUGI == null) {
 throw createProcessorResponse(10);
   }
+
   // Set the table write id in all of the acid file sinks
-  if (haveAcidWrite()) {
+  if (!plan.getAcidSinks().isEmpty()) {
 List<FileSinkDesc> acidSinks = new ArrayList<>(plan.getAcidSinks());
 //sorting makes tests easier to write since file names and ROW__IDs 
depend on statementId
 //so this makes (file name -> data) mapping stable
@@ -1433,6 +1435,18 @@ public class Driver implements IDriver {
   desc.setStatementId(queryTxnMgr.getStmtIdAndIncrement());
 }
   }
+
+  // Note: the sinks and DDL cannot coexist at this time; but if they 
could we would
+  //   need to make sure we don't get two write IDs for the same table.
+  DDLDescWithWriteId acidDdlDesc = plan.getAcidDdlDesc();
+  if (acidDdlDesc != null && acidDdlDesc.mayNeedWriteId()) {
+String fqTableName = acidDdlDesc.getFullTableName();
+long writeId = queryTxnMgr.getTableWriteId(
+Utilities.getDatabaseName(fqTableName), 
Utilities.getTableName(fqTableName));
+acidDdlDesc.setWriteId(writeId);
+  }
+
+
   /*It's imperative that {@code acquireLocks()} is called for 

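A hedged sketch of the new DDL write-id path shown above: split the fully
qualified "db.table" name and ask the transaction manager for a table write
id before the DDL runs. TxnManager is a stand-in interface, not Hive's real
HiveTxnManager.

public class DdlWriteIdSketch {
  interface TxnManager { long getTableWriteId(String db, String table); }

  static long assignWriteId(TxnManager txnMgr, String fqTableName) {
    int dot = fqTableName.indexOf('.');
    String db = fqTableName.substring(0, dot);     // e.g. "default"
    String table = fqTableName.substring(dot + 1); // e.g. "t1"
    return txnMgr.getTableWriteId(db, table);
  }

  public static void main(String[] args) {
    TxnManager fake = (db, table) -> 42L; // pretend allocation
    System.out.println(assignWriteId(fake, "default.t1")); // 42
  }
}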
[03/50] [abbrv] hive git commit: HIVE-19214: High throughput ingest ORC format (Prasanth Jayachandran reviewed by Gopal V)

2018-04-26 Thread omalley
HIVE-19214: High throughput ingest ORC format (Prasanth Jayachandran reviewed 
by Gopal V)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/9fe65dae
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/9fe65dae
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/9fe65dae

Branch: refs/heads/storage-branch-2.6
Commit: 9fe65dae8938d497e463d2be061fb0591df6c7f7
Parents: 334c8ca
Author: Prasanth Jayachandran 
Authored: Mon Apr 23 09:52:08 2018 -0700
Committer: Prasanth Jayachandran 
Committed: Mon Apr 23 09:52:08 2018 -0700

--
 .../org/apache/hadoop/hive/conf/HiveConf.java   |  3 +
 .../llap/io/decode/OrcEncodedDataConsumer.java  | 13 +++-
 .../llap/io/encoded/OrcEncodedDataReader.java   | 10 +--
 .../llap/io/metadata/OrcStripeMetadata.java | 17 +
 .../hadoop/hive/ql/io/orc/OrcInputFormat.java   |  5 ++
 .../hadoop/hive/ql/io/orc/OrcRecordUpdater.java |  7 ++
 .../ql/io/orc/encoded/EncodedReaderImpl.java| 36 +++---
 .../queries/clientpositive/orc_ppd_exception.q  | 13 
 .../test/queries/clientpositive/vector_acid3.q  | 10 +++
 .../clientpositive/llap/vector_acid3.q.out  | 34 +
 .../clientpositive/orc_ppd_exception.q.out  | 57 
 .../results/clientpositive/vector_acid3.q.out   | 34 +
 .../apache/hive/streaming/TestStreaming.java| 72 
 13 files changed, 293 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/9fe65dae/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
--
diff --git a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
index 536c7b4..2403d7a 100644
--- a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+++ b/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -1874,6 +1874,9 @@ public class HiveConf extends Configuration {
 
 HIVE_ORC_BASE_DELTA_RATIO("hive.exec.orc.base.delta.ratio", 8, "The ratio 
of base writer and\n" +
 "delta writer in terms of STRIPE_SIZE and BUFFER_SIZE."),
+
HIVE_ORC_DELTA_STREAMING_OPTIMIZATIONS_ENABLED("hive.exec.orc.delta.streaming.optimizations.enabled",
 false,
+  "Whether to enable streaming optimizations for ORC delta files. This 
will disable ORC's internal indexes,\n" +
+"disable compression, enable fast encoding and disable dictionary 
encoding."),
 HIVE_ORC_SPLIT_STRATEGY("hive.exec.orc.split.strategy", "HYBRID", new 
StringSet("HYBRID", "BI", "ETL"),
 "This is not a user level config. BI strategy is used when the 
requirement is to spend less time in split generation" +
 " as opposed to query execution (split generation does not read or 
cache file footers)." +

http://git-wip-us.apache.org/repos/asf/hive/blob/9fe65dae/llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/OrcEncodedDataConsumer.java
--
diff --git 
a/llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/OrcEncodedDataConsumer.java
 
b/llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/OrcEncodedDataConsumer.java
index 9e8ae10..fc0c66a 100644
--- 
a/llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/OrcEncodedDataConsumer.java
+++ 
b/llap-server/src/java/org/apache/hadoop/hive/llap/io/decode/OrcEncodedDataConsumer.java
@@ -113,18 +113,25 @@ public class OrcEncodedDataConsumer
   ConsumerStripeMetadata stripeMetadata = stripes.get(currentStripeIndex);
   // Get non null row count from root column, to get max vector batches
   int rgIdx = batch.getBatchKey().rgIx;
-  long nonNullRowCount = -1;
+  long nonNullRowCount;
+  boolean noIndex = false;
   if (rgIdx == OrcEncodedColumnBatch.ALL_RGS) {
 nonNullRowCount = stripeMetadata.getRowCount();
   } else {
 OrcProto.RowIndexEntry rowIndex = stripeMetadata.getRowIndexEntry(0, 
rgIdx);
-nonNullRowCount = getRowCount(rowIndex);
+// index is disabled
+if (rowIndex == null) {
+  nonNullRowCount = stripeMetadata.getRowCount();
+  noIndex = true;
+} else {
+  nonNullRowCount = getRowCount(rowIndex);
+}
   }
   int maxBatchesRG = (int) ((nonNullRowCount / 
VectorizedRowBatch.DEFAULT_SIZE) + 1);
   int batchSize = VectorizedRowBatch.DEFAULT_SIZE;
   TypeDescription fileSchema = fileMetadata.getSchema();
 
-  if (columnReaders == null || !sameStripe) {
+  if (columnReaders == null || !sameStripe || noIndex) {
 createColumnReaders(batch, stripeMetadata, fileSchema);
   } else {
 repositionInStreams(this.columnReaders, batch, sameStripe, 

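A sketch of the batch sizing in OrcEncodedDataConsumer above: with row
indexes disabled by the new streaming-ingest mode, rowIndex comes back null,
so the consumer falls back to the whole stripe's row count (and recreates
the column readers). The constant mirrors VectorizedRowBatch.DEFAULT_SIZE.

public class BatchCountSketch {
  static final int DEFAULT_SIZE = 1024; // VectorizedRowBatch.DEFAULT_SIZE

  // Same arithmetic as the diff: integer division with one spare batch.
  static int maxBatches(long nonNullRowCount) {
    return (int) ((nonNullRowCount / DEFAULT_SIZE) + 1);
  }

  public static void main(String[] args) {
    System.out.println(maxBatches(10_000));    // 10: row-group-level count
    System.out.println(maxBatches(1_000_000)); // 977: stripe-level fallback
  }
}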
[13/50] [abbrv] hive git commit: HIVE-19168: Ranger changes for llap commands (Prasanth Jayachandran reviewed by Sergey Shelukhin)

2018-04-26 Thread omalley
HIVE-19168: Ranger changes for llap commands (Prasanth Jayachandran reviewed by 
Sergey Shelukhin)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/997ad34c
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/997ad34c
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/997ad34c

Branch: refs/heads/storage-branch-2.6
Commit: 997ad34c0ba4d3c3906e618d00074e8727e81346
Parents: b3e2d8a
Author: Prasanth Jayachandran 
Authored: Mon Apr 23 14:27:22 2018 -0700
Committer: Prasanth Jayachandran 
Committed: Mon Apr 23 14:27:22 2018 -0700

--
 .../authorization/TestJdbcWithSQLAuthorization.java |  7 ---
 .../ql/processors/LlapCacheResourceProcessor.java   |  5 -
 .../ql/processors/LlapClusterResourceProcessor.java | 16 +++-
 .../authorization/plugin/HiveOperationType.java |  4 ++--
 .../plugin/sqlstd/Operation2Privilege.java  |  7 +--
 5 files changed, 30 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/997ad34c/itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestJdbcWithSQLAuthorization.java
--
diff --git 
a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestJdbcWithSQLAuthorization.java
 
b/itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestJdbcWithSQLAuthorization.java
index 6d5c743..2009d11 100644
--- 
a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestJdbcWithSQLAuthorization.java
+++ 
b/itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestJdbcWithSQLAuthorization.java
@@ -172,9 +172,10 @@ public class TestJdbcWithSQLAuthorization {
 stmt.execute("llap cache -purge");
   } catch (SQLException e) {
 caughtException = true;
-String msg = "Error while processing statement: Permission denied: 
Principal [name=user1, type=USER] " +
-  "does not have following privileges for operation LLAP_CACHE [[ADMIN 
PRIVILEGE] on Object " +
-  "[type=COMMAND_PARAMS, name=[-purge]], [ADMIN PRIVILEGE] on Object 
[type=SERVICE_NAME, name=localhost]]";
+String msg = "Error while processing statement: Permission denied: 
Principal [name=user1, type=USER] does " +
+  "not have following privileges for operation LLAP_CACHE_PURGE 
[[ADMIN PRIVILEGE] on Object " +
+  "[type=COMMAND_PARAMS, name=[llap, cache, -purge]], [ADMIN 
PRIVILEGE] on Object " +
+  "[type=SERVICE_NAME, name=localhost]]";
 assertEquals(msg, e.getMessage());
   } finally {
 stmt.close();

http://git-wip-us.apache.org/repos/asf/hive/blob/997ad34c/ql/src/java/org/apache/hadoop/hive/ql/processors/LlapCacheResourceProcessor.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/processors/LlapCacheResourceProcessor.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/processors/LlapCacheResourceProcessor.java
index f455055..b11014c 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/processors/LlapCacheResourceProcessor.java
+++ 
b/ql/src/java/org/apache/hadoop/hive/ql/processors/LlapCacheResourceProcessor.java
@@ -58,6 +58,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
 
 public class LlapCacheResourceProcessor implements CommandProcessor {
   public static final Logger LOG = 
LoggerFactory.getLogger(LlapCacheResourceProcessor.class);
@@ -97,8 +98,10 @@ public class LlapCacheResourceProcessor implements 
CommandProcessor {
   hs2Host = ss.getHiveServer2Host();
 }
 if (purge) {
+  List<String> fullCommand = Lists.newArrayList("llap", "cache");
+  fullCommand.addAll(Arrays.asList(params));
   CommandProcessorResponse authErrResp =
-CommandUtil.authorizeCommandAndServiceObject(ss, 
HiveOperationType.LLAP_CACHE, Arrays.asList(params), hs2Host);
+CommandUtil.authorizeCommandAndServiceObject(ss, 
HiveOperationType.LLAP_CACHE_PURGE, fullCommand, hs2Host);
   if (authErrResp != null) {
 // there was an authorization issue
 return authErrResp;

http://git-wip-us.apache.org/repos/asf/hive/blob/997ad34c/ql/src/java/org/apache/hadoop/hive/ql/processors/LlapClusterResourceProcessor.java
--
diff --git 
a/ql/src/java/org/apache/hadoop/hive/ql/processors/LlapClusterResourceProcessor.java
 
b/ql/src/java/org/apache/hadoop/hive/ql/processors/LlapClusterResourceProcessor.java
index 0238727..64a5b10 100644
--- 
a/ql/src/java/org/apache/hadoop/hive/ql/processors/LlapClusterResourceProcessor.java
+++ 

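A minimal sketch of the authorization change: the command passed to the
authorizer now carries the full "llap cache ..." prefix rather than just the
trailing parameters, so admin policies can match on the whole command. Plain
JDK collections stand in for Guava's Lists here.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class LlapAuthCommandSketch {
  public static void main(String[] args) {
    String[] params = {"-purge"};
    List<String> fullCommand = new ArrayList<>(Arrays.asList("llap", "cache"));
    fullCommand.addAll(Arrays.asList(params));
    System.out.println(fullCommand); // [llap, cache, -purge]
  }
}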
[09/50] [abbrv] hive git commit: HIVE-19171 : Persist runtime statistics in metastore (Zoltan Haindrich via Ashutosh Chauhan)

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/b3e2d8a0/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 
b/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
index a354f27..e2f0e82 100644
--- 
a/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
+++ 
b/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
@@ -446,6 +446,10 @@ import org.slf4j.LoggerFactory;
 
 public boolean heartbeat_lock_materialization_rebuild(String dbName, 
String tableName, long txnId) throws org.apache.thrift.TException;
 
+public void add_runtime_stats(RuntimeStat stat) throws MetaException, 
org.apache.thrift.TException;
+
+public List<RuntimeStat> get_runtime_stats(GetRuntimeStatsRequest rqst) 
throws MetaException, org.apache.thrift.TException;
+
   }
 
   @org.apache.hadoop.classification.InterfaceAudience.Public 
@org.apache.hadoop.classification.InterfaceStability.Stable public interface 
AsyncIface extends com.facebook.fb303.FacebookService .AsyncIface {
@@ -854,6 +858,10 @@ import org.slf4j.LoggerFactory;
 
 public void heartbeat_lock_materialization_rebuild(String dbName, String 
tableName, long txnId, org.apache.thrift.async.AsyncMethodCallback 
resultHandler) throws org.apache.thrift.TException;
 
+public void add_runtime_stats(RuntimeStat stat, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
+
+public void get_runtime_stats(GetRuntimeStatsRequest rqst, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException;
+
   }
 
   @org.apache.hadoop.classification.InterfaceAudience.Public 
@org.apache.hadoop.classification.InterfaceStability.Stable public static class 
Client extends com.facebook.fb303.FacebookService.Client implements Iface {
@@ -6692,6 +6700,55 @@ import org.slf4j.LoggerFactory;
   throw new 
org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT,
 "heartbeat_lock_materialization_rebuild failed: unknown result");
 }
 
+public void add_runtime_stats(RuntimeStat stat) throws MetaException, 
org.apache.thrift.TException
+{
+  send_add_runtime_stats(stat);
+  recv_add_runtime_stats();
+}
+
+public void send_add_runtime_stats(RuntimeStat stat) throws 
org.apache.thrift.TException
+{
+  add_runtime_stats_args args = new add_runtime_stats_args();
+  args.setStat(stat);
+  sendBase("add_runtime_stats", args);
+}
+
+public void recv_add_runtime_stats() throws MetaException, 
org.apache.thrift.TException
+{
+  add_runtime_stats_result result = new add_runtime_stats_result();
+  receiveBase(result, "add_runtime_stats");
+  if (result.o1 != null) {
+throw result.o1;
+  }
+  return;
+}
+
+public List<RuntimeStat> get_runtime_stats(GetRuntimeStatsRequest rqst) 
throws MetaException, org.apache.thrift.TException
+{
+  send_get_runtime_stats(rqst);
+  return recv_get_runtime_stats();
+}
+
+public void send_get_runtime_stats(GetRuntimeStatsRequest rqst) throws 
org.apache.thrift.TException
+{
+  get_runtime_stats_args args = new get_runtime_stats_args();
+  args.setRqst(rqst);
+  sendBase("get_runtime_stats", args);
+}
+
+public List<RuntimeStat> recv_get_runtime_stats() throws MetaException, 
org.apache.thrift.TException
+{
+  get_runtime_stats_result result = new get_runtime_stats_result();
+  receiveBase(result, "get_runtime_stats");
+  if (result.isSetSuccess()) {
+return result.success;
+  }
+  if (result.o1 != null) {
+throw result.o1;
+  }
+  throw new 
org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT,
 "get_runtime_stats failed: unknown result");
+}
+
   }
   @org.apache.hadoop.classification.InterfaceAudience.Public 
@org.apache.hadoop.classification.InterfaceStability.Stable public static class 
AsyncClient extends com.facebook.fb303.FacebookService.AsyncClient implements 
AsyncIface {
 @org.apache.hadoop.classification.InterfaceAudience.Public 
@org.apache.hadoop.classification.InterfaceStability.Stable public static class 
Factory implements org.apache.thrift.async.TAsyncClientFactory<AsyncClient> {
@@ -13660,6 +13717,70 @@ import org.slf4j.LoggerFactory;
   }
 }
 
+public void add_runtime_stats(RuntimeStat stat, 
org.apache.thrift.async.AsyncMethodCallback resultHandler) throws 
org.apache.thrift.TException {
+  checkReady();
+  add_runtime_stats_call method_call = new add_runtime_stats_call(stat, 
resultHandler, this, 

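The generated Thrift client above follows one fixed shape: a synchronous
call is send_ (marshal args) followed by recv_ (read the result struct,
surface any declared exception, otherwise fail with "unknown result"). A
miniature of that shape with stand-in types, not the generated classes:

public class ThriftCallSketch {
  static class Args { String payload; }
  static class Result { String value; Exception o1; }

  interface Wire { Result roundTrip(Args a); } // pretend transport

  static String getRuntimeStats(Wire wire, String rqst) throws Exception {
    Args args = new Args();          // send_: marshal the request
    args.payload = rqst;
    Result r = wire.roundTrip(args); // recv_: unmarshal the response
    if (r.o1 != null) throw r.o1;    // declared thrift exception (e.g. MetaException)
    if (r.value != null) return r.value;
    throw new IllegalStateException("get_runtime_stats failed: unknown result");
  }

  public static void main(String[] args) throws Exception {
    Wire fake = a -> { Result r = new Result(); r.value = "ok:" + a.payload; return r; };
    System.out.println(getRuntimeStats(fake, "all"));
  }
}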
[06/50] [abbrv] hive git commit: HIVE-17970 : MM LOAD DATA with OVERWRITE doesn't use base_n directory concept (Sergey Shelukhin, reviewed by Eugene Koifman)

2018-04-26 Thread omalley
HIVE-17970 : MM LOAD DATA with OVERWRITE doesn't use base_n directory concept 
(Sergey Shelukhin, reviewed by Eugene Koifman)


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/4f67bebe
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/4f67bebe
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/4f67bebe

Branch: refs/heads/storage-branch-2.6
Commit: 4f67bebe1916b77a8366a2f1627d59bb2d800522
Parents: 6224401
Author: sergey 
Authored: Mon Apr 23 11:18:51 2018 -0700
Committer: sergey 
Committed: Mon Apr 23 11:18:51 2018 -0700

--
 .../apache/hadoop/hive/common/JavaUtils.java|  25 +-
 .../hadoop/hive/ql/history/TestHiveHistory.java |   2 +-
 .../test/resources/testconfiguration.properties |   1 +
 .../apache/hadoop/hive/ql/exec/MoveTask.java|  14 +-
 .../apache/hadoop/hive/ql/exec/Utilities.java   |   6 +-
 .../apache/hadoop/hive/ql/metadata/Hive.java| 112 +++
 .../hive/ql/parse/LoadSemanticAnalyzer.java |  18 +-
 .../hadoop/hive/ql/exec/TestExecDriver.java |   2 +-
 .../clientpositive/llap/mm_loaddata.q.out   | 296 +++
 .../results/clientpositive/mm_loaddata.q.out| 296 ---
 10 files changed, 361 insertions(+), 411 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/4f67bebe/common/src/java/org/apache/hadoop/hive/common/JavaUtils.java
--
diff --git a/common/src/java/org/apache/hadoop/hive/common/JavaUtils.java 
b/common/src/java/org/apache/hadoop/hive/common/JavaUtils.java
index 7894ec1..45abd2f 100644
--- a/common/src/java/org/apache/hadoop/hive/common/JavaUtils.java
+++ b/common/src/java/org/apache/hadoop/hive/common/JavaUtils.java
@@ -188,39 +188,26 @@ public final class JavaUtils {
 
   public static class IdPathFilter implements PathFilter {
 private String baseDirName, deltaDirName;
-private final boolean isMatch, isIgnoreTemp, isDeltaPrefix;
+private final boolean isDeltaPrefix;
 
-public IdPathFilter(long writeId, int stmtId, boolean isMatch) {
-  this(writeId, stmtId, isMatch, false);
-}
-
-public IdPathFilter(long writeId, int stmtId, boolean isMatch, boolean 
isIgnoreTemp) {
+public IdPathFilter(long writeId, int stmtId) {
   String deltaDirName = null;
   deltaDirName = DELTA_PREFIX + "_" + String.format(DELTA_DIGITS, writeId) 
+ "_" +
-  String.format(DELTA_DIGITS, writeId) + "_";
+  String.format(DELTA_DIGITS, writeId);
   isDeltaPrefix = (stmtId < 0);
   if (!isDeltaPrefix) {
-deltaDirName += String.format(STATEMENT_DIGITS, stmtId);
+deltaDirName += "_" + String.format(STATEMENT_DIGITS, stmtId);
   }
 
   this.baseDirName = BASE_PREFIX + "_" + String.format(DELTA_DIGITS, 
writeId);
   this.deltaDirName = deltaDirName;
-  this.isMatch = isMatch;
-  this.isIgnoreTemp = isIgnoreTemp;
 }
 
 @Override
 public boolean accept(Path path) {
   String name = path.getName();
-  if (name.equals(baseDirName) || (isDeltaPrefix && 
name.startsWith(deltaDirName))
-  || (!isDeltaPrefix && name.equals(deltaDirName))) {
-return isMatch;
-  }
-  if (isIgnoreTemp && name.length() > 0) {
-char c = name.charAt(0);
-if (c == '.' || c == '_') return false; // Regardless of isMatch, 
ignore this.
-  }
-  return !isMatch;
+  return name.equals(baseDirName) || (isDeltaPrefix && 
name.startsWith(deltaDirName))
+  || (!isDeltaPrefix && name.equals(deltaDirName));
 }
   }
 

http://git-wip-us.apache.org/repos/asf/hive/blob/4f67bebe/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/history/TestHiveHistory.java
--
diff --git 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/history/TestHiveHistory.java
 
b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/history/TestHiveHistory.java
index 0168472..9b50fd4 100644
--- 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/history/TestHiveHistory.java
+++ 
b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/history/TestHiveHistory.java
@@ -107,7 +107,7 @@ public class TestHiveHistory extends TestCase {
 db.createTable(src, cols, null, TextInputFormat.class,
 IgnoreKeyTextOutputFormat.class);
 db.loadTable(hadoopDataFile[i], src,
-  LoadFileType.KEEP_EXISTING, false, false, false, false, null, 0);
+  LoadFileType.KEEP_EXISTING, false, false, false, false, null, 0, 
false);
 i++;
   }
 

http://git-wip-us.apache.org/repos/asf/hive/blob/4f67bebe/itests/src/test/resources/testconfiguration.properties

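A reduced model of the simplified IdPathFilter above, assuming AcidUtils'
usual directory naming (base_%07d, delta_%07d_%07d with an optional _%04d
statement suffix): accept exactly the base dir for the write id, and either
the exact delta dir or, when the statement id is unknown (stmtId < 0), any
delta dir with the matching prefix.

public class IdPathFilterSketch {
  final String baseDirName, deltaDirName;
  final boolean isDeltaPrefix;

  IdPathFilterSketch(long writeId, int stmtId) {
    String delta = String.format("delta_%07d_%07d", writeId, writeId);
    isDeltaPrefix = (stmtId < 0);
    if (!isDeltaPrefix) {
      delta += String.format("_%04d", stmtId); // statement-scoped delta
    }
    baseDirName = String.format("base_%07d", writeId);
    deltaDirName = delta;
  }

  boolean accept(String name) { // stands in for accept(Path path)
    return name.equals(baseDirName)
        || (isDeltaPrefix && name.startsWith(deltaDirName))
        || (!isDeltaPrefix && name.equals(deltaDirName));
  }

  public static void main(String[] args) {
    IdPathFilterSketch f = new IdPathFilterSketch(5, -1);
    System.out.println(f.accept("base_0000005"));               // true
    System.out.println(f.accept("delta_0000005_0000005_0001")); // true (prefix)
    System.out.println(f.accept("delta_0000004_0000004"));      // false
  }
}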
[18/50] [abbrv] hive git commit: Revert "HIVE-19171 : Persist runtime statistics in metastore (Zoltan Haindrich via Ashutosh Chauhan)"

2018-04-26 Thread omalley
http://git-wip-us.apache.org/repos/asf/hive/blob/f0199500/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
--
diff --git 
a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp 
b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
index 4787703..dfa13a0 100644
--- a/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
+++ b/standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
@@ -2107,14 +2107,14 @@ uint32_t 
ThriftHiveMetastore_get_databases_result::read(::apache::thrift::protoc
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 this->success.clear();
-uint32_t _size1191;
-::apache::thrift::protocol::TType _etype1194;
-xfer += iprot->readListBegin(_etype1194, _size1191);
-this->success.resize(_size1191);
-uint32_t _i1195;
-for (_i1195 = 0; _i1195 < _size1191; ++_i1195)
+uint32_t _size1187;
+::apache::thrift::protocol::TType _etype1190;
+xfer += iprot->readListBegin(_etype1190, _size1187);
+this->success.resize(_size1187);
+uint32_t _i1191;
+for (_i1191 = 0; _i1191 < _size1187; ++_i1191)
 {
-  xfer += iprot->readString(this->success[_i1195]);
+  xfer += iprot->readString(this->success[_i1191]);
 }
 xfer += iprot->readListEnd();
   }
@@ -2153,10 +2153,10 @@ uint32_t 
ThriftHiveMetastore_get_databases_result::write(::apache::thrift::proto
 xfer += oprot->writeFieldBegin("success", 
::apache::thrift::protocol::T_LIST, 0);
 {
   xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, 
static_cast<uint32_t>(this->success.size()));
-  std::vector<std::string> ::const_iterator _iter1196;
-  for (_iter1196 = this->success.begin(); _iter1196 != 
this->success.end(); ++_iter1196)
+  std::vector<std::string> ::const_iterator _iter1192;
+  for (_iter1192 = this->success.begin(); _iter1192 != 
this->success.end(); ++_iter1192)
   {
-xfer += oprot->writeString((*_iter1196));
+xfer += oprot->writeString((*_iter1192));
   }
   xfer += oprot->writeListEnd();
 }
@@ -2201,14 +2201,14 @@ uint32_t 
ThriftHiveMetastore_get_databases_presult::read(::apache::thrift::proto
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 (*(this->success)).clear();
-uint32_t _size1197;
-::apache::thrift::protocol::TType _etype1200;
-xfer += iprot->readListBegin(_etype1200, _size1197);
-(*(this->success)).resize(_size1197);
-uint32_t _i1201;
-for (_i1201 = 0; _i1201 < _size1197; ++_i1201)
+uint32_t _size1193;
+::apache::thrift::protocol::TType _etype1196;
+xfer += iprot->readListBegin(_etype1196, _size1193);
+(*(this->success)).resize(_size1193);
+uint32_t _i1197;
+for (_i1197 = 0; _i1197 < _size1193; ++_i1197)
 {
-  xfer += iprot->readString((*(this->success))[_i1201]);
+  xfer += iprot->readString((*(this->success))[_i1197]);
 }
 xfer += iprot->readListEnd();
   }
@@ -2325,14 +2325,14 @@ uint32_t 
ThriftHiveMetastore_get_all_databases_result::read(::apache::thrift::pr
 if (ftype == ::apache::thrift::protocol::T_LIST) {
   {
 this->success.clear();
-uint32_t _size1202;
-::apache::thrift::protocol::TType _etype1205;
-xfer += iprot->readListBegin(_etype1205, _size1202);
-this->success.resize(_size1202);
-uint32_t _i1206;
-for (_i1206 = 0; _i1206 < _size1202; ++_i1206)
+uint32_t _size1198;
+::apache::thrift::protocol::TType _etype1201;
+xfer += iprot->readListBegin(_etype1201, _size1198);
+this->success.resize(_size1198);
+uint32_t _i1202;
+for (_i1202 = 0; _i1202 < _size1198; ++_i1202)
 {
-  xfer += iprot->readString(this->success[_i1206]);
+  xfer += iprot->readString(this->success[_i1202]);
 }
 xfer += iprot->readListEnd();
   }
@@ -2371,10 +2371,10 @@ uint32_t 
ThriftHiveMetastore_get_all_databases_result::write(::apache::thrift::p
 xfer += oprot->writeFieldBegin("success", 
::apache::thrift::protocol::T_LIST, 0);
 {
   xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, 
static_cast<uint32_t>(this->success.size()));
-  std::vector<std::string> ::const_iterator _iter1207;
-  for (_iter1207 = this->success.begin(); _iter1207 != 
this->success.end(); ++_iter1207)
+  std::vector ::const_iterator _iter1203;
+  for (_iter1203 = this->success.begin(); _iter1203 != 
this->success.end(); ++_iter1203)
   {
-xfer += 

hive git commit: Prepare for storage-api 2.6 branch.

2018-04-26 Thread omalley
Repository: hive
Updated Branches:
  refs/heads/storage-branch-2.6 [created] a60871fd7


Prepare for storage-api 2.6 branch.


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/a60871fd
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/a60871fd
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/a60871fd

Branch: refs/heads/storage-branch-2.6
Commit: a60871fd79b49a7b5ff0d491236b87b73b40f7ec
Parents: b3fe652
Author: Owen O'Malley 
Authored: Thu Apr 26 07:55:13 2018 -0700
Committer: Owen O'Malley 
Committed: Thu Apr 26 07:55:13 2018 -0700

--
 pom.xml | 29 -
 1 file changed, 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/a60871fd/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 5802bd3..67ce988 100644
--- a/pom.xml
+++ b/pom.xml
@@ -32,35 +32,6 @@
 
   
 storage-api
-accumulo-handler
-vector-code-gen
-beeline
-classification
-cli
-common
-contrib
-druid-handler
-hbase-handler
-jdbc-handler
-hcatalog
-hplsql
-jdbc
-metastore
-ql
-serde
-service-rpc
-service
-llap-common
-llap-client
-llap-ext-client
-llap-tez
-llap-server
-shims
-spark-client
-kryo-registrator
-testutils
-packaging
-standalone-metastore
   
 
   



svn commit: r26000 - in /release/hive: hive-parent-auth-hook/ ldap-fix/

2018-03-27 Thread omalley
Author: omalley
Date: Tue Mar 27 23:46:35 2018
New Revision: 26000

Log:
Remove old hot fix security releases.

Removed:
release/hive/hive-parent-auth-hook/
release/hive/ldap-fix/



hive git commit: Prepare for development after Hive Storage API 2.5.0.

2018-03-27 Thread omalley
Repository: hive
Updated Branches:
  refs/heads/storage-branch-2.5 4e018faab -> 605d8fa28


Prepare for development after Hive Storage API 2.5.0.

Signed-off-by: Owen O'Malley 


Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/605d8fa2
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/605d8fa2
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/605d8fa2

Branch: refs/heads/storage-branch-2.5
Commit: 605d8fa28afc93910441082780fb8a26685e7499
Parents: 4e018fa
Author: Owen O'Malley 
Authored: Tue Mar 27 16:19:04 2018 -0700
Committer: Owen O'Malley 
Committed: Tue Mar 27 16:19:04 2018 -0700

--
 storage-api/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hive/blob/605d8fa2/storage-api/pom.xml
--
diff --git a/storage-api/pom.xml b/storage-api/pom.xml
index 836316f..d213b88 100644
--- a/storage-api/pom.xml
+++ b/storage-api/pom.xml
@@ -25,7 +25,7 @@
 
   org.apache.hive
   hive-storage-api
-  2.5.0
+  2.5.1-SNAPSHOT
   jar
   Hive Storage API
 



[hive] Git Push Summary

2018-03-27 Thread omalley
Repository: hive
Updated Tags:  refs/tags/storage-release-2.5.0-rc0 [deleted] d5d31f283

