[jira] [Closed] (HAWQ-1194) Add EncryptionZones related RPC

2017-07-12 Thread Amy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amy closed HAWQ-1194.
-

> Add EncryptionZones related RPC
> ---
>
> Key: HAWQ-1194
> URL: https://issues.apache.org/jira/browse/HAWQ-1194
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs
>Reporter: Hongxu Ma
>Assignee: Amy
> Fix For: 2.2.0.0-incubating
>
>
> Add createEncryption, getEZForPath, listEncryptionZones RPC



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1194) Add EncryptionZones related RPC

2017-07-12 Thread Amy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amy resolved HAWQ-1194.
---
Resolution: Fixed

> Add EncryptionZones related RPC
> ---
>
> Key: HAWQ-1194
> URL: https://issues.apache.org/jira/browse/HAWQ-1194
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs
>Reporter: Hongxu Ma
>Assignee: Amy
> Fix For: 2.2.0.0-incubating
>
>
> Add createEncryption, getEZForPath, listEncryptionZones RPC



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1194) Add EncryptionZones related RPC

2017-07-12 Thread Amy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amy updated HAWQ-1194:
--
Fix Version/s: (was: backlog)
   2.2.0.0-incubating

> Add EncryptionZones related RPC
> ---
>
> Key: HAWQ-1194
> URL: https://issues.apache.org/jira/browse/HAWQ-1194
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs
>Reporter: Hongxu Ma
>Assignee: Amy
> Fix For: 2.2.0.0-incubating
>
>
> Add createEncryption, getEZForPath, listEncryptionZones RPC



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (HAWQ-1194) Add EncryptionZones related RPC

2017-07-12 Thread Amy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amy reopened HAWQ-1194:
---

> Add EncryptionZones related RPC
> ---
>
> Key: HAWQ-1194
> URL: https://issues.apache.org/jira/browse/HAWQ-1194
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs
>Reporter: Hongxu Ma
>Assignee: Amy
> Fix For: backlog
>
>
> Add createEncryption, getEZForPath, listEncryptionZones RPC



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-12 Thread linwen
Github user linwen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r127118606
  
--- Diff: depends/libhdfs3/src/client/CryptoCodec.cpp ---
@@ -0,0 +1,163 @@
+/********************************************************************
+ * 2014 -
+ * open source under Apache License Version 2.0
+ ********************************************************************/
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "CryptoCodec.h"
+#include "Logger.h"
+
+using namespace Hdfs::Internal;
+
+namespace Hdfs {
+
+/**
+ * Construct a CryptoCodec instance.
+ * @param encryptionInfo the encryption info of file.
+ * @param kcp a KmsClientProvider instance to get key from kms server.
+ * @param bufSize crypto buffer size.
+ */
+CryptoCodec::CryptoCodec(FileEncryptionInfo *encryptionInfo,
+        std::shared_ptr<KmsClientProvider> kcp, int32_t bufSize) :
+        encryptionInfo(encryptionInfo), kcp(kcp), bufSize(bufSize)
+{
+   
+   /* Init global status. */
+   ERR_load_crypto_strings();
+   OpenSSL_add_all_algorithms();
+   OPENSSL_config(NULL);
+
+   /* Create cipher context. */
+   encryptCtx = EVP_CIPHER_CTX_new();  
+   cipher = NULL;  
+
+}
+
+/**
+ * Destroy a CryptoCodec instance.
+ */
+CryptoCodec::~CryptoCodec()
+{
+   if (encryptCtx) 
+   EVP_CIPHER_CTX_free(encryptCtx);
+}
+
+/**
+ * Get decrypted key from kms.
+ */
+std::string CryptoCodec::getDecryptedKeyFromKms()
+{
+   ptree map = kcp->decryptEncryptedKey(*encryptionInfo);
+   std::string key = map.get<std::string>("material");
+
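+   /* The decrypted key material arrives base64url-encoded (RFC 4648 §5);
+    * the steps below restore the '=' padding and map '-'/'_' back to
+    * '+'/'/' so that a standard base64 decoder can consume the key. */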
+   int rem = key.length() % 4;
+   if (rem) {
+   rem = 4 - rem;
+   while (rem != 0) {
+   key = key + "=";
+   rem--;
+   }
+   }
+
+   std::replace(key.begin(), key.end(), '-', '+');
+   std::replace(key.begin(), key.end(), '_', '/');
+
+   LOG(INFO, "material is :%s", key.c_str());
--- End diff --

Suggest providing a clearer log message, and if this function is called 
frequently, use DEBUG3 instead of INFO. 
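
For illustration, a minimal sketch of the suggested change, assuming this 
codebase's Logger exposes a DEBUG3 severity as implied above; logging only the 
key length, rather than the raw material, also keeps the secret out of the log:

    /* Sketch of the suggested change; assumes LOG(DEBUG3, ...) is available
     * here. Logging the key length instead of the raw material avoids
     * writing the secret itself to the log. */
    LOG(DEBUG3, "CryptoCodec: decrypted a key of length %d from KMS",
        static_cast<int>(key.length()));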


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (HAWQ-1194) Add EncryptionZones related RPC

2017-07-12 Thread Radar Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16085089#comment-16085089
 ] 

Radar Lei commented on HAWQ-1194:
-

[~vVineet] I checked the commit log; this belongs to 2.2.0.0.

> Add EncryptionZones related RPC
> ---
>
> Key: HAWQ-1194
> URL: https://issues.apache.org/jira/browse/HAWQ-1194
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs
>Reporter: Hongxu Ma
>Assignee: Amy
> Fix For: backlog
>
>
> Add createEncryption, getEZForPath, listEncryptionZones RPC



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-12 Thread stanlyxiang
Github user stanlyxiang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r127112526
  
--- Diff: depends/libhdfs3/src/client/HttpClient.h ---
@@ -0,0 +1,155 @@
+/********************************************************************
+ * 2014 -
+ * open source under Apache License Version 2.0
+ ********************************************************************/
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef _HDFS_LIBHDFS3_CLIENT_HTTPCLIENT_H_
+#define _HDFS_LIBHDFS3_CLIENT_HTTPCLIENT_H_
+
+#include 
+#include 
+#include 
+#include "Exception.h"
+#include "ExceptionInternal.h"
+
+typedef enum httpMethod {
+HTTP_GET = 0,
+   HTTP_POST = 1,
+   HTTP_DELETE = 2,
+   HTTP_PUT = 3
+} httpMethod;
+
+namespace Hdfs {
+
+class HttpClient {
+public:
--- End diff --

If there is follow-up work to do in the future, I think a JIRA should be filed.
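
For context, a hypothetical sketch of how the httpMethod enum quoted above 
might be dispatched, assuming the implementation (not shown in this excerpt) 
is built on libcurl:

    #include <curl/curl.h>

    /* Hypothetical helper, for illustration only: maps the httpMethod values
     * from the quoted header onto libcurl request options. */
    static void applyMethod(CURL *handle, httpMethod method) {
        switch (method) {
        case HTTP_GET:
            curl_easy_setopt(handle, CURLOPT_HTTPGET, 1L);
            break;
        case HTTP_POST:
            curl_easy_setopt(handle, CURLOPT_POST, 1L);
            break;
        case HTTP_DELETE:
            curl_easy_setopt(handle, CURLOPT_CUSTOMREQUEST, "DELETE");
            break;
        case HTTP_PUT:
            curl_easy_setopt(handle, CURLOPT_UPLOAD, 1L);
            break;
        }
    }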


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (HAWQ-1487) hang process due to deadlock when it try to process interrupt in error handling

2017-07-12 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-1487.
---
Resolution: Fixed

> hang process due to deadlock when it try to process interrupt in error 
> handling
> ---
>
> Key: HAWQ-1487
> URL: https://issues.apache.org/jira/browse/HAWQ-1487
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.2.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.3.0.0-incubating
>
>
> A process hangs due to a deadlock when it tries to process an interrupt during 
> error handling. To be specific, a QE encounters a division-by-zero error and 
> errors out. While processing that error, it tries to handle the query-cancel 
> interrupt, and a deadlock occurs.
> The hung process is:
> {noformat}
> $ hawq ssh -f hostfile -e "ps -ef | grep postgres | grep -v grep"
> gpadmin   51246  51245  0 06:15 ?        00:00:01 postgres: port 20100, 
> logger p
> gpadmin   51249  51245  0 06:15 ?        00:00:00 postgres: port 20100, stats 
> co
> gpadmin   51250  51245  0 06:15 ?        00:00:07 postgres: port 20100, 
> writer p
> gpadmin   51251  51245  0 06:15 ?        00:00:01 postgres: port 20100, 
> checkpoi
> gpadmin   51252  51245  0 06:15 ?        00:00:11 postgres: port 20100, 
> segment
> gpadmin  182983  51245  0 07:00 ?        00:00:03 postgres: port 20100, 
> hawqsupe
> $ ps -ef | grep postgres | grep -v grep
> gpadmin   51245      1  0 06:15 ?        00:01:01 
> /usr/local/hawq_2_2_0_0/bin/postgres -D 
> /data/pulse-agent-data/HAWQ-main-FeatureTest-opt-Multinode-parallel/product/segmentdd
>  -i -M segment -p 20100 --silent-mode=true
> gpadmin   51246  51245  0 06:15 ?        00:00:01 postgres: port 20100, 
> logger process
> gpadmin   51249  51245  0 06:15 ?        00:00:00 postgres: port 20100, stats 
> collector process
> gpadmin   51250  51245  0 06:15 ?        00:00:07 postgres: port 20100, 
> writer process
> gpadmin   51251  51245  0 06:15 ?        00:00:01 postgres: port 20100, 
> checkpoint process
> gpadmin   51252  51245  0 06:15 ?        00:00:11 postgres: port 20100, 
> segment resource manager
> gpadmin  182983  51245  0 07:00 ?        00:00:03 postgres: port 20100, 
> hawqsuperuser olap_winow... 10.32.34.225(45462) con4405 seg0 cmd2 slice7 
> MPPEXEC SELECT
> gpadmin  194424 194402  0 23:50 pts/0    00:00:00 grep postgres
> {noformat}
> The call stack is:
> {noformat}
> $ sudo gdb -p 182983
> (gdb) bt
> #0  0x003ff060e2e4 in __lll_lock_wait () from /lib64/libpthread.so.0
> #1  0x003ff0609588 in _L_lock_854 () from /lib64/libpthread.so.0
> #2  0x003ff0609457 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3  0x003ff221206a in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
> #4  0x003ff220f603 in ?? () from /lib64/libgcc_s.so.1
> #5  0x003ff220ff49 in ?? () from /lib64/libgcc_s.so.1
> #6  0x003ff22100e7 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
> #7  0x003ff02fe966 in backtrace () from /lib64/libc.so.6
> #8  0x009cda3f in errstart (elevel=20, filename=0xd309e0 
> "postgres.c", lineno=3618,
> funcname=0xd32fc0 "ProcessInterrupts", domain=0x0) at elog.c:492
> #9  0x008e8fcb in ProcessInterrupts () at postgres.c:3616
> #10 0x008e8c9e in StatementCancelHandler (postgres_signal_arg=2) at 
> postgres.c:3463
> #11 <signal handler called>
> #12 0x003ff0609451 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #13 0x003ff221206a in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
> #14 0x003ff220f603 in ?? () from /lib64/libgcc_s.so.1
> #15 0x003ff2210119 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
> #16 0x003ff02fe966 in backtrace () from /lib64/libc.so.6
> #17 0x009cda3f in errstart (elevel=20, filename=0xd3ba00 "float.c", 
> lineno=839, funcname=0xd3bf3a "float8div",
> domain=0x0) at elog.c:492
> #18 0x00921a84 in float8div (fcinfo=0x7ffd04d2b8b0) at float.c:836
> #19 0x00722fe5 in ExecMakeFunctionResult (fcache=0x324a088, 
> econtext=0x32495d8, isNull=0x7ffd04d2c0e0 "\030",
> isDone=0x7ffd04d2bd04) at execQual.c:1762
> #20 0x00723d87 in ExecEvalOper (fcache=0x324a088, econtext=0x32495d8, 
> isNull=0x7ffd04d2c0e0 "\030",
> isDone=0x7ffd04d2bd04) at execQual.c:2250
> #21 0x00722451 in ExecEvalFuncArgs (fcinfo=0x7ffd04d2bda0, 
> argList=0x324b378, econtext=0x32495d8) at execQual.c:1317
> #22 0x00722a68 in ExecMakeFunctionResult (fcache=0x3249850, 
> econtext=0x32495d8,
> isNull=0x7ffd04d2c5c1 "\306\322\004\375\177", isDone=0x0) at 
> execQual.c:1532
> #23 0x00723d1e in ExecEvalFunc (fcache=0x3249850, econtext=0x32495d8, 
> isNull=0x7ffd04d2c5c1 "\306\322\004\375\177",
> isDone=0x0) at execQual.c:2228
> #24 
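
The stack above shows the underlying hazard: backtrace() enters libgcc's 
unwinder, whose _Unwind_Find_FDE serializes on a process-wide mutex that is 
not async-signal-safe; the cancel signal handler interrupts errstart() 
mid-unwind and calls backtrace() again, blocking on the mutex the thread 
already holds. A minimal self-contained sketch of the same pattern 
(hypothetical, for illustration only):

    #include <execinfo.h>
    #include <signal.h>

    /* Hypothetical repro: backtrace() is not async-signal-safe because the
     * unwinder takes a process-wide mutex. If SIGINT lands while main() is
     * inside backtrace(), the handler's own backtrace() call re-enters the
     * unwinder and blocks on the mutex already held by this same thread --
     * the self-deadlock seen in the HAWQ stack above. */
    static void cancel_handler(int sig) {
        void *frames[32];
        (void) sig;
        backtrace(frames, 32);   /* re-enters the unwinder: may self-deadlock */
    }

    int main(void) {
        void *frames[32];
        signal(SIGINT, cancel_handler);
        for (;;) {
            backtrace(frames, 32);   /* briefly holds the unwinder mutex */
        }
    }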

[jira] [Closed] (HAWQ-1487) hang process due to deadlock when it try to process interrupt in error handling

2017-07-12 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-1487.
-

> hang process due to deadlock when it try to process interrupt in error 
> handling
> ---
>
> Key: HAWQ-1487
> URL: https://issues.apache.org/jira/browse/HAWQ-1487
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.2.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.3.0.0-incubating
>

[jira] [Commented] (HAWQ-1487) hang process due to deadlock when it try to process interrupt in error handling

2017-07-12 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16084988#comment-16084988
 ] 

Ruilong Huo commented on HAWQ-1487:
---

Thanks [~vVineet], closing this jira as the fix is available.

> hang process due to deadlock when it try to process interrupt in error 
> handling
> ---
>
> Key: HAWQ-1487
> URL: https://issues.apache.org/jira/browse/HAWQ-1487
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.2.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.3.0.0-incubating
>

[jira] [Commented] (HAWQ-1453) relation_close() report error at analyzeStmt(): is not owned by resource owner TopTransaction (resowner.c:814)

2017-07-12 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16084933#comment-16084933
 ] 

Vineet Goel commented on HAWQ-1453:
---

[~liming01] - I think this JIRA can be resolved now, since you merged the PR. 
If that's correct, could you please Resolve this? 
Thanks

> relation_close() report error at analyzeStmt(): is not owned by resource 
> owner TopTransaction (resowner.c:814)
> --
>
> Key: HAWQ-1453
> URL: https://issues.apache.org/jira/browse/HAWQ-1453
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: backlog
>
>
> I created a simple MapReduce map-only program (to simulate a Spark executor, 
> as in the customer's environment) that uses JDBC through the PostgreSQL driver 
> (as the customer is doing), and I executed the queries the customer is trying 
> to execute. I can reproduce all the errors reported by the customer.
> 2017-04-28 03:50:38.299276 
> IST,"gpadmin","gpadmin",p91745,th-609535712,"10.193.102.144","3228",2017-04-28
>  03:50:35 
> IST,156637,con4578,cmd36,seg-1,,,x156637,sx1,"ERROR","XX000","relcache 
> reference e_event_1_0_102_1_prt_2 is not owned by resource owner 
> TopTransaction (resowner.c:814)",,"ANALYZE 
> mis_data_ig_account_details.e_event_1_0_102",0,,"resowner.c",814,"Stack trace:
> 1    0x8ce4a8 postgres errstart + 0x288
> 2    0x8d022b postgres elog_finish + 0xab
> 3    0x4ca654 postgres relation_close + 0x14
> 4    0x5e7508 postgres analyzeStmt + 0xd58
> 5    0x5e8b07 postgres analyzeStatement + 0x97
> 6    0x65c3bc postgres vacuum + 0x6c
> 7    0x7f61e2 postgres ProcessUtility + 0x542
> 8    0x7f1cae postgres  + 0x7f1cae
> 9    0x7f348e postgres  + 0x7f348e
> 10   0x7f51f5 postgres PortalRun + 0x465
> 11   0x7ee268 postgres PostgresMain + 0x1908
> 12   0x7a0560 postgres  + 0x7a0560
> 13   0x7a3329 postgres PostmasterMain + 0x759
> 14   0x4a5319 postgres main + 0x519
> 15   0x3a1661ed1d libc.so.6 __libc_start_main + 0xfd
> 16   0x4a5399 postgres  + 0x4a5399
> "



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1443) Implement Ranger lookup for HAWQ with Kerberos enabled.

2017-07-12 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16084930#comment-16084930
 ] 

Vineet Goel commented on HAWQ-1443:
---

Is this JIRA ready to resolve now?

> Implement Ranger lookup for HAWQ with Kerberos enabled.
> ---
>
> Key: HAWQ-1443
> URL: https://issues.apache.org/jira/browse/HAWQ-1443
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
> Fix For: backlog
>
> Attachments: Kerberos Support for Ranger Lookup HAWQ.pdf
>
>
> When adding a HAWQ service in Ranger, we also need to configure the Ranger 
> lookup service for HAWQ. Lookup can be done through JDBC with a username and 
> password, but it cannot support Kerberos authentication currently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1438) Analyze report error: relcache reference xxx is not owned by resource owner TopTransaction

2017-07-12 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16084928#comment-16084928
 ] 

Vineet Goel commented on HAWQ-1438:
---

[~liming01] - I think this JIRA can be resolved now, since you merged the PR. 
If that's correct, could you please Resolve this? 
Thanks

> Analyze report error: relcache reference xxx is not owned by resource owner 
> TopTransaction
> --
>
> Key: HAWQ-1438
> URL: https://issues.apache.org/jira/browse/HAWQ-1438
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: backlog, 2.3.0.0-incubating
>
>
> 2017-04-12 14:23:13.866064 
> BST,"mis_ig","ig",p124811,th-224249568,"10.33.188.8","5172",2017-04-12 
> 14:20:42 
> BST,76687174,con61,cmd16,seg-1,,,x76687174,sx1,"ERROR","XX000","relcache 
> reference e_event_1_0_102_1_prt_2 is not owned by resource owner 
> TopTransaction (resowner.c:766)",,"ANALYZE 
> mis_data_ig_account_details.e_event_1_0_102",0,,"resowner.c",766,"Stack trace:
> 1 0x8ce438 postgres errstart (elog.c:492)
> 2 0x8d01bb postgres elog_finish (elog.c:1443)
> 3 0x4ca5f4 postgres relation_close (heapam.c:1267)
> 4 0x5e7498 postgres analyzeStmt (analyze.c:728)
> 5 0x5e8a97 postgres analyzeStatement (analyze.c:274)
> 6 0x65c34c postgres vacuum (vacuum.c:319)
> 7 0x7f6172 postgres ProcessUtility (utility.c:1472)
> 8 0x7f1c3e postgres  (pquery.c:1974)
> 9 0x7f341e postgres  (pquery.c:2078)
> 10 0x7f5185 postgres PortalRun (pquery.c:1599)
> 11 0x7ee1f8 postgres PostgresMain (postgres.c:2782)
> 12 0x7a04f0 postgres  (postmaster.c:5486)
> 13 0x7a32b9 postgres PostmasterMain (postmaster.c:1459)
> 14 0x4a52b9 postgres main (main.c:226)
> 15 0x7fcaee7ded5d libc.so.6 __libc_start_main (??:0)
> 16 0x4a5339 postgres  (??:0)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1487) hang process due to deadlock when it try to process interrupt in error handling

2017-07-12 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16084935#comment-16084935
 ] 

Vineet Goel commented on HAWQ-1487:
---

Hi [~huor], can this JIRA be resolved now?

> hang process due to deadlock when it try to process interrupt in error 
> handling
> ---
>
> Key: HAWQ-1487
> URL: https://issues.apache.org/jira/browse/HAWQ-1487
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.2.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.3.0.0-incubating
>

[jira] [Updated] (HAWQ-1438) Analyze report error: relcache reference xxx is not owned by resource owner TopTransaction

2017-07-12 Thread Vineet Goel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Goel updated HAWQ-1438:
--
Fix Version/s: 2.3.0.0-incubating

> Analyze report error: relcache reference xxx is not owned by resource owner 
> TopTransaction
> --
>
> Key: HAWQ-1438
> URL: https://issues.apache.org/jira/browse/HAWQ-1438
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: backlog, 2.3.0.0-incubating
>
>
> 2017-04-12 14:23:13.866064 
> BST,"mis_ig","ig",p124811,th-224249568,"10.33.188.8","5172",2017-04-12 
> 14:20:42 
> BST,76687174,con61,cmd16,seg-1,,,x76687174,sx1,"ERROR","XX000","relcache 
> reference e_event_1_0_102_1_prt_2 is not owned by resource owner 
> TopTransaction (resowner.c:766)",,"ANALYZE 
> mis_data_ig_account_details.e_event_1_0_102",0,,"resowner.c",766,"Stack trace:
> 1 0x8ce438 postgres errstart (elog.c:492)
> 2 0x8d01bb postgres elog_finish (elog.c:1443)
> 3 0x4ca5f4 postgres relation_close (heapam.c:1267)
> 4 0x5e7498 postgres analyzeStmt (analyze.c:728)
> 5 0x5e8a97 postgres analyzeStatement (analyze.c:274)
> 6 0x65c34c postgres vacuum (vacuum.c:319)
> 7 0x7f6172 postgres ProcessUtility (utility.c:1472)
> 8 0x7f1c3e postgres  (pquery.c:1974)
> 9 0x7f341e postgres  (pquery.c:2078)
> 10 0x7f5185 postgres PortalRun (pquery.c:1599)
> 11 0x7ee1f8 postgres PostgresMain (postgres.c:2782)
> 12 0x7a04f0 postgres  (postmaster.c:5486)
> 13 0x7a32b9 postgres PostmasterMain (postmaster.c:1459)
> 14 0x4a52b9 postgres main (main.c:226)
> 15 0x7fcaee7ded5d libc.so.6 __libc_start_main (??:0)
> 16 0x4a5339 postgres  (??:0)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1490) PXF dummy text profile for non-hadoop testing.

2017-07-12 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1490.

Resolution: Fixed

> PXF dummy text profile for non-hadoop testing.
> --
>
> Key: HAWQ-1490
> URL: https://issues.apache.org/jira/browse/HAWQ-1490
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Tests
>Reporter: John Gaskin
>Assignee: Shivram Mani
> Fix For: 2.3.0.0-incubating
>
>
> A user should be able to create a PXF external table with the Demo profile that 
> allows them to check whether the PXF service is functional in a given system 
> without relying on any backend data source.
> This would serve as a profile to test the PXF service/framework. The demo profile 
> could return some configured static data while invoking all the necessary 
> APIs/functions for the smoke test.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16084738#comment-16084738
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127074846
  
--- Diff: markdown/ranger/ranger-kerberos.html.md.erb ---
@@ -0,0 +1,209 @@
+---
+title: HAWQ Ranger Kerberos Integration
+---
+
+
+
+When you have enabled Ranger Authorization for HAWQ, your HAWQ 
installation includes the Ranger Administrative UI and HAWQ Ranger Plug-in 
Service.
+
+Specific HAWQ Ranger configuration is required when Kerberos 
authentication is enabled for HAWQ or for Ranger. You must configure Kerberos 
support for:
+
+- HAWQ resource lookup by the Ranger Administration host during HAWQ 
policy definition
+- HAWQ Ranger Plug-in Service communication with the Ranger Administration 
host for policy refresh
+
+Use the following procedures to configure Kerberos support for your 
Ranger-authorized HAWQ cluster.
+
+## Prerequisites 
+
+Before you configure Kerberos for your Ranger-authorized HAWQ cluster, 
ensure that you have:
+
+- Installed Java 1.7.0\_17 or later on all nodes in your cluster. Java 
1.7.0_17 is required to use Kerberos-authenticated JDBC on Red Hat Enterprise 
Linux 6.x or 7.x.
+- (Non-OpenJDK Java installations) Installed the Java Cryptography 
Extension (JCE) on all nodes in your cluster. 
+- If you manage your cluster with Ambari, you installed the JCE on 
each node before you enabled Kerberos with the Ambari **Kerberos Security 
Wizard**. 
+- If you manage your cluster from the command line, you must manually 
install the extension on these systems.
+- Noted the host name or IP address of your Ranger Administration host 
(\) and HAWQ master (\) nodes.
+- Identified an existing Kerberos Key Distribution Center (KDC) or set up 
your KDC as described in [Install and Configure a Kerberos KDC 
Server](../clientaccess/kerberos.html#task_setup_kdc).
+- Note the host name or IP address of your KDC (\).
+- Note the name of the Kerberos \ in which your cluster 
resides.
+- Enabled Ranger Authorization for HAWQ. See [Configuring HAWQ to use 
Ranger Policy Management](ranger-integration-config.html).
+
+
+## Configure Ranger for Kerberized HAWQ
+
+When you define HAWQ Ranger authorization policies, the Ranger 
Administration Host uses JDBC to connect to HAWQ during policy definition to 
look up policy resource names. When Kerberos user authentication is enabled for 
HAWQ, you must configure this connection for Kerberos.
+
+To configure Ranger access to a HAWQ cluster enabled with Kerberos user 
authentication, you must:
+
+- Identify an existing HAWQ administrative role or create a new HAWQ 
administrative role for Ranger lookup of HAWQ resources
+- Create a Kerberos principal for the lookup role
+- Update the Ranger HAWQ service definition
+
+### Procedure 
+
+Perform the following procedure to enable the Ranger Administration Host 
to look up resources in your kerberized HAWQ cluster. You will perform 
operations on the HAWQ \, \, and \ 
nodes.
+
+1. Log in to the HAWQ master node and set up your environment:
+
+``` shell
+$ ssh gpadmin@
+gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
+```
+
+2. Identify an existing or create a new HAWQ administrative role for 
Ranger resource lookup. For example, to create a new administrative role:
+
+``` shell
+gpadmin@master$ psql -c 'CREATE ROLE "rangerlookup_hawq" with LOGIN 
SUPERUSER;' 
+```
+   
+You may choose a different name for the Ranger lookup role.
+
+3. Log in to the KDC server system and generate a principal for the HAWQ 
`rangerlookup_hawq` role. Substitute your Kerberos \. For example:
+
+``` shell
+$ ssh root@
+root@kdc-server$ kadmin.local -q "addprinc -pw changeme 
rangerlookup_hawq@REALM.DOMAIN"
+```
+
+You do not need to generate a keytab file for the `rangerlookup_hawq` 
principal because you will provide the password in the HAWQ service definition 
of the Ranger Admin UI.
+
+4. Start the Ranger Admin UI in a supported web browser. The default URL 
is \:6080. 
+
+5. Locate the HAWQ service definition and press the **Edit** button. 
+
+6. Update the applicable **Config Properties** fields:
+
+**HAWQ User Name*** - The HAWQ Ranger lookup role you identified or 
created in Step 2 above.  
+**HAWQ User Password*** - The password you assigned 

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16084741#comment-16084741
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127072711
  
--- Diff: markdown/ranger/ranger-kerberos.html.md.erb ---
+2. Identify an existing or create a new HAWQ administrative role for 
Ranger resource lookup. For example, to create a new administrative role:
--- End diff --

Small edit:  Identify an existing **HAWQ administrative role** or create a 
new ...


> document hawq/ranger kerberos support
> -
>
> Key: HAWQ-1479
> URL: https://issues.apache.org/jira/browse/HAWQ-1479
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add some doc content addressing hawq/ranger/rps kerberos config and any other 
> considerations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16084742#comment-16084742
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127073559
  
--- Diff: markdown/ranger/ranger-kerberos.html.md.erb ---

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16084736#comment-16084736
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127073872
  
--- Diff: markdown/ranger/ranger-kerberos.html.md.erb ---
+**HAWQ User Name*** - The HAWQ Ranger lookup role you identified or 
created in Step 2 above.  
--- End diff --

I think for consistency we should 

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16084737#comment-16084737
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127073604
  
--- Diff: markdown/ranger/ranger-kerberos.html.md.erb ---
@@ -0,0 +1,209 @@
+---
+title: HAWQ Ranger Kerberos Integration
+---
+
+
+
+When you have enabled Ranger Authorization for HAWQ, your HAWQ 
installation includes the Ranger Administrative UI and HAWQ Ranger Plug-in 
Service.
+
+Specific HAWQ Ranger configuration is required when Kerberos 
authentication is enabled for HAWQ or for Ranger. You must configure Kerberos 
support for:
+
+- HAWQ resource lookup by the Ranger Administration host during HAWQ 
policy definition
+- HAWQ Ranger Plug-in Service communication with the Ranger Administration 
host for policy refresh
+
+Use the following procedures to configure Kerberos support for your 
Ranger-authorized HAWQ cluster.
+
+## Prerequisites 
+
+Before you configure Kerberos for your Ranger-authorized HAWQ cluster, 
ensure that you have:
+
+- Installed Java 1.7.0\_17 or later on all nodes in your cluster. Java 
1.7.0_17 is required to use Kerberos-authenticated JDBC on Red Hat Enterprise 
Linux 6.x or 7.x.
+- (Non-OpenJDK Java installations) Installed the Java Cryptography 
Extension (JCE) on all nodes in your cluster. 
+- If you manage your cluster with Ambari, you installed the JCE on 
each node before you enabled Kerberos with the Ambari **Kerberos Security 
Wizard**. 
+- If you manage your cluster from the command line, you must manually 
install the extension on these systems.
+- Noted the host name or IP address of your Ranger Administration host 
(\) and HAWQ master (\) nodes.
+- Identified an existing Kerberos Key Distribution Center (KDC) or set up 
your KDC as described in [Install and Configure a Kerberos KDC 
Server](../clientaccess/kerberos.html#task_setup_kdc).
+- Note the host name or IP address of your KDC (\).
+- Note the name of the Kerberos \ in which your cluster 
resides.
+- Enabled Ranger Authorization for HAWQ. See [Configuring HAWQ to use 
Ranger Policy Management](ranger-integration-config.html).
+
+
+## Configure Ranger for Kerberized HAWQ
+
+When you define HAWQ Ranger authorization policies, the Ranger 
Administration Host uses JDBC to connect to HAWQ during policy definition to 
look up policy resource names. When Kerberos user authentication is enabled for 
HAWQ, you must configure this connection for Kerberos.
+
+To configure Ranger access to a HAWQ cluster enabled with Kerberos user 
authentication, you must:
+
+- Identify an existing HAWQ administrative role or create a new HAWQ 
administrative role for Ranger lookup of HAWQ resources
+- Create a Kerberos principal for the lookup role
+- Update the Ranger HAWQ service definition
+
+### Procedure 
+
+Perform the following procedure to enable the Ranger Administration Host 
to look up resources in your kerberized HAWQ cluster. You will perform 
operations on the HAWQ \, \, and \ 
nodes.
+
+1. Log in to the HAWQ master node and set up your environment:
+
+``` shell
+$ ssh gpadmin@
+gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
+```
+
+2. Identify an existing or create a new HAWQ administrative role for 
Ranger resource lookup. For example, to create a new administrative role:
+
+``` shell
+gpadmin@master$ psql -c 'CREATE ROLE "rangerlookup_hawq" with LOGIN 
SUPERUSER;' 
+```
+   
+You may choose a different name for the Ranger lookup role.
+
+3. Log in to the KDC server system and generate a principal for the HAWQ 
`rangerlookup_hawq` role. Substitute your Kerberos \. For example:
+
+``` shell
+$ ssh root@
+root@kdc-server$ kadmin.local -q "addprinc -pw changeme 
rangerlookup_hawq@REALM.DOMAIN"
+```
+
+You do not need to generate a keytab file for the `rangerlookup_hawq` 
principal because you will provide the password in the HAWQ service definition 
of the Ranger Admin UI.
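+
+To verify the new principal and its password before continuing (an optional, illustrative check), authenticate from any host with a Kerberos client:
+
+``` shell
+$ kinit rangerlookup_hawq@REALM.DOMAIN
+Password for rangerlookup_hawq@REALM.DOMAIN:
+$ klist
+```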
+
+4. Access the Ranger Admin UI from a supported web browser. The default URL is `http://<ranger-admin-node>:6080`.
+
+5. Locate the HAWQ service definition and click the **Edit** button. 
+
+6. Update the applicable **Config Properties** fields:
+
+**HAWQ User Name*** - The HAWQ Ranger lookup role you identified or created in Step 2 above.  
+**HAWQ User Password*** - The password you assigned to the `rangerlookup_hawq` principal in Step 3 above.
[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084739#comment-16084739
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127075938
  
--- Diff: markdown/ranger/ranger-kerberos.html.md.erb ---
(same diff as quoted above)

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084744#comment-16084744
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127078185
  
--- Diff: markdown/ranger/ranger-kerberos.html.md.erb ---
(same diff as quoted above)

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084745#comment-16084745
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127074583
  
--- Diff: markdown/ranger/ranger-kerberos.html.md.erb ---
(same diff as quoted above)

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084740#comment-16084740
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127075036
  
--- Diff: markdown/ranger/ranger-kerberos.html.md.erb ---
(same diff as quoted above)

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084743#comment-16084743
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127075166
  
--- Diff: markdown/ranger/ranger-kerberos.html.md.erb ---
(same diff as quoted above)

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084735#comment-16084735
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127073500
  
--- Diff: markdown/ranger/ranger-kerberos.html.md.erb ---
(same diff as quoted above, down to the line under review)
+5. Locate the HAWQ service definition and press the **Edit** button. 
--- End diff --

press -> click


> document hawq/ranger kerberos support
> -
>
> Key: HAWQ-1479
> URL: https://issues.apache.org/jira/browse/HAWQ-1479
>   

[jira] [Resolved] (HAWQ-1409) HAWQ send additional header to PXF to indicate aggregate function type

2017-07-12 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko resolved HAWQ-1409.
---
Resolution: Fixed

> HAWQ send additional header to PXF to indicate aggregate function type
> --
>
> Key: HAWQ-1409
> URL: https://issues.apache.org/jira/browse/HAWQ-1409
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: 2.3.0.0-incubating
>
>
> PXF can take advantage of some file formats, such as ORC, and leverage the stats in their metadata. This means that for some simple aggregate functions 
> like count, min, and max, without any complex joins or filters, PXF can simply read 
> the metadata and avoid reading tuples. In order for PXF to know that a query 
> can be completed via ORC metadata, HAWQ must indicate to PXF that the query is 
> an aggregate query, and the type of the function.
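> For illustration only, the extra request header might look like the following (the exact header name is an assumption here, based on PXF's X-GP-* request header convention):
> {code}
> X-GP-AGG-TYPE: count
> {code}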



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1461) Improve partition parameters validation for PXF-JDBC plugin

2017-07-12 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko resolved HAWQ-1461.
---
Resolution: Fixed

> Improve partition parameters validation for PXF-JDBC plugin
> ---
>
> Key: HAWQ-1461
> URL: https://issues.apache.org/jira/browse/HAWQ-1461
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Lav Jain
> Fix For: 2.3.0.0-incubating
>
>
> h3. User didn't pass interval type:
> {code}
> CREATE EXTERNAL TABLE pxf_jdbc_multiple_fragments_by_date (t1 text, t2 text, num1 int, dub1 double precision, dec1 numeric, tm timestamp, r real, bg bigint, b boolean, tn smallint, sml smallint, dt date, vc1 varchar(5), c1 char(3), bin bytea) LOCATION 
> (E'pxf://127.0.0.1:51200/hawq_types?PROFILE=Jdbc&JDBC_DRIVER=org.postgresql.Driver&DB_URL=jdbc:postgresql:pxfautomation//localhost:5432&PARTITION_BY=dt:date&RANGE=2015-03-06:2015-03-20&INTERVAL=1&USER=adiachenko')
>  FORMAT 'CUSTOM' (formatter='pxfwritable_import');
> {code}
> Actual behavior:
> {code}
> select * from pxf_jdbc_multiple_fragments_by_date;
> ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
> report   message   description   The server encountered an internal error 
> that prevented it from fulfilling this request.  exception  
> java.lang.NullPointerException (libchurl.c:897)
> {code}
> h3. User didn't pass interval:
> {code}
> CREATE EXTERNAL TABLE pxf_jdbc_multiple_fragments_by_date (t1 text, t2 text, num1 int, dub1 double precision, dec1 numeric, tm timestamp, r real, bg bigint, b boolean, tn smallint, sml smallint, dt date, vc1 varchar(5), c1 char(3), bin bytea) LOCATION 
> (E'pxf://127.0.0.1:51200/hawq_types?PROFILE=Jdbc&JDBC_DRIVER=org.postgresql.Driver&DB_URL=jdbc:postgresql:pxfautomation//localhost:5432&PARTITION_BY=dt:date&RANGE=2015-03-06:2015-03-20&USER=adiachenko')
>  FORMAT 'CUSTOM' (formatter='pxfwritable_import');
> {code}
> Actual behavior:
> {code}
> ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
> report   message   description   The server encountered an internal error 
> that prevented it from fulfilling this request.  exception  
> java.lang.NullPointerException (libchurl.c:897)
> {code}
> h3. User didn't pass the upper boundary of a range:
> {code}
> CREATE EXTERNAL TABLE pxf_jdbc_multiple_fragments_by_date (t1 text, t2 text, num1 int, dub1 double precision, dec1 numeric, tm timestamp, r real, bg bigint, b boolean, tn smallint, sml smallint, dt date, vc1 varchar(5), c1 char(3), bin bytea) LOCATION 
> (E'pxf://127.0.0.1:51200/hawq_types?PROFILE=Jdbc&JDBC_DRIVER=org.postgresql.Driver&DB_URL=jdbc:postgresql:pxfautomation//localhost:5432&PARTITION_BY=dt:date&RANGE=2015-03-06:&INTERVAL=1:DAY&USER=adiachenko')
>  FORMAT 'CUSTOM' (formatter='pxfwritable_import');
> {code}
> Actual behavior:
> {code}
> select * from pxf_jdbc_multiple_fragments_by_date;
> ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
> report   message   java.lang.Exception: 
> java.lang.ArrayIndexOutOfBoundsException: 1  description  The server 
> encountered an internal error that prevented it from fulfilling this request. 
>  exception  javax.servlet.ServletException: java.lang.Exception: 
> java.lang.ArrayIndexOutOfBoundsException: 1 (libchurl.c:897)
> {code}
> h3. User didn't pass range:
> {code}
> CREATE EXTERNAL TABLE pxf_jdbc_multiple_fragments_by_date (t1 text, t2 text, num1 int, dub1 double precision, dec1 numeric, tm timestamp, r real, bg bigint, b boolean, tn smallint, sml smallint, dt date, vc1 varchar(5), c1 char(3), bin bytea) LOCATION 
> (E'pxf://127.0.0.1:51200/hawq_types?PROFILE=Jdbc&JDBC_DRIVER=org.postgresql.Driver&DB_URL=jdbc:postgresql:pxfautomation//localhost:5432&PARTITION_BY=dt:date&INTERVAL=1:DAY&USER=adiachenko')
>  FORMAT 'CUSTOM' (formatter='pxfwritable_import');
> {code}
> Actual behavior:
> {code}
> select * from pxf_jdbc_multiple_fragments_by_date;
> ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
> report   message   java.lang.Exception: java.lang.NullPointerException
> description   The server encountered an internal error that prevented it from 
> fulfilling this request.  exception  javax.servlet.ServletException: 
> java.lang.Exception: java.lang.NullPointerException (libchurl.c:897)
> {code}
> Expected behavior for all cases: a user-friendly, meaningful message, hinting to 
> the user which parameter is missing or incorrect.
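> For comparison, a partition specification with every option present (using the option names above; an illustrative sketch, not plugin output) would be:
> {code}
> PARTITION_BY=dt:date&RANGE=2015-03-06:2015-03-20&INTERVAL=1:DAY&USER=adiachenko
> {code}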



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (HAWQ-1490) PXF dummy text profile for non-hadoop testing.

2017-07-12 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani closed HAWQ-1490.
--

> PXF dummy text profile for non-hadoop testing.
> --
>
> Key: HAWQ-1490
> URL: https://issues.apache.org/jira/browse/HAWQ-1490
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Tests
>Reporter: John Gaskin
>Assignee: Shivram Mani
> Fix For: 2.3.0.0-incubating
>
>
> A user should be able to create a PXF external table with the Demo profile that allows 
> her to check whether the PXF service is functional in a given system, without 
> relying on any backend data source.
> This would serve as a profile to test the PXF service/framework. The demo profile 
> could return some configured static data while invoking all the necessary 
> APIs/functions for the smoke test.
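> For example, a smoke test against such a profile might look like this (the profile name `Demo` and the dummy path are assumptions for the sketch):
> {code}
> CREATE EXTERNAL TABLE pxf_smoke_test (a text)
> LOCATION (E'pxf://127.0.0.1:51200/dummy_path?PROFILE=Demo')
> FORMAT 'CUSTOM' (formatter='pxfwritable_import');
> SELECT * FROM pxf_smoke_test;
> {code}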



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1454) Exclude certain jars from Ranger Plugin Service packaging

2017-07-12 Thread Vineet Goel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Goel updated HAWQ-1454:
--
Fix Version/s: (was: backlog)
   2.3.0.0-incubating

> Exclude certain jars from Ranger Plugin Service packaging
> -
>
> Key: HAWQ-1454
> URL: https://issues.apache.org/jira/browse/HAWQ-1454
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Security
>Reporter: Lav Jain
>Assignee: Ed Espino
> Fix For: 2.3.0.0-incubating
>
>
> The following jars may cause conflicts in certain environments depending on 
> how the classes are being loaded.
> ```
> WEB-INF/lib/jersey-json-1.9.jar
> WEB-INF/lib/jersey-core-1.9.jar
> WEB-INF/lib/jersey-server-1.9.jar
> ```
> We need to exclude them while building the RPM.
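> One possible way to do that (an illustrative maven-war-plugin sketch, not necessarily the change that was made) is to exclude them at WAR packaging time:
> ```
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-war-plugin</artifactId>
>   <configuration>
>     <!-- drop the conflicting Jersey 1.9 jars from WEB-INF/lib -->
>     <packagingExcludes>WEB-INF/lib/jersey-*-1.9.jar</packagingExcludes>
>   </configuration>
> </plugin>
> ```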



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1469) Don't expose RPS warning messages to command line

2017-07-12 Thread Vineet Goel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Goel updated HAWQ-1469:
--
Fix Version/s: (was: backlog)
   2.3.0.0-incubating

> Don't expose RPS warning messages to command line
> -
>
> Key: HAWQ-1469
> URL: https://issues.apache.org/jira/browse/HAWQ-1469
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.3.0.0-incubating
>
>
> Exposing the RPS service address to end users is not secure, so we should not 
> expose it.
> **Case 1: When master RPS is down, changing to standby RPS**
> Current behavior
> ```
> postgres=# select * from a;
> WARNING:  ranger plugin service from http://test1:8432/rps is unavailable : 
> Couldn't connect to server, try another http://test5:8432/rps
> ERROR:  permission denied for relation(s): public.a
> ``` 
> Warning should be removed.
> Expected
> ```
> postgres=# select * from a;
> ERROR:  permission denied for relation(s): public.a
> ```
> **Case 2: When both RPS are down, should only print that RPS is unavailable.**
> Current Behavior:
> ```
> postgres=# select * from a;
> WARNING:  ranger plugin service from http://test5:8432/rps is unavailable : 
> Couldn't connect to server, try another http://test1:8432/rps
> ERROR:  ranger plugin service from http://test1:8432/rps is unavailable : 
> Couldn't connect to server. (rangerrest.c:463)
> ```
> Expected
> ```
> postgres=# select * from a;
> ERROR:  ranger plugin service is unavailable : Couldn't connect to server. 
> (rangerrest.c:463)
> ```
> The warning message should instead be printed in the csv log file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1417) Crashed at ANALYZE after COPY

2017-07-12 Thread Vineet Goel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Goel updated HAWQ-1417:
--
Fix Version/s: (was: backlog)
   2.3.0.0-incubating

> Crashed at ANALYZE after COPY
> -
>
> Key: HAWQ-1417
> URL: https://issues.apache.org/jira/browse/HAWQ-1417
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.3.0.0-incubating
>
>
> This is the line in the master log where the PANIC is reported:
> {code}
> (gdb) bt
> #0  0x7f6d35b0e6ab in raise () from 
> /data/logs/52280/new_panic/packcore-core.postgres.457052/lib64/libpthread.so.0
> #1  0x008c7d79 in SafeHandlerForSegvBusIll (postgres_signal_arg=11, 
> processName=) at elog.c:4519
> #2  
> #3  ResourceOwnerEnlargeRelationRefs (owner=0x0) at resowner.c:708
> #4  0x008b5659 in RelationIncrementReferenceCount (rel=0x1baf500) at 
> relcache.c:1941
> #5  RelationIdGetRelation (relationId=relationId@entry=1259) at 
> relcache.c:1895
> #6  0x004ca664 in relation_open (lockmode=lockmode@entry=1, 
> relationId=relationId@entry=1259) at heapam.c:882
> #7  heap_open (relationId=relationId@entry=1259, lockmode=lockmode@entry=1) 
> at heapam.c:1285
> #8  0x008b0945 in ScanPgRelation (targetRelId=targetRelId@entry=5010, 
> indexOK=indexOK@entry=1 '\001', 
> pg_class_relation=pg_class_relation@entry=0x7ffdf2aed390) at relcache.c:279
> #9  0x008b4302 in RelationBuildDesc (targetRelId=5010, 
> insertIt=) at relcache.c:1209
> #10 0x008b56c7 in RelationIdGetRelation 
> (relationId=relationId@entry=5010) at relcache.c:1918
> #11 0x004ca664 in relation_open (lockmode=, 
> relationId=5010) at heapam.c:882
> #12 heap_open (relationId=5010, lockmode=) at heapam.c:1285
> #13 0x0055d1e6 in caql_basic_fn_all (pcql=0x1d70a58, 
> bLockEntireTable=0 '\000', pCtx=0x7ffdf2aed480, pchn=0xf4b328 
> ) at caqlanalyze.c:343
> #14 caql_switch (pchn=pchn@entry=0xf4b328 , 
> pCtx=pCtx@entry=0x7ffdf2aed480, pcql=pcql@entry=0x1d70a58) at 
> caqlanalyze.c:229
> #15 0x005636db in caql_getcount (pCtx0=pCtx0@entry=0x0, 
> pcql=0x1d70a58) at caqlaccess.c:367
> #16 0x009ddc47 in rel_is_partitioned (relid=1882211) at 
> cdbpartition.c:232
> #17 rel_part_status (relid=relid@entry=1882211) at cdbpartition.c:484
> #18 0x005e7d43 in calculate_virtual_segment_number 
> (candidateOids=) at analyze.c:833
> #19 analyzeStmt (stmt=stmt@entry=0x2045dd0, relids=relids@entry=0x0, 
> preferred_seg_num=preferred_seg_num@entry=-1) at analyze.c:486
> #20 0x005e89a7 in analyzeStatement (stmt=stmt@entry=0x2045dd0, 
> relids=relids@entry=0x0, preferred_seg_num=preferred_seg_num@entry=-1) at 
> analyze.c:271
> #21 0x0065c25c in vacuum (vacstmt=vacstmt@entry=0x2045bf0, 
> relids=relids@entry=0x0, preferred_seg_num=preferred_seg_num@entry=-1) at 
> vacuum.c:316
> #22 0x007f6012 in ProcessUtility 
> (parsetree=parsetree@entry=0x2045bf0, queryString=0x2045d30 "ANALYZE 
> mis_data_ig_account_details.e_event_1_0_102", params=0x0, 
> isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0xf04ba0 ,
> completionTag=completionTag@entry=0x7ffdf2aee3f0 "") at utility.c:1471
> #23 0x007f1ade in PortalRunUtility (portal=portal@entry=0x1bfb490, 
> utilityStmt=utilityStmt@entry=0x2045bf0, isTopLevel=isTopLevel@entry=1 
> '\001', dest=dest@entry=0xf04ba0 , 
> completionTag=completionTag@entry=0x7ffdf2aee3f0 "") at pquery.c:1968
> #24 0x007f32be in PortalRunMulti (portal=portal@entry=0x1bfb490, 
> isTopLevel=isTopLevel@entry=1 '\001', dest=0xf04ba0 , 
> dest@entry=0x1b93f60, altdest=0xf04ba0 , 
> altdest@entry=0x1b93f60, completionTag=completionTag@entry=0x7ffdf2aee3f0 "")
> at pquery.c:2078
> #25 0x007f5025 in PortalRun (portal=portal@entry=0x1bfb490, 
> count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\001', 
> dest=dest@entry=0x1b93f60, altdest=altdest@entry=0x1b93f60, 
> completionTag=completionTag@entry=0x7ffdf2aee3f0 "") at pquery.c:1595
> #26 0x007ee098 in exec_execute_message (max_rows=9223372036854775807, 
> portal_name=0x1b93ad0 "") at postgres.c:2782
> #27 PostgresMain (argc=, argv=, 
> argv@entry=0x1a49b40, username=0x1a498f0 "mis_ig") at postgres.c:5170
> #28 0x007a0390 in BackendRun (port=0x1a185f0) at postmaster.c:5915
> #29 BackendStartup (port=0x1a185f0) at postmaster.c:5484
> #30 ServerLoop () at postmaster.c:2163
> #31 0x007a3159 in PostmasterMain (argc=, 
> argv=) at postmaster.c:1454
> #32 0x004a52b9 in main (argc=9, argv=0x1a20d10) at main.c:226
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1194) Add EncryptionZones related RPC

2017-07-12 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084492#comment-16084492
 ] 

Vineet Goel commented on HAWQ-1194:
---

Could you please update the "Fix Version/s:" field with the appropriate version 
number? I think it should be 2.3.0.0.

> Add EncryptionZones related RPC
> ---
>
> Key: HAWQ-1194
> URL: https://issues.apache.org/jira/browse/HAWQ-1194
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs
>Reporter: Hongxu Ma
>Assignee: Amy
> Fix For: backlog
>
>
> Add createEncryption, getEZForPath, listEncryptionZones RPC



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] incubator-hawq pull request #1254: HAWQ-1373 - Added feature to reload GUC v...

2017-07-12 Thread outofmem0ry
Github user outofmem0ry closed the pull request at:

https://github.com/apache/incubator-hawq/pull/1254


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #1254: HAWQ-1373 - Added feature to reload GUC values u...

2017-07-12 Thread outofmem0ry
Github user outofmem0ry commented on the issue:

https://github.com/apache/incubator-hawq/pull/1254
  
Thank you @linwen @radarwave. Closing this PR; I will also submit a PR for 
documentation changes to the apache/incubator-hawq-docs repo.




[GitHub] incubator-hawq pull request #1227: HAWQ-1448. Fixed postmaster process hung ...

2017-07-12 Thread liming01
Github user liming01 closed the pull request at:

https://github.com/apache/incubator-hawq/pull/1227




[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-12 Thread interma
Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r126904979
  
--- Diff: depends/libhdfs3/src/client/CryptoCodec.cpp ---
@@ -0,0 +1,163 @@
+/********************************************************************
+ * 2014 -
+ * open source under Apache License Version 2.0
+ ********************************************************************/
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "CryptoCodec.h"
+#include "Logger.h"
+
+using namespace Hdfs::Internal;
+
+namespace Hdfs {
+
+/**
+ * Construct a CryptoCodec instance.
+ * @param encryptionInfo the encryption info of file.
+ * @param kcp a KmsClientProvider instance to get key from kms server.
+ * @param bufSize crypto buffer size.
+ */
+CryptoCodec::CryptoCodec(FileEncryptionInfo *encryptionInfo, std::shared_ptr<KmsClientProvider> kcp, int32_t bufSize) :
+    encryptionInfo(encryptionInfo), kcp(kcp), bufSize(bufSize)
+{
+   
+   /* Init global status. */
+   ERR_load_crypto_strings();
+   OpenSSL_add_all_algorithms();
+   OPENSSL_config(NULL);
+
+   /* Create cipher context. */
+   encryptCtx = EVP_CIPHER_CTX_new();  
+   cipher = NULL;  
+
+}
+
+/**
+ * Destroy a CryptoCodec instance.
+ */
+CryptoCodec::~CryptoCodec()
+{
+   if (encryptCtx) 
+   EVP_CIPHER_CTX_free(encryptCtx);
+}
+
+/**
+ * Get decrypted key from kms.
+ */
+std::string CryptoCodec::getDecryptedKeyFromKms()
+{
+   ptree map = kcp->decryptEncryptedKey(*encryptionInfo);
+   std::string key = map.get<std::string>("material");
--- End diff --

Need a try/catch here, for when the map doesn't have a "material" field?
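
A minimal sketch of such a guard (illustrative only; it assumes Boost.PropertyTree, whose get<T>() throws ptree_bad_path, a subclass of ptree_error, when the path is absent):

```
#include <boost/property_tree/ptree.hpp>
#include <stdexcept>
#include <string>

// Hypothetical helper: fetch the "material" field, or fail with a readable
// error instead of an unhandled ptree exception.
static std::string getMaterial(const boost::property_tree::ptree &map) {
    try {
        return map.get<std::string>("material");
    } catch (const boost::property_tree::ptree_error &e) {
        throw std::runtime_error(
            std::string("KMS response has no \"material\" field: ") + e.what());
    }
}
```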




[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-12 Thread interma
Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r126905189
  
--- Diff: depends/libhdfs3/src/client/CryptoCodec.cpp ---
@@ -0,0 +1,163 @@
(banner, license header, constructor, and destructor same as quoted above)
+/**
+ * Get decrypted key from kms.
+ */
+std::string CryptoCodec::getDecryptedKeyFromKms()
+{
+   ptree map = kcp->decryptEncryptedKey(*encryptionInfo);
+   std::string key = map.get<std::string>("material");
+
+   int rem = key.length() % 4;
+   if (rem) {
+   rem = 4 - rem;
+   while (rem != 0) {
+   key = key + "=";
+   rem--;
+   }
+   }
--- End diff --

Maybe we should refer to the hadoop-kms code to find the answer...
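
For what it's worth, the loop appears to restore the '=' padding that strict base64 decoders require; the working assumption (worth verifying against the hadoop-kms source, as suggested) is that KMS returns the key material base64-encoded without padding. A compact equivalent:

```
// Pad the (assumed unpadded) base64 key text to a multiple of 4 characters.
if (key.length() % 4 != 0) {
    key.append(4 - key.length() % 4, '=');
}
```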




[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-12 Thread interma
Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r126904920
  
--- Diff: depends/libhdfs3/src/client/CryptoCodec.cpp ---
@@ -0,0 +1,163 @@
(banner, license header, constructor, and destructor same as quoted above)
+/**
+ * Get decrypted key from kms.
+ */
+std::string CryptoCodec::getDecryptedKeyFromKms()
+{
+   ptree map = kcp->decryptEncryptedKey(*encryptionInfo);
+   std::string key = map.get<std::string>("material");
--- End diff --

Need a try/catch here, for when the map doesn't have a "material" field?

