[jira] [Updated] (HAWQ-1503) Failure building on centos-6 using dockerfile present under incubator-hawq/contrib/hawq-docker/centos6-docker/hawq-dev
[ https://issues.apache.org/jira/browse/HAWQ-1503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oleksandr Diachenko updated HAWQ-1503:
--------------------------------------
    Fix Version/s: 2.3.0.0-incubating

-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] incubator-hawq issue #1266: HAWQ-1503 - Fixed ftp link causing build failure...
Github user lavjain commented on the issue: https://github.com/apache/incubator-hawq/pull/1266 +1 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Comment Edited] (HAWQ-1503) Failure building on centos-6 using dockerfile present under incubator-hawq/contrib/hawq-docker/centos6-docker/hawq-dev
[ https://issues.apache.org/jira/browse/HAWQ-1503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087470#comment-16087470 ]

Shubham Sharma edited comment on HAWQ-1503 at 7/14/17 4:21 PM:
---------------------------------------------------------------
Submitted PR - [#1266|https://github.com/apache/incubator-hawq/pull/1266]

was (Author: outofmemory): Submitting PR shortly.
[GitHub] incubator-hawq pull request #1266: HAWQ-1503 - Fixed ftp link causing build ...
GitHub user outofmem0ry opened a pull request: https://github.com/apache/incubator-hawq/pull/1266

HAWQ-1503 - Fixed ftp link causing build failure on centos-6 docker

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/outofmem0ry/incubator-hawq feature/HAWQ-1503

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-hawq/pull/1266.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1266

commit 042297486809cc7c789d014e3ee1cbb625699b54
Author: Shubham Sharma
Date: 2017-07-14T16:18:18Z

    HAWQ-1503 - Fixed unavailable ftp link causing build failure on centos6 docker
[jira] [Commented] (HAWQ-1503) Failure building on centos-6 using dockerfile present under incubator-hawq/contrib/hawq-docker/centos6-docker/hawq-dev
[ https://issues.apache.org/jira/browse/HAWQ-1503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087470#comment-16087470 ]

Shubham Sharma commented on HAWQ-1503:
--------------------------------------
Submitting PR shortly.
[jira] [Updated] (HAWQ-1503) Failure building on centos-6 using dockerfile present under incubator-hawq/contrib/hawq-docker/centos6-docker/hawq-dev
[ https://issues.apache.org/jira/browse/HAWQ-1503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shubham Sharma updated HAWQ-1503:
---------------------------------
    Priority: Minor (was: Major)
[jira] [Created] (HAWQ-1503) Failure building on centos-6 using dockerfile present under incubator-hawq/contrib/hawq-docker/centos6-docker/hawq-dev
Shubham Sharma created HAWQ-1503:
------------------------------------

Summary: Failure building on centos-6 using dockerfile present under incubator-hawq/contrib/hawq-docker/centos6-docker/hawq-dev
Key: HAWQ-1503
URL: https://issues.apache.org/jira/browse/HAWQ-1503
Project: Apache HAWQ
Issue Type: Bug
Components: Build
Reporter: Shubham Sharma
Assignee: Radar Lei

Using the build instructions from [repo|https://github.com/apache/incubator-hawq/tree/master/contrib/hawq-docker], make build fails while building the images for centos6. From the error, it appears that the FTP link the Dockerfile uses to install the upgraded gcc toolchain no longer exists.

{code}
curl: (22) The requested URL returned error: 404 Not Found
error: http://ftp.scientificlinux.org/linux/scientific/5x/x86_64/RPM-GPG-KEYs/RPM-GPG-KEY-cern: import read failed(2).
The command '/bin/sh -c wget -O /etc/yum.repos.d/slc6-devtoolset.repo http://linuxsoft.cern.ch/cern/devtoolset/slc6-devtoolset.repo && rpm --import http://ftp.scientificlinux.org/linux/scientific/5x/x86_64/RPM-GPG-KEYs/RPM-GPG-KEY-cern && yum install -y devtoolset-2-gcc devtoolset-2-binutils devtoolset-2-gcc-c++ && echo "source /opt/rh/devtoolset-2/enable" >> ~/.bashrc && source ~/.bashrc' returned a non-zero code: 1
make[1]: *** [build-hawq-dev-centos6] Error 1
make: *** [build] Error 2
{code}
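For reference, the failing step reduces to the following commands. This is only a sketch of the workaround: `<working-key-url>` is a placeholder (the actual replacement URL chosen in the eventual fix is not shown in this thread) and must be substituted with a mirror that still serves the RPM-GPG-KEY-cern key.

``` shell
# Sketch of the commands behind the failing Dockerfile RUN step.
# <working-key-url> is a placeholder for a live mirror of RPM-GPG-KEY-cern.
wget -O /etc/yum.repos.d/slc6-devtoolset.repo \
    http://linuxsoft.cern.ch/cern/devtoolset/slc6-devtoolset.repo
rpm --import <working-key-url>
yum install -y devtoolset-2-gcc devtoolset-2-binutils devtoolset-2-gcc-c++
echo "source /opt/rh/devtoolset-2/enable" >> ~/.bashrc
```

Running these by hand inside a plain centos:6 container is a quick way to confirm a candidate key URL before editing the Dockerfile.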
[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support
[ https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087008#comment-16087008 ]

ASF GitHub Bot commented on HAWQ-1479:
--------------------------------------

Github user interma commented on a diff in the pull request: https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127405360

--- Diff: markdown/ranger/ranger-kerberos.html.md.erb ---
@@ -0,0 +1,209 @@
+---
+title: HAWQ Ranger Kerberos Integration
+---
+
+When you have enabled Ranger Authorization for HAWQ, your HAWQ installation includes the Ranger Administrative UI and HAWQ Ranger Plug-in Service.
+
+Specific HAWQ Ranger configuration is required when Kerberos authentication is enabled for HAWQ or for Ranger. You must configure Kerberos support for:
+
+- HAWQ resource lookup by the Ranger Administration host during HAWQ policy definition
+- HAWQ Ranger Plug-in Service communication with the Ranger Administration host for policy refresh
+
+Use the following procedures to configure Kerberos support for your Ranger-authorized HAWQ cluster.
+
+## Prerequisites
+
+Before you configure Kerberos for your Ranger-authorized HAWQ cluster, ensure that you have:
+
+- Installed Java 1.7.0\_17 or later on all nodes in your cluster. Java 1.7.0_17 is required to use Kerberos-authenticated JDBC on Red Hat Enterprise Linux 6.x or 7.x.
+- (Non-OpenJDK Java installations) Installed the Java Cryptography Extension (JCE) on all nodes in your cluster.
+- If you manage your cluster with Ambari, you installed the JCE on each node before you enabled Kerberos with the Ambari **Kerberos Security Wizard**.
+- If you manage your cluster from the command line, you must manually install the extension on these systems.
+- Noted the host name or IP address of your Ranger Administration host (\) and HAWQ master (\ ) nodes.
+- Identified an existing Kerberos Key Distribution Center (KDC) or set up your KDC as described in [Install and Configure a Kerberos KDC Server](../clientaccess/kerberos.html#task_setup_kdc).
+- Note the host name or IP address of your KDC (\ ).
+- Note the name of the Kerberos \ in which your cluster resides.
+- Enabled Ranger Authorization for HAWQ. See [Configuring HAWQ to use Ranger Policy Management](ranger-integration-config.html).
+
+## Configure Ranger for Kerberized HAWQ
+
+When you define HAWQ Ranger authorization policies, the Ranger Administration Host uses JDBC to connect to HAWQ during policy definition to look up policy resource names. When Kerberos user authentication is enabled for HAWQ, you must configure this connection for Kerberos.
+
+To configure Ranger access to a HAWQ cluster enabled with Kerberos user authentication, you must:
+
+- Identify an existing HAWQ administrative role or create a new HAWQ administrative role for Ranger lookup of HAWQ resources
+- Create a Kerberos principal for the lookup role
+- Update the Ranger HAWQ service definition
+
+### Procedure
+
+Perform the following procedure to enable the Ranger Administration Host to look up resources in your kerberized HAWQ cluster. You will perform operations on the HAWQ \ , \ , and \ nodes.
+
+1. Log in to the HAWQ master node and set up your environment:
+
+``` shell
+$ ssh gpadmin@
+gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
+```
+
+2. Identify an existing HAWQ administrative role or create a new HAWQ administrative role for Ranger resource lookup. For example, to create a new administrative role:
+
+``` shell
+gpadmin@master$ psql -c 'CREATE ROLE "rangerlookup_hawq" with LOGIN SUPERUSER;'
+```
+
+You may choose a different name for the Ranger lookup role.
+
+3. Log in to the KDC server system and generate a principal for the HAWQ `rangerlookup_hawq` role. Substitute your Kerberos \ . For example:
+
+``` shell
+$ ssh root@
+root@kdc-server$ kadmin.local -q "addprinc -pw changeme rangerlookup_hawq@REALM.DOMAIN"
+```
+
+You do not need to generate a keytab file for the `rangerlookup_hawq` principal because you will provide the password in the HAWQ service definition of the Ranger Admin UI.
+
+4. Start the Ranger Admin UI in a supported web browser. The default URL is \ :6080.
+
+5. Locate the HAWQ service definition and click the **Edit** button.
+
+6. Update the applicable **Config Properties** fields:
+
+**HAWQ User Name*** : Enter the HAWQ Ranger lookup role you identified or created in Step 2 above.
+**HAWQ User
[jira] [Commented] (HAWQ-1497) docs - refactor the kerberos sections
[ https://issues.apache.org/jira/browse/HAWQ-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086990#comment-16086990 ]

ASF GitHub Bot commented on HAWQ-1497:
--------------------------------------

Github user radarwave commented on a diff in the pull request: https://github.com/apache/incubator-hawq-docs/pull/127#discussion_r127403320

--- Diff: markdown/clientaccess/disable-kerberos.html.md.erb ---
@@ -21,43 +21,49 @@ specific language governing permissions and limitations under the License. -->

-Follow these steps to disable Kerberos security for HAWQ and PXF for manual installations.
+HAWQ supports Kerberos at both the HDFS and/or user authentication levels. You will perform different disable procedures for each.

-**Note:** If you install or manage your cluster using Ambari, then the HAWQ Ambari plug-in automatically disables security for HAWQ and PXF when you disable security for Hadoop. The following instructions are only necessary for manual installations, or when Hadoop security is disabled outside of Ambari.
-1. Disable Kerberos on the Hadoop cluster on which you use HAWQ.
-2. Disable security for HAWQ:
-1. Login to the HAWQ database master server as the `gpadmin` user:
+## Disable Kerberized HDFS for HAWQ/PXF

-``` bash
-$ ssh hawq_master_fqdn
-```
+You will perform different procedures to disable HAWQ/PXF access to a previously-kerberized HDFS depending upon whether you manage your cluster from the command line or use Ambari to manage your cluster.
+
+### Procedure for Ambari-Managed Clusters
+
+If you manage your cluster using Ambari, you will disable Kerberos authentication for your cluster as described in the [How To Disable Kerberos](https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-user-guide/content/how_to_disable_kerberos.html) Hortonworks documentation. Ambari will guide you through the de-kerberization process, including removing/updating any authentication-related configuration in your cluster.

-2. Run the following command to set up HAWQ environment variables:
+### Procedure for Command-Line-Managed Clusters
+
+If you manage your cluster from the command line, follow these instructions to disable HDFS Kerberos security for HAWQ and PXF.
+
+1. Disable Kerberos on the Hadoop cluster on which you use HAWQ.
+2. Disable security for HAWQ:
+1. Login to the HAWQ database master server as the `gpadmin` user and set up your HAWQ environment:

``` bash
-$ source /usr/local/hawq/greenplum_path.sh
+$ ssh gpadmin@
+gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
```

-3. Start HAWQ if necessary:
+2. Start HAWQ if necessary:

``` bash
-$ hawq start -a
+gpadmin@master$ hawq start cluster -a
```

-4. Run the following command to disable security:
+3. Update HAWQ configuration to disable security:

``` bash
-$ hawq config --masteronly -c enable_secure_filesystem -v “off”
+gpadmin@master$ hawq config --masteronly -c enable_secure_filesystem -v “off”
--- End diff --

'--masteronly' is deprecated. We should not add this option.

> docs - refactor the kerberos sections
> -------------------------------------
>
> Key: HAWQ-1497
> URL: https://issues.apache.org/jira/browse/HAWQ-1497
> Project: Apache HAWQ
> Issue Type: Improvement
> Components: Documentation
> Reporter: Lisa Owen
> Assignee: David Yozie
>
> the kerberos docs do not really distinguish between enabling kerberos at the HDFS filesystem level vs. enabling kerberos user authentication for HAWQ. also missing content for config'ing HAWQ/PXF for secure HDFS.
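Following the review comment, a sketch of the disable step without the deprecated `--masteronly` flag. Note also that the quoted doc uses curly quotes around “off”, which a shell would pass literally into the property value; plain ASCII `off` is assumed here. The restart step is an assumption about how the changed GUC takes effect.

``` shell
# Sketch: disable secure-filesystem mode without the deprecated --masteronly flag
gpadmin@master$ hawq config -c enable_secure_filesystem -v off
# Restart so the configuration change takes effect
gpadmin@master$ hawq restart cluster
```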
[jira] [Commented] (HAWQ-1497) docs - refactor the kerberos sections
[ https://issues.apache.org/jira/browse/HAWQ-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086989#comment-16086989 ]

ASF GitHub Bot commented on HAWQ-1497:
--------------------------------------

Github user radarwave commented on a diff in the pull request: https://github.com/apache/incubator-hawq-docs/pull/127#discussion_r127402619

--- Diff: markdown/clientaccess/disable-kerberos.html.md.erb ---
(same hunk as quoted in the previous comment, with the inline comment at:)

``` bash
-$ hawq start -a
+gpadmin@master$ hawq start cluster -a
```
--- End diff --

'hawq start cluster' does not need the '-a' option.
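Incorporating this second review comment, the start step would read as below; this is a sketch of the corrected command only, assuming the environment setup from the preceding steps of the quoted procedure.

``` shell
# Sketch: start the whole cluster; the 'cluster' object needs no '-a' flag
gpadmin@master$ hawq start cluster
```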