This is an automated email from the ASF dual-hosted git repository.

djwang pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/cloudberry-pxf.git


The following commit(s) were added to refs/heads/main by this push:
     new 29cb92d1 Remove dev files from source dir
29cb92d1 is described below

commit 29cb92d1f99e9d1d04f5342c3e08e289265cf0e3
Author: Dianjin Wang <[email protected]>
AuthorDate: Thu Feb 5 15:13:43 2026 +0800

    Remove dev files from source dir
    
    * Remove obsolete files under `dev` dir
    * Move `dev/start_minio.bash` to `ci/docker/pxf-cbdb-dev/ubuntu/script/start_minio.bash`
    
    See: https://github.com/apache/cloudberry-pxf/issues/48
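    
    For illustration only, a minimal bash sketch of how the relocated script is
    now invoked inside the dev container. The `PXF_SCRIPTS` value shown here is
    an assumption made for the example; the entrypoint scripts define it
    themselves:
    
        # Hypothetical value for the example; entrypoint.sh sets PXF_SCRIPTS itself.
        PXF_SCRIPTS=/home/gpadmin/workspace/cloudberry-pxf/ci/docker/pxf-cbdb-dev/ubuntu/script
        # Same call used by deploy_minio() in entrypoint.sh and entrypoint_kerberos.sh.
        bash "${PXF_SCRIPTS}/start_minio.bash"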
---
 README.md                                          |  52 +++---
 ci/docker/pxf-cbdb-dev/ubuntu/script/entrypoint.sh |   2 +-
 .../ubuntu/script/entrypoint_kerberos.sh           |   2 +-
 .../pxf-cbdb-dev/ubuntu/script}/start_minio.bash   |   0
 dev/.gitignore                                     |   1 -
 dev/README.md                                      |  17 --
 dev/bootstrap.bash                                 |  42 -----
 dev/build_and_install_gpdb.bash                    |   6 -
 dev/build_gpdb.bash                                |  13 --
 dev/configure_singlecluster.bash                   | 189 --------------------
 dev/init_greenplum.bash                            |  41 -----
 dev/init_hadoop.bash                               | 198 ---------------------
 dev/install_gpdb.bash                              |   5 -
 dev/install_greenplum.bash                         |  40 -----
 dev/install_pxf.bash                               |  25 ---
 dev/smoke_shortcut.sh                              |  39 ----
 dev/start.bash                                     |  20 ---
 17 files changed, 22 insertions(+), 670 deletions(-)

diff --git a/README.md b/README.md
index 03489d1c..26bbc1c4 100755
--- a/README.md
+++ b/README.md
@@ -51,7 +51,8 @@ To build PXF, you must have:
 
     Assuming you have installed Cloudberry into `/usr/local/cloudberrydb` directory, run its environment script:
     ```
-    source /usr/local/cloudberrydb/greenplum_path.sh
+    source /usr/local/cloudberrydb/greenplum_path.sh # For Cloudberry 2.0
+    source /usr/local/cloudberrydb/cloudberry-env.sh # For Cloudberry 2.1+
     ```
 
 3. JDK 1.8 or JDK 11 to compile/run
@@ -171,34 +172,25 @@ cp ${PXF_HOME}/templates/*-site.xml ${PXF_BASE}/servers/default
 > [!Note]
 > Since the docker container will house all Single cluster Hadoop, Cloudberry and PXF, we recommend that you have at least 4 cpus and 6GB memory allocated to Docker. These settings are available under docker preferences.
 
-The quick and easy is to download the Cloudberry RPM from GitHub and move it into the `/downloads` folder. Then run `./dev/start.bash` to get a docker image with a running Cloudberry, Hadoop cluster and an installed PXF.
+We provide a Docker-based development environment that includes Cloudberry, Hadoop, and PXF. See [automation/README.Docker.md](automation/README.Docker.md) for detailed instructions.
 
-#### Setup Cloudberry in the Docker image
-
-Configure, build and install Cloudberry. This will be needed only when you use the container for the first time with Cloudberry source.
+**Quick Start:**
 
 ```bash
-~/workspace/pxf/dev/build_gpdb.bash
-sudo mkdir /usr/local/cloudberry-db-devel
-sudo chown gpadmin:gpadmin /usr/local/cloudberry-db-devel
-~/workspace/pxf/dev/install_gpdb.bash
-```
+# Build and start the development container
+docker compose -f ci/docker/pxf-cbdb-dev/ubuntu/docker-compose.yml build
+docker compose -f ci/docker/pxf-cbdb-dev/ubuntu/docker-compose.yml up -d
 
-For subsequent minor changes to Cloudberry source you can simply do the following:
-```bash
-~/workspace/pxf/dev/install_gpdb.bash
-```
+# Enter the container and run setup
+docker exec -it pxf-cbdb-dev bash -c \
+   "cd /home/gpadmin/workspace/cloudberry-pxf/ci/docker/pxf-cbdb-dev/ubuntu && 
./script/entrypoint.sh"
 
-Run all the instructions below and run GROUP=smoke (in one script):
-```bash
-~/workspace/pxf/dev/smoke_shortcut.sh
-```
+# Run tests
+docker exec -it pxf-cbdb-dev bash -c \
+   "cd /home/gpadmin/workspace/cloudberry-pxf/ci/docker/pxf-cbdb-dev/ubuntu && 
./script/run_tests.sh"
 
-Create Cloudberry Cluster
-```bash
-source /usr/local/cloudberrydb-db-devel/greenplum_path.sh
-make -C ~/workspace/cbdb create-demo-cluster
-source ~/workspace/cbdb/gpAux/gpdemo/gpdemo-env.sh
+# Stop and clean up
+docker compose -f ci/docker/pxf-cbdb-dev/ubuntu/docker-compose.yml down -v
 ```
 
 #### Setup Hadoop
@@ -206,9 +198,7 @@ Hdfs will be needed to demonstrate functionality. You can choose to start additi
 
 Setup [User Impersonation](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Superusers.html) prior to starting the hadoop components (this allows the `gpadmin` user to access hadoop data).
 
-```bash
-~/workspace/pxf/dev/configure_singlecluster.bash
-```
+The Docker development environment automatically configures Hadoop. For manual setup, see [automation/README.Docker.md](automation/README.Docker.md).
 
 Setup and start HDFS
 ```bash
@@ -233,13 +223,11 @@ popd
 ```
 
 #### Setup Minio (optional)
-Minio is an S3-API compatible local storage solution. The development docker image comes with Minio software pre-installed. To start the Minio server, run the following script:
-```bash
-source ~/workspace/pxf/dev/start_minio.bash
-```
+Minio is an S3-API compatible local storage solution. The development docker image comes with Minio software pre-installed. MinIO is automatically started by the Docker development environment.
+
 After the server starts, you can access Minio UI at `http://localhost:9000` from the host OS.
 Use `admin` for the access key and `password` for the secret key when connecting to your local Minio instance.
 
-The script also sets `PROTOCOL=minio` so that the automation framework will use the local Minio server when running S3 automation tests. If later you would like to run Hadoop HDFS tests, unset this variable with `unset PROTOCOL` command.
+To run S3 automation tests, set `PROTOCOL=minio`. If later you would like to run Hadoop HDFS tests, unset this variable with `unset PROTOCOL` command.
 
 #### Setup PXF
 
@@ -330,7 +318,7 @@ no JDK set for Gradle. Just cancel and retry. It goes away the second time.
 - Download bin_gpdb (from any of the pipelines)
 - Download pxf_tarball (from any of the pipelines)
 
-These instructions allow you to run a Kerberized cluster
+These instructions allow you to run a Kerberized cluster. See [automation/README.Docker.md](automation/README.Docker.md) for detailed Kerberos setup instructions.
 
 ```bash
 docker run --rm -it \
diff --git a/ci/docker/pxf-cbdb-dev/ubuntu/script/entrypoint.sh b/ci/docker/pxf-cbdb-dev/ubuntu/script/entrypoint.sh
index ed7396d8..80cbb35a 100755
--- a/ci/docker/pxf-cbdb-dev/ubuntu/script/entrypoint.sh
+++ b/ci/docker/pxf-cbdb-dev/ubuntu/script/entrypoint.sh
@@ -468,7 +468,7 @@ start_hive_services() {
 
 deploy_minio() {
   log "deploying MinIO"
-  bash "${REPO_DIR}/dev/start_minio.bash"
+  bash "${PXF_SCRIPTS}/start_minio.bash"
 }
 
 main() {
diff --git a/ci/docker/pxf-cbdb-dev/ubuntu/script/entrypoint_kerberos.sh b/ci/docker/pxf-cbdb-dev/ubuntu/script/entrypoint_kerberos.sh
index 8469fc33..f64fabee 100755
--- a/ci/docker/pxf-cbdb-dev/ubuntu/script/entrypoint_kerberos.sh
+++ b/ci/docker/pxf-cbdb-dev/ubuntu/script/entrypoint_kerberos.sh
@@ -293,7 +293,7 @@ setup_ssl_material() {
 
 deploy_minio() {
   log "deploying MinIO (for S3 tests)"
-  bash "${REPO_ROOT}/dev/start_minio.bash"
+  bash "${PXF_SCRIPTS}/start_minio.bash"
 }
 
 configure_pxf_s3() {
diff --git a/dev/start_minio.bash b/ci/docker/pxf-cbdb-dev/ubuntu/script/start_minio.bash
similarity index 100%
rename from dev/start_minio.bash
rename to ci/docker/pxf-cbdb-dev/ubuntu/script/start_minio.bash
diff --git a/dev/.gitignore b/dev/.gitignore
deleted file mode 100644
index eb219429..00000000
--- a/dev/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-dataproc_env_files/
diff --git a/dev/README.md b/dev/README.md
deleted file mode 100644
index e3aa2a6d..00000000
--- a/dev/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# Profiling
-
-### Visual VM Profiling
-
-To perform memory profiling add the following line to PXF's environment settings (`pxf/conf/pxf-env.sh`) on the machine where we want to debug:
-
-```
-export CATALINA_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.rmi.port=9090 -Dcom.sun.management.jmxremote.port=9090 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=127.0.0.1"
-```
-
-### JProfiler
-
-To perform memory profiling in JProfiler add the following setting to your `PXF_JVM_OPTS`:
-
-```
-export PXF_JVM_OPTS="-Xmx2g -Xms1g -agentpath:/Applications/JProfiler.app/Contents/Resources/app/bin/macos/libjprofilerti.jnilib=port=8849"
-```
diff --git a/dev/bootstrap.bash b/dev/bootstrap.bash
deleted file mode 100755
index a4e10a27..00000000
--- a/dev/bootstrap.bash
+++ /dev/null
@@ -1,42 +0,0 @@
-#!/bin/bash
-
-SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-
-# setup environment for gpadmin
-#export PS1="[\u@\h \W]\$ "
-#export HADOOP_ROOT=~/workspace/singlecluster
-#export PXF_JVM_OPTS="-Xmx512m -Xms256m"
-#export BUILD_PARAMS="-x test"
-
-export JAVA_HOME=/etc/alternatives/java_sdk
-
-# install and init Greenplum as gpadmin user
-su - gpadmin -c ${SCRIPT_DIR}/install_greenplum.bash
-
-# now GPHOME should be discoverable by .pxfrc
-source ~gpadmin/.pxfrc
-chown -R gpadmin:gpadmin ${GPHOME}
-
-# remove existing PXF, if any, that could come pre-installed with Greenplum RPM
-if [[ -d ${GPHOME}/pxf ]]; then
-    echo; echo "=====> Removing PXF installed with GPDB <====="; echo
-    rm -rf ${GPHOME}/pxf
-    rm ${GPHOME}/lib/postgresql/pxf.so
-    rm ${GPHOME}/share/postgresql/extension/pxf.control
-    rm ${GPHOME}/share/postgresql/extension/pxf*.sql
-fi
-
-# prepare PXF_HOME for PXF installation
-mkdir -p ${PXF_HOME}
-chown -R gpadmin:gpadmin ${PXF_HOME}
-
-# configure and start Hadoop single cluster
-chmod a+w /singlecluster
-SLAVES=1 ${SCRIPT_DIR}/init_hadoop.bash
-
-su - gpadmin -c "
-               source ~/.pxfrc &&
-               env &&
-               ${SCRIPT_DIR}/init_greenplum.bash &&
-               ${SCRIPT_DIR}/install_pxf.bash
-       "
diff --git a/dev/build_and_install_gpdb.bash b/dev/build_and_install_gpdb.bash
deleted file mode 100755
index 76d6cdfc..00000000
--- a/dev/build_and_install_gpdb.bash
+++ /dev/null
@@ -1,6 +0,0 @@
-#!/bin/bash
-
-CWDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-
-${CWDIR}/build_gpdb.bash
-${CWDIR}/install_gpdb.bash
diff --git a/dev/build_gpdb.bash b/dev/build_gpdb.bash
deleted file mode 100755
index 6f299b14..00000000
--- a/dev/build_gpdb.bash
+++ /dev/null
@@ -1,13 +0,0 @@
-#!/bin/bash
-
-pushd ~/workspace/gpdb
-make clean
-./configure \
-  --enable-debug \
-  --with-perl \
-  --with-python \
-  --with-libxml \
-  --disable-orca \
-  --prefix=/usr/local/greenplum-db-devel
-make -j8
-popd
\ No newline at end of file
diff --git a/dev/configure_singlecluster.bash b/dev/configure_singlecluster.bash
deleted file mode 100755
index 9c7e7275..00000000
--- a/dev/configure_singlecluster.bash
+++ /dev/null
@@ -1,189 +0,0 @@
-#!/bin/bash
-
->~/workspace/singlecluster/hadoop/etc/hadoop/core-site.xml cat <<EOF
-<?xml version="1.0" encoding="UTF-8"?>
-<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-<!--
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License. See accompanying LICENSE file.
--->
-
-<!-- Put site-specific property overrides in this file. -->
-
-<configuration>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://0.0.0.0:8020</value>
-    </property>
-    <property>
-        <name>ipc.ping.interval</name>
-        <value>900000</value>
-    </property>
-    <property>
-        <name>hadoop.proxyuser.gpadmin.hosts</name>
-        <value>*</value>
-    </property>
-    <property>
-        <name>hadoop.proxyuser.gpadmin.groups</name>
-        <value>*</value>
-    </property>
-    <property>
-        <name>hadoop.security.authorization</name>
-        <value>true</value>
-    </property>
-    <property>
-        <name>hbase.security.authorization</name>
-        <value>true</value>
-    </property>
-    <property>
-        <name>hbase.rpc.protection</name>
-        <value>authentication</value>
-    </property>
-    <property>
-        <name>hbase.coprocessor.master.classes</name>
-        <value>org.apache.hadoop.hbase.security.access.AccessController</value>
-    </property>
-    <property>
-        <name>hbase.coprocessor.region.classes</name>
-        <value>org.apache.hadoop.hbase.security.access.AccessController,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint</value>
-    </property>
-    <property>
-        <name>hbase.coprocessor.regionserver.classes</name>
-        <value>org.apache.hadoop.hbase.security.access.AccessController</value>
-    </property>
-</configuration>
-EOF
-
->~/workspace/singlecluster/hbase/conf/hbase-site.xml cat <<EOF
-<?xml version="1.0"?>
-<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-<!--
-/**
- * Copyright 2010 The Apache Software Foundation
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--->
-<configuration>
-    <property>
-        <name>hbase.rootdir</name>
-        <value>hdfs://0.0.0.0:8020/hbase</value>
-    </property>
-    <property>
-        <name>dfs.replication</name>
-        <value>3</value>
-    </property>
-    <property>
-        <name>dfs.support.append</name>
-        <value>true</value>
-    </property>
-    <property>
-        <name>hbase.cluster.distributed</name>
-        <value>true</value>
-    </property>
-       <property>
-               <name>hbase.zookeeper.quorum</name>
-               <value>127.0.0.1</value>
-       </property>
-       <property>
-               <name>hbase.zookeeper.property.clientPort</name>
-               <value>2181</value>
-       </property>
-    <property>
-        <name>hadoop.proxyuser.gpadmin.hosts</name>
-        <value>*</value>
-    </property>
-    <property>
-        <name>hadoop.proxyuser.gpadmin.groups</name>
-        <value>*</value>
-    </property>
-    <property>
-        <name>hadoop.security.authorization</name>
-        <value>true</value>
-    </property>
-    <property>
-        <name>hbase.security.authorization</name>
-        <value>true</value>
-    </property>
-    <property>
-        <name>hbase.rpc.protection</name>
-        <value>authentication</value>
-    </property>
-    <property>
-        <name>hbase.coprocessor.master.classes</name>
-        <value>org.apache.hadoop.hbase.security.access.AccessController</value>
-    </property>
-    <property>
-        <name>hbase.coprocessor.region.classes</name>
-        <value>org.apache.hadoop.hbase.security.access.AccessController,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint</value>
-    </property>
-    <property>
-        <name>hbase.coprocessor.regionserver.classes</name>
-        <value>org.apache.hadoop.hbase.security.access.AccessController</value>
-    </property>
-</configuration>
-EOF
-
->~/workspace/singlecluster/hive/conf/hive-site.xml cat <<EOF
-<configuration>
-       <property>
-               <name>hive.metastore.warehouse.dir</name>
-               <value>/hive/warehouse</value>
-       </property>
-       <property>
-               <name>hive.metastore.uris</name>
-               <value>thrift://localhost:9083</value>
-       </property>
-       <property>
-               <name>hive.server2.enable.impersonation</name>
-               <value>true</value>
-               <description>Set this property to enable impersonation in Hive Server 2</description>
-       </property>
-       <property>
-               <name>hive.server2.enable.doAs</name>
-               <value>false</value>
-               <description>Set this property to enable impersonation in Hive Server 2</description>
-       </property>
-       <property>
-               <name>hive.execution.engine</name>
-               <value>mr</value>
-               <description>Chooses execution engine. Options are: mr(default), tez, or spark</description>
-       </property>
-       <property>
-               <name>hive.metastore.schema.verification</name>
-               <value>false</value>
-               <description>Modify schema instead of reporting error</description>
-       </property>
-       <property>
-               <name>datanucleus.autoCreateTables</name>
-               <value>True</value>
-       </property>
-       <property>
-               <name>hive.metastore.integral.jdo.pushdown</name>
-               <value>True</value>
-       </property>
-</configuration>
-EOF
diff --git a/dev/init_greenplum.bash b/dev/init_greenplum.bash
deleted file mode 100755
index abffc34f..00000000
--- a/dev/init_greenplum.bash
+++ /dev/null
@@ -1,41 +0,0 @@
-#!/usr/bin/env bash
-
-set -euo pipefail
-
-GPHOME=${GPHOME:=/usr/local/greenplum-db}
-PYTHONHOME='' source "${GPHOME}/greenplum_path.sh"
-
-# Create config and data dirs.
-data_dirs=(~gpadmin/data{1..3}/primary)
-dirs=(~gpadmin/{gpconfigs,data/master} "${data_dirs[@]}")
-mkdir -p "${dirs[@]}"
-
-sed -e "s/MASTER_HOSTNAME=mdw/MASTER_HOSTNAME=\$(hostname -f)/g" \
-       -e "s|declare -a DATA_DIRECTORY.*|declare -a DATA_DIRECTORY=( ${data_dirs[*]} )|g" \
-       -e "s|MASTER_DIRECTORY=.*|MASTER_DIRECTORY=~gpadmin/data/master|g" \
-       "${GPHOME}/docs/cli_help/gpconfigs/gpinitsystem_config" >~gpadmin/gpconfigs/gpinitsystem_config
-chmod +w ~gpadmin/gpconfigs/gpinitsystem_config
-
-#Script to start segments and create directories.
-hostname -f >/tmp/hosts.txt
-
-# gpinitsystem fails in concourse environment without this "ping" workaround. "[FATAL]:-Unknown host..."
-sudo chmod u+s /bin/ping
-
-pgrep sshd || sudo /usr/sbin/sshd
-gpssh-exkeys -f /tmp/hosts.txt
-
-# 5X gpinitsystem returns 1 exit code on warnings.
-# so we ignore return code of 1, but otherwise we fail
-set +e
-gpinitsystem -a -c ~gpadmin/gpconfigs/gpinitsystem_config -h /tmp/hosts.txt --su_password=changeme
-(( $? > 1 )) && exit 1
-set -e
-
-echo 'host all all 0.0.0.0/0 password' >>~gpadmin/data/master/gpseg-1/pg_hba.conf
-
-# reload pg_hba.conf
-MASTER_DATA_DIRECTORY=~gpadmin/data/master/gpseg-1 gpstop -u
-
-sleep 3
-psql -d template1 -c "CREATE DATABASE gpadmin;"
\ No newline at end of file
diff --git a/dev/init_hadoop.bash b/dev/init_hadoop.bash
deleted file mode 100755
index 84317362..00000000
--- a/dev/init_hadoop.bash
+++ /dev/null
@@ -1,198 +0,0 @@
-#!/usr/bin/env bash
-
->/singlecluster/hadoop/etc/hadoop/core-site.xml cat <<EOF
-<?xml version="1.0" encoding="UTF-8"?>
-<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-<!--
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License. See accompanying LICENSE file.
--->
-
-<!-- Put site-specific property overrides in this file. -->
-
-<configuration>
-    <property>
-        <name>fs.defaultFS</name>
-        <value>hdfs://0.0.0.0:8020</value>
-    </property>
-    <property>
-        <name>ipc.ping.interval</name>
-        <value>900000</value>
-    </property>
-    <property>
-        <name>hadoop.proxyuser.gpadmin.hosts</name>
-        <value>*</value>
-    </property>
-    <property>
-        <name>hadoop.proxyuser.gpadmin.groups</name>
-        <value>*</value>
-    </property>
-    <property>
-        <name>hadoop.security.authorization</name>
-        <value>true</value>
-    </property>
-    <property>
-        <name>hbase.security.authorization</name>
-        <value>true</value>
-    </property>
-    <property>
-        <name>hbase.rpc.protection</name>
-        <value>authentication</value>
-    </property>
-    <property>
-        <name>hbase.coprocessor.master.classes</name>
-        <value>org.apache.hadoop.hbase.security.access.AccessController</value>
-    </property>
-    <property>
-        <name>hbase.coprocessor.region.classes</name>
-        <value>org.apache.hadoop.hbase.security.access.AccessController,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint</value>
-    </property>
-    <property>
-        <name>hbase.coprocessor.regionserver.classes</name>
-        <value>org.apache.hadoop.hbase.security.access.AccessController</value>
-    </property>
-</configuration>
-EOF
-
->/singlecluster/hbase/conf/hbase-site.xml cat <<EOF
-<?xml version="1.0"?>
-<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-<!--
-/**
- * Copyright 2010 The Apache Software Foundation
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--->
-<configuration>
-    <property>
-        <name>hbase.rootdir</name>
-        <value>hdfs://0.0.0.0:8020/hbase</value>
-    </property>
-    <property>
-        <name>dfs.replication</name>
-        <value>3</value>
-    </property>
-    <property>
-        <name>dfs.support.append</name>
-        <value>true</value>
-    </property>
-    <property>
-        <name>hbase.cluster.distributed</name>
-        <value>true</value>
-    </property>
-       <property>
-               <name>hbase.zookeeper.quorum</name>
-               <value>127.0.0.1</value>
-       </property>
-       <property>
-               <name>hbase.zookeeper.property.clientPort</name>
-               <value>2181</value>
-       </property>
-    <property>
-        <name>hadoop.proxyuser.gpadmin.hosts</name>
-        <value>*</value>
-    </property>
-    <property>
-        <name>hadoop.proxyuser.gpadmin.groups</name>
-        <value>*</value>
-    </property>
-    <property>
-        <name>hadoop.security.authorization</name>
-        <value>true</value>
-    </property>
-    <property>
-        <name>hbase.security.authorization</name>
-        <value>true</value>
-    </property>
-    <property>
-        <name>hbase.rpc.protection</name>
-        <value>authentication</value>
-    </property>
-    <property>
-        <name>hbase.coprocessor.master.classes</name>
-        <value>org.apache.hadoop.hbase.security.access.AccessController</value>
-    </property>
-    <property>
-        <name>hbase.coprocessor.region.classes</name>
-        <value>org.apache.hadoop.hbase.security.access.AccessController,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint</value>
-    </property>
-    <property>
-        <name>hbase.coprocessor.regionserver.classes</name>
-        <value>org.apache.hadoop.hbase.security.access.AccessController</value>
-    </property>
-</configuration>
-EOF
-
->/singlecluster/hive/conf/hive-site.xml cat <<EOF
-<configuration>
-       <property>
-               <name>hive.metastore.warehouse.dir</name>
-               <value>/hive/warehouse</value>
-       </property>
-       <property>
-               <name>hive.metastore.uris</name>
-               <value>thrift://localhost:9083</value>
-       </property>
-       <property>
-               <name>hive.server2.enable.impersonation</name>
-               <value>true</value>
-               <description>Set this property to enable impersonation in Hive Server 2</description>
-       </property>
-       <property>
-               <name>hive.server2.enable.doAs</name>
-               <value>false</value>
-               <description>Set this property to enable impersonation in Hive Server 2</description>
-       </property>
-       <property>
-               <name>hive.execution.engine</name>
-               <value>mr</value>
-               <description>Chooses execution engine. Options are: mr(default), tez, or spark</description>
-       </property>
-       <property>
-               <name>hive.metastore.schema.verification</name>
-               <value>false</value>
-               <description>Modify schema instead of reporting error</description>
-       </property>
-       <property>
-               <name>datanucleus.autoCreateTables</name>
-               <value>True</value>
-       </property>
-       <property>
-               <name>hive.metastore.integral.jdo.pushdown</name>
-               <value>True</value>
-       </property>
-</configuration>
-EOF
-
-pushd /singlecluster/bin > /dev/null
-  echo y | ./init-gphd.sh
-  ./start-hdfs.sh
-  ./start-yarn.sh
-  ./start-hive.sh
-  ./start-zookeeper.sh
-  ./start-hbase.sh
-popd > /dev/null
diff --git a/dev/install_gpdb.bash b/dev/install_gpdb.bash
deleted file mode 100755
index a4a66264..00000000
--- a/dev/install_gpdb.bash
+++ /dev/null
@@ -1,5 +0,0 @@
-#!/bin/bash
-
-pushd ~/workspace/gpdb
-make -j4 install
-popd
diff --git a/dev/install_greenplum.bash b/dev/install_greenplum.bash
deleted file mode 100755
index c485912f..00000000
--- a/dev/install_greenplum.bash
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/usr/bin/env bash
-
-CWDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-
-pushd ${CWDIR}/../downloads > /dev/null
-
-# CentOS releases contain a /etc/redhat-release which is symlinked to /etc/centos-release
-if [[ -f /etc/redhat-release ]]; then
-    major_version=$(cat /etc/redhat-release | tr -dc '0-9.'|cut -d \. -f1)
-    ARTIFACT_OS="rhel${major_version}"
-    LATEST_RPM=$(ls greenplum*${ARTIFACT_OS}*.rpm | sort -r | head -1)
-
-    if [[ -z $LATEST_RPM ]]; then
-        echo "ERROR: No greenplum RPM found in ${PWD}"
-        popd > /dev/null
-        exit 1
-    fi
-
-    echo "Installing GPDB from ${LATEST_RPM} ..."
-    sudo rpm --quiet -ivh "${LATEST_RPM}"
-
-elif [[ -f /etc/debian_version ]]; then
-    ARTIFACT_OS="ubuntu"
-    LATEST_DEB=$(ls *greenplum*ubuntu*.deb | sort -r | head -1)
-
-    if [[ -z $LATEST_DEB ]]; then
-        echo "ERROR: No greenplum DEB found in ${PWD}"
-        popd > /dev/null
-        exit 1
-    fi
-
-    echo "Installing GPDB from ${LATEST_DEB} ..."
-    # apt-get wants a full path
-    sudo apt-get install -qq "${PWD}/${LATEST_DEB}"
-else
-    echo "Unsupported operating system '$(source /etc/os-release && echo "${PRETTY_NAME}")'. Exiting..."
-    exit 1
-fi
-
-popd > /dev/null
diff --git a/dev/install_pxf.bash b/dev/install_pxf.bash
deleted file mode 100755
index 97adcb5d..00000000
--- a/dev/install_pxf.bash
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/usr/bin/env bash
-
-function display() {
-    echo
-    echo "=====> $1 <====="
-    echo
-}
-
-display "Compiling and Installing PXF"
-make -C ~gpadmin/workspace/pxf install
-
-display "Initializing PXF"
-pxf init
-
-display "Starting PXF"
-pxf start
-
-display "Setting up default PXF server"
-cp "${PXF_HOME}"/templates/*-site.xml "${PXF_HOME}"/servers/default
-
-display "Registering PXF Greenplum extension"
-psql -d template1 -c "create extension pxf"
-
-#cd ~/workspace/pxf/automation
-#make GROUP=smoke
\ No newline at end of file
diff --git a/dev/smoke_shortcut.sh b/dev/smoke_shortcut.sh
deleted file mode 100755
index 34c23a5d..00000000
--- a/dev/smoke_shortcut.sh
+++ /dev/null
@@ -1,39 +0,0 @@
-#!/bin/bash
-set -e
-
-~/workspace/pxf/dev/install_gpdb.bash
-
-source /usr/local/greenplum-db-devel/greenplum_path.sh
-make -C ~/workspace/gpdb create-demo-cluster
-source ~/workspace/gpdb/gpAux/gpdemo/gpdemo-env.sh
-
-~/workspace/pxf/dev/configure_singlecluster.bash
-
-pushd ~/workspace/singlecluster/bin
-  echo y | ./init-gphd.sh
-  ./start-hdfs.sh
-  ./start-yarn.sh
-  ./start-hive.sh
-  ./start-zookeeper.sh
-  ./start-hbase.sh
-popd
-
-make -C ~/workspace/pxf install
-export PXF_BASE=$PXF_HOME
-export PXF_JVM_OPTS="-Xmx512m -Xms256m"
-$PXF_HOME/bin/pxf init
-$PXF_HOME/bin/pxf start
-
-cp "${PXF_BASE}"/templates/*-site.xml "${PXF_BASE}"/servers/default
-
-if [ -d ~/workspace/gpdb/gpAux/extensions/pxf ]; then
-  PXF_EXTENSIONS_DIR=gpAux/extensions/pxf
-else
-  PXF_EXTENSIONS_DIR=gpcontrib/pxf
-fi
-
-make -C ~/workspace/gpdb/${PXF_EXTENSIONS_DIR} installcheck
-psql -d template1 -c "create extension pxf"
-
-cd ~/workspace/pxf/automation
-make GROUP=smoke
diff --git a/dev/start.bash b/dev/start.bash
deleted file mode 100755
index ced0e240..00000000
--- a/dev/start.bash
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/usr/bin/env bash
-
-if [[ -z ${GCR_PROJECT} ]]; then
-    echo "Please set GCR_PROJECT variable to the name of your Google Container Registry project"
-    exit 1
-fi
-
-docker run --rm -it \
-  -p 5432:5432 \
-  -p 5888:5888 \
-  -p 8000:8000 \
-  -p 5005:5005 \
-  -p 8020:8020 \
-  -p 9000:9000 \
-  -p 9090:9090 \
-  -p 50070:50070 \
-  -w /home/gpadmin/workspace \
-  -v ~/workspace/pxf:/home/gpadmin/workspace/pxf \
-  gcr.io/${GCR_PROJECT}/gpdb-pxf-dev/gpdb6-centos7-test-pxf-hdp2:latest /bin/bash -c \
-  "/home/gpadmin/workspace/pxf/dev/bootstrap.bash && su - gpadmin"
\ No newline at end of file


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
