This is an automated email from the ASF dual-hosted git repository.

djwang pushed a commit to branch merge-with-upstream
in repository https://gitbox.apache.org/repos/asf/cloudberry-pxf.git

commit 39c1cd8cba7bc36a792ca78417ab8e87b5b558a3
Author: Dianjin Wang <[email protected]>
AuthorDate: Wed Nov 6 10:24:57 2024 +0800

    Update asf.yaml and README.md
---
 .asf.yaml | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 README.md | 39 ++++++++++++++++-----------------
 2 files changed, 93 insertions(+), 20 deletions(-)

diff --git a/.asf.yaml b/.asf.yaml
new file mode 100644
index 00000000..4c647111
--- /dev/null
+++ b/.asf.yaml
@@ -0,0 +1,74 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+github:
+  description: Platform Extension Framework (PXF) for Apache Cloudberry (Incubating)
+  homepage: https://cloudberry.apache.org
+  labels:
+    - mpp
+    - big-data
+    - data-warehouse
+    - data-analysis
+    - olap
+    - distributed-database
+    - database
+    - postgres
+    - postgresql
+    - greenplum
+    - cloudberry
+    - ai
+    - pxf
+  features:
+    # Enable wiki for documentation
+    wiki: false
+    # Enable issues management
+    issues: true
+    # Enable projects for project management boards
+    projects: false
+  enabled_merge_buttons:
+    # enable squash button:
+    squash: true
+    # disable merge button:
+    merge: false
+    # enable rebase button:
+    rebase: true
+  protected_branches:
+    main:
+      required_status_checks:
+        # strict means "Require branches to be up to date before merging".
+        strict: true
+        # contexts are the names of checks that must pass.
+      required_pull_request_reviews:
+        dismiss_stale_reviews: false
+        required_approving_review_count: 1
+      # squash or rebase must be allowed in the repo for this setting to be set to true.
+      required_linear_history: true
+
+      required_signatures: false
+
+      # requires all conversations to be resolved before merging is possible
+      required_conversation_resolution: true
+  del_branch_on_merge: true
+  dependabot_alerts: true
+  dependabot_updates: false
+  protected_tags:
+    - "[0-9]*.*"
+notifications:
+  commits: [email protected]
+  issues: [email protected]
+  pullrequests: [email protected]
+  pullrequests_bot_dependabot: [email protected]
diff --git a/README.md b/README.md
index 0f664531..5ba55036 100755
--- a/README.md
+++ b/README.md
@@ -1,34 +1,33 @@
-# Platform Extension Framework (PXF) for Cloudberry Database
+# Platform Extension Framework (PXF) for Apache Cloudberry (Incubating)
 
 [![Slack](https://img.shields.io/badge/Join_Slack-6a32c9)](https://communityinviter.com/apps/cloudberrydb/welcome)
 [![Twitter Follow](https://img.shields.io/twitter/follow/cloudberrydb)](https://twitter.com/cloudberrydb)
-[![Website](https://img.shields.io/badge/Visit%20Website-eebc46)](https://cloudberrydb.org)
-[![GitHub Discussions](https://img.shields.io/github/discussions/cloudberrydb/cloudberrydb)](https://github.com/orgs/cloudberrydb/discussions)
+[![Website](https://img.shields.io/badge/Visit%20Website-eebc46)](https://cloudberry.apache.org)
 
 ---
 
 ## Introduction
 
-PXF is an extensible framework that allows a distributed database like Greenplum and Cloudberry Database to query external data files, whose metadata is not managed by the database.
+PXF is an extensible framework that allows distributed databases such as Greenplum and Apache Cloudberry to query external data files whose metadata is not managed by the database.
 PXF includes built-in connectors for accessing data that exists inside HDFS files, Hive tables, HBase tables, JDBC-accessible databases and more.
 Users can also create their own connectors to other data storage or processing engines.
 
-This project is forked from [greenplum/pxf](https://github.com/greenplum-db/pxf-archive) and customized for Cloudberry Database.
+This project is forked from [greenplum/pxf](https://github.com/greenplum-db/pxf-archive) and customized for Apache Cloudberry.
 
 ## Repository Contents
 
-* `external-table/` : Contains the CloudberryDB extension implementing an External Table protocol handler
-* `fdw/` : Contains the CloudberryDB extension implementing a Foreign Data Wrapper (FDW) for PXF
+* `external-table/` : Contains the Cloudberry extension implementing an External Table protocol handler
+* `fdw/` : Contains the Cloudberry extension implementing a Foreign Data Wrapper (FDW) for PXF
 * `server/` : Contains the server side code of PXF along with the PXF Service and all the Plugins
 * `cli/` : Contains command line interface code for PXF
 * `automation/` : Contains the automation and integration tests for PXF against the various datasources
 * `singlecluster/` : Hadoop testing environment to exercise the pxf automation tests
 * `regression/` : Contains the end-to-end (integration) tests for PXF against the various datasources, utilizing the PostgreSQL testing framework `pg_regress`
-* `downloads/` : An empty directory that serves as a staging location for CloudberryDB RPMs for the development Docker image
+* `downloads/` : An empty directory that serves as a staging location for Cloudberry RPMs for the development Docker image
 
 ## PXF Development
 
-Below are the steps to build and install PXF along with its dependencies including CloudberryDB and Hadoop.
+Below are the steps to build and install PXF along with its dependencies, including Cloudberry and Hadoop.
 
 > [!Note]
 > To start, ensure you have a `~/workspace` directory and have cloned the 
 > `pxf` and its prerequisites (shown below) under it.
@@ -38,7 +37,7 @@ Below are the steps to build and install PXF along with its dependencies includi
 mkdir -p ~/workspace
 cd ~/workspace
 
-git clone https://github.com/cloudberrydb/pxf.git
+git clone https://github.com/apache/cloudberry-pxf.git
 ```
 
 ### Install Dependencies
@@ -46,11 +45,11 @@ git clone https://github.com/cloudberrydb/pxf.git
 To build PXF, you must have:
 
 1. GCC compiler, `make` system, `unzip` package, `maven` for running integration tests
-2. Installed Cloudberry Database
+2. Installed Cloudberry
 
-    Either download and install CloudberryDB RPM or build CloudberryDB from the source by following instructions in the [CloudberryDB](https://github.com/cloudberrydb/cloudberrydb).
+    Either download and install the Cloudberry RPM or build Cloudberry from source by following the instructions in the [Cloudberry](https://github.com/apache/cloudberry) repository.
 
-    Assuming you have installed CloudberryDB into `/usr/local/cloudberrydb` directory, run its environment script:
+    Assuming you have installed Cloudberry into the `/usr/local/cloudberrydb` directory, run its environment script:
     ```
     source /usr/local/cloudberrydb/greenplum_path.sh
     ```
@@ -131,7 +130,7 @@ If `${HOME}/pxf-base` does not exist, `pxf prepare` will create the directory fo
 
 ### Re-installing PXF after making changes
 
-Note: Local development with PXF requires a running CloudberryDB cluster.
+Note: Local development with PXF requires a running Cloudberry cluster.
 
 Once the desired changes have been made, there are 2 options to re-install PXF:
 
@@ -170,13 +169,13 @@ cp ${PXF_HOME}/templates/*-site.xml ${PXF_BASE}/servers/default
 ### Development With Docker
 
 > [!Note]
-> Since the docker container will house all Single cluster Hadoop, CloudberryDB and PXF, we recommend that you have at least 4 cpus and 6GB memory allocated to Docker. These settings are available under docker preferences.
+> Since the Docker container will house single-cluster Hadoop, Cloudberry, and PXF, we recommend allocating at least 4 CPUs and 6 GB of memory to Docker. These settings are available under Docker preferences.
 
-The quick and easy is to download the CloudberryDB RPM from GitHub and move it into the `/downloads` folder. Then run `./dev/start.bash` to get a docker image with a running CloudberryDB, Hadoop cluster and an installed PXF.
+The quick and easy way is to download the Cloudberry RPM from GitHub and move it into the `/downloads` folder. Then run `./dev/start.bash` to get a Docker image with a running Cloudberry, a Hadoop cluster, and an installed PXF.
 
-#### Setup CloudberryDB in the Docker image
+#### Set up Cloudberry in the Docker image
 
-Configure, build and install CloudberryDB. This will be needed only when you use the container for the first time with CloudberryDB source.
+Configure, build, and install Cloudberry. This is needed only the first time you use the container with the Cloudberry source.
 
 ```bash
 ~/workspace/pxf/dev/build_gpdb.bash
@@ -185,7 +184,7 @@ sudo chown gpadmin:gpadmin /usr/local/cloudberry-db-devel
 ~/workspace/pxf/dev/install_gpdb.bash
 ```
 
-For subsequent minor changes to CloudberryDB source you can simply do the following:
+For subsequent minor changes to the Cloudberry source, you can simply run:
 ```bash
 ~/workspace/pxf/dev/install_gpdb.bash
 ```
@@ -195,7 +194,7 @@ Run all the instructions below and run GROUP=smoke (in one script):
 ~/workspace/pxf/dev/smoke_shortcut.sh
 ```
 
-Create CloudberryDB Cluster
+Create Cloudberry Cluster
 ```bash
 source /usr/local/cloudberry-db-devel/greenplum_path.sh
 make -C ~/workspace/cbdb create-demo-cluster
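Taken together, the README hunks above describe a local PXF setup flow. A consolidated sketch of that flow, assuming the default paths shown in the snippets and that the `pxf` CLI is on `PATH` (the `PXF_HOME` value is an assumption for illustration, not part of this commit):

```shell
# Sketch of the local development flow described in the README hunks above.
# Paths are assumptions taken from the snippets; adjust to your install.
source /usr/local/cloudberrydb/greenplum_path.sh     # Cloudberry environment script

export PXF_HOME=/usr/local/pxf                       # assumed PXF install location
export PXF_BASE="${HOME}/pxf-base"                   # runtime config dir; created if missing

pxf prepare                                          # stage the PXF runtime directories
cp "${PXF_HOME}"/templates/*-site.xml "${PXF_BASE}/servers/default"   # default server config
```

This is only a sketch of the steps scattered across the hunks; the dev-container scripts (`./dev/start.bash`, `install_gpdb.bash`) automate the equivalent work inside Docker.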

