peterxcli commented on code in PR #8556:
URL: https://github.com/apache/ozone/pull/8556#discussion_r2125261292

##########
hadoop-hdds/docs/content/recipe/PyArrowTutorial.md:
##########
@@ -0,0 +1,141 @@
+---
+title: Access Ozone using PyArrow (Docker Quickstart)
+linkTitle: PyArrow Access (Docker)
+summary: Step-by-step tutorial for accessing Ozone from Python using PyArrow in a Docker environment.
+weight: 11
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+This tutorial demonstrates how to access Apache Ozone from Python using **PyArrow**, with Ozone running in Docker.
+
+## Prerequisites
+
+- Docker and Docker Compose installed.
+- Python 3.x environment.
+
+## Steps
+
+### 1️⃣ Start Ozone in Docker
+
+Download the latest Docker Compose file for Ozone and start the cluster with 3 DataNodes:
+
+```bash
+curl -O https://raw.githubusercontent.com/apache/ozone-docker/refs/heads/latest/docker-compose.yaml
+docker compose up -d --scale datanode=3
+```
+
+### 2️⃣ Connect to the SCM Container
+
+```bash
+docker exec -it <your-scm-container-name-or-id> bash
+```
+> Change the container id `<your-scm-container-name-or-id>` to your actual container id.
+
+The rest of the tutorial will run on this container.
+
+Create a volume and a bucket inside Ozone:
+
+```bash
+ozone sh volume create volume
+ozone sh bucket create volume/bucket
+```
+
+### 3️⃣ Install PyArrow in Your Python Environment
+
+```bash
+pip install pyarrow
+```
+
+### 4️⃣ Download Hadoop Native Libraries for libhdfs Support
+
+Depending on your system architecture, run one of the following:
+
+For ARM64 (Apple Silicon, ARM servers):
+```bash
+curl -L "https://www.apache.org/dyn/closer.lua?action=download&filename=hadoop/common/hadoop-3.4.0/hadoop-3.4.0-aarch64.tar.gz" | tar -xz --wildcards 'hadoop-3.4.0/lib/native/libhdfs.*'
+```
+
+For x86_64 (most desktops and servers):
+```bash
+curl -L "https://www.apache.org/dyn/closer.lua?action=download&filename=hadoop/common/hadoop-3.4.0/hadoop-3.4.0.tar.gz" | tar -xz --wildcards 'hadoop-3.4.0/lib/native/libhdfs.*'
+```
+
+Set environment variables to point to the native libraries and Ozone classpath:
+
+```bash
+export ARROW_LIBHDFS_DIR=hadoop-3.4.0/lib/native/
+export CLASSPATH=$(ozone classpath ozone-tools)
+```

Review Comment:
   We could tell users there are more variables they can set: https://lidavidm.github.io/arrow-docs-next/python/filesystems.html#hadoop-file-system-hdfs

   > The libhdfs library is loaded at runtime (rather than at link / library load time, since the library may not be in your LD_LIBRARY_PATH), and relies on some environment variables.
   >
   > `HADOOP_HOME`: the root of your installed Hadoop distribution. Often has lib/native/libhdfs.so.
   >
   > `JAVA_HOME`: the location of your Java SDK installation.
   >
   > `ARROW_LIBHDFS_DIR` (optional): explicit location of libhdfs.so if it is installed somewhere other than $HADOOP_HOME/lib/native.
   >
   > `CLASSPATH`: must contain the Hadoop jars. You can set these using:
   >
   > ```bash
   > export CLASSPATH=`$HADOOP_HOME/bin/hdfs classpath --glob`
   > ```
   >
   > If `CLASSPATH` is not set, then it will be set automatically if the `hadoop` executable is in your system path, or if `HADOOP_HOME` is set.
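   For context, a minimal PyArrow snippet built on these variables might look like the sketch below. This is an illustration under assumptions, not code from the PR: the `ofs://om` host and port 9862 assume the Ozone Manager runs as the `om` service from the Docker Compose setup above, and that `ARROW_LIBHDFS_DIR` and `CLASSPATH` are already exported as shown.

   ```python
   # Minimal sketch (not from the PR): round-trip a small object through Ozone
   # via libhdfs. Assumes ARROW_LIBHDFS_DIR and CLASSPATH are exported as in
   # the tutorial, and the Ozone Manager is reachable as "om" on port 9862.
   import pyarrow.fs as fs

   # libhdfs accepts a full filesystem URI in the host argument; "ofs" is
   # Ozone's Hadoop-compatible filesystem scheme.
   ozone = fs.HadoopFileSystem(host="ofs://om", port=9862)

   # Write and read back a key under the volume/bucket created earlier.
   with ozone.open_output_stream("/volume/bucket/hello.txt") as out:
       out.write(b"hello from pyarrow")

   with ozone.open_input_stream("/volume/bucket/hello.txt") as src:
       print(src.read())  # b'hello from pyarrow'
   ```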
##########
hadoop-hdds/docs/content/recipe/PyArrowTutorial.md:
##########
+export ARROW_LIBHDFS_DIR=hadoop-3.4.0/lib/native/
+export CLASSPATH=$(ozone classpath ozone-tools)

Review Comment:
   I think this should be:
   ```bash
   export CLASSPATH=$(hadoop-3.4.0/lib/native/bin/hdfs classpath --glob)
   ```
   Otherwise users need to download both the Hadoop and Ozone builds. I'm not really sure this is good? Maybe Ozone's client has better performance or compatibility?

--
This is an automated message from the Apache Git Service.
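   To make the trade-off concrete, here is a hedged sketch (again not from the PR) of how either classpath source would be consumed. libhdfs is loaded lazily, so these variables can also be set from Python, as long as that happens before the first `HadoopFileSystem` is constructed; the `hadoop-3.4.0/bin/hdfs` path in the commented-out variant assumes a full standard Hadoop tarball is present, which the tutorial's partial extraction does not provide.

   ```python
   # Sketch (assumption, not from the PR): set the libhdfs environment from
   # Python before the first connection is made.
   import os
   import subprocess

   os.environ["ARROW_LIBHDFS_DIR"] = "hadoop-3.4.0/lib/native/"

   # Variant used in the tutorial: jars from the Ozone build.
   os.environ["CLASSPATH"] = subprocess.run(
       ["ozone", "classpath", "ozone-tools"],
       capture_output=True, text=True, check=True,
   ).stdout.strip()

   # ...or, if a full Hadoop distribution is available (bin/hdfs sits at the
   # tarball root in a standard layout), jars from Hadoop instead:
   # os.environ["CLASSPATH"] = subprocess.run(
   #     ["hadoop-3.4.0/bin/hdfs", "classpath", "--glob"],
   #     capture_output=True, text=True, check=True,
   # ).stdout.strip()

   import pyarrow.fs as fs
   ozone = fs.HadoopFileSystem(host="ofs://om", port=9862)
   ```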
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
