ashb commented on a change in pull request #12082:
URL: https://github.com/apache/airflow/pull/12082#discussion_r518610428
##########
File path: provider_packages/DEV-README.md
##########
@@ -171,3 +171,124 @@ the version in PyPI does not contain the leading 0s in the version name - therefore
also do not contain the leading 0s.
* You can install the .whl packages with `pip install <PACKAGE_FILE>`
+
+
+# Testing provider package scripts
+
+Importing and testing the backport packages happens within the "CI" environment
+of Airflow - the same image that is used by Breeze. However, it requires special
+mounts (no Airflow sources mounted into the container) and the ability to
+install all extras and packages in order to test that all the packages can be
+imported. The process is simple but partly manual:
+
+## Backport packages
+
+1. Prepare the backport packages:
+
+
+```shell script
+./breeze --backports prepare-provider-packages
+```
+
+This prepares all the backport packages in the "dist" folder.
+
+2. Enter the container:
+
+```shell script
+export INSTALL_AIRFLOW_VERSION=1.10.12
+export BACKPORT_PACKAGES="true"
+
+./scripts/ci/provider_packages/ci_enter_breeze_provider_package_tests.sh
Review comment:
This script is wrongly named -- it isn't used by CI, so it shouldn't live
under `scripts/ci/`.
##########
File path: provider_packages/DEV-README.md
##########
@@ -171,3 +171,124 @@ the version in PyPI does not contain the leading 0s in the version name - therefore
also do not contain the leading 0s.
* You can install the .whl packages with `pip install <PACKAGE_FILE>`
+
+
+# Testing provider package scripts
+
+Importing and testing the backport packages happens within the "CI" environment
+of Airflow - the same image that is used by Breeze. However, it requires special
+mounts (no Airflow sources mounted into the container) and the ability to
+install all extras and packages in order to test that all the packages can be
+imported. The process is simple but partly manual:
+
+## Backport packages
+
+1. Prepare the backport packages:
+
+
+```shell script
+./breeze --backports prepare-provider-packages
+```
+
+This prepares all the backport packages in the "dist" folder.
+
+2. Enter the container:
+
+```shell script
+export INSTALL_AIRFLOW_VERSION=1.10.12
+export BACKPORT_PACKAGES="true"
+
+./scripts/ci/provider_packages/ci_enter_breeze_provider_package_tests.sh
+```
+
+(the remaining steps are executed inside the container)
+
+3. \[IN CONTAINER\] Install all remaining dependencies and reinstall Airflow 1.10:
+
+```shell script
+cd /airflow_sources
+
+pip install ".[all]"
+
+pip install "apache-airflow==${INSTALL_AIRFLOW_VERSION}"
+
+cd
+```
+
+4. \[IN CONTAINER\] Install the provider packages from /dist:
+
+```shell script
+pip install /dist/apache_airflow_backport_providers_*.whl
+```
+
+5. \[IN CONTAINER\] Check the installation folder for providers:
+
+```shell script
+python3 <<EOF 2>/dev/null
+import airflow.providers
+for p in airflow.providers.__path__._path:
+    print(p)
+EOF
+```
+
+6. \[IN CONTAINER\] Check if all the providers can be imported:
+
+```shell script
+python3 /opt/airflow/dev/import_all_classes.py --path <PATH_REPORTED_IN_THE_PREVIOUS_STEP>
+```
Review comment:
I'm sorry. I still don't see why this needs to be an argument. Every
example and every use case you've shown of this could be discovered _in the
script_.
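
A minimal sketch of the in-script discovery the comment suggests (the helper
name is hypothetical, and it assumes the package in question is importable
inside the container), which would make the `--path` argument unnecessary:

```python
import importlib


def discover_package_paths(pkg_name):
    """Return the filesystem locations a (possibly namespace) package
    resolves to, so the caller does not need to pass a --path argument."""
    pkg = importlib.import_module(pkg_name)
    # __path__ is a plain list for regular packages and a _NamespacePath
    # for namespace packages such as airflow.providers; list() handles both.
    return list(pkg.__path__)


# e.g. import_all_classes.py could default to something like:
# paths = discover_package_paths("airflow.providers")
```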
##########
File path: scripts/in_container/_in_container_utils.sh
##########
@@ -259,11 +259,15 @@ function install_released_airflow_version() {
export SLUGIFY_USES_TEXT_UNIDECODE=yes
fi
rm -rf "${AIRFLOW_SOURCES}"/*.egg-info
- INSTALLS=("apache-airflow==${1}" "werkzeug<1.0.0")
- pip install --upgrade "${INSTALLS[@]}"
+ if [[ ${INSTALL_AIRFLOW_VERSION} == "wheel" ]]; then
+ pip install /dist/apache_airflow-*.whl
+ else
+ INSTALLS=("apache-airflow==${1}" "werkzeug<1.0.0")
Review comment:
?
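
For reference, the changed branch restated in Python (a sketch only, not part
of the PR; the empty-glob guard is an addition that the shell version does not
have):

```python
import glob


def airflow_install_args(install_airflow_version, dist_dir="/dist"):
    """Pick pip install arguments the way the shell snippet above does:
    the special value "wheel" means: install the locally built wheel."""
    if install_airflow_version == "wheel":
        wheels = sorted(glob.glob(dist_dir + "/apache_airflow-*.whl"))
        if not wheels:
            # extra guard, not in the shell version: an unmatched glob
            # would otherwise be passed to pip as a literal pattern
            raise FileNotFoundError("no apache_airflow wheel in " + dist_dir)
        return wheels
    # released version: pin Airflow and cap werkzeug, as the script does
    return ["apache-airflow==" + install_airflow_version, "werkzeug<1.0.0"]
```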
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]