This is an automated email from the ASF dual-hosted git repository.

damccorm pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/beam.git


The following commit(s) were added to refs/heads/master by this push:
     new 5e00feea6c6 Update links to point to About Beam ML  (#29709)
5e00feea6c6 is described below

commit 5e00feea6c6b355092a9746a0af0d0989a42f371
Author: Rebecca Szper <[email protected]>
AuthorDate: Tue Dec 12 07:15:34 2023 -0800

    Update links to point to About Beam ML  (#29709)
    
    * Update Beam ML links
    
    * Update Beam ML links
---
 examples/notebooks/beam-ml/README.md                       |  6 +++---
 examples/notebooks/beam-ml/automatic_model_refresh.ipynb   | 10 ----------
 examples/notebooks/beam-ml/mltransform_basic.ipynb         |  2 +-
 examples/notebooks/beam-ml/per_key_models.ipynb            |  2 +-
 examples/notebooks/beam-ml/run_custom_inference.ipynb      |  2 +-
 .../beam-ml/run_inference_pytorch_tensorflow_sklearn.ipynb |  4 ++--
 examples/notebooks/beam-ml/speech_emotion_tensorflow.ipynb |  2 +-
 sdks/python/apache_beam/examples/inference/README.md       |  2 +-
 sdks/python/apache_beam/ml/inference/__init__.py           |  2 +-
 sdks/python/apache_beam/ml/inference/base.py               | 14 +++++++-------
 website/www/site/content/en/documentation/ml/about-ml.md   |  2 +-
 .../site/content/en/documentation/ml/inference-overview.md |  4 ++--
 .../content/en/documentation/ml/multi-model-pipelines.md   |  2 +-
 website/www/site/content/en/documentation/ml/overview.md   |  4 ++--
 website/www/site/content/en/documentation/sdks/python.md   |  2 +-
 .../transforms/python/elementwise/runinference.md          |  2 +-
 .../content/en/get-started/resources/learning-resources.md |  2 +-
 17 files changed, 27 insertions(+), 37 deletions(-)

diff --git a/examples/notebooks/beam-ml/README.md 
b/examples/notebooks/beam-ml/README.md
index 0ae937e9e28..5a90f1c68b4 100644
--- a/examples/notebooks/beam-ml/README.md
+++ b/examples/notebooks/beam-ml/README.md
@@ -25,13 +25,13 @@ transform.
 This transform allows you to make predictions and inference on data with 
machine learning (ML) models.
 The model handler abstracts the user from the configuration needed for
 specific frameworks, such as Tensorflow, PyTorch, and others. For a full list 
of supported frameworks,
-see the Apache Beam [Machine 
Learning](https://beam.apache.org/documentation/sdks/python-machine-learning) 
page.
+see the [About Beam ML](https://beam.apache.org/documentation/ml/about-ml/) 
page.
 
 ## Using The Notebooks
 
 These notebooks illustrate ways to use Apache Beam's RunInference transforms, 
as well as different
-use cases for 
[ModelHandler](https://beam.apache.org/releases/pydoc/current/apache_beam.ml.inference.base.html#apache_beam.ml.inference.base.ModelHandler)
 implementations.
-Beam comes with [multiple ModelHandler 
implementations](https://beam.apache.org/documentation/sdks/python-machine-learning/#modify-a-pipeline-to-use-an-ml-model).
+use cases for 
[`ModelHandler`](https://beam.apache.org/releases/pydoc/current/apache_beam.ml.inference.base.html#apache_beam.ml.inference.base.ModelHandler)
 implementations.
+Beam comes with [multiple `ModelHandler` 
implementations](https://beam.apache.org/documentation/ml/about-ml/#modify-a-python-pipeline-to-use-an-ml-model).
 
 ### Loading the Notebooks
 
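The model-handler abstraction described in the README hunk above can be sketched in plain Python. This is a conceptual sketch only, not the real `apache_beam.ml.inference` API; the class and method names here are illustrative:

```python
# Conceptual sketch of the ModelHandler idea: a framework-specific loader
# plus a uniform run_inference() interface. Names are illustrative, not
# the actual apache_beam.ml.inference.base.ModelHandler API.
from typing import Any, Iterable, List


class SimpleModelHandler:
    """Wraps framework-specific loading and prediction behind one interface."""

    def load_model(self) -> Any:
        # A real handler would load e.g. a PyTorch or TensorFlow model here.
        return lambda x: x * 2  # stand-in "model"

    def run_inference(self, batch: Iterable[int], model: Any) -> List[int]:
        # A real handler would also attach keys/metadata to each prediction.
        return [model(x) for x in batch]


handler = SimpleModelHandler()
model = handler.load_model()
print(handler.run_inference([1, 2, 3], model))  # [2, 4, 6]
```

The point of the abstraction is that pipeline code only ever calls the uniform interface; swapping frameworks means swapping the handler, not the pipeline.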
diff --git a/examples/notebooks/beam-ml/automatic_model_refresh.ipynb 
b/examples/notebooks/beam-ml/automatic_model_refresh.ipynb
index cf05979c5b3..b0e9bc2f53e 100644
--- a/examples/notebooks/beam-ml/automatic_model_refresh.ipynb
+++ b/examples/notebooks/beam-ml/automatic_model_refresh.ipynb
@@ -15,16 +15,6 @@
     }
   },
   "cells": [
-    {
-      "cell_type": "markdown",
-      "metadata": {
-        "id": "view-in-github",
-        "colab_type": "text"
-      },
-      "source": [
-        "<a 
href=\"https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/beam-ml/automatic_model_refresh.ipynb\";
 target=\"_parent\"><img 
src=\"https://colab.research.google.com/assets/colab-badge.svg\"; alt=\"Open In 
Colab\"/></a>"
-      ]
-    },
     {
       "cell_type": "code",
       "source": [
diff --git a/examples/notebooks/beam-ml/mltransform_basic.ipynb 
b/examples/notebooks/beam-ml/mltransform_basic.ipynb
index 43068624eb8..b0af96d0859 100644
--- a/examples/notebooks/beam-ml/mltransform_basic.ipynb
+++ b/examples/notebooks/beam-ml/mltransform_basic.ipynb
@@ -55,7 +55,7 @@
         "id": "d3b81cf2-8603-42bd-995e-9e14631effd0"
       },
       "source": [
-        "This notebook demonstrates how to use `MLTransform` to preprocess 
data for machine learning workflows. Apache Beam provides a set of transforms 
for preprocessing data for training and inference. The `MLTransform` class 
wraps various transforms in one PTransform, simplifying your workflow. For a 
list of available preprocessing transforms see the [Preprocess data with 
MLTransform](https://beam.apache.org/documentation/ml/preprocess-data/#transforms)
 page in Apache Beam documentation.\n",
+        "This notebook demonstrates how to use `MLTransform` to preprocess 
data for machine learning workflows. Apache Beam provides a set of transforms 
for preprocessing data for training and inference. The `MLTransform` class 
wraps various transforms in one `PTransform`, simplifying your workflow. For a 
list of available preprocessing transforms see the [Preprocess data with 
MLTransform](https://beam.apache.org/documentation/ml/preprocess-data/#transforms)
 page in the Apache Beam docum [...]
         "\n",
         "This notebook uses data processing transforms defined in the 
[apache_beam/ml/transforms/tft](https://beam.apache.org/releases/pydoc/current/apache_beam.ml.transforms.tft.html)
 module."
       ]
diff --git a/examples/notebooks/beam-ml/per_key_models.ipynb 
b/examples/notebooks/beam-ml/per_key_models.ipynb
index 53845c0b3e1..3e71c1d119a 100644
--- a/examples/notebooks/beam-ml/per_key_models.ipynb
+++ b/examples/notebooks/beam-ml/per_key_models.ipynb
@@ -402,7 +402,7 @@
     {
       "cell_type": "markdown",
       "source": [
-        "Use the formatted keys to define a `KeyedModelHandler` that maps keys 
to the `ModelHandler` used for those keys. The `KeyedModelHandler` method lets 
you define an optional `max_models_per_worker_hint`, which limits the number of 
models that can be held in a single worker process at one time. If your worker 
might run out of memory, use this option. For more information about managing 
memory, see [Use a keyed 
ModelHandler](https://beam.apache.org/documentation/sdks/python-machine- [...]
+        "Use the formatted keys to define a `KeyedModelHandler` that maps keys 
to the `ModelHandler` used for those keys. The `KeyedModelHandler` method lets 
you define an optional `max_models_per_worker_hint`, which limits the number of 
models that can be held in a single worker process at one time. If your worker 
might run out of memory, use this option. For more information about managing 
memory, see [Use a keyed 
ModelHandler](https://beam.apache.org/documentation/ml/about-ml/#use-a-k [...]
       ],
       "metadata": {
         "id": "IP65_5nNGIb8"
diff --git a/examples/notebooks/beam-ml/run_custom_inference.ipynb 
b/examples/notebooks/beam-ml/run_custom_inference.ipynb
index a66c5847de0..2ca3b69bb72 100644
--- a/examples/notebooks/beam-ml/run_custom_inference.ipynb
+++ b/examples/notebooks/beam-ml/run_custom_inference.ipynb
@@ -60,7 +60,7 @@
         "NLP locates named entities in unstructured text and classifies the 
entities using pre-defined labels, such as person name, organization, date, and 
so on.\n",
         "\n",
         "This example illustrates how to use the popular `spaCy` package to 
load a machine learning (ML) model and perform inference in an Apache Beam 
pipeline using the RunInference `PTransform`.\n",
-        "For more information about the RunInference API, see [Machine 
Learning](https://beam.apache.org/documentation/sdks/python-machine-learning) 
in the Apache Beam documentation."
+        "For more information about the RunInference API, see [About Beam 
ML](https://beam.apache.org/documentation/ml/about-ml) in the Apache Beam 
documentation."
       ]
     },
     {
diff --git 
a/examples/notebooks/beam-ml/run_inference_pytorch_tensorflow_sklearn.ipynb 
b/examples/notebooks/beam-ml/run_inference_pytorch_tensorflow_sklearn.ipynb
index a314f6cd711..115b70b11e9 100644
--- a/examples/notebooks/beam-ml/run_inference_pytorch_tensorflow_sklearn.ipynb
+++ b/examples/notebooks/beam-ml/run_inference_pytorch_tensorflow_sklearn.ipynb
@@ -71,7 +71,7 @@
         "You can use Apache Beam versions 2.40.0 and later with the 
[RunInference 
API](https://beam.apache.org/releases/pydoc/current/apache_beam.ml.inference.base.html#apache_beam.ml.inference.base.RunInference)
 for local and remote inference with batch and streaming pipelines.\n",
         "The RunInference API leverages Apache Beam concepts, such as the 
`BatchElements` transform and the `Shared` class, to support models in your 
pipelines that create transforms optimized for machine learning inference.\n",
         "\n",
-        "For more information about the RunInference API, see [Machine 
Learning](https://beam.apache.org/documentation/sdks/python-machine-learning) 
in the Apache Beam documentation."
+        "For more information about the RunInference API, see [About Beam 
ML](https://beam.apache.org/documentation/ml/about-ml) in the Apache Beam 
documentation."
       ]
     },
     {
@@ -221,7 +221,7 @@
         "    # Pad the token tensors to max length to make sure that all of 
the tensors\n",
         "    # are of the same length and stack-able by the RunInference API. 
Normally, you would batch first\n",
         "    # then tokenize the batch, padding each tensor the max length in 
the batch.\n",
-        "    # See: 
https://beam.apache.org/documentation/sdks/python-machine-learning/#unable-to-batch-tensor-elements\n";,
+        "    # See: 
https://beam.apache.org/documentation/ml/about-ml/#unable-to-batch-tensor-elements\n";,
         "    tokens = self._tokenizer(text_input, return_tensors='pt', 
padding='max_length', max_length=512)\n",
         "    # Squeeze because tokenization adds an extra dimension, which is 
empty,\n",
         "    # in this case because we tokenize one element at a time.\n",
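The padding step that the comment in the hunk above describes can be shown with a small pure-Python sketch (a stand-in for tokenizer padding, with made-up token ids, not the actual `transformers` tokenizer call):

```python
# Sketch of padding token sequences to a fixed max length so every
# sequence has the same shape and the batch is stackable by the
# RunInference API. Token ids below are illustrative.
def pad_to_max_length(token_ids, max_length, pad_id=0):
    if len(token_ids) > max_length:
        return token_ids[:max_length]        # truncate overlong inputs
    return token_ids + [pad_id] * (max_length - len(token_ids))

batch = [[101, 7592, 102], [101, 2088, 999, 102]]
padded = [pad_to_max_length(seq, max_length=6) for seq in batch]
print(padded)  # [[101, 7592, 102, 0, 0, 0], [101, 2088, 999, 102, 0, 0]]
assert len({len(seq) for seq in padded}) == 1  # all rows now stackable
```

As the notebook comment notes, batching first and padding to the longest sequence in the batch wastes less space than always padding to the model's maximum length.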
diff --git a/examples/notebooks/beam-ml/speech_emotion_tensorflow.ipynb 
b/examples/notebooks/beam-ml/speech_emotion_tensorflow.ipynb
index 098cb150bfd..c2dfb06a6e6 100644
--- a/examples/notebooks/beam-ml/speech_emotion_tensorflow.ipynb
+++ b/examples/notebooks/beam-ml/speech_emotion_tensorflow.ipynb
@@ -132,7 +132,7 @@
         "* **[IPython](https://ipython.readthedocs.io/en/stable/index.html)**: 
Creates visualizations for multimedia content. Here we have used it for playing 
audio files.\n",
         "* **[Sklearn](https://scikit-learn.org/stable/index.html)**: Offers 
comprehensive tools for Machine Learning. Here we have used it for 
preprocessing and splitting the data.\n",
         "* **[TensorFlow](https://www.tensorflow.org/api_docs)** and 
**[Keras](https://keras.io/api/)**: Enables building and training complex 
Machine Learning and Deep Learning models.\n",
-        "* 
**[TFModelHandlerNumpy](https://beam.apache.org/documentation/sdks/python-machine-learning/#tensorflow)**:
 Defines the configuration used to load/use the model that we train. We use 
TFModelHandlerNumpy because the model was trained with TensorFlow and takes 
numpy arrays as input.\n",
+        "* 
**[TFModelHandlerNumpy](https://beam.apache.org/documentation/ml/about-ml/#tensorflow)**:
 Defines the configuration used to load/use the model that we train. We use 
TFModelHandlerNumpy because the model was trained with TensorFlow and takes 
numpy arrays as input.\n",
         "* 
**[RunInference](https://beam.apache.org/releases/pydoc/current/apache_beam.ml.inference.html#apache_beam.ml.inference.RunInference)**:
 Loads the model and obtains predictions as part of the Apache Beam pipeline. 
For more information, see docs on prediction and inference.\n",
         "* **[Apache Beam](https://beam.apache.org/documentation/)**: Builds a 
pipeline for Image Processing."
       ]
diff --git a/sdks/python/apache_beam/examples/inference/README.md 
b/sdks/python/apache_beam/examples/inference/README.md
index cd92d9c127e..3bb68440ed6 100644
--- a/sdks/python/apache_beam/examples/inference/README.md
+++ b/sdks/python/apache_beam/examples/inference/README.md
@@ -113,7 +113,7 @@ pip install skl2onnx
 
 ### Additional resources
 For more information, see the
-[Machine Learning](/documentation/sdks/python-machine-learning) and the
+[About Beam ML](/documentation/ml/about-ml) and the
 [RunInference 
transform](/documentation/transforms/python/elementwise/runinference) 
documentation.
 
 ---
diff --git a/sdks/python/apache_beam/ml/inference/__init__.py 
b/sdks/python/apache_beam/ml/inference/__init__.py
index 0433761ded8..3ba45ad7def 100644
--- a/sdks/python/apache_beam/ml/inference/__init__.py
+++ b/sdks/python/apache_beam/ml/inference/__init__.py
@@ -21,7 +21,7 @@ as an interface for adding unsupported frameworks.
 
 Note: on top of the frameworks captured in submodules below, Beam also has
 a supported TensorFlow model handler via the tfx-bsl library. See
-https://beam.apache.org/documentation/sdks/python-machine-learning/#tensorflow
+https://beam.apache.org/documentation/ml/about-ml/#tensorflow
 for more information on using TensorFlow in Beam.
 """
 
diff --git a/sdks/python/apache_beam/ml/inference/base.py 
b/sdks/python/apache_beam/ml/inference/base.py
index 90ba22fcd31..63038f9c9b7 100644
--- a/sdks/python/apache_beam/ml/inference/base.py
+++ b/sdks/python/apache_beam/ml/inference/base.py
@@ -143,7 +143,7 @@ class KeyModelPathMapping(Generic[KeyT]):
   information see the KeyedModelHandler documentation
   
https://beam.apache.org/releases/pydoc/current/apache_beam.ml.inference.base.html#apache_beam.ml.inference.base.KeyedModelHandler
   documentation and the website section on model updates
-  
https://beam.apache.org/documentation/sdks/python-machine-learning/#automatic-model-refresh
+  https://beam.apache.org/documentation/ml/about-ml/#automatic-model-refresh
   """
   keys: List[KeyT]
   update_path: str
@@ -224,7 +224,7 @@ class ModelHandler(Generic[ExampleT, PredictionT, ModelT]):
     used when a ModelHandler represents a single model, not multiple models.
     This will be true in most cases. For more information see the website
     section on model updates
-    
https://beam.apache.org/documentation/sdks/python-machine-learning/#automatic-model-refresh
+    https://beam.apache.org/documentation/ml/about-ml/#automatic-model-refresh
     """
     pass
 
@@ -239,7 +239,7 @@ class ModelHandler(Generic[ExampleT, PredictionT, ModelT]):
     the KeyedModelHandler documentation
     
https://beam.apache.org/releases/pydoc/current/apache_beam.ml.inference.base.html#apache_beam.ml.inference.base.KeyedModelHandler
     documentation and the website section on model updates
-    
https://beam.apache.org/documentation/sdks/python-machine-learning/#automatic-model-refresh
+    https://beam.apache.org/documentation/ml/about-ml/#automatic-model-refresh
     """
     pass
 
@@ -452,7 +452,7 @@ class KeyedModelHandler(Generic[KeyT, ExampleT, 
PredictionT, ModelT],
     KeyedModelHandlers support Automatic Model Refresh to update your model
     to a newer version without stopping your streaming pipeline. For an
     overview of this feature, see
-    
https://beam.apache.org/documentation/sdks/python-machine-learning/#automatic-model-refresh
+    https://beam.apache.org/documentation/ml/about-ml/#automatic-model-refresh
 
 
     To use this feature with a KeyedModelHandler that has many models per key,
@@ -465,7 +465,7 @@ class KeyedModelHandler(Generic[KeyT, ExampleT, 
PredictionT, ModelT],
     will update the model corresponding to keys 'k1' and 'k2' with path
     'update/path/1' and the model corresponding to 'k3' with 'update/path/2'.
     In order to do a side input update: (1) all restrictions mentioned in
-    
https://beam.apache.org/documentation/sdks/python-machine-learning/#automatic-model-refresh
+    https://beam.apache.org/documentation/ml/about-ml/#automatic-model-refresh
     must be met, (2) all update_paths must be non-empty, even if they are not
     being updated from their original values, and (3) the set of keys
     originally defined cannot change. This means that if originally you have
@@ -486,7 +486,7 @@ class KeyedModelHandler(Generic[KeyT, ExampleT, 
PredictionT, ModelT],
     memory (OOM) exception. To avoid this issue, use the parameter
     `max_models_per_worker_hint` to limit the number of models that are loaded
     at the same time. For more information about memory management, see
-    `Use a keyed `ModelHandler 
<https://beam.apache.org/documentation/sdks/python-machine-learning/#use-a-keyed-modelhandler>_`.
  # pylint: disable=line-too-long
+    `Use a keyed `ModelHandler 
<https://beam.apache.org/documentation/ml/about-ml/#use-a-keyed-modelhandler-object>_`.
  # pylint: disable=line-too-long
 
 
     Args:
@@ -498,7 +498,7 @@ class KeyedModelHandler(Generic[KeyT, ExampleT, 
PredictionT, ModelT],
         example, if your worker has 8 GB of memory provisioned and your workers
         take up 1 GB each, you should set this to 7 to allow all models to sit
         in memory with some buffer. For more information about memory 
management,
-        see `Use a keyed `ModelHandler 
<https://beam.apache.org/documentation/sdks/python-machine-learning/#use-a-keyed-modelhandler>_`.
  # pylint: disable=line-too-long
+        see `Use a keyed `ModelHandler 
<https://beam.apache.org/documentation/ml/about-ml/#use-a-keyed-modelhandler-object>_`.
  # pylint: disable=line-too-long
     """
     self._metrics_collectors: Dict[str, _MetricsCollector] = {}
     self._default_metrics_collector: _MetricsCollector = None
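The side-input update rules quoted in the `base.py` docstrings above (every `update_path` non-empty; the originally defined key set cannot change) can be sketched as a small validation helper. This helper is illustrative only and is not part of the Beam API:

```python
# Sketch of the side-input update restrictions described in the
# KeyedModelHandler docstring: all update_paths must be non-empty and the
# set of keys must match the keys originally configured. Illustrative
# helper, not part of apache_beam.
def validate_key_model_update(original_keys, updates):
    """updates: list of (keys, update_path) pairs, cf. KeyModelPathMapping."""
    updated_keys = set()
    for keys, path in updates:
        if not path:
            raise ValueError('all update_paths must be non-empty')
        updated_keys.update(keys)
    if updated_keys != set(original_keys):
        raise ValueError('the set of keys originally defined cannot change')
    return True

# Mirrors the docstring's example: 'k1'/'k2' get one new path, 'k3' another.
print(validate_key_model_update(
    ['k1', 'k2', 'k3'],
    [(['k1', 'k2'], 'update/path/1'), (['k3'], 'update/path/2')]))  # True
```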
diff --git a/website/www/site/content/en/documentation/ml/about-ml.md 
b/website/www/site/content/en/documentation/ml/about-ml.md
index 1753c0ed56c..70ad546c40a 100644
--- a/website/www/site/content/en/documentation/ml/about-ml.md
+++ b/website/www/site/content/en/documentation/ml/about-ml.md
@@ -61,7 +61,7 @@ To keep your model up to date and performing well as your 
data grows and evolves
 
 ## Use RunInference
 
-The [RunInference API](/documentation/sdks/python-machine-learning/) is a 
`PTransform` optimized for machine learning inferences that lets you 
efficiently use ML models in your pipelines. The API includes the following 
features:
+The RunInference API is a `PTransform` optimized for machine learning 
inferences that lets you efficiently use ML models in your pipelines. The API 
includes the following features:
 
 - To efficiently feed your model, dynamically batches inputs based on pipeline 
throughput using Apache Beam's `BatchElements` transform.
 - To balance memory and throughput usage, determines the optimal number of 
models to load using a central model manager. Shares these models across 
threads and processes as needed to maximize throughput.
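The dynamic-batching feature listed in the about-ml.md hunk above can be illustrated with a minimal generator. This is a simplified sketch of the idea behind `BatchElements`, not the transform itself; the real transform also adapts batch size to observed throughput:

```python
# Sketch of the batching idea behind Beam's BatchElements transform:
# group a stream of elements into batches up to a target size so the
# model receives efficiently sized inputs. Simplified and illustrative.
def batch_elements(elements, max_batch_size):
    batch = []
    for element in elements:
        batch.append(element)
        if len(batch) == max_batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch

print(list(batch_elements(range(7), max_batch_size=3)))
# [[0, 1, 2], [3, 4, 5], [6]]
```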
diff --git a/website/www/site/content/en/documentation/ml/inference-overview.md 
b/website/www/site/content/en/documentation/ml/inference-overview.md
index a79c9a6b01e..1821102c17f 100644
--- a/website/www/site/content/en/documentation/ml/inference-overview.md
+++ b/website/www/site/content/en/documentation/ml/inference-overview.md
@@ -38,7 +38,7 @@ Beam provides different ways to implement inference as part 
of your pipeline. Yo
   </tr>
 </table>
 
-The RunInference API is available with the Beam Python SDK versions 2.40.0 and 
later. You can use Apache Beam with the RunInference API to use machine 
learning (ML) models to do local and remote inference with batch and streaming 
pipelines. Starting with Apache Beam 2.40.0, PyTorch and Scikit-learn 
frameworks are supported. Tensorflow models are supported through `tfx-bsl`. 
For more deatils about using RunInference with Python, see [Machine Learning 
with Python](/documentation/sdks/pytho [...]
+The RunInference API is available with the Beam Python SDK versions 2.40.0 and 
later. You can use Apache Beam with the RunInference API to use machine 
learning (ML) models to do local and remote inference with batch and streaming 
pipelines. Starting with Apache Beam 2.40.0, PyTorch and Scikit-learn 
frameworks are supported. Tensorflow models are supported through `tfx-bsl`. 
For more details about using RunInference with Python, see [About Beam 
ML](/documentation/ml/about-ml).
 
 The RunInference API is available with the Beam Java SDK versions 2.41.0 and 
later through Apache Beam's [Multi-language Pipelines 
framework](/documentation/programming-guide/#multi-language-pipelines). For 
information about the Java wrapper transform, see 
[RunInference.java](https://github.com/apache/beam/blob/master/sdks/java/extensions/python/src/main/java/org/apache/beam/sdk/extensions/python/transforms/RunInference.java).
 To try it out, see the [Java Sklearn Mnist Classification exa [...]
 
@@ -47,7 +47,7 @@ You can create multiple types of transforms using the 
RunInference API: the API
 {{< table >}}
 | Task | Example |
 | ------- | ---------------|
-| I want to use the RunInference transform | [Modify a Python pipeline to use 
an ML 
model](/documentation/sdks/python-machine-learning/#modify-a-python-pipeline-to-use-an-ml-model)
 |
+| I want to use the RunInference transform | [Modify a Python pipeline to use 
an ML 
model](/documentation/ml/about-ml/#modify-a-python-pipeline-to-use-an-ml-model) 
|
 | I want to use RunInference with PyTorch | [Use RunInference with 
PyTorch](/documentation/transforms/python/elementwise/runinference-pytorch/) |
 | I want to use RunInference with Sklearn | [Use RunInference with 
Sklearn](/documentation/transforms/python/elementwise/runinference-sklearn/) |
 | I want to use pre-trained models (PyTorch, Scikit-learn, or TensorFlow) | 
[Use pre-trained models](/documentation/ml/about-ml/#use-pre-trained-models) |:
diff --git 
a/website/www/site/content/en/documentation/ml/multi-model-pipelines.md 
b/website/www/site/content/en/documentation/ml/multi-model-pipelines.md
index c42c8b8ae66..dcb96e17a46 100644
--- a/website/www/site/content/en/documentation/ml/multi-model-pipelines.md
+++ b/website/www/site/content/en/documentation/ml/multi-model-pipelines.md
@@ -33,7 +33,7 @@ all of those steps together by encapsulating them in a single 
Apache Beam Direct
 resilient and scalable end-to-end machine learning systems.
 
 To deploy your machine learning model in an Apache Beam pipeline, use
-the [`RunInferenceAPI`](/documentation/sdks/python-machine-learning/), which
+the [`RunInferenceAPI`](/documentation/ml/about-ml), which
 facilitates the integration of your model as a `PTransform` step in your DAG. 
Composing
 multiple `RunInference` transforms within a single DAG makes it possible to 
build a pipeline that consists
 of multiple ML models. In this way, Apache Beam supports the development of 
complex ML systems.
diff --git a/website/www/site/content/en/documentation/ml/overview.md 
b/website/www/site/content/en/documentation/ml/overview.md
index 63aaf0f86e2..3a49d40548e 100644
--- a/website/www/site/content/en/documentation/ml/overview.md
+++ b/website/www/site/content/en/documentation/ml/overview.md
@@ -54,7 +54,7 @@ Beam provides different ways to implement inference as part 
of your pipeline. Yo
 
 ### RunInference
 
-The RunInfernce API is available with the Beam Python SDK versions 2.40.0 and 
later. You can use Apache Beam with the RunInference API to use machine 
learning (ML) models to do local and remote inference with batch and streaming 
pipelines. Starting with Apache Beam 2.40.0, PyTorch and Scikit-learn 
frameworks are supported. Tensorflow models are supported through `tfx-bsl`. 
For more deatils about using RunInference with Python, see [Machine Learning 
with Python](/documentation/sdks/python [...]
+The RunInference API is available with the Beam Python SDK versions 2.40.0 and 
later. You can use Apache Beam with the RunInference API to use machine 
learning (ML) models to do local and remote inference with batch and streaming 
pipelines. Starting with Apache Beam 2.40.0, PyTorch and Scikit-learn 
frameworks are supported. Tensorflow models are supported through `tfx-bsl`. 
For more details about using RunInference, see [About Beam 
ML](/documentation/ml/about-ml).
 
 The RunInference API is available with the Beam Java SDK versions 2.41.0 and 
later through Apache Beam's [Multi-language Pipelines 
framework](/documentation/programming-guide/#multi-language-pipelines). For 
information about the Java wrapper transform, see 
[RunInference.java](https://github.com/apache/beam/blob/master/sdks/java/extensions/python/src/main/java/org/apache/beam/sdk/extensions/python/transforms/RunInference.java).
 To try it out, see the [Java Sklearn Mnist Classification exa [...]
 
@@ -63,7 +63,7 @@ You can create multiple types of transforms using the 
RunInference API: the API
 {{< table >}}
 | Task | Example |
 | ------- | ---------------|
-| I want to use the RunInference transform | [Modify a Python pipeline to use 
an ML 
model](/documentation/sdks/python-machine-learning/#modify-a-python-pipeline-to-use-an-ml-model)
 |
+| I want to use the RunInference transform | [Modify a Python pipeline to use 
an ML 
model](/documentation/ml/about-ml/#modify-a-python-pipeline-to-use-an-ml-model) 
|
 | I want to use RunInference with PyTorch | [Use RunInference with 
PyTorch](/documentation/transforms/python/elementwise/runinference-pytorch/) |
 | I want to use RunInference with Sklearn | [Use RunInference with 
Sklearn](/documentation/transforms/python/elementwise/runinference-sklearn/) |
 | I want to use pre-trained models (PyTorch, Scikit-learn, or TensorFlow) | 
[Use pre-trained models](/documentation/ml/about-ml/#use-pre-trained-models) |
diff --git a/website/www/site/content/en/documentation/sdks/python.md 
b/website/www/site/content/en/documentation/sdks/python.md
index 2902001066d..f5121832767 100644
--- a/website/www/site/content/en/documentation/sdks/python.md
+++ b/website/www/site/content/en/documentation/sdks/python.md
@@ -52,7 +52,7 @@ To integrate machine learning models into your pipelines for 
making inferences,
 [library from 
`tfx_bsl`](https://github.com/tensorflow/tfx-bsl/tree/master/tfx_bsl/beam).
 
 You can create multiple types of transforms using the RunInference API: the 
API takes multiple types of setup parameters from model handlers, and the 
parameter type determines the model implementation. For more information,
-see [Machine Learning](/documentation/sdks/python-machine-learning).
+see [About Beam ML](/documentation/ml/about-ml).
 
 [TensorFlow Extended (TFX)](https://www.tensorflow.org/tfx) is an end-to-end 
platform for deploying production ML pipelines. TFX is integrated with Beam. 
For more information, see [TFX user 
guide](https://www.tensorflow.org/tfx/guide).
 
diff --git 
a/website/www/site/content/en/documentation/transforms/python/elementwise/runinference.md
 
b/website/www/site/content/en/documentation/transforms/python/elementwise/runinference.md
index 369b7471442..47944b9a232 100644
--- 
a/website/www/site/content/en/documentation/transforms/python/elementwise/runinference.md
+++ 
b/website/www/site/content/en/documentation/transforms/python/elementwise/runinference.md
@@ -31,7 +31,7 @@ limitations under the License.
 
 Uses models to do local and remote inference. A `RunInference` transform 
performs inference on a `PCollection` of examples using a machine learning (ML) 
model. The transform outputs a `PCollection` that contains the input examples 
and output predictions. Available in Apache Beam 2.40.0 and later versions.
 
-For more information, read about Beam RunInference APIs on [Machine Learning 
with 
Python](https://beam.apache.org/documentation/sdks/python-machine-learning) 
page and explore [RunInference API pipeline 
examples](https://github.com/apache/beam/tree/master/sdks/python/apache_beam/examples/inference).
+For more information about Beam RunInference APIs, see the [About Beam 
ML](https://beam.apache.org/documentation/ml/about-ml) page and the 
[RunInference API 
pipeline](https://github.com/apache/beam/tree/master/sdks/python/apache_beam/examples/inference)
 examples.
 
 ## Examples
 
diff --git 
a/website/www/site/content/en/get-started/resources/learning-resources.md 
b/website/www/site/content/en/get-started/resources/learning-resources.md
index abd4566f4d3..14bd90ee80b 100644
--- a/website/www/site/content/en/get-started/resources/learning-resources.md
+++ b/website/www/site/content/en/get-started/resources/learning-resources.md
@@ -69,7 +69,7 @@ If you have additional material that you would like to see 
here, please let us k
 
 ### Machine Learning
 
-*   **[Machine Learning with Python using the RunInference 
API](/documentation/sdks/python-machine-learning/)** - Use Apache Beam with the 
RunInference API to use machine learning (ML) models to do local and remote 
inference with batch and streaming pipelines. Follow the [RunInference API 
pipeline 
examples](https://github.com/apache/beam/tree/master/sdks/python/apache_beam/examples/inference)
 to do image classification, image segmentation, language modeling, and MNIST 
digit classificatio [...]
+*   **[Machine Learning using the RunInference 
API](/documentation/ml/about-ml)** - Use Apache Beam with the RunInference API 
to use machine learning (ML) models to do local and remote inference with batch 
and streaming pipelines. Follow the [RunInference API pipeline 
examples](https://github.com/apache/beam/tree/master/sdks/python/apache_beam/examples/inference)
 to do image classification, image segmentation, language modeling, and MNIST 
digit classification. See examples of [RunInferen [...]
 *   **[Machine Learning Preprocessing and 
Prediction](https://cloud.google.com/dataflow/examples/molecules-walkthrough)** 
- Predict the molecular energy from data stored in the [Spatial Data 
File](https://en.wikipedia.org/wiki/Spatial_Data_File) (SDF) format. Train a 
[TensorFlow](https://www.tensorflow.org/) model with 
[tf.Transform](https://github.com/tensorflow/transform) for preprocessing in 
Python. This also shows how to create batch and streaming prediction pipelines 
in Apache Beam.
 *   **[Machine Learning 
Preprocessing](https://cloud.google.com/blog/products/ai-machine-learning/pre-processing-tensorflow-pipelines-tftransform-google-cloud)**
 - Find the optimal parameter settings for simulated physical machines like a 
bottle filler or cookie machine. The goal of each simulated machine is to have 
the same input/output of the actual machine, making it a "digital twin". This 
uses [tf.Transform](https://github.com/tensorflow/transform) for preprocessing.
 
