This is an automated email from the ASF dual-hosted git repository.

git-site-role pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/beam.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 8bb4c428242 Publishing website 2023/04/18 22:17:00 at commit 1c63a02
8bb4c428242 is described below

commit 8bb4c42824273e9e4740f8b5859ac59ef0b23b19
Author: jenkins <bui...@apache.org>
AuthorDate: Tue Apr 18 22:17:01 2023 +0000

    Publishing website 2023/04/18 22:17:00 at commit 1c63a02
---
 website/generated-content/documentation/index.xml                       | 2 +-
 website/generated-content/documentation/ml/about-ml/index.html          | 2 +-
 .../documentation/sdks/python-machine-learning/index.html               | 2 +-
 website/generated-content/sitemap.xml                                   | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/website/generated-content/documentation/index.xml 
b/website/generated-content/documentation/index.xml
index 39a0289a1fe..5defb0ce992 100644
--- a/website/generated-content/documentation/index.xml
+++ b/website/generated-content/documentation/index.xml
@@ -106,7 +106,7 @@ that illustrates running Scikit-learn models with Apache 
Beam.&lt;/p>
 &lt;li>Use tensorflow 2.7 or later.&lt;/li>
 &lt;li>Pass the path of the model to the TensorFlow 
&lt;code>ModelHandler&lt;/code> by using 
&lt;code>model_uri=&amp;lt;path_to_trained_model&amp;gt;&lt;/code>.&lt;/li>
 &lt;li>Alternatively, you can pass the path to saved weights of the trained 
model, a function to build the model using 
&lt;code>create_model_fn=&amp;lt;function&amp;gt;&lt;/code>, and set the 
&lt;code>model_type=ModelType.SAVED_WEIGHTS&lt;/code>.
-See &lt;a 
href="https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow_with_tensorflowhub.ipynb";>this
 notebook&lt;/a> that illustrates running Tensorflow models with Built-in model 
handlers.&lt;/li>
+See &lt;a 
href="https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow.ipynb";>this
 notebook&lt;/a> that illustrates running Tensorflow models with Built-in model 
handlers.&lt;/li>
 &lt;/ul>
 &lt;/li>
 &lt;li>Using &lt;code>tfx_bsl&lt;/code>.
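
The documentation in the hunk above describes Beam's built-in TensorFlow
model handlers. As a minimal sketch of the usage it describes, assuming a
placeholder SavedModel path gs://my-bucket/model and toy numpy inputs
(neither is part of this commit):

    import apache_beam as beam
    import numpy as np
    from apache_beam.ml.inference.base import RunInference
    from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerNumpy

    # Placeholder path; point model_uri at any trained TF SavedModel.
    model_handler = TFModelHandlerNumpy(model_uri="gs://my-bucket/model")

    with beam.Pipeline() as p:
        _ = (p
             | beam.Create([np.array([1.0], dtype=np.float32),
                            np.array([2.0], dtype=np.float32)])
             | RunInference(model_handler)
             | beam.Map(print))

For the saved-weights variant mentioned above, the same handler instead
takes create_model_fn=<function> together with
model_type=ModelType.SAVED_WEIGHTS.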
diff --git a/website/generated-content/documentation/ml/about-ml/index.html 
b/website/generated-content/documentation/ml/about-ml/index.html
index 8e4e61e1f6a..e92768efd5a 100644
--- a/website/generated-content/documentation/ml/about-ml/index.html
+++ b/website/generated-content/documentation/ml/about-ml/index.html
@@ -29,7 +29,7 @@ that illustrates running PyTorch models with Apache 
Beam.</p><h4 id=scikit-learn
 <code>model_uri=&lt;path_to_pickled_file></code> and <code>model_file_type: 
&lt;ModelFileType></code>, where you can specify
 <code>ModelFileType.PICKLE</code> or <code>ModelFileType.JOBLIB</code>, 
depending on how the model was serialized.</li></ol><p>See <a 
href=https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_sklearn.ipynb>this
 notebook</a>
 that illustrates running Scikit-learn models with Apache Beam.</p><h4 
id=tensorflow>TensorFlow</h4><p>To use TensorFlow with the RunInference API, 
you have two options:</p><ol><li>Use the built-in TensorFlow Model Handlers in 
Apache Beam SDK - <code>TFModelHandlerNumpy</code> and 
<code>TFModelHandlerTensor</code>.<ul><li>Depending on the type of input for 
your model, use <code>TFModelHandlerNumpy</code> for <code>numpy</code> input 
and <code>TFModelHandlerTensor</code> for <code>tf.Tenso [...]
-See <a 
href=https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow_with_tensorflowhub.ipynb>this
 notebook</a> that illustrates running Tensorflow models with Built-in model 
handlers.</li></ul></li><li>Using <code>tfx_bsl</code>.<ul><li>Use this 
approach if your model input is of type <code>tf.Example</code>.</li><li>Use 
<code>tfx_bsl</code> version 1.10.0 or later.</li><li>Create a model handler 
using <code>tfx_bsl.public.beam.run_inference.CreateM [...]
+See <a 
href=https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow.ipynb>this
 notebook</a> that illustrates running Tensorflow models with Built-in model 
handlers.</li></ul></li><li>Using <code>tfx_bsl</code>.<ul><li>Use this 
approach if your model input is of type <code>tf.Example</code>.</li><li>Use 
<code>tfx_bsl</code> version 1.10.0 or later.</li><li>Create a model handler 
using <code>tfx_bsl.public.beam.run_inference.CreateModelHandler()</code [...]
 See <a 
href=https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow.ipynb>this
 notebook</a>
 that illustrates running TensorFlow models with Apache Beam and 
tfx-bsl.</li></ul></li></ol><h2 id=automatic-model-refresh>Automatic model 
refresh</h2><p>To automatically update the model being used with the 
RunInference <code>PTransform</code> without stopping the pipeline, pass a <a 
href=https://beam.apache.org/releases/pydoc/current/apache_beam.ml.inference.base.html#apache_beam.ml.inference.base.ModelMetadata><code>ModelMetadata</code></a>
 side input <code>PCollection</code> to the R [...]
 an update to the side input. This could happen with global windowed side 
inputs with data driven triggers, such as <code>AfterCount</code> and 
<code>AfterProcessingTime</code>. Until the side input is updated, emit the 
default or initial model ID that is used to pass the respective 
<code>ModelHandler</code> as a side input.</p><h2 id=custom-inference>Custom 
Inference</h2><p>The RunInference API doesn&rsquo;t currently support making 
remote inference calls using, for example, the Natural  [...]
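
Both changed pages also document automatic model refresh through a
ModelMetadata side input. A minimal streaming sketch, assuming a
hypothetical model glob gs://my-bucket/models/*, a placeholder Pub/Sub
subscription, and a hypothetical parse_to_numpy helper:

    import apache_beam as beam
    from apache_beam.ml.inference.base import RunInference
    from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerNumpy
    from apache_beam.ml.inference.utils import WatchFilePattern

    model_handler = TFModelHandlerNumpy(model_uri="gs://my-bucket/models/initial")

    with beam.Pipeline() as p:
        # WatchFilePattern polls the glob and emits ModelMetadata whenever a
        # newer matching file appears; RunInference hot-swaps the model
        # without stopping the pipeline.
        model_updates = p | WatchFilePattern(file_pattern="gs://my-bucket/models/*")
        _ = (p
             | beam.io.ReadFromPubSub(  # placeholder streaming source
                 subscription="projects/p/subscriptions/s")
             | beam.Map(parse_to_numpy)  # hypothetical bytes-to-numpy parser
             | RunInference(model_handler, model_metadata_pcoll=model_updates))

Until the side input emits its first update, RunInference serves the
default model passed to the ModelHandler, matching the behavior described
above.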
diff --git 
a/website/generated-content/documentation/sdks/python-machine-learning/index.html
 
b/website/generated-content/documentation/sdks/python-machine-learning/index.html
index 04aa8a31a89..8b7811f281a 100644
--- 
a/website/generated-content/documentation/sdks/python-machine-learning/index.html
+++ 
b/website/generated-content/documentation/sdks/python-machine-learning/index.html
@@ -39,7 +39,7 @@ that illustrates running PyTorch models with Apache 
Beam.</p><h3 id=scikit-learn
 <code>model_uri=&lt;path_to_pickled_file></code> and <code>model_file_type: 
&lt;ModelFileType></code>, where you can specify
 <code>ModelFileType.PICKLE</code> or <code>ModelFileType.JOBLIB</code>, 
depending on how the model was serialized.</li></ol><p>See <a 
href=https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_sklearn.ipynb>this
 notebook</a>
 that illustrates running Scikit-learn models with Apache Beam.</p><h3 
id=tensorflow>TensorFlow</h3><p>To use TensorFlow with the RunInference API, 
you have two options:</p><ol><li>Use the built-in TensorFlow Model Handlers in 
Apache Beam SDK - <code>TFModelHandlerNumpy</code> and 
<code>TFModelHandlerTensor</code>.<ul><li>Depending on the type of input for 
your model, use <code>TFModelHandlerNumpy</code> for <code>numpy</code> input 
and <code>TFModelHandlerTensor</code> for <code>tf.Tenso [...]
-See <a 
href=https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow_with_tensorflowhub.ipynb>this
 notebook</a> that illustrates running Tensorflow models with Built-in model 
handlers.</li></ul></li><li>Using <code>tfx_bsl</code>.<ul><li>Use this 
approach if your model input is of type <code>tf.Example</code>.</li><li>Use 
<code>tfx_bsl</code> version 1.10.0 or later.</li><li>Create a model handler 
using <code>tfx_bsl.public.beam.run_inference.CreateM [...]
+See <a 
href=https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow.ipynb>this
 notebook</a> that illustrates running Tensorflow models with Built-in model 
handlers.</li></ul></li><li>Using <code>tfx_bsl</code>.<ul><li>Use this 
approach if your model input is of type <code>tf.Example</code>.</li><li>Use 
<code>tfx_bsl</code> version 1.10.0 or later.</li><li>Create a model handler 
using <code>tfx_bsl.public.beam.run_inference.CreateModelHandler()</code [...]
 See <a 
href=https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow.ipynb>this
 notebook</a>
 that illustrates running TensorFlow models with Apache Beam and 
tfx-bsl.</li></ul></li></ol><h2 id=use-custom-models>Use custom 
models</h2><p>If you would like to use a model that isn&rsquo;t specified by 
one of the supported frameworks, the RunInference API is designed flexibly to 
allow you to use any custom machine learning models.
 You only need to create your own <code>ModelHandler</code> or 
<code>KeyedModelHandler</code> with logic to load your model and use it to run 
the inference.</p><p>A simple example can be found in <a 
href=https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_custom_inference.ipynb>this
 notebook</a>.
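
For the "Use custom models" section touched above, a minimal sketch of a
custom handler, using a toy doubling function as a stand-in for a real
model (nothing below is from the commit itself):

    from typing import Any, Iterable, Optional, Sequence

    import apache_beam as beam
    from apache_beam.ml.inference.base import (ModelHandler, PredictionResult,
                                               RunInference)

    class DoublingModelHandler(ModelHandler[float, PredictionResult, Any]):
        # Hypothetical handler: load_model would normally deserialize a
        # trained model from disk or a remote store.
        def load_model(self) -> Any:
            return lambda x: 2 * x  # toy stand-in for a trained model

        def run_inference(self, batch: Sequence[float], model: Any,
                          inference_args: Optional[dict] = None
                          ) -> Iterable[PredictionResult]:
            # Run the loaded "model" on each example in the batch.
            for example in batch:
                yield PredictionResult(example, model(example))

    with beam.Pipeline() as p:
        _ = (p
             | beam.Create([1.0, 2.0, 3.0])
             | RunInference(DoublingModelHandler())
             | beam.Map(print))

Wrapping the handler in KeyedModelHandler works the same way when elements
arrive as (key, example) pairs.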
diff --git a/website/generated-content/sitemap.xml 
b/website/generated-content/sitemap.xml
index 9103d943b91..a4979c10191 100644
--- a/website/generated-content/sitemap.xml
+++ b/website/generated-content/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"; 
xmlns:xhtml="http://www.w3.org/1999/xhtml";><url><loc>/blog/beam-2.46.0/</loc><lastmod>2023-04-18T11:41:46-04:00</lastmod></url><url><loc>/categories/blog/</loc><lastmod>2023-04-18T11:41:46-04:00</lastmod></url><url><loc>/blog/</loc><lastmod>2023-04-18T11:41:46-04:00</lastmod></url><url><loc>/categories/</loc><lastmod>2023-04-18T11:41:46-04:00</lastmod></url><url><loc>/catego
 [...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"; 
xmlns:xhtml="http://www.w3.org/1999/xhtml";><url><loc>/blog/beam-2.46.0/</loc><lastmod>2023-04-18T16:44:37-04:00</lastmod></url><url><loc>/categories/blog/</loc><lastmod>2023-04-18T16:44:37-04:00</lastmod></url><url><loc>/blog/</loc><lastmod>2023-04-18T16:44:37-04:00</lastmod></url><url><loc>/categories/</loc><lastmod>2023-04-18T16:44:37-04:00</lastmod></url><url><loc>/catego
 [...]
\ No newline at end of file
