This is an automated email from the ASF dual-hosted git repository.

git-site-role pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/beam.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new d09ae8b626d Publishing website 2022/09/14 22:17:13 at commit bb7468f
d09ae8b626d is described below

commit d09ae8b626d73fdf0b4694a17564fd1cf36f1932
Author: jenkins <bui...@apache.org>
AuthorDate: Wed Sep 14 22:17:13 2022 +0000

    Publishing website 2022/09/14 22:17:13 at commit bb7468f
---
 .../documentation/sdks/python-machine-learning/index.html | 15 +++++++++++++--
 website/generated-content/sitemap.xml                     |  2 +-
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/website/generated-content/documentation/sdks/python-machine-learning/index.html b/website/generated-content/documentation/sdks/python-machine-learning/index.html
index 4a418f5710f..9c80b0f81c8 100644
--- a/website/generated-content/documentation/sdks/python-machine-learning/index.html
+++ b/website/generated-content/documentation/sdks/python-machine-learning/index.html
@@ -19,7 +19,7 @@
 function addPlaceholder(){$('input:text').attr('placeholder',"What are you looking for?");}
 function endSearch(){var search=document.querySelector(".searchBar");search.classList.add("disappear");var icons=document.querySelector("#iconsBar");icons.classList.remove("disappear");}
 function blockScroll(){$("body").toggleClass("fixedPosition");}
-function openMenu(){addPlaceholder();blockScroll();}</script><div class="clearfix container-main-content"><div class="section-nav closed" data-offset-top=90 data-offset-bottom=500><span class="section-nav-back glyphicon glyphicon-menu-left"></span><nav><ul class=section-nav-list data-section-nav><li><span class=section-nav-list-main-title>Languages</span></li><li><span class=section-nav-list-title>Java</span><ul class=section-nav-list><li><a href=/documentation/sdks/java/>Java SDK overvi [...]
+function openMenu(){addPlaceholder();blockScroll();}</script><div class="clearfix container-main-content"><div class="section-nav closed" data-offset-top=90 data-offset-bottom=500><span class="section-nav-back glyphicon glyphicon-menu-left"></span><nav><ul class=section-nav-list data-section-nav><li><span class=section-nav-list-main-title>Languages</span></li><li><span class=section-nav-list-title>Java</span><ul class=section-nav-list><li><a href=/documentation/sdks/java/>Java SDK overvi [...]
 Pydoc</a></td></table><p><br><br><br></p><p>You can use Apache Beam with the RunInference API to use machine learning (ML) models to do local and remote inference with batch and streaming pipelines. Starting with Apache Beam 2.40.0, PyTorch and Scikit-learn frameworks are supported. You can create multiple types of transforms using the RunInference API: the API takes multiple types of setup parameters from model handlers, and the parameter type determines the model implementation.</p><h2 [...]
 <a href=https://github.com/apache/beam/blob/master/sdks/python/apache_beam/utils/shared.py#L20><code>Shared</code> class documentation</a>.</p><h3 id=multi-model-pipelines>Multi-model pipelines</h3><p>The RunInference API can be composed into multi-model pipelines. Multi-model pipelines can be useful for A/B testing or for building out ensembles made up of models that perform tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, language detection, corefer [...]
 with pipeline as p:
@@ -39,7 +39,18 @@ from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerKeyedT
    data = p | 'Read' &gt;&gt; beam.ReadFromSource('a_source')
    model_a_predictions = data | RunInference(&lt;model_handler_A&gt;)
    model_b_predictions = model_a_predictions | beam.Map(some_post_processing) | RunInference(&lt;model_handler_B&gt;)
-</code></pre><p>Where <code>model_handler_A</code> and <code>model_handler_B</code> are the model handler setup code.</p><h3 id=use-a-keyed-modelhandler>Use a keyed ModelHandler</h3><p>If a key is attached to the examples, wrap the <code>KeyedModelHandler</code> around the <code>ModelHandler</code> object:</p><pre><code>from apache_beam.ml.inference.base import KeyedModelHandler
+</code></pre><p>Where <code>model_handler_A</code> and <code>model_handler_B</code> are the model handler setup code.</p><h4 id=use-resource-hints-for-different-model-requirements>Use Resource Hints for Different Model Requirements</h4><p>When using multiple models in a single pipeline, different models may have different memory or worker SKU requirements.
+Resource hints allow you to provide information to a runner about the compute resource requirements for each step in your
+pipeline.</p><p>For example, the following snippet extends the previous ensemble pattern with hints for each RunInference call
+to specify RAM and hardware accelerator requirements:</p><pre><code>with pipeline as p:
+   data = p | 'Read' &gt;&gt; beam.ReadFromSource('a_source')
+   model_a_predictions = data | RunInference(&lt;model_handler_A&gt;).with_resource_hints(min_ram=&quot;20GB&quot;)
+   model_b_predictions = model_a_predictions
+      | beam.Map(some_post_processing)
+      | RunInference(&lt;model_handler_B&gt;).with_resource_hints(
+         min_ram=&quot;4GB&quot;,
+         accelerator=&quot;type:nvidia-tesla-k80;count:1;install-nvidia-driver&quot;)
+</code></pre><p>For more information on resource hints, see <a href=https://beam.apache.org/documentation/runtime/resource-hints/>Resource hints</a>.</p><h3 id=use-a-keyed-modelhandler>Use a keyed ModelHandler</h3><p>If a key is attached to the examples, wrap the <code>KeyedModelHandler</code> around the <code>ModelHandler</code> object:</p><pre><code>from apache_beam.ml.inference.base import KeyedModelHandler
 keyed_model_handler = KeyedModelHandler(PytorchModelHandlerTensor(...))
 with pipeline as p:
    data = p | beam.Create([
diff --git a/website/generated-content/sitemap.xml b/website/generated-content/sitemap.xml
index 2ba465d3cdf..2de89808af0 100644
--- a/website/generated-content/sitemap.xml
+++ b/website/generated-content/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml"><url><loc>/blog/beam-2.41.0/</loc><lastmod>2022-08-23T21:36:06+00:00</lastmod></url><url><loc>/categories/blog/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/blog/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/categories/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/catego [...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml"><url><loc>/blog/beam-2.41.0/</loc><lastmod>2022-08-23T21:36:06+00:00</lastmod></url><url><loc>/categories/blog/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/blog/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/categories/</loc><lastmod>2022-09-02T14:00:10-04:00</lastmod></url><url><loc>/catego [...]
\ No newline at end of file
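For readers skimming this diff: the "keyed ModelHandler" pattern the docs change describes can be illustrated with a toy, Beam-free sketch. Each example carries a key, inference runs on the value only, and the key is re-attached to the prediction so results can be matched back to their inputs. All names below are illustrative stand-ins, not Beam's API.

```python
from typing import Callable, Iterable, List, Tuple

def run_inference(model: Callable[[int], int], value: int) -> int:
    # Stand-in for an unkeyed model call; here the "model" is just a function.
    return model(value)

def keyed_run_inference(
    model: Callable[[int], int],
    keyed_examples: Iterable[Tuple[str, int]],
) -> List[Tuple[str, int]]:
    # Wrap the unkeyed call: strip each key, run inference on the value,
    # then pair the key with the prediction.
    return [(key, run_inference(model, value)) for key, value in keyed_examples]

if __name__ == "__main__":
    double = lambda x: 2 * x  # trivial "model"
    examples = [("example_1", 3), ("example_2", 5)]
    print(keyed_run_inference(double, examples))
```

In Beam itself this wrapping is what `KeyedModelHandler(...)` does around a `ModelHandler`, as shown in the documentation change above.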
