gemini-code-assist[bot] commented on code in PR #38110:
URL: https://github.com/apache/beam/pull/38110#discussion_r3086235636


##########
sdks/python/apache_beam/yaml/yaml_ml.py:
##########
@@ -282,6 +282,52 @@ def inference_output_type(self):
                                           ('model_id', Optional[str])])
 
 
+@ModelHandlerProvider.register_handler_type('HuggingFacePipeline')
+class HuggingFacePipelineProvider(ModelHandlerProvider):
+  def __init__(
+      self,
+      task: str = "",
+      model: str = "",
+      preprocess: Optional[dict[str, str]] = None,
+      postprocess: Optional[dict[str, str]] = None,
+      device: Optional[str] = None,
+      inference_fn: Optional[dict[str, str]] = None,
+      load_pipeline_args: Optional[dict[str, Any]] = None,

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   The default values for `task` and `model` should be `None` instead of empty 
strings to correctly handle optional parameters in the underlying HuggingFace 
pipeline. Additionally, the `device` parameter should allow for integer values 
(GPU indices), so the type hint should be updated to `Any` or a union of `str` 
and `int`.
   
   ```suggestion
         task: Optional[str] = None,
         model: Optional[str] = None,
         preprocess: Optional[dict[str, str]] = None,
         postprocess: Optional[dict[str, str]] = None,
         device: Optional[Any] = None,
         inference_fn: Optional[dict[str, str]] = None,
         load_pipeline_args: Optional[dict[str, Any]] = None,
   ```
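   
   For context, here is a minimal sketch (assuming only that `transformers` is installed; the model name is purely illustrative) of how the underlying `transformers.pipeline` factory treats these values differently:
   
   ```python
   from transformers import pipeline
   
   # task=None lets transformers infer the task from the model's config,
   # which is the behavior an optional parameter should preserve.
   pipe = pipeline(
       task=None, model="distilbert-base-uncased-finetuned-sst-2-english")
   print(pipe("Beam YAML is great!"))
   
   # task="" is instead treated as an unknown task name and raises at
   # construction time:
   # pipeline(task="", model=None)  # KeyError: unknown task
   
   # device accepts an int GPU index as well as a string identifier:
   # pipeline("sentiment-analysis", device=0)        # first GPU
   # pipeline("sentiment-analysis", device="cuda:0")
   ```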



##########
sdks/python/apache_beam/yaml/yaml_ml.py:
##########
@@ -282,6 +282,52 @@ def inference_output_type(self):
                                           ('model_id', Optional[str])])
 
 
+@ModelHandlerProvider.register_handler_type('HuggingFacePipeline')
+class HuggingFacePipelineProvider(ModelHandlerProvider):
+  def __init__(
+      self,
+      task: str = "",
+      model: str = "",
+      preprocess: Optional[dict[str, str]] = None,
+      postprocess: Optional[dict[str, str]] = None,
+      device: Optional[str] = None,
+      inference_fn: Optional[dict[str, str]] = None,
+      load_pipeline_args: Optional[dict[str, Any]] = None,
+      **kwargs):
+    try:
+      from apache_beam.ml.inference.huggingface_inference import HuggingFacePipelineModelHandler
+    except ImportError:
+      raise ValueError(
+          'Unable to import HuggingFacePipelineModelHandler. Please '
+          'install transformers dependencies.')
+
+    kwargs = {k: v for k, v in kwargs.items() if not k.startswith('_')}
+
+    inference_fn_obj = self.parse_processing_transform(
+        inference_fn, 'inference_fn') if inference_fn else None
+
+    handler_kwargs = {}
+    if inference_fn_obj:
+      handler_kwargs['inference_fn'] = inference_fn_obj
+
+    _handler = HuggingFacePipelineModelHandler(
+        task=task,
+        model=model,
+        device=device,
+        load_pipeline_args=load_pipeline_args,
+        **handler_kwargs,
+        **kwargs)
+
+    super().__init__(_handler, preprocess, postprocess)
+
+  @staticmethod
+  def validate(model_handler_spec):
+    pass

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   The `validate` method should be implemented to ensure that at least one of 
`task` or `model` is provided in the configuration. This provides better error 
messages at pipeline construction time rather than failing during execution.
   
   ```suggestion
     def validate(model_handler_spec):
       config = model_handler_spec.get('config', {})
       if not config.get('task') and not config.get('model'):
         raise ValueError(
             "HuggingFacePipeline requires either 'task' or 'model' to be 
specified.")
   ```
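   
   A standalone illustration of the suggested check (the `{'config': {...}}` spec shape is assumed from the suggestion above, not verified against the Beam codebase):
   
   ```python
   def validate(model_handler_spec):
     config = model_handler_spec.get('config', {})
     if not config.get('task') and not config.get('model'):
       raise ValueError(
           "HuggingFacePipeline requires either 'task' or 'model' to be "
           "specified.")
   
   validate({'config': {'task': 'sentiment-analysis'}})  # OK
   validate({'config': {'model': 'gpt2'}})               # OK
   try:
     validate({'config': {}})
   except ValueError as e:
     print(e)  # surfaces the error at construction time, as intended
   ```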


