AnandInguva commented on code in PR #25321:
URL: https://github.com/apache/beam/pull/25321#discussion_r1102921175


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -81,11 +81,8 @@ def _validate_constructor_args(
 
 
 def _load_model(
-    model_class: Optional[Callable[..., torch.nn.Module]],
-    state_dict_path: Optional[str],
-    device: torch.device,
-    model_params: Optional[Dict[str, Any]],
-    torch_script_model_path: Optional[str]):
+    model_class, state_dict_path, device, model_params,
+    torch_script_model_path):

Review Comment:
   since `model_params` accepts `Optional[Dict[str, Any]]` in the model handler 
constructor, we check in the constructor whether it is `None` and replace `None` 
with `{}`. We then pass `model_params`, which is always a `Dict`, to 
`_load_model`.
   
   Here:
   1. In `_load_model`, we would still need to annotate `model_params` as 
`Optional[Dict[str, Any]]` instead of `Dict[str, Any]` (even though we know it 
is always a `dict`), because `mypy` cannot deduce that on its own.
   2. If I annotate it as `Optional[Dict[str, Any]]`, then `mypy` assumes it 
could be `None` at times, and the type check fails at `**model_params` with 
`after **, a Map is expected`. We know it is always a mapping, but we need to 
match the type of `_load_model`'s `model_params` with the upstream 
`model_params` type.
   
   Either way, we get an error; it is not really a bug on our side. So I removed 
the type hints, since this is an internal function and we already provide type 
hints in the upstream methods that call `_load_model`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
