yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r950466702
##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,30 @@
def _load_model(
model_class: torch.nn.Module, state_dict_path, device, **model_params):
model = model_class(**model_params)
+
+ if device == torch.device('cuda') and not torch.cuda.is_available():
+ logging.warning(
+ "Specified 'GPU', but could not find device. Switching to CPU.")
+ device = torch.device('cpu')
+
+ try:
+ logging.info("Reading state_dict_path %s onto %s", state_dict_path, device)
+ file = FileSystems.open(state_dict_path, 'rb')
+ state_dict = torch.load(file, map_location=device)
+ except RuntimeError as e:
Review Comment:
`RuntimeError` is the type of the exception PyTorch raises here, so we can't go more specific than that:
```
RuntimeError: Attempting to deserialize object on a CUDA device but
torch.cuda.is_available() is False. If you are running on a CPU-only machine,
please use torch.load with map_location=torch.device('cpu') to map your
storages to the CPU.
```
Yes, I can change it so that only the `torch.load()` line is inside the `try` block.
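
For illustration, a minimal sketch of what that narrowed `try` block could look like, following the diff above. The `except` body and the tail of the function (loading the state dict into the model, moving it to the device) are not visible in the quoted diff, so those parts are assumptions here:
```python
import logging

import torch
from apache_beam.io.filesystems import FileSystems


def _load_model(
    model_class: torch.nn.Module, state_dict_path, device, **model_params):
  model = model_class(**model_params)

  # Fall back to CPU if CUDA was requested but is unavailable.
  if device == torch.device('cuda') and not torch.cuda.is_available():
    logging.warning(
        "Specified 'GPU', but could not find device. Switching to CPU.")
    device = torch.device('cpu')

  logging.info("Reading state_dict_path %s onto %s", state_dict_path, device)
  file = FileSystems.open(state_dict_path, 'rb')
  try:
    # torch.load() is the only call that can raise the CUDA-related
    # RuntimeError quoted above, so it is the only line guarded here.
    state_dict = torch.load(file, map_location=device)
  except RuntimeError:
    # Placeholder: the real handler is outside the quoted diff.
    raise

  # Assumed tail, not shown in the diff above.
  model.load_state_dict(state_dict)
  model.to(device)
  return model
```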
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]