damccorm commented on code in PR #37112:
URL: https://github.com/apache/beam/pull/37112#discussion_r2734035992
##########
sdks/python/apache_beam/utils/multi_process_shared.py:
##########
@@ -200,9 +226,99 @@ def __call__(self, *args, **kwargs):
   def __getattr__(self, name):
     return getattr(self._proxyObject, name)
 
+  def __setstate__(self, state):
+    self.__dict__.update(state)
+
+  def __getstate__(self):
+    return self.__dict__
Review Comment:
Does this mean we're also double proxying the data (once from the client
to the model manager, once from the model manager to the model manager
process)?

> Otherwise we will need to make RunInference do the work to manage the
> model instances to avoid this pattern. WDYT?

I think this is OK - it shouldn't need to be a ton of code (basically a
"check in before inference and after inference" step), and I think it
will end up being more efficient.
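(A rough sketch of the "check in before inference and after inference" pattern described above. Everything here is hypothetical illustration - `ModelManager`, `acquire`/`release`, and `run_inference` are invented names, not Beam APIs:)

```python
class ModelManager:
  """Hypothetical manager that lazily loads a model and tracks usage."""
  def __init__(self, loader):
    self._loader = loader  # zero-arg callable that builds the model
    self._model = None
    self._refcount = 0

  def acquire(self):
    # Check out the model before inference, loading it on first use.
    if self._model is None:
      self._model = self._loader()
    self._refcount += 1
    return self._model

  def release(self):
    # Check the model back in after inference.
    self._refcount -= 1


def run_inference(manager, batch):
  model = manager.acquire()  # check in before inference
  try:
    return [model(x) for x in batch]
  finally:
    manager.release()        # check in after inference
```

The appeal of this shape is that the bookkeeping stays in two small hooks around the inference call, rather than proxying every attribute access across processes.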
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]