AMOOOMA commented on code in PR #37112:
URL: https://github.com/apache/beam/pull/37112#discussion_r2734225727


##########
sdks/python/apache_beam/utils/multi_process_shared.py:
##########
@@ -200,9 +226,99 @@ def __call__(self, *args, **kwargs):
   def __getattr__(self, name):
     return getattr(self._proxyObject, name)
 
+  def __setstate__(self, state):
+    self.__dict__.update(state)
+
+  def __getstate__(self):
+    return self.__dict__

Review Comment:
   The data would go directly from the client to the model; the model manager will just hand the proxy object of the model instance directly to RunInference.
   
   I also just realized we might need the proxy to hold on to the proxy/reference of the model instances, because otherwise sharing the model instances across different SDK harnesses will be challenging, and we would probably end up storing the same info as the proxy object (uri, port, etc.) anyway.
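   A rough sketch of what I mean (class and attribute names are hypothetical, not Beam's actual API): the proxy holds a live reference to the model instance, but `__getstate__` drops it so only the connection info (uri/port) is pickled, and a copy deserialized in another SDK harness reattaches lazily to the same shared instance.
   
   ```python
   # Hypothetical sketch, not Beam's actual API: a picklable proxy that holds
   # a reference to a shared model instance. Only the connection info survives
   # pickling; the live reference is re-established lazily after unpickling.
   class ModelProxy:
     def __init__(self, uri, port):
       self._uri = uri
       self._port = port
       self._model = None  # live reference; never pickled
   
     def _attach(self):
       # Placeholder for connecting to the shared instance at
       # (self._uri, self._port); real code would go through the manager.
       self._model = ('connected', self._uri, self._port)
   
     @property
     def model(self):
       if self._model is None:
         self._attach()
       return self._model
   
     def __getstate__(self):
       # Drop the live reference; keep only what is needed to reconnect.
       state = self.__dict__.copy()
       state['_model'] = None
       return state
   
     def __setstate__(self, state):
       self.__dict__.update(state)
   ```
   
   This way every harness that unpickles the proxy reconnects to the one shared instance instead of each harness carrying its own duplicated uri/port bookkeeping.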



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
