damccorm commented on code in PR #37112:
URL: https://github.com/apache/beam/pull/37112#discussion_r2736612044


##########
sdks/python/apache_beam/utils/multi_process_shared.py:
##########
@@ -200,9 +226,99 @@ def __call__(self, *args, **kwargs):
   def __getattr__(self, name):
     return getattr(self._proxyObject, name)
 
+  def __setstate__(self, state):
+    self.__dict__.update(state)
+
+  def __getstate__(self):
+    return self.__dict__

Review Comment:
   > the model manager will just give the proxy object of the model instance 
directly to RunInference.
   
   Oh right, this is why it needs to be pickled in the first place, since we 
copy over the full object.
   
   > I also just realized we might have to have the proxy hold on to the 
proxy/reference of the model instances, because otherwise sharing the model 
instances across different SDK harnesses will be challenging, and we would 
probably end up storing the same info as the proxy object, like the URI and 
port.
   
   I'm not following what you're saying here, I think because "proxy" is an 
overloaded term in this context. Maybe you're saying that the proxy returned 
from the model manager to the client might not have a valid reference to the 
actual model, at which point we'd need tighter coordination with the model 
manager anyway. I think this is probably right.
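
   As an aside on why the explicit `__getstate__`/`__setstate__` pair in the 
diff matters: a class that defines `__getattr__` to delegate to a wrapped 
object typically cannot be unpickled without an explicit `__setstate__`, 
because the unpickler creates the instance with an empty `__dict__` and its 
lookup of `__setstate__` falls through to `__getattr__`, which then recurses 
trying to find the wrapped object. A minimal standalone sketch (the 
`BrokenProxy`/`FixedProxy` names are illustrative, not from the PR):

```python
import pickle


class BrokenProxy:
  """Delegates unknown attributes, but fails to unpickle."""
  def __init__(self, obj):
    self._proxyObject = obj

  def __getattr__(self, name):
    # At unpickle time __dict__ is still empty, so the lookup of
    # self._proxyObject re-enters __getattr__ -> RecursionError.
    return getattr(self._proxyObject, name)


class FixedProxy(BrokenProxy):
  """Defining these on the class bypasses __getattr__ during unpickling."""
  def __getstate__(self):
    return self.__dict__

  def __setstate__(self, state):
    self.__dict__.update(state)


try:
  pickle.loads(pickle.dumps(BrokenProxy([1, 2, 3])))
  broken_roundtrips = True
except RecursionError:
  broken_roundtrips = False

restored = pickle.loads(pickle.dumps(FixedProxy([1, 2, 3])))
# Attribute access still delegates to the wrapped list after the round trip.
print(broken_roundtrips, restored.count(2))
```

   So the two dunders in the diff aren't just boilerplate; they are what 
makes the proxy survive the copy.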


