Great info and explanation. Thank you, Rüdiger.

Regarding the error, to try to narrow down where to focus my effort I have
tested the following:


   - Using RPyC and multiprocessing, passing a Job object from client to
   server --> FAIL
   - Using RPyC and multiprocessing, passing strings and lists from client
   to server, where the server creates the Job object --> FAIL
   - Using RPyC without multiprocessing --> PASS
   - Using multiprocessing without RPyC --> PASS


The above behavior leads me to believe that the issue is neither RPyC nor
multiprocessing in isolation, but rather trying to use them together.




On Tue, Sep 22, 2015 at 1:48 AM, Rüdiger Kessel <[email protected]>
wrote:

> I think the problem is not rpyc related. It seems that no rpyc lib is
> involved in the traceback.
>
> You should test calling exposed_submitJob() directly on the server
> without using rpyc. The problem will probably show up there as well.
>
> If you create the Job object on the client side and pass the object to
> the server, then the server just gets a reference (called a netref, if I
> remember correctly) to the object. If you now call currJob.execute() on the
> server, the server gets a reference to the execute() method of the Job
> object and executes that method. Now, where does the object exist? On the
> client side. So where is the execute method? Also on the client side. So
> you end up running the job on the client PC that way. Why use the whole
> server hocus pocus just to end up running a local job?
> If you want something to be executed on the server, you need to make sure
> that it exists as a callable on the server (and not just as a netref to the
> client) and that the server has all resources, including file access, to
> run it.
> One way to achieve this is to create the job object on the server. There
> might also be a way of copying byte code but I do not know how to achieve
> this.
> You can pass the object (as a netref) back to the client, and the client
> can access the object via the netref as if it existed locally.
>
> So rpyc allows you to create objects anywhere and to use them anywhere. But
> the objects stay where they were created, and execution always happens
> where the objects were created. (The same is true for the object data, by
> the way.) rpyc does a very good job of hiding the details, and you can use
> remote objects as if they existed locally.
>
> So the usual problem is that the client knows what the server should do,
> including some resources. The server has the computing power and other
> required resources. Now one needs to create a callable on the server side,
> based on the specification from the client, and execute it on the server.
> During the execution the client might supply some local resources (like
> files) to support the server. At the end, some results and some status
> information should be available on the client side.
>
> rpyc can organize the communication between client and server, and it can
> zerodeploy (http://rpyc.readthedocs.org/en/latest/docs/zerodeploy.html)
> itself. You might need something similar to deploy the "what-to-do list
> (usually called a program)" to the server. So you need to write a framework
> which deploys the "program" to the server where you want the execution to
> happen. You can use rpyc as a transport mechanism (e.g. for remote file
> access). Then you can use rpyc to start the execution on the server (a
> standard remote call). During execution the server can access all objects
> which are not locally available via netref. At the end, the client can
> access results and data if you pass a netref back to the client. You need
> to make sure that, as long as you want to access stuff via netref, a
> thread handling rpyc requests is running on both the server and the client.
>
> In my Monte Carlo tool I only needed the computing power from the server.
> The programs are numerical calculations (equations) encoded in XML, which
> are parsed on the server side and compiled into a Python callable. The
> client provides an interface object for file access, standard IO, and
> status/control information.
>
> The limitation of this approach is that the "program" is limited to what
> the parser can handle. But for my tool this limitation also existed in the
> local simulation tool that came before. If you use Python as the parser,
> then you could transport Python source code to the server and compile it
> there, which would be more generic. Or you could copy batch files and run
> them on the server (similar to zerodeploy).
>
> I hope this will help...
>
> Greetings
> Rüdiger
>
>
>
>
>
> 2015-09-21 16:23 GMT+02:00 Michael Mann <[email protected]>:
>
> >> I should add that I believe this is directly related to using the
> >> multiprocessing module in the server, because I do not get an error when
> >> I execute the job directly (without multiprocessing.Process()) in the
> >> function exposed_submitJobObj(self, newJob) via newJob.execute(), as
> >> opposed to calling addJobToQueue(self, newJob).
>>
>>
>> --
>>
>> ---
>> You received this message because you are subscribed to the Google Groups
>> "rpyc" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> For more options, visit https://groups.google.com/d/optout.
>>
>



-- 
Michael Mann

