Hi David,
I think you're right, so one workaround is to create multiple connections
to the server. For fun I hacked together a connection pool class (this is a
pretty dirty hack, I wouldn't recommend using it for anything serious),
client side:

import time, rpyc, random, threading, weakref
from rpyc.core.async import AsyncResult


class RpycConnectionPool(object):
    def __init__(self, num_connections, *args, **kwargs):
        self.connections = [rpyc.connect(*args, **kwargs)
                            for i in range(num_connections)]
        for connection in self.connections:
            # a cleared Event means the connection is free for use
            # (Events start cleared, so no explicit clear() is needed)
            connection.in_active_use = threading.Event()

    @property
    def modules(self):
        connection = self._find_inactive_connection()
        connection.in_active_use.set()
        return connection.modules

    def async_request(self, handler, *args, **kwargs):
        # Modified copy of rpyc's Connection.async_request: it picks a free
        # connection from the pool and releases it once the result arrives.
        class _AsyncResultWrapper(AsyncResult):
            @property
            def value(self_):
                self_.wait()
                try:
                    if self_._is_exc:
                        raise self_._obj
                    else:
                        return self_._obj
                finally:
                    # mark the owning connection as free again
                    self_._conn.in_active_use.clear()

        timeout = kwargs.pop("timeout", None)
        if kwargs:
            raise TypeError("got unexpected keyword argument(s) %s"
                            % (list(kwargs.keys()),))
        connection = self._find_inactive_connection()
        connection.in_active_use.set()
        res = _AsyncResultWrapper(weakref.proxy(connection))
        connection._async_request(handler, args, res)
        if timeout is not None:
            res.set_expiry(timeout)
        return res

    def _find_inactive_connection(self):
        for connection in self.connections:
            if not connection.in_active_use.is_set():
                return connection
        # No inactive connections; return a random active one
        # (calls on it will block until it frees up):
        return random.choice(self.connections)


pool = RpycConnectionPool(2, 'localhost', 9001, rpyc.SlaveService)

print "starting sleep", time.time()
async_wrapper = rpyc.async(pool.modules.time.sleep)
remote_sleep_result = async_wrapper(2)

print "local time:", time.time()
for i in range(2):
    # Caveat: rpyc.async bypasses the pool's async_request, so
    # in_active_use is never cleared for these connections...
    remote_time_time = rpyc.async(pool.modules.time.time)
    remote_time_result = remote_time_time()
    while not remote_time_result.ready:
        pass
    print "remote time ready", time.time()
    print "remote time:", remote_time_result.value

print "local time:", time.time()
while not remote_sleep_result.ready:
    pass
print "sleep done", time.time()


(Server side is the same command you posted using ThreadPoolServer).
Depending on the nature of your application, you could maybe just create a
fixed number of connections? Another workaround I thought of is to use
remote eval/exec to spawn a thread server side that calls time.sleep.
Maybe you could write a nice pythonic wrapper for this on the client
side...
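
To sketch that second idea: on a SlaveService connection you should (I
think) be able to call c.execute(...) with a code string that spawns the
thread server side, so the channel is freed immediately while the sleep
runs remotely. There's no live server in this snippet, so a local exec
stands in for the remote execute:

```python
import time

def local_execute(code):
    # Stand-in for c.execute(code) on an rpyc SlaveService connection,
    # which would run the code string inside the *server* process.
    exec(code)

start = time.time()
local_execute("import threading, time; "
              "t = threading.Thread(target=time.sleep, args=(2,)); "
              "t.daemon = True; t.start()")
elapsed = time.time() - start
# local_execute returns almost at once; the 2-second sleep happens
# in the spawned (server-side) thread.
```

The pythonic wrapper would then just build code strings like that one and
hand back something future-like to the caller.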
Hope this helps,
Oliver


On 13 June 2013 08:49, David West <[email protected]> wrote:

> Oliver,
>
> The ThreadPoolServer seems to exhibit the same behavior:
>
> Server:
> ----
> $ python -c 'import
> rpyc.utils.server;rpyc.utils.server.ThreadPoolServer(service=rpyc.SlaveService,port=9001).start()'
> ----
>
> Client:
> ----
> $ python -c 'import time,rpyc,thread;c=rpyc.connect("127.0.0.1",
> 9001,rpyc.SlaveService);thread.start_new_thread(c.modules.time.sleep,(10,));print
> time.ctime();c.modules.time.time();print time.ctime()'
> Wed Jun 12 15:35:15 2013
> Wed Jun 12 15:35:25 2013
>
> $ python -c 'import time,rpyc;c=rpyc.connect("127.0.0.1",
> 9001,rpyc.SlaveService);asleep=rpyc.async(c.modules.time.sleep);r=asleep(10);print
> time.ctime();c.modules.time.time();print time.ctime();print r.ready,
> r.value'
> Wed Jun 12 15:35:51 2013
> Wed Jun 12 15:36:02 2013
> True None
> ----
>
> My guess is that the "thread pooling" in this server is for new incoming
> connections.  But it looks like RPyC's protocol requires all interactions
> via a single connection be strictly serialized.
>
> David E. West
>
> On Tuesday, June 11, 2013 4:24:15 AM UTC-4, Oliver Drake wrote:
>
>> Hi David,
>> Have you tried the ThreadPoolServer?
>> http://rpyc.readthedocs.org/en/latest/api/utils_server.html#rpyc.utils.server.ThreadPoolServer
>> Cheers,
>> Oliver
>>
>>
>> On 11 June 2013 19:35, Tomer Filiba <[email protected]> wrote:
>>
>>> ---------- Forwarded message ----------
>>> From: <[email protected]>
>>> Date: Tue, Jun 11, 2013 at 2:30 AM
>>> Subject: Can't post to rpyc user group.
>>> To: [email protected]
>>>
>>>
>>> Hello,
>>>
>>> I know your RPyC page said not to contact you directly, but I seem to be
>>> unable to post to the rpyc user group.
>>>
>>> David E. West
>>>
>>> I wished to post the following:
>>>
>>> ----
>>> Hello,
>>>
>>> First off, RPyC implements an amazing concept!
>>>
>>> My question has to do with concurrent access of these netref's.  It
>>> appears that the protocol is designed to be used by only a single
>>> thread at a time.  Below is an example of what I'm talking about:
>>>
>>> Server:
>>> ----
>>> $ python -c 'import
>>> rpyc.utils.server;rpyc.utils.server.ThreadedServer(service=rpyc.SlaveService,port=9001).start()'
>>> ----
>>>
>>> Client:
>>> ----
>>> $ python -c 'import time,rpyc,thread;c=rpyc.connect("127.0.0.1", 9001,
>>> rpyc.SlaveService);thread.start_new_thread(c.modules.time.sleep,(10,));print
>>> time.ctime();c.modules.time.time();print time.ctime()'
>>> Mon Jun 10 12:09:00 2013
>>> Mon Jun 10 12:09:10 2013
>>> ----
>>>
>>> So we issue a remote time.sleep(10) in a background thread, and this
>>> blocks a remote time.time() in the foreground.
>>>
>>> Also, here is an example that uses rpyc.async instead of a thread.  It
>>> suffers the same issue:
>>>
>>> ----
>>> $ python -c 'import time,rpyc;c=rpyc.connect("127.0.0.1", 9001,
>>> rpyc.SlaveService);asleep=rpyc.async(c.modules.time.sleep);r=asleep(10);print
>>> time.ctime();c.modules.time.time();print time.ctime();print r.ready,
>>> r.value'
>>> Mon Jun 10 11:55:01 2013
>>> Mon Jun 10 11:55:11 2013
>>> True None
>>> ----
>>>
>>> So even though we are issuing the remote time.sleep(10) via
>>> rpyc.async, the call to time.time() still blocks.
>>>
>>> Is this behavior by design?
>>>
>>> I can imagine that making the protocol concurrent would introduce the
>>> complexity of requiring some sort of thread pool on the remote side.
>>> Is it just a matter of trying to keep the rpyc implementation as
>>> simple as possible, or is there a design philosophy being imposed
>>> here?
>>>
>>> David E. West
>>> ----
>>>
>>>  --
>>>
>>> ---
>>> You received this message because you are subscribed to the Google
>>> Groups "rpyc" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to rpyc+uns...@googlegroups.**com.
>>>
>>> For more options, visit 
>>> https://groups.google.com/**groups/opt_out<https://groups.google.com/groups/opt_out>
>>> .
>>>
>>>
>>>
>>
>>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"rpyc" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/groups/opt_out.

