I agree about using a Lease. Then the client also knows if the server dies and 
can re-subscribe.
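A plain-JDK sketch of the lease idea (the real net.jini.core.lease.Lease / LeaseRenewalManager machinery is elided; LeaseTable and its methods are made-up names for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Lease-style registration: each registration carries an expiry time;
// the client must renew before it lapses, and the server prunes
// anything expired. A dead client simply stops renewing, so the
// server never tries to notify it again.
public class LeaseTable<L> {
    private final Map<L, Long> expiry = new ConcurrentHashMap<>();

    public void register(L listener, long durationMillis) {
        expiry.put(listener, System.currentTimeMillis() + durationMillis);
    }

    // Client calls this periodically while it is alive.
    public boolean renew(L listener, long durationMillis) {
        return expiry.computeIfPresent(listener,
                (l, t) -> System.currentTimeMillis() + durationMillis) != null;
    }

    // Server calls this before notifying, so it only talks to
    // registrations whose lease is still live.
    public void prune() {
        long now = System.currentTimeMillis();
        expiry.entrySet().removeIf(e -> e.getValue() < now);
    }

    public boolean isLive(L listener) {
        return expiry.containsKey(listener);
    }
}
```

On the client side the same expiry bound tells it when the server has gone away: a failed renewal means the registration is dead and it should re-subscribe.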

If the latency of the timeout is a concern, my solution has been to use a
3-thread executor to send the updates. I chose 3 as a good number because of
the philosophy of "once is an event, twice is a coincidence, and thrice is a
conspiracy." That is, with 1 thread you should expect to be blocked; with 2
threads you should rarely be blocked, though you could be by bad luck; with
3 threads you'll only be blocked if there's a noteworthy outage, which
probably has a cause outside of your control, so more threads won't help.
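A minimal sketch of that pattern, assuming a hypothetical Listener interface standing in for RemoteEventListener (names are illustrative, not from any Jini API):

```java
import java.util.List;
import java.util.concurrent.*;

// Fan notifications out on a small fixed pool so one stalled
// listener cannot hold up updates to all the others.
public class NotifyDispatcher {
    interface Listener {
        void notify(String event) throws Exception;
    }

    // Three workers, per the "event / coincidence / conspiracy" rule:
    // with 1 expect to block, with 2 you can be unlucky, with 3 a
    // stall means a real outage that more threads won't fix.
    private final ExecutorService pool = Executors.newFixedThreadPool(3);

    public void dispatch(List<Listener> listeners, String event) {
        for (Listener l : listeners) {
            pool.submit(() -> {
                try {
                    l.notify(event);
                } catch (Exception e) {
                    // a real server would drop or lease-expire l here
                }
                return null;
            });
        }
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```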

Chris

________________________________________
From: Gregg Wonderly [[email protected]]
Sent: Wednesday, June 13, 2012 3:53 PM
To: [email protected]
Subject: Re: Client timeouts and remote calls

There are timeouts that you can change in your Configuration to control how
long the waits last.  If it's important that everyone agree on the values
being changed, you could use a transaction, so that if one client dies in
the middle, everyone can revert and you can retry to get things back to a
sane state.
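If the Configuration-level timeouts don't reach far enough, another technique (not from this thread's Jini APIs; a plain-JDK sketch with a hypothetical BoundedCall helper, where the Callable stands in for something like l.notify(ev)) is to bound the wait on each call with a Future deadline:

```java
import java.util.concurrent.*;

// Bound a potentially slow remote call: run it on a worker thread
// and wait at most timeoutMillis for it to complete, instead of
// waiting out the transport's own (much longer) connect timeout.
public class BoundedCall {
    private final ExecutorService worker = Executors.newCachedThreadPool();

    public <T> T callWithDeadline(Callable<T> call, long timeoutMillis)
            throws Exception {
        Future<T> f = worker.submit(call);
        try {
            return f.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(true); // interrupt the stuck call rather than wait
            throw e;
        }
    }

    public void shutdown() {
        worker.shutdownNow(); // interrupt anything still running
    }
}
```

A TimeoutException here is the cue to treat the listener as unreachable and drop (or lease-expire) its registration.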

This is important if the data the clients receive controls how they
interact with the service.  Otherwise, you can just do what you are doing,
without a transaction.  If you turn on DGC, or use a Lease on the
client-side event endpoint, then you might be able to tell that a client is
actually gone, rather than just temporarily unreachable.

Gregg

On Jun 13, 2012, at 2:13 PM, Sergio Aguilera Cazorla wrote:

> Hello,
>
> I have a question regarding client-side timeouts in Jini / Apache River. I
> am finishing a program where a certain number of clients can obtain a proxy
> and set / get properties (values) from an exported class in a server. Each
> client becomes a RemoteEventListener of the server, so each time a property
> is changed, the server calls notify() in ALL clients to make them aware
> that a property has changed (and all clients update their data).
>
> This architecture performs great if client programs finish in a "graceful"
> way, because I have a register / unregister mechanism that makes the server
> have an updated list of "alive" clients. However, if client machines "die
> suddenly", the server will be unaware and will try to call notify() next
> time that call is needed. Example (setSomething is a remote method on the
> Server):
>
> public void setSomething(String param) {
>     // <do the Stuff>
>     RemoteEvent ev = <proper RemoteEvent object>;
>     for (Iterator<RemoteEventListener> it = listeners.iterator(); it.hasNext(); ) {
>         RemoteEventListener l = it.next();
>         try {
>             l.notify(ev);
>         } catch (Exception e) {
>             // remove through the iterator; calling listeners.remove(l)
>             // inside a for-each throws ConcurrentModificationException
>             it.remove();
>         }
>     }
> }
>
> I'm sure you see where I want to go: if some clients in the list died
> suddenly, notify() will still be called on them. A ConnectException is
> thrown and the client is removed properly, but... it takes a long time for
> the exception to be thrown! Do you know how to control this situation?
>
> Thanks in advance!
>
> ADDITIONAL DATA:
> I have tried setting the following RMI system properties, and it didn't work:
> System.setProperty("sun.rmi.transport.tcp.responseTimeout","2000");
> System.setProperty("sun.rmi.transport.tcp.handshakeTimeout","2000");
> System.setProperty("sun.rmi.transport.tcp.readTimeout","2000");
> System.setProperty("sun.rmi.transport.connectionTimeout","2000");
> System.setProperty("sun.rmi.transport.proxy.connectTimeout ","2000");
>
> At present, under Windows XP for both client and server, the
> ConnectException takes exactly *21 seconds* to be thrown. Do you know the
> reason for this value?
>
> --
> *Sergio Aguilera*
