Bryan,
In your example below, I'm referring to TcpServerEndpoint and its associated
classes (it is the ListenEndpoint and Endpoint implementations for which
Object.equals/hashCode are critical), because that's what you're using in your
exporters. The standard TcpServerEndpoint should be fine in this regard.
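
As a quick sanity check (a minimal sketch, not from your code), you can print
whether two TcpServerEndpoint instances built the same way compare equal; the
same value-based equals/hashCode contract matters for the ListenEndpoint and
Endpoint objects they produce, and that is what lets the client batch DGC
leases per endpoint instead of per exported object:

    import net.jini.jeri.ServerEndpoint;
    import net.jini.jeri.tcp.TcpServerEndpoint;

    public class EndpointEqualityCheck {
        public static void main(String[] args) {
            // Two server endpoints built with identical parameters.
            ServerEndpoint a = TcpServerEndpoint.getInstance(0);
            ServerEndpoint b = TcpServerEndpoint.getInstance(0);
            System.out.println("equals    = " + a.equals(b));
            System.out.println("hashCodes = " + (a.hashCode() == b.hashCode()));
        }
    }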
-- Peter
On Jan 13, 2012, at 9:44 AM, Bryan Thompson wrote:
> Peter,
>
> Can you clarify what you mean by the endpoint in this context? Would that be
> the server-side object which is being exported, i.e., the proxy wrapping
> the Future? Or would it be the long-lived service which is exported and
> against which the requests are being made?
>
> If you are talking about the RemoteFutureImpl class (in the code below), then
> it does not override equals()/hashCode(). However, I am unsure how anything
> would benefit if it did, since no two futures (or their server-side wrappers)
> should be equal.
>
> The code to export the Future from the long-lived service looks like this:
>
> public <E> Future<E> getProxy(final Future<E> future) {
>
> /*
> * Setup the Exporter for the Future.
> *
> * Note: Distributed garbage collection is enabled since the proxied
> * future CAN become locally weakly reachable sooner than the client can
> * get() the result. Distributed garbage collection handles this for us
> * and automatically unexports the proxied future once it is no longer
> * strongly referenced by the client.
> */
> final Exporter exporter = getExporter(true/* enableDGC */);
>
> // wrap the future in a proxyable object.
> final RemoteFuture<E> impl = new RemoteFutureImpl<E>(future);
>
> /*
> * Export the proxy.
> */
> final RemoteFuture<E> proxy;
> try {
>
> // export proxy.
> proxy = (RemoteFuture<E>) exporter.export(impl);
>
> if (log.isInfoEnabled()) {
>
> log.info("Exported proxy: proxy=" + proxy + "("
> + proxy.getClass() + ")");
>
> }
>
> } catch (ExportException ex) {
>
> throw new RuntimeException("Export error: " + ex, ex);
>
> }
>
> // return proxy to caller.
> return new ClientFuture<E>(proxy);
>
> }
>
> /**
> * Return an {@link Exporter} for a single object that implements one or
> * more {@link Remote} interfaces.
> * <p>
> * Note: This uses TCP Server sockets.
> * <p>
> * Note: This uses [port := 0], which means a random port is assigned.
> * <p>
> * Note: The VM WILL NOT be kept alive by the exported proxy (keepAlive is
> * <code>false</code>).
> *
> * @param enableDGC
> * if distributed garbage collection should be used for the
> * object to be exported.
> *
> * @return The {@link Exporter}.
> */
> protected Exporter getExporter(final boolean enableDGC) {
>
> return new BasicJeriExporter(TcpServerEndpoint
> .getInstance(0/* port */), invocationLayerFactory, enableDGC,
> false/* keepAlive */);
>
> }
>
> The code to export the long-lived service looks like this:
>
> /*
> * Extract how the service will provision itself from the
> * Configuration.
> */
>
> // The exporter used to expose the service proxy.
> exporter = (Exporter) config.getEntry(//
> getClass().getName(), // component
> ConfigurationOptions.EXPORTER, // name
> Exporter.class, // type (of the return object)
> /*
> * The default exporter is a BasicJeriExporter using a
> * TcpServerEndpoint.
> */
> new BasicJeriExporter(TcpServerEndpoint.getInstance(0),
> new BasicILFactory())
> );
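>
> For reference, the corresponding entry in the service's Configuration file
> would look roughly like this (a sketch only: the component name and the
> entry name depend on getClass().getName() and ConfigurationOptions.EXPORTER,
> so both are placeholders here):
>
>     import net.jini.jeri.BasicILFactory;
>     import net.jini.jeri.BasicJeriExporter;
>     import net.jini.jeri.tcp.TcpServerEndpoint;
>
>     com.example.MyService {
>         exporter = new BasicJeriExporter(TcpServerEndpoint.getInstance(0),
>                                          new BasicILFactory());
>     }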
>
> /*
> * Export a proxy object for this service instance.
> *
> * Note: This must be done before we start the join manager since the
> * join manager will register the proxy.
> */
> try {
>
> proxy = exporter.export(impl);
>
> if (log.isInfoEnabled())
> log.info("Proxy is " + proxy + "(" + proxy.getClass() + ")");
>
> } catch (ExportException ex) {
>
> fatal("Export error: "+this, ex);
>
> }
>
> Thanks,
> Bryan
>
>> -----Original Message-----
>> From: Peter Jones [mailto:[email protected]]
>> Sent: Friday, January 13, 2012 9:33 AM
>> To: [email protected]
>> Subject: Re: DGC threads issue
>>
>> Which is why the Object.equals/hashCode methods for an endpoint
>> implementation are critical.
>>
>> -- Peter
>>
>>
>> On Jan 13, 2012, at 2:05 AM, Gregg Wonderly wrote:
>>
>>> I would say that it's very easy to just code up a configuration entry, or a
>>> dynamic construction in code, where a new endpoint is created per Exporter.
>>> That can quickly turn into a problematic situation in cases like this, where
>>> there are lots of "quick" exports followed by termination without "unexport"
>>> being done directly as part of the returning context.
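>>>
>>> (As a purely illustrative sketch of "unexport as part of the returning
>>> context" -- none of this is from Bryan's code, and the RemoteResult
>>> interface below is hypothetical -- the exported wrapper could retire its
>>> own Exporter once the caller has collected the outcome, instead of leaving
>>> the cleanup to DGC:)
>>>
>>>     import java.rmi.Remote;
>>>     import java.rmi.RemoteException;
>>>     import java.util.concurrent.Executors;
>>>     import java.util.concurrent.Future;
>>>     import java.util.concurrent.ScheduledExecutorService;
>>>     import java.util.concurrent.TimeUnit;
>>>
>>>     import net.jini.export.Exporter;
>>>
>>>     // Hypothetical remote interface standing in for RemoteFuture.
>>>     interface RemoteResult<E> extends Remote {
>>>         E get() throws RemoteException;
>>>     }
>>>
>>>     class SelfUnexportingResult<E> implements RemoteResult<E> {
>>>
>>>         // One shared cleanup thread for all per-call exports.
>>>         private static final ScheduledExecutorService CLEANER =
>>>                 Executors.newSingleThreadScheduledExecutor();
>>>
>>>         private final Future<E> delegate;
>>>         private final Exporter exporter;
>>>
>>>         SelfUnexportingResult(Future<E> delegate, Exporter exporter) {
>>>             this.delegate = delegate;
>>>             this.exporter = exporter;
>>>         }
>>>
>>>         public E get() throws RemoteException {
>>>             try {
>>>                 return delegate.get();
>>>             } catch (Exception e) {
>>>                 throw new RemoteException("evaluation failed", e);
>>>             } finally {
>>>                 // Retire the export shortly after the result has been
>>>                 // delivered; the delay avoids tearing down the export
>>>                 // while this call's reply is still in flight.
>>>                 CLEANER.schedule(new Runnable() {
>>>                     public void run() {
>>>                         exporter.unexport(true);
>>>                     }
>>>                 }, 10, TimeUnit.SECONDS);
>>>             }
>>>         }
>>>     }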
>>>
>>> Gregg Wonderly
>>>
>>> On Jan 13, 2012, at 12:01 AM, Peter Firmstone wrote:
>>>
>>>> Is there another way to create an Endpoint per exported object? I'm just
>>>> thinking, it seems unlikely that Bryan has implemented his own Endpoint,
>>>> but are there any other error conditions or incorrect use scenarios that
>>>> could produce the same problem?
>>>>
>>>> Cheers,
>>>>
>>>> Peter.
>>>>
>>>> Peter Jones wrote:
>>>>> Bryan,
>>>>>
>>>>> DGC threads should not be per exported object. Generally speaking, they
>>>>> tend to be per endpoint (at which there are one or more remote objects
>>>>> exported). Are you using any sort of custom endpoint implementation?
>>>>> Problems like this can occur when an endpoint implementation doesn't
>>>>> implement Object.equals and hashCode appropriately, so the expected
>>>>> batching of threads (and communication) per endpoint does not occur.
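>>>>>
>>>>> To make that concrete, here is an illustrative skeleton only (not a
>>>>> working transport, and not code from River or from Bryan's service);
>>>>> the part that matters is the value-based equals/hashCode, without which
>>>>> every exported object appears to live at its own endpoint and the
>>>>> client cannot batch its DGC leases:
>>>>>
>>>>>     import net.jini.core.constraint.InvocationConstraints;
>>>>>     import net.jini.jeri.Endpoint;
>>>>>     import net.jini.jeri.OutboundRequestIterator;
>>>>>
>>>>>     public final class ExampleEndpoint implements Endpoint {
>>>>>
>>>>>         private final String host;
>>>>>         private final int port;
>>>>>
>>>>>         public ExampleEndpoint(String host, int port) {
>>>>>             this.host = host;
>>>>>             this.port = port;
>>>>>         }
>>>>>
>>>>>         public OutboundRequestIterator newRequest(
>>>>>                 InvocationConstraints constraints) {
>>>>>             // The actual transport is omitted in this sketch.
>>>>>             throw new UnsupportedOperationException();
>>>>>         }
>>>>>
>>>>>         // Value-based equality: endpoints that address the same place
>>>>>         // must compare equal so DGC can group them.
>>>>>         @Override
>>>>>         public boolean equals(Object o) {
>>>>>             if (!(o instanceof ExampleEndpoint))
>>>>>                 return false;
>>>>>             final ExampleEndpoint other = (ExampleEndpoint) o;
>>>>>             return port == other.port && host.equals(other.host);
>>>>>         }
>>>>>
>>>>>         @Override
>>>>>         public int hashCode() {
>>>>>             return 31 * host.hashCode() + port;
>>>>>         }
>>>>>     }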
>>>>>
>>>>> It might help to see, from a thread dump, exactly which DGC threads are
>>>>> causing this problem. And they are in the server JVM (with all the
>>>>> exported remote objects), not the remote callers' JVM(s)?
>>>>>
>>>>> -- Peter
>>>>>
>>>>>
>>>>> On Jan 12, 2012, at 3:45 PM, Tom Hobbs wrote:
>>>>>
>>>>>
>>>>>> Hi Bryan,
>>>>>>
>>>>>> Sorry that no one got back to you about this. I'm afraid that I
>>>>>> don't know the answer to your question, so I've copied the dev list
>>>>>> into this email in case someone who monitors that list (but not
>>>>>> this one) has any ideas.
>>>>>>
>>>>>> Best regards,
>>>>>>
>>>>>> Tom
>>>>>>
>>>>>> On Thu, Jan 12, 2012 at 2:29 PM, Bryan Thompson <[email protected]> wrote:
>>>>>>
>>>>>>> Just to follow up on this thread myself. I modified the pattern to
>>>>>>> return a "thick" future rather than a proxy for the future. This caused
>>>>>>> the RMI call to wait on the server until the future was done and then
>>>>>>> send back the outcome. This "fixed" the DGC memory/thread leak by
>>>>>>> reducing the number of exported proxies dramatically.
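>>>>>>>
>>>>>>> In outline, the "thick" version is just this (a minimal sketch; the
>>>>>>> method name and the executorService field are illustrative, not the
>>>>>>> actual API): the remote method blocks until the computation completes
>>>>>>> and returns the Serializable outcome itself, so nothing extra is
>>>>>>> exported per call and no DGC lease is ever created for it.
>>>>>>>
>>>>>>>     // Hypothetical remote method on the long-lived service.
>>>>>>>     public <E> E runAndWait(final Callable<E> task) throws Exception {
>>>>>>>
>>>>>>>         final Future<E> future = executorService.submit(task);
>>>>>>>
>>>>>>>         // Block in the server until the future is done, then return
>>>>>>>         // the value itself (which must be Serializable) to the caller.
>>>>>>>         return future.get();
>>>>>>>
>>>>>>>     }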
>>>>>>>
>>>>>>> In terms of best practices, is DGC simply not useful for exported
>>>>>>> objects with short life spans? Can it only be used with proxies for
>>>>>>> relatively long-lived services?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Bryan
>>>>>>>
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Bryan Thompson
>>>>>>>> Sent: Tuesday, January 03, 2012 12:06 PM
>>>>>>>> To: [email protected]
>>>>>>>> Subject: DGC threads issue
>>>>>>>>
>>>>>>>> Hello,
>>>>>>>>
>>>>>>>> Background:
>>>>>>>>
>>>>>>>> I am seeing what would appear to be one DGC thread allocated per
>>>>>>>> exported object. This is using River 2.2 and Sun JDK 1.6.0_17.
>>>>>>>> Relevant configuration parameters are below.
>>>>>>>>
>>>>>>>> I am observing problems with the DGC threads not being retired on a
>>>>>>>> timely basis. The exported objects are proxies for Futures which are
>>>>>>>> being executed on the service. The code pattern is such that the
>>>>>>>> proxied Future goes out of lexical scope quite quickly. E.g.,
>>>>>>>> rmiCallReturningProxyForFuture().get().
>>>>>>>>
>>>>>>>> Under a modest load, a large number of such Futures are exported,
>>>>>>>> which results in a large number of long-lived DGC threads. This turns
>>>>>>>> into a problem for the JVM due to the stack allocation per thread.
>>>>>>>> Presumably this is not good for other reasons as well (e.g.,
>>>>>>>> scheduling).
>>>>>>>>
>>>>>>>> I have tried to override the leaseValue and checkInterval defaults per
>>>>>>>> the configuration options below. I suspect that the lease interval is
>>>>>>>> somehow not being obeyed, which is presumably a problem on my end.
>>>>>>>> However, I can verify that the configuration values are in fact
>>>>>>>> showing up in System.getProperties() for at least some of the JVMs
>>>>>>>> involved (the one which drives the workload and the one that I am
>>>>>>>> monitoring with the large number of DGC lease threads).
>>>>>>>>
>>>>>>>> Some questions:
>>>>>>>>
>>>>>>>> Is this one-thread-per-exported-proxy the expected behavior when DGC
>>>>>>>> is requested for the exported object?
>>>>>>>>
>>>>>>>> The DGC lease checker threads appear to expire ~14-15 minutes after I
>>>>>>>> terminate the process which was originating the RMI requests. This is
>>>>>>>> close to the sum of the default leaseValue (10m) and checkInterval
>>>>>>>> (5m) parameters, but maybe there is some other timeout which is
>>>>>>>> controlling this? If this is the sum of those parameters, why would
>>>>>>>> the DGC lease threads live until the sum of those values? I thought
>>>>>>>> that the lease would expire after the leaseValue (10m default).
>>>>>>>>
>>>>>>>> Can the issue I am observing be caused by low heap pressure on the
>>>>>>>> JVM to which the RMI proxies were sent? If it fails to GC those
>>>>>>>> proxies even though they are no longer strongly reachable, could that
>>>>>>>> cause DGC to continue to retain those proxies on the JVM which
>>>>>>>> exported them?
>>>>>>>>
>>>>>>>> Is there any way to configure DGC to use a thread pool or to have the
>>>>>>>> leases managed by a single thread?
>>>>>>>>
>>>>>>>> Is it possible that there is an interaction with the useNIO option?
>>>>>>>>
>>>>>>>> Relevant options that I am using include:
>>>>>>>>
>>>>>>>> -Dcom.sun.jini.jeri.tcp.useNIO=true
>>>>>>>> -Djava.rmi.dgc.leaseValue=30000
>>>>>>>> -Dsun.rmi.dgc.checkInterval=15000
>>>>>>>> -Dsun.rmi.transport.tcp.connectionPool=true
>>>>>>>>
>>>>>>>> Thanks in advance,
>>>>>>>> Bryan
>>>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>