Hi

I'm not 100% sure, but perhaps in some cases the exceptions go
directly to the out fault chain, so registering a custom out fault
interceptor may help.
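
For example, something along these lines might work (an untested
sketch; the class name is mine, and you would need a marker on the
exchange to avoid decrementing twice when handleFault has already run):

    import java.util.concurrent.atomic.AtomicInteger;

    import org.apache.cxf.message.Message;
    import org.apache.cxf.phase.AbstractPhaseInterceptor;
    import org.apache.cxf.phase.Phase;

    public class QoSOutFaultInterceptor extends AbstractPhaseInterceptor<Message> {
        private final AtomicInteger counter;

        public QoSOutFaultInterceptor(AtomicInteger counter) {
            super(Phase.SETUP);
            this.counter = counter;
        }

        @Override
        public void handleMessage(Message message) {
            // release the slot for exchanges that ended up on the fault chain
            counter.decrementAndGet();
        }
    }

It would be registered with client.getOutFaultInterceptors().add(...).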

Sergey
On 07/04/15 15:14, Eirik Bjørsnøs wrote:
Hello,

We have implemented a set of interceptors for limiting the number of
concurrent invocations to each remote service.

The idea is that each remote service (be it JaxWS dispatchers or SEI
ports) should be invoked by at most N threads concurrently. If the
remote service has more than N active invocations, we should instead
return a SOAP fault with a message saying that the system is over
capacity. (We want to limit the load to the number of requests we
know we can serve.)

If I were to implement this using servlet filters, I would write a
Filter.doFilter method like this:

    // counter is a shared AtomicInteger field; limit is the configured max
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        int current = counter.incrementAndGet();
        try {
            if (current > limit) {
                throw new ServletException("System is over capacity");
            }
            chain.doFilter(request, response);
        } finally {
            // always release the slot, even when the chain throws
            counter.decrementAndGet();
        }
    }

However, CXF's interceptor design doesn't lend itself to using Java's
try/catch syntax.

What is the proper way of implementing try/catch semantics using CXF
Interceptors on a Client?

What we currently do:

1) We're adding a QoSBeforeInterceptor early in the Client's out
interceptor chain. It implements handleMessage, where we increment the
counter. If the counter is larger than N, we throw a SoapFault. In that
case CXF calls handleFault, where we decrement the counter.

2) We're adding a QoSAfterInterceptor early in the Client's in
interceptor chain. It implements handleMessage, where we decrement
the counter.

So by design the counter is always incremented by
QoSBeforeInterceptor.handleMessage, and it should always be decremented
exactly once, by either QoSBeforeInterceptor.handleFault or by
QoSAfterInterceptor.handleMessage.
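
For reference, the pair looks roughly like this (a trimmed sketch, not
our exact code; the real interceptors throw a proper SoapFault and read
the limit from configuration):

    import java.util.concurrent.atomic.AtomicInteger;

    import org.apache.cxf.interceptor.Fault;
    import org.apache.cxf.message.Message;
    import org.apache.cxf.phase.AbstractPhaseInterceptor;
    import org.apache.cxf.phase.Phase;

    // Out chain: count the invocation in, release the slot on fault unwind.
    public class QoSBeforeInterceptor extends AbstractPhaseInterceptor<Message> {
        private final AtomicInteger counter;
        private final int limit;

        public QoSBeforeInterceptor(AtomicInteger counter, int limit) {
            super(Phase.SETUP); // as early as possible in the out chain
            this.counter = counter;
            this.limit = limit;
        }

        @Override
        public void handleMessage(Message message) {
            if (counter.incrementAndGet() > limit) {
                // the real code throws a SoapFault; Fault shown for brevity
                throw new Fault(new Exception("System is over capacity"));
            }
        }

        @Override
        public void handleFault(Message message) {
            counter.decrementAndGet();
        }
    }

    // In chain: the response made it back, release the slot.
    public class QoSAfterInterceptor extends AbstractPhaseInterceptor<Message> {
        private final AtomicInteger counter;

        public QoSAfterInterceptor(AtomicInteger counter) {
            super(Phase.RECEIVE); // as early as possible in the in chain
            this.counter = counter;
        }

        @Override
        public void handleMessage(Message message) {
            counter.decrementAndGet();
        }
    }

Both are registered per client, e.g. via
ClientProxy.getClient(port).getOutInterceptors().add(...) and the
corresponding getInInterceptors().add(...).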

We are seeing reports from production indicating that in some cases
the counter is incremented but never decremented for a message,
leading to a "leak" of counts. Once the leaked count reaches N, the
service is blocked indefinitely.

I have not been able to reproduce this "leak" offline while
experimenting with various exception and timeout scenarios, so I'm
not 100% sure our design is broken.

Still, I would very much like some feedback on my try/catch
implementation and the assumptions it is based on:

A) When a SoapFault is thrown from my interceptor's handleMessage,
does it always lead to handleFault being invoked on the same
interceptor?
B) When a SoapFault is thrown from handleMessage by some later
interceptor, does this always lead to handleFault being called on all
earlier interceptors in the same chain?
C) If handleFault is not called on the outgoing chain of a Client,
will handleMessage always be called on an interceptor on the Client's
incoming chain?

I am trying to investigate reasons for the counter being incremented,
but not decremented. That is, cases where handleMessage is called on
the out chain, but neither outgoing handleFault nor incoming
handleMessage is called.
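
To narrow this down, a trivial tracing interceptor can be dropped into
each of the chains (out, in, out fault, in fault) to log exactly which
callbacks fire for each exchange; something along these lines (names
are mine):

    import java.util.logging.Logger;

    import org.apache.cxf.message.Message;
    import org.apache.cxf.phase.AbstractPhaseInterceptor;

    public class TraceInterceptor extends AbstractPhaseInterceptor<Message> {
        private static final Logger LOG =
            Logger.getLogger(TraceInterceptor.class.getName());
        private final String label;

        public TraceInterceptor(String phase, String label) {
            super(phase);
            this.label = label;
        }

        @Override
        public void handleMessage(Message message) {
            LOG.info(label + " handleMessage, exchange="
                + System.identityHashCode(message.getExchange()));
        }

        @Override
        public void handleFault(Message message) {
            LOG.info(label + " handleFault, exchange="
                + System.identityHashCode(message.getExchange()));
        }
    }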

Assuming I can't be the first to implement try/catch semantics using
interceptors, maybe someone can spot a weakness in my design?

Cheers,
Eirik.

