Patrick, service monitoring is left to the application. Kubernetes provides
some basic monitoring of pods, and if you need more than that there are
third-party monitoring tools available.

UIMA-AS does provide information about which service instance is
processing each CAS.
If you register a callback listener, you get the node IP, pid and CAS via

Callback.onBeforeProcessCAS(status, nodeIP, pid);
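
A minimal sketch of registering such a listener on the client side, assuming
the standard UimaAsBaseCallbackListener adapter class (adjust to your own
client setup):

  import org.apache.uima.aae.client.UimaASProcessStatus;
  import org.apache.uima.aae.client.UimaAsBaseCallbackListener;
  import org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl;

  BaseUIMAAsynchronousEngine_impl uimaAsEngine = new BaseUIMAAsynchronousEngine_impl();

  // Record which service instance (node IP + pid) picks up each CAS
  uimaAsEngine.addStatusCallbackListener(new UimaAsBaseCallbackListener() {
    @Override
    public void onBeforeProcessCAS(UimaASProcessStatus status, String nodeIP, String pid) {
      System.out.println("CAS " + status.getCasReferenceId()
          + " is being processed on " + nodeIP + " pid " + pid);
    }
  });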

You can use this to check whether the service is running. Recent UIMA-AS
releases also support targeting a specific service instance:
https://uima.apache.org/d/uima-as-2.10.3/uima_async_scaleout.html#ugr.ref.async.api.usage_targetservice
You can send a small test CAS to a specific instance at regular intervals
to see if it is still viable; a rough sketch of such a check follows.
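
For example, a simple liveness probe could periodically push a tiny CAS
through the synchronous sendAndReceiveCAS call and treat a timeout or
exception as the instance being unhealthy. This is only a sketch: it assumes
uimaAsEngine is an already-initialized client with a short process timeout,
and it uses a plain (untargeted) send; see the linked documentation for the
exact form of the target-service calls.

  import java.util.concurrent.Executors;
  import java.util.concurrent.ScheduledExecutorService;
  import java.util.concurrent.TimeUnit;

  import org.apache.uima.cas.CAS;

  ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
  scheduler.scheduleAtFixedRate(() -> {
    CAS testCas = null;
    try {
      testCas = uimaAsEngine.getCAS();          // CAS from the client's pool
      testCas.setDocumentText("ping");          // tiny test document
      uimaAsEngine.sendAndReceiveCAS(testCas);  // blocks until reply or timeout
      // reply received, so the service is viable
    } catch (Exception e) {
      // timeout or broker/service failure, so flag the service as unhealthy
      System.err.println("UIMA-AS health ping failed: " + e.getMessage());
    } finally {
      if (testCas != null) {
        testCas.release();                      // return the CAS to the pool
      }
    }
  }, 0, 30, TimeUnit.SECONDS);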

UIMA-AS holds on to each CAS that is in process because it needs the CAS
to deserialize the reply and to support retry. The only controls you have
are the size of the CAS pool and -Xmx.
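
The pool size is set in the application context map passed to initialize().
A rough example; the broker URL and queue name below are placeholders for
your own values:

  import java.util.HashMap;
  import java.util.Map;

  import org.apache.uima.aae.client.UimaAsynchronousEngine;
  import org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl;

  Map<String, Object> appCtx = new HashMap<>();
  appCtx.put(UimaAsynchronousEngine.ServerUri, "failover:(tcp://broker:61616)"); // placeholder broker URL
  appCtx.put(UimaAsynchronousEngine.ENDPOINT, "myServiceQueue");                 // placeholder queue name
  appCtx.put(UimaAsynchronousEngine.CasPoolSize, 2);   // fewer CASes in flight, less heap held by the client
  appCtx.put(UimaAsynchronousEngine.Timeout, 30000);   // process timeout in ms
  appCtx.put(UimaAsynchronousEngine.GetMetaTimeout, 10000);

  BaseUIMAAsynchronousEngine_impl uimaAsEngine = new BaseUIMAAsynchronousEngine_impl();
  uimaAsEngine.initialize(appCtx);

With a smaller pool the client holds fewer CASes in memory at once, at the
cost of less concurrency.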

Jerry

On Thu, Oct 11, 2018 at 5:26 AM Huy, Patrick <patrick....@sap.com> wrote:

> Hi,
>
> we are running UIMA-AS on Kubernetes and we are facing some issues: when
> the endpoint a BaseUIMAAsynchronousEngine_impl is connected to goes missing
> (because it is being restarted by Kubernetes) while a CAS is being
> processed, the AsynchronousEngine never seems to recover and becomes
> unusable from that point on (the CAS appears to be "gone"). We are
> currently using the "failover" ActiveMQ protocol, but it does not always
> help with this situation.
>
> When the components processing a CAS go down while the CAS is being
> processed, we are currently unable to detect whether processing is just
> taking long or whether the component has failed. We rely on processing
> timeouts to catch this state, but something better/faster (like health
> pings) would be more desirable.
>
> Also, our components using BaseUIMAAsynchronousEngine seem to require a
> very large amount of RAM to hold the CASes for each CAS pool (around 5 MB
> per CAS). Is there anything that can be done to optimize this?
>
> Thanks in advance!
> Patrick
>
