Hello!

  Hmm... my expectations of the UimaAsynchronousEngine capabilities were
different from the current implementation. In my opinion the client
shouldn't hang because of poor performance in the processing pipeline. It
should push all the created JCases to the MQ queue, so the consumer's (UIMA
AS service) MQ queue fills up with messages. Also, after a CAS has been
sent to the pipeline, the CAS should be returned to the pool. You can think
of my expected behavior the same way as a JMS client talking to the MQ
broker: the JMS client puts messages in the queue without any concern about
when the message will be consumed by the endpoint.
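
For illustration, this is roughly the JMS behavior I have in mind: a minimal
sketch of a plain ActiveMQ/JMS producer (the broker URL and endpoint name are
the ones from my client configuration below; the class name is just
illustrative):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class NonBlockingProducer {
    public static void main(String[] args) throws JMSException {
        // Connect to the broker (same URL as in my UIMA AS client setup).
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination queue = session.createQueue("myEndpoint");
        MessageProducer producer = session.createProducer(queue);

        // send() returns as soon as the message is handed to the broker;
        // the producer never waits for a consumer to process the message.
        for (int i = 0; i < 1000; i++) {
            producer.send(session.createTextMessage("message " + i));
        }

        connection.close();
    }
}
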
    What is your opinion of this approach? Is it feasible? How would it
affect the processing pipeline?

I look forward to your answers.
Thank you.

Kind regards,
  Florin




On Thu, Feb 2, 2012 at 8:19 PM, Marshall Schor <m...@schor.com> wrote:

>
>
> On 2/2/2012 9:34 AM, Spico Florin wrote:
>
>>
>>
>> Hello!
>>
>>  Thank you for your answers. I'm using UIMA 2.3.1. Here is my testing code:
>> // Endpoint initialization
>>     private int fsHeapSize = 2000000;
>>     private int timeout = 600;
>>     private int getmeta_timeout = 60;
>>     private int cpc_timeout = 1;
>>
>>     Map<String, Object> appCtx = new HashMap<String, Object>();
>>     uimaEEEngine = new BaseUIMAAsynchronousEngine_impl();
>>
>>     appCtx.put(UimaAsynchronousEngine.ServerUri, "tcp://localhost:61616");
>>     appCtx.put(UimaAsynchronousEngine.Endpoint, "myEndpoint");
>>     appCtx.put(UimaAsynchronousEngine.Timeout, timeout * 1000);
>>     appCtx.put(UimaAsynchronousEngine.GetMetaTimeout, getmeta_timeout * 1000);
>>     appCtx.put(UimaAsynchronousEngine.CpcTimeout, cpc_timeout * 1000);
>>     appCtx.put(UimaAsynchronousEngine.CasPoolSize, 50);
>>     appCtx.put(UIMAFramework.CAS_INITIAL_HEAP_SIZE,
>>             Integer.valueOf(fsHeapSize / 4).toString());
>>     uimaEEEngine.initialize(appCtx);
>>
>> // sending a message to UIMA
>> public void sendToUIMA(String msg) throws Exception {
>>     long startTime = System.currentTimeMillis();
>>     System.out.println("Preparing CAS");
>>     CAS cas = uimaEEEngine.getCAS();
>>     long eTime = System.currentTimeMillis();
>>     System.out.println("Prepared CAS time:" + (eTime - startTime));
>>
>>     JCas jcas;
>>     try {
>>         jcas = cas.getJCas();
>>     } catch (CASException e) {
>>         throw new CollectionException(e);
>>     }
>>     uimaEEEngine.sendCAS(jcas.getCas());
>> }
>>
>> // calls from main() to the UIMA client
>>
>> public static void main(String[] args) throws Exception {
>>     UIMAPipelineConnector runner = new UIMAPipelineConnector();
>>
>>     for (int i = 0; i < 1000; i++) {
>>         runner.sendToUIMA("A message that takes a lot of time for the UIMA pipeline to process");
>>         Thread.sleep(10);
>>     }
>> }
>>
>>
>> // deployment descriptor main setup
>>
>> endpoint flow controller descriptor (main entry of the pipeline):
>>
>> <deployment protocol="jms" provider="activemq">
>>   <casPool numberOfCASes="15"/>
>>   <service>
>>     <inputQueue endpoint="myEndpoint" brokerURL="${defaultBrokerURL}"/>
>>
>>
>> In my pipeline I have a component that runs slowly (approx. 14 s). What is
>> interesting is that when the pool reaches its limit (50), the time the
>> UIMA engine spends getting a new CAS (the getCAS() method) is almost the
>> same as the time spent by the slow component.
>>
>
> hmmm, I think this may be working exactly as it should?   It's supposed to
> work like this:
>
> 1) you set up a pool size in your client of 50.
> 2) The client starts sending CASes to the server(s).
> 3) Each CAS it sends depletes the pool; each CAS that finishes is added
> back to the pool.
> 4) Since you can generate CASes in the client faster than the pipeline can
> process them, a Queue builds up to hold the CASes the client has submitted,
> but are not yet being processed.  The size of this queue is limited to 50 -
> the size of the client CAS pool.
> 5) At some point the pool is empty - all 50 CASes are out either at the
> broker (in queue) or being processed.  At that point, the client's call to
> getCas will "hang" because the pool is empty, until one of the 50
> outstanding CASes returns.
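>
> In other words, the client-side CAS pool behaves like a counting semaphore
> with 50 permits: getCas blocks once all 50 CASes are checked out, and a
> permit only comes back when a processed CAS returns. A rough sketch of the
> idea in Java (just an illustration, not the actual UIMA AS code):
>
> import java.util.concurrent.Semaphore;
>
> // Toy model of the client CAS pool described above.
> public class CasPoolModel {
>     private final Semaphore pool = new Semaphore(50); // CasPoolSize = 50
>
>     // Analogous to getCAS(): blocks while all 50 CASes are out at the
>     // broker or in the pipeline, until a processed CAS is returned.
>     public void checkOutCas() throws InterruptedException {
>         pool.acquire();
>     }
>
>     // Analogous to the reply handling: a finished CAS goes back to the pool.
>     public void returnCas() {
>         pool.release();
>     }
> }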
>
> Does this sound like what's happening, or am I missing something?
>
> -Marshall
>
>> Below is the output of running the code:
>>
>> Preparing CAS
>> Prepared CAS time:14270
>> Preparing CAS
>> Prepared CAS time:14478
>>
>> Further proof of the getCAS() method's poor performance is given by the
>> attached snapshot taken from the jVisualVM profiler.
>>
>> I hope the above clarifies my problems and concerns.
>> I look forward to your suggestions and answers.
>>
>> Thank you.
>> Regards,
>>  Florin
>>
>>
>>
>>
>> On Wed, Feb 1, 2012 at 4:55 PM, Jaroslaw Cwiklik <uim...@gmail.com> wrote:
>>
>>    Sorry, I didn't finish my thought on question #1. If you see sendCAS()
>>    blocking, attach jConsole to the application (you may need to enable
>>    JMX), view the threads, and check where your application thread is
>>    blocking.
>>
>>    JC
>>
>>    On Wed, Feb 1, 2012 at 9:52 AM, Jaroslaw Cwiklik <uim...@gmail.com> wrote:
>>
>>    > Florin, from your description I can't figure out the cause of the
>>    > slowness that you see. Are you saying that your application thread is
>>    > stuck in the sendCAS() method as if it was waiting for a reply? That
>>    > is certainly not the intent behind this API. It is an asynchronous
>>    > call and should not wait for a reply once the request is dispatched.
>>    > How are you getting CASes? Do you have your own CAS pool, or do you
>>    > use the one the UimaAsynchronousEngine provides? How big is the CAS
>>    > pool? Are you getting any replies via the entityProcessComplete()
>>    > callback? Which version of uima-as are you using: 2.3.1 or a recent
>>    > build from SVN?
>>    >
>>    > To your questions:
>>    >
>>    > 1) What do you mean by "..disable the response feature"? sendCAS() is
>>    > an asynchronous method which should not block. If it is blocking, then
>>    > this is a bug in the UIMA AS client. To debug this problem, you can
>>    > 2) The UimaAsynchronousEngine can be called from multiple threads, and
>>    > the use of ThreadPoolExecutor seems fine.
>>    > 3) Have you tried to scale the pipeline to allow multiple CASes to be
>>    > processed at the same time?
>>    >
>>    > Jerry
>>    >
>>    >
>>    > On Wed, Feb 1, 2012 at 5:11 AM, Spico Florin <spicoflo...@gmail.com> wrote:
>>
>>    >
>>    >> Hello!
>>    >> I have an application client that receives messages from a queue via
>>    >> JMS. Each message is then packed into a JCas and sent to the UIMA AS
>>    >> pipeline via UimaAsynchronousEngine.
>>    >> If the UIMA AS pipeline processing is slow, it impacts the client: the
>>    >> messages received from the queue will not be sent on as they arrive.
>>    >> I'm using the sendCAS(CAS) method of UimaAsynchronousEngine, so the
>>    >> call to the pipeline should be asynchronous (as specified in the spec).
>>    >>  In my opinion the described behavior is not as expected (i.e. the
>>    >> client should not be affected by the UIMA pipeline's performance, and
>>    >> it should send the received messages for processing right away,
>>    >> without waiting for responses).
>>    >>  My questions are:
>>    >> 1. I suspect that my client is somehow waiting for the response from
>>    >> the pipeline. Is there any way to disable the response feature?
>>    >> 2. I'm using a thread pool executor to send the messages to the UIMA
>>    >> pipeline. Is this a good approach?
>>    >> 3. How should I design my client so that it sends the messages to the
>>    >> pipeline without being concerned about the pipeline's performance?
>>    >>
>>    >> I look forward to your answers and advice.
>>    >>  Thank you.
>>    >>   Best regards,
>>    >>
>>    >>    Florin
>>    >>
>>    >
>>    >
>>
>>
>>
