Reshu,
UIMA-AS configurations are normally used in DUCC as Services, either for
interactive applications or to support Jobs. They can be used in Jobs
themselves, but typically are not.
There is also a difference in the inputs between Job processes and
Services. Services will normally receive a CAS with the artifact…
Eddie,
I was using this same scenario and experimenting by trial and error to
compare it with UIMA-AS, since I think UIMA-AS can also scale the pipeline
this way. But with UIMA-AS I am unable to match the processing time of
DUCC's default configuration that you mentioned.
Can you help me in doing…
The simplest way of vertically scaling a Job process is to specify the
analysis pipeline using core UIMA descriptors and then use
--process_thread_count to specify how many copies of the pipeline to
deploy, each in its own thread. No use of UIMA-AS at all. Please check
out the "Raw Text Proce…
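As a sketch of the approach described above, a DUCC job submission might
look like the following. Everything here except --process_thread_count is
an assumption for illustration: the descriptor file names are placeholders,
and the exact option spellings should be checked against the DUCC
documentation for your release.

```
# Hypothetical job submission: plain core-UIMA descriptors, no UIMA-AS.
# --process_thread_count deploys 4 copies of the AE pipeline per job
# process, each copy running in its own thread.
ducc_submit \
  --driver_descriptor_CR  MyCollectionReader.xml \
  --process_descriptor_AE MyAggregateAE.xml \
  --process_thread_count  4
```

The same options can equally be placed in a job specification properties
file and passed to ducc_submit with --specification.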
Ohh!!! I misunderstood this. I thought this would scale both my aggregate
and its AEs.
I want to scale the aggregate as well as the individual AEs. Is there any
way of doing this in UIMA-AS/DUCC?
On 04/28/2015 07:14 PM, Jaroslaw Cwiklik wrote:
In an async aggregate you scale the individual AEs, not the aggregate as a
whole. The configuration below should do that. Are there any warnings from
dd2spring at startup with your configuration?
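For reference, the general shape of a UIMA-AS deployment descriptor that
scales a delegate inside an async aggregate is sketched below. The queue
name, broker URL, descriptor path, delegate key, and instance count are
all placeholders, not values from this thread:

```
<!-- Sketch of a UIMA-AS deployment descriptor scaling one delegate
     of an async aggregate; all names and paths are placeholders. -->
<analysisEngineDeploymentDescription
    xmlns="http://uima.apache.org/resourceSpecifier">
  <name>Scaled Aggregate</name>
  <deployment protocol="jms" provider="activemq">
    <service>
      <inputQueue endpoint="MyAggregateQueue"
                  brokerURL="tcp://localhost:61616"/>
      <topDescriptor>
        <import location="MyAggregateAE.xml"/>
      </topDescriptor>
      <analysisEngine async="true">
        <delegates>
          <!-- Scale only this delegate: 4 instances consume from its
               internal queue; the aggregate itself is not replicated. -->
          <analysisEngine key="MyDelegateAE">
            <scaleout numberOfInstances="4"/>
          </analysisEngine>
        </delegates>
      </analysisEngine>
    </service>
  </deployment>
</analysisEngineDeploymentDescription>
```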
Hi,
I was trying to scale my processing pipeline to run in the DUCC
environment with a UIMA-AS process_dd. When I tried to scale using the
configuration given below, the threads started were not as expected:
<analysisEngineDeploymentDescription xmlns="http://uima.apache.org/resourceSpecifier">
  <name>Uima v3 Deployment Descriptor</name>
  …