Re: Problem in running DUCC Job for Arabic Language

2018-11-06 Thread Jaroslaw Cwiklik
Forgot to mention: if you have a shared file system, the best practice
is not to serialize your content (the SOFA)
from the JD to the service. Instead, in the CR add the path of the file containing
the Subject of Analysis to the CAS, and have
the CM in the pipeline read the content from the shared file system.
-jerry
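As a plain-JDK sketch of why this pattern sidesteps the encoding problem (hypothetical names; the UIMA CR/CM plumbing is omitted): the CR puts only a file path into the CAS, and the CM reads the bytes back with an explicit UTF-8 charset, so the SOFA text never passes through the JVM's default charset.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SharedFsSofa {
    public static void main(String[] args) throws Exception {
        String sofa = "استعرض المتحدث باسم قوات التحالف";

        // CR side: the document already lives on the shared file system;
        // only its path goes into the CAS (simulated here with a temp file).
        Path doc = Files.createTempFile("workitem", ".txt");
        Files.write(doc, sofa.getBytes(StandardCharsets.UTF_8));
        String pathInCas = doc.toString();

        // CM side: read the bytes back with an explicit charset instead
        // of relying on file.encoding / LANG.
        byte[] bytes = Files.readAllBytes(Paths.get(pathInCas));
        String text = new String(bytes, StandardCharsets.UTF_8);

        System.out.println(text.equals(sofa)); // prints true
    }
}
```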



Re: Problem in running DUCC Job for Arabic Language

2018-11-06 Thread Jaroslaw Cwiklik
Can you try setting -Dfile.encoding=ISO-8859-1 for the service (job)
process and -Djavax.servlet.request.encoding=ISO-8859-1
-Dfile.encoding=ISO-8859-1 for the JD process.

The JD actually uses the Jetty webserver to serve service requests over HTTP. I
went as far as extracting the Jetty server code from the JD into a simple HTTP
server process, and also extracted the HttpClient-related code from the service
into a simple client process, to be able to test.

So on the server side I have:
String text = new String(
    "استعرض المتحدث باسم قوات «التحالف العربي لدعم".getBytes("UTF-8"),
    "ISO-8859-1");
response.setHeader("content-type", "text/xml");
String body = marshall(text);   // XStream serialization
response.getWriter().write(body);

On the client side:
  System.out.println("Default Locale:   " + Locale.getDefault());
  System.out.println("Default Charset:  " + Charset.defaultCharset());
  System.out.println("file.encoding;" + System.getProperty("file.encoding"));

  HttpResponse response = httpClient.execute(postMethod);
  HttpEntity entity = response.getEntity();
  String content = EntityUtils.toString(entity);
  String result = (String) unmarshall(content); // XStream unmarshall
  String o = new String(result.getBytes());
  System.out.println(o);

When I run with the above -D settings the client console shows:
Default Locale:   en_US
Default Charset:  ISO-8859-1
file.encoding;ISO-8859-1

استعرض المتحدث باسم قوات «التحالف العربي لدعم

Without the -D settings I don't see Arabic text; instead I see garbage on the
console.



Re: Problem in running DUCC Job for Arabic Language

2018-07-06 Thread rohit14csu173
Yes, if I run the AE as a DUCC UIMA-AS service and send it CASes from a UIMA-AS
client it works fine.
In fact the environment, i.e. the LANG argument, is the same for the UIMA-AS
service and the DUCC job.

Environ[3] = LANG=en_IN

And if I change to LANG=ar, then by the time the data arrives at the JD the
Arabic text has already been replaced with '?' (question marks), and the
encoding of the data coming into the JD or CR shows as ASCII.
I don't understand why this is happening.

Best
Rohit 




Re: Problem in running DUCC Job for Arabic Language

2018-07-05 Thread Eddie Epstein
So if you run the AE as a DUCC UIMA-AS service and send it CASes from some
UIMA-AS client it works OK? The full environment for every process that
DUCC launches is available via ducc-mon, under the Specification or
Registry tab for that job, managed reservation, or service. Please see if
the LANG setting for the service is different from the LANG setting for the
job.

One can also see the LANG setting for a Linux process ID by doing:

cat /proc/<pid>/environ

The LANG to be used for a DUCC process can be set by adding "LANG=xxx" to the
--environment argument as needed.

Thanks,
Eddie





Re: Problem in running DUCC Job for Arabic Language

2018-07-05 Thread rohit14csu173
Hey,
Yeah, you got it right: the first snippet is in the CR, before the data goes
into the CAS.
And the second snippet is in the first annotator (analysis engine, AE) of my
aggregate descriptor.
I am pretty sure this is an issue with the CAS used by DUCC, because if I use
a DUCC service, where we are supposed to send a CAS and receive the same CAS
back with added features, I get accurate results.

The problem only comes when submitting a job, where the CAS is generated by
DUCC.
This could also be an issue with the environment (language) of DUCC, because
the default language is English.

Best Regards
Rohit



Re: Problem in running DUCC Job for Arabic Language

2018-07-03 Thread Eddie Epstein
Rohit,

Before sending the data into jcas if i force encode it :-
>
> String content2 = null;
> content2 = new String(content.getBytes("UTF-8"), "ISO-8859-1");
> jcas.setDocumentText(content2);
>

Where is this code, in the job CR?



>
> And when i go in my first annotator i force decode it:-
>
> String content = null;
> > content = new String(jcas.getDocumentText().getBytes("ISO-8859-1"),
> > "UTF-8");
>

And is this in the first annotator of the job process, i.e. the CM?

Please be as specific as possible.

Thanks,
Eddie


Re: Problem in running DUCC Job for Arabic Language

2018-07-02 Thread rohit14csu173
Hey Eddie,

Before sending the data into the JCas, if I force-encode it:

String content2 = new String(content.getBytes("UTF-8"), "ISO-8859-1");
jcas.setDocumentText(content2);

and then in my first annotator force-decode it:

String content = new String(jcas.getDocumentText().getBytes("ISO-8859-1"), "UTF-8");

then the text comes through in Arabic without any problem. But I have many
analysis engines in my aggregate, and I can't hardcode this snippet
everywhere.
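For what it's worth, the reason this particular encode/decode pair round-trips can be shown with a small standalone test (plain JDK, no UIMA or DUCC involved). It works because ISO-8859-1 maps every byte value 0x00-0xFF to a character, so the reinterpretation loses nothing; note that it patches the symptom rather than the underlying charset mismatch:

```java
import java.nio.charset.StandardCharsets;

public class RoundTrip {
    public static void main(String[] args) {
        String original = "استعرض المتحدث باسم قوات";

        // "Force encode" (CR side): reinterpret the UTF-8 bytes as
        // ISO-8859-1. The result is mojibake, but reversible mojibake,
        // since every byte value maps to some ISO-8859-1 character.
        String wireSafe = new String(original.getBytes(StandardCharsets.UTF_8),
                                     StandardCharsets.ISO_8859_1);

        // "Force decode" (annotator side): recover the identical bytes
        // and decode them as UTF-8 again.
        String restored = new String(wireSafe.getBytes(StandardCharsets.ISO_8859_1),
                                     StandardCharsets.UTF_8);

        System.out.println(original.equals(restored)); // prints true
        // The same trick through e.g. US-ASCII would NOT round-trip:
        // every byte >= 0x80 would be replaced with '?' on the way out.
    }
}
```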

Maybe there is a problem in the Unicode handling of the CAS that is sent from
the collection reader to the analysis engine. I was thinking that if I could
find out which encoding the CAS uses, I could just encode the content to that
encoding and it might work fine.

Best Regards
Rohit


Re: Problem in running DUCC Job for Arabic Language

2018-06-18 Thread Eddie Epstein
Hi Rohit,

In a DUCC job, the CAS created by the user's CR in the Job Driver is serialized
into cas.xmi format and transported to the Job Process, where it is
deserialized and given to the user's analytics. Likely the problem is in CAS
serialization or deserialization, perhaps due to the active LANG
environment on the JD or JP machines?

Eddie
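To illustrate this failure mode with a plain-JDK sketch (not DUCC's actual serialization code): if any hop between the JD and the JP encodes the document text with a charset that cannot represent Arabic, Java silently substitutes '?' for each unmappable character, which matches the symptom in the logs:

```java
import java.nio.charset.StandardCharsets;

public class QuestionMarks {
    public static void main(String[] args) {
        String arabic = "استعرض المتحدث";

        // An encoding hop through a charset without Arabic coverage
        // (e.g. the default charset under LANG=C or en_US): getBytes()
        // silently replaces each unmappable character with '?'.
        byte[] throughAscii = arabic.getBytes(StandardCharsets.US_ASCII);
        String mangled = new String(throughAscii, StandardCharsets.US_ASCII);
        System.out.println(mangled); // every Arabic letter becomes '?'

        // A UTF-8 hop, by comparison, is lossless.
        String viaUtf8 = new String(arabic.getBytes(StandardCharsets.UTF_8),
                                    StandardCharsets.UTF_8);
        System.out.println(viaUtf8.equals(arabic)); // prints true
    }
}
```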



Problem in running DUCC Job for Arabic Language

2018-06-14 Thread Rohit yadav

Hey,

I use DUCC for the English language and it works without any problem.
But lately I tried deploying a job for the Arabic language, and all the
Arabic text content is replaced by *'?'* (question marks).

I am extracting data from Accumulo and, after processing, I send it to ES6.

When I checked the JD log files, they show that the Arabic data is coming
into the CR without any problem.
But another log file shows that the moment the data enters my AE, the
Arabic content is replaced by question marks.

Please find the log files attached to this mail.

I think this may be a problem in the CM, because the data is fine inside the
CR; and most interestingly, if I run the same pipeline through the CPM it
works without any problem, which suggests DUCC itself is facing some issue.

I'll look forward to your reply.

--
Best Regards,
*Rohit Yadav*
Wed Jun 13 18:25:47 2018
050 ducc_ling Version 2.2.1 compiled Nov 22 2017 at 15:59:27
4050 Limits:   CORE soft[0] hard[-1]
4050 Limits:CPU soft[-1] hard[-1]
4050 Limits:   DATA soft[-1] hard[-1]
4050 Limits:  FSIZE soft[-1] hard[-1]
4050 Limits:MEMLOCK soft[65536] hard[65536]
4050 Limits: NOFILE soft[65000] hard[65000]
4050 Limits:  NPROC soft[256774] hard[256774]
4050 Limits:RSS soft[-1] hard[-1]
4050 Limits:  STACK soft[8388608] hard[-1]
4050 Limits: AS soft[-1] hard[-1]
4050 Limits:  LOCKS soft[-1] hard[-1]
4050 Limits: SIGPENDING soft[256774] hard[256774]
4050 Limits:   MSGQUEUE soft[819200] hard[819200]
4050 Limits:   NICE soft[0] hard[0]
4050 Limits:  STACK soft[8388608] hard[-1]
4050 Limits: RTPRIO soft[0] hard[0]
1120 Changed to working directory /mario/Uima_Arabic_new_v_1.1
Environ[0] = DUCC_PROCESSID=0
Environ[1] = DUCC_UMASK=002
Environ[2] = USER=mario
Environ[3] = LANG=en_IN
Environ[4] = DUCC_STATE_UPDATE_PORT=52048
Environ[5] = DUCC_PROCESS_UNIQUEID=2196a7f9-1ecd-4716-9139-861ca674f834
Environ[6] = DUCC_JOBID=41010
Environ[7] = DUCC_IP=192.168.10.145
Environ[8] = DUCC_PROCESS_LOG_PREFIX=/mario/ducc/logs/41010/41010-JD-S145
Environ[9] = HOME=/mario
Environ[10] = DUCC_NODENAME=S145
1000 Command to exec: /usr/local/java/jdk1.8.0_25/jre/bin/java
arg[1]: 
-Dducc.deploy.configuration=/mario/apache-uima-ducc-2.2.1/resources/ducc.properties
arg[2]: -Dducc.deploy.components=jd
arg[3]: -Dducc.job.id=41010
arg[4]: -Xmx300M
arg[5]: -Dducc.deploy.JobId=41010
arg[6]: 
-Dducc.deploy.CollectionReaderXml=desc/orkash/Reader/Accumlo_collectionReaderDescriptor
arg[7]: 
-Dducc.deploy.UserClasspath=/mario/apache-uima-ducc-2.2.1/lib/uima-ducc/user/*:UimaArabicES6.jar
arg[8]: -Dducc.deploy.WorkItemTimeout=10
arg[9]: -Dducc.deploy.JobDirectory=/mario/ducc/logs
arg[10]: -Dducc.deploy.JpFlowController=org.apache.uima.ducc.FlowController
arg[11]: 
-Dducc.deploy.JpAeDescriptor=desc/orkash/Aggregate/Aggregate1_aeDescriptor
arg[12]: 
-Dducc.deploy.JpCcDescriptor=desc/orkash/CASConsumer/casConsumer_Descriptor
arg[13]: -Dducc.deploy.JpThreadCount=5
arg[14]: -DDUCC_HOME=/mario/apache-uima-ducc-2.2.1
arg[15]: -Dducc.deploy.JpUniqueId=2196a7f9-1ecd-4716-9139-861ca674f834
arg[16]: -Dducc.process.log.dir=/mario/ducc/logs/41010/
arg[17]: -Dducc.process.log.basename=41010-JD-S145
arg[18]: -classpath
arg[19]: 
/mario/apache-uima-ducc-2.2.1/lib/uima-ducc/*:/mario/apache-uima-ducc-2.2.1/lib/uima-ducc/user/*:/mario/apache-uima-ducc-2.2.1/apache-uima/lib/uima-core.jar:/mario/apache-uima-ducc-2.2.1/lib/apache-log4j/*:/mario/apache-uima-ducc-2.2.1/webserver/lib/*:/mario/apache-uima-ducc-2.2.1/apache-uima/apache-activemq/lib/*:/mario/apache-uima-ducc-2.2.1/apache-uima/apache-activemq/lib/optional/*:/mario/apache-uima-ducc-2.2.1/lib/apache-camel/*:/mario/apache-uima-ducc-2.2.1/lib/apache-commons/*:/mario/apache-uima-ducc-2.2.1/lib/google-gson/*:/mario/apache-uima-ducc-2.2.1/lib/springframework/*
arg[20]: org.apache.uima.ducc.common.main.DuccService
1001 Command launching...
13 Jun 2018 18:25:48,534  INFO DUCC.DuccService - J[N/A] T[1] Component  
Starting Component 
{ducc.agent.exclusion.file=/mario/apache-uima-ducc-2.2.1/resources/exclusion.nodes,
 file.encoding.pkg=sun.io, ducc.orchestrator.http.node=S145, 
ducc.sm.meta.ping.stability=10, ducc.rm.admin.endpoint.type=queue, 
ducc.default.process.per.item.time.max=1440, 
ducc.agent.managed.process.state.update.endpoint.type=socket, 
java.home=/usr/local/java/jdk1.8.0_25/jre, 
ducc.jd.share.quantum.reserve.count=3, ducc.sm.http.port=19988, 
ducc.jd.communications.scheme=https, ducc.rm.reserve_overage=0, ducc.head=S145, 
ducc.rm.class.definitions=ducc.classes, ducc.agent.jvm.args=-Xmx500M, 
ducc.jd.queue.timeout.minutes=5, 
ducc.daemons.state.change.endpoint=activemq:queue:ducc.daemons.state.change, 
ducc.broker.memory.options=-Xmx1G, 
java.endorsed.dirs=/usr/local/java/jdk1.8.0_25/jre/lib/endorsed, 
ducc.sm.api.endpoint=activemq:queue:ducc.sm.api,