DUCC: Unable to do "Fixed" type of Reservation

2016-03-31 Thread reshu.agarwal

Hi,

In DUCC 1.x we were able to make a "fixed" reservation for part of a node's 
memory, but DUCC 2.x restricts us to the "reserve" type of reservation. I want 
to know the reason for this change.


I am running DUCC on Ubuntu and have not been able to configure cgroups on it, 
so in DUCC 1.x I managed RAM utilization through FIXED reservations. Now I 
have no option.


I hope you can solve my problem.

Cheers.

Reshu.


Re: DUCC 2.0.1: Job is not going to complete State

2016-03-30 Thread reshu.agarwal

Hi,

The log file has no exceptions or errors.

I have found the issue. My CR was skipping some invalid documents inside 
hasNext(), so the total number of documents did not equal the count of 
processed documents.


In DUCC 1.x, the job completed with status "Premature" in this same case.

After removing that skipping code from hasNext() so that both counts match, 
the job completed successfully.


Reshu.
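
A minimal sketch of the fix described above, in the spirit of UIMA's
CollectionReader API (the Doc type, fetchDocs() and isValid() are hypothetical
placeholders, not the actual CR from this thread): filter invalid documents
once, up front, so hasNext() never skips and the total reported by
getProgress() always equals the number of work items actually delivered,
which is what DUCC needs in order to mark the job Completed.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.uima.cas.CAS;
import org.apache.uima.collection.CollectionException;
import org.apache.uima.collection.CollectionReader_ImplBase;
import org.apache.uima.util.Progress;
import org.apache.uima.util.ProgressImpl;

public class FilteringCollectionReader extends CollectionReader_ImplBase {

    static class Doc { String text; }

    private List<Doc> docs;  // only valid documents survive initialize()
    private int next = 0;

    @Override
    public void initialize() {
        docs = new ArrayList<Doc>();
        for (Doc d : fetchDocs()) {
            if (isValid(d)) {   // skip invalid documents HERE, not in hasNext()
                docs.add(d);
            }
        }
    }

    @Override
    public boolean hasNext() {
        return next < docs.size();  // no skipping: counts stay consistent
    }

    @Override
    public void getNext(CAS cas) throws IOException, CollectionException {
        cas.setDocumentText(docs.get(next++).text);
    }

    @Override
    public Progress[] getProgress() {
        // total equals what getNext() will deliver, so the job can finish
        return new Progress[] { new ProgressImpl(next, docs.size(), Progress.ENTITIES) };
    }

    @Override
    public void close() { }

    private List<Doc> fetchDocs() { return new ArrayList<Doc>(); } // placeholder source
    private boolean isValid(Doc d) { return d != null && d.text != null; }
}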

On 03/26/2016 02:27 PM, Lou DeGenaro wrote:

Does the job's JD log file have any exceptions or errors?

Lou.

On Fri, Mar 25, 2016 at 2:49 AM, reshu.agarwal 
wrote:


Hi,

I am facing a problem in DUCC 2.0.1: a job does not complete even after the
CR's hasNext() has returned false. I tested my job with both
"all_in_one=local" and "all_in_one=remote", and it completed successfully. But
in the running cluster environment the job fails to stop and keeps calling
hasNext() even after it has returned false.

I need help. Thanks in advance.

Reshu.





DUCC 2.0.1: Job is not going to complete State

2016-03-24 Thread reshu.agarwal

Hi,

I am facing a problem in DUCC 2.0.1: a job does not complete even after the 
CR's hasNext() has returned false. I tested my job with both 
"all_in_one=local" and "all_in_one=remote", and it completed successfully. But 
in the running cluster environment the job fails to stop and keeps calling 
hasNext() even after it has returned false.


I need help. Thanks in advance.

Reshu.


Re: DUCC - Work Item Queue Time Management

2016-01-13 Thread reshu.agarwal


But I did not need to do this with DUCC 1.1.0 and 1.0.0.

Reshu.

On 01/12/2016 06:35 PM, Lou DeGenaro wrote:


Reshu,

Very good.  Looks to me like no DUCC changes are needed with respect to
this issue.

Lou.

On Tue, Jan 12, 2016 at 12:07 AM, reshu.agarwal 
wrote:


Lou,

I placed names.txt in the config directory of the current working directory.

Reshu.

On 01/11/2016 06:57 PM, Lou DeGenaro wrote:


Reshu,

Am I to understand that you did this:

"Alternatively, you can place your own names.txt file either in the
current
working directory or in your config directory ('path.conf' setting or
$home/
config by default, $home being the value of 'path.home' setting or
'user.dir' system property by default)."

and that resolved your problem?

Lou.

On Mon, Jan 11, 2016 at 7:10 AM, reshu.agarwal 
wrote:

Lou,

The stack trace was not complete. There is a Caused by section; you can
check it here:

Caused by: org.elasticsearch.env.FailedToResolveConfigException: Failed to
resolve config path [names.txt], tried file path [names.txt], path file
[/home/ducc/Uima_Test/config/names.txt], and classpath
  at org.elasticsearch.env.Environment.resolveConfig(Environment.java:213)
  at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareSettings(InternalSettingsPreparer.java:119)
  at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:160)
  at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:126)

Here is a similar kind of problem:
http://stackoverflow.com/questions/21528766/error-while-using-elastic-search-while-creating-a-osgi-bundle
As I do not want to change the code, I chose to create the file.

Reshu.
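
For reference, a minimal sketch of the code-side alternative from the
documentation quoted above, assuming the Elasticsearch 1.x transport client
that appears in the stack trace (host, port and the config directory are
placeholders): pointing "path.conf" at the directory holding names.txt avoids
both changing the jar and copying the file into the working directory.

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class EsClientSketch {
    public static void main(String[] args) {
        // Tell the client where its config directory is, so names.txt resolves
        // no matter what the JD's current working directory happens to be.
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("path.conf", "/home/ducc/Uima_Test/config")
                .build();
        TransportClient client = new TransportClient(settings);
        client.addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
        try {
            System.out.println("Connected nodes: " + client.connectedNodes());
        } finally {
            client.close();
        }
    }
}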


On 01/07/2016 02:45 AM, Lou DeGenaro wrote:

Reshu,

Going back through this thread I see that you posted a stack trace on
9/25/15.  Is that the entire trace?  Is there any CausedBy section?

Lou.

On Wed, Jan 6, 2016 at 1:33 AM, reshu.agarwal wrote:

Lou,

The problem is the inability to resolve the config path. While initializing the
Job Driver, a class uses the file names.txt from the jar's config/names.txt but
tries to find it at /home/ducc/Uima_Test/config/names.txt. This happens in this
version of DUCC, and only during Job Driver initialization: a service created
with the same initialization method works without issue, and adding
*--all_in_one local* in the props file also runs the job successfully.

Hope this will help.

Thanks in advance.

Reshu.


On 01/06/2016 11:11 AM, reshu.agarwal wrote:

Dear Lou,


Sorry for the delay. I have tried this with DUCC 2.0.1: the job runs
successfully with this flag but shows "Job -1 submitted". If I remove this
additional flag, it still shows DriverProcessFailed.

Reshu.
On 10/06/2015 01:56 AM, Lou DeGenaro wrote:

Reshu,


Have you tried ducc_submit with the additional flag:

  *--all_in_one local*

?

Lou.

On Mon, Oct 5, 2015 at 12:25 AM, reshu.agarwal <
reshu.agar...@orkash.com
wrote:

Lou,

My job picks up the CR from test.jar, but it also uses other third-party
libraries that are in the lib folder. For example, suppose you are using a
MySQL database for getting data: the MySQL classes jar is in the lib folder,
not in test.jar.

I hope this clarifies the situation.

Reshu.


On 10/01/2015 06:46 PM, Lou DeGenaro wrote:

Reshu,

I have tried submitting jobs with the following problems:

1. user CP with missing UIMA jars
2. user CP with missing CR jar
3. user CP with CR xml file that specifies non-existent CR class
4. user CP with CR that throws NPE upon construction

In all cases an exception is logged that gives relevant information
to
solve the issue.  So far I'm unable to imagine how you are able to
create
java.lang.reflect.InvocationTargetException.

Lou.

On Thu, Oct 1, 2015 at 8:06 AM, Lou DeGenaro <
lou.degen...@gmail.com
wrote:

Reshu,

Are you able to share your (non-confidential) Job specification and
corresponding jar files that demonstrate the problem?

Lou.

On Thu, Oct 1, 2015 at 7:54 AM, reshu.agarwal <
reshu.agar...@orkash.com>
wrote:

Lou,

Yes, the classpath specification contains all classes needed to run my
Collection Reader.


On 10/01/2015 05:21 PM, Lou DeGenaro wrote:

Reshu,

In DUCC 2.0.0 we've introduced the idea of classpath separation so that the
user classpath is not contaminated by the DUCC classpath.  Thus, in the JD
there are 2 classpaths, one used by DUCC itself ("DUCC-side") and the other
used when running user code ("user-side").

When the JD is running on the DUCC-side it uses the classpath specified in
job.classpath.properties.  User code (e.g. your Collection Reader) does not
run under this path.

When the JD is running on the user-side, it uses the Java classloading
employing the classpath specified in your job specification.  If this
classpath is incomplete then needed classes will not be loadable.  So
everything needed

Re: DUCC 2.0.1 : JP Http Client Unable to Communicate with JD

2016-01-12 Thread reshu.agarwal

Adding to this, the JD is showing this log:

12 Jan 2016 15:25:52,028  INFO ActionGet - T[37] engage  seqNo=21 
remote=user.13895.34
[Fatal Error] :1:7109: The element type 
"org.apache.uima.ducc.container.net.impl.MetaCasTransaction" must be terminated by the matching 
end-tag "</org.apache.uima.ducc.container.net.impl.MetaCasTransaction>".
12 Jan 2016 15:25:52,071  WARN JD.log - J[N/A] T[36] engage /jdApp
com.thoughtworks.xstream.io.StreamException:  : The element type 
"org.apache.uima.ducc.container.net.impl.MetaCasTransaction" must be terminated by the matching 
end-tag "</org.apache.uima.ducc.container.net.impl.MetaCasTransaction>".
at 
com.thoughtworks.xstream.io.xml.DomDriver.createReader(DomDriver.java:86)
at 
com.thoughtworks.xstream.io.xml.DomDriver.createReader(DomDriver.java:66)
at com.thoughtworks.xstream.XStream.fromXML(XStream.java:853)
at com.thoughtworks.xstream.XStream.fromXML(XStream.java:845)
at 
org.apache.uima.ducc.common.utils.XStreamUtils.unmarshall(XStreamUtils.java:35)
at 
org.apache.uima.ducc.transport.configuration.jd.JobDriverConfiguration$JDServlet.doPost(JobDriverConfiguration.java:240)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:538)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:478)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:937)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:406)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:871)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:110)
at org.eclipse.jetty.server.Server.handle(Server.java:346)
at 
org.eclipse.jetty.server.HttpConnection.handleRequest(HttpConnection.java:589)
at 
org.eclipse.jetty.server.HttpConnection$RequestHandler.content(HttpConnection.java:1065)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:823)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:220)
at 
org.eclipse.jetty.server.HttpConnection.handle(HttpConnection.java:411)
at 
org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:535)
at 
org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:40)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:529)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 7109; The element type 
"org.apache.uima.ducc.container.net.impl.MetaCasTransaction" must be terminated by the matching 
end-tag "</org.apache.uima.ducc.container.net.impl.MetaCasTransaction>".
at 
com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:257)
at 
com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:347)
at 
com.thoughtworks.xstream.io.xml.DomDriver.createReader(DomDriver.java:79)
... 26 more
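
The error itself is reproducible outside DUCC. A minimal standalone sketch
(plain XStream, nothing DUCC-specific): the JD servlet hands the POST body to
XStream's DomDriver, which parses it as XML, so a payload that is cut off
mid-document fails with exactly this "must be terminated by the matching
end-tag" StreamException.

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.io.StreamException;
import com.thoughtworks.xstream.io.xml.DomDriver;

public class TruncatedXmlDemo {
    public static void main(String[] args) {
        XStream xstream = new XStream(new DomDriver());
        String ok = xstream.toXML("hello");            // "<string>hello</string>"
        System.out.println(xstream.fromXML(ok));       // round-trips fine
        String truncated = ok.substring(0, ok.length() - 5);  // cut off the end-tag
        try {
            xstream.fromXML(truncated);
        } catch (StreamException e) {
            // "...must be terminated by the matching end-tag..."
            System.out.println(e.getMessage());
        }
    }
}

This suggests the MetaCasTransaction XML arriving at the JD was truncated or
corrupted in transit, which is the thing to chase.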



On 01/12/2016 10:06 AM, reshu.agarwal wrote:


Hi,

I was getting this error after 17 out of 200 documents were processed. 
I am unable to find any reason for it. Please see the error below:


INFO: Asynchronous Client Has Been Initialized. Serialization 
Strategy: [SerializationStrategy] Ready To Process.

DuccAbstractProcessContainer.deploy() <<<<<<<< User Container deployed
 Deployed Processing Container - Initialization Successful - 
Thread 32

DuccAbstractProcessContainer.deploy() >>>>>>>>> Deploying User Container
... UimaProcessContainer.doDeploy()
11 Jan 2016 17:18:36,969  INFO AgentSession - T[29] 
notifyAgentWithStatus  ... Job Process State Changed - PID:24790. 
Process State: Initializing. JMX Url:N/A Dispatched State Update Event 
to Agent with IP:192.168.10.126

DuccAbstractProcessContainer.deploy() <<<<<<<< User Container deployed
 Deployed Processing Container - Initialization Successful - 
Thread 34

DuccAbstractProcessContainer.deploy() >>>>>>>>> Deploying User Container
... UimaProcessContainer.doDeploy()
DuccAbstractProcessContainer.deploy() <<<<<<<< User Container deployed
 Deployed Processing Container - Initialization Successful - 
Thread 33
11 Jan 2016 17:18:38,277  INFO JobProcessComponent - T[33] setState  
Notifying Agent New State:Running
11 Jan 2016 17:18:38,279  INFO AgentSession - T[1] 
notifyAgentWithStatus  ... Job Process State Changed - PID:24790. 
Process

Re: DUCC - Work Item Queue Time Management

2016-01-11 Thread reshu.agarwal

Lou,

I placed names.txt in the config directory of the current working directory.

Reshu.

On 01/11/2016 06:57 PM, Lou DeGenaro wrote:

Reshu,

Am I to understand that you did this:

"Alternatively, you can place your own names.txt file either in the current
working directory or in your config directory ('path.conf' setting or $home/
config by default, $home being the value of 'path.home' setting or
'user.dir' system property by default)."

and that resolved your problem?

Lou.

On Mon, Jan 11, 2016 at 7:10 AM, reshu.agarwal 
wrote:


Lou,

The stack trace was not complete. There is a Caused by section; you can
check it here:

Caused by: org.elasticsearch.env.FailedToResolveConfigException: Failed to
resolve config path [names.txt], tried file path [names.txt], path file
[/home/ducc/Uima_Test/config/names.txt], and classpath
 at org.elasticsearch.env.Environment.resolveConfig(Environment.java:213)
 at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareSettings(InternalSettingsPreparer.java:119)
 at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:160)
 at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:126)

Here is a similar kind of problem:
http://stackoverflow.com/questions/21528766/error-while-using-elastic-search-while-creating-a-osgi-bundle
As I do not want to change the code, I chose to create the file.

Reshu.


On 01/07/2016 02:45 AM, Lou DeGenaro wrote:


Reshu,

Going back through this thread I see that you posted a stack trace on
9/25/15.  Is that the entire trace?  Is there any CausedBy section?

Lou.

On Wed, Jan 6, 2016 at 1:33 AM, reshu.agarwal 
wrote:

Lou,

The problem is the inability to resolve the config path. While initializing the
Job Driver, a class uses the file names.txt from the jar's config/names.txt but
tries to find it at /home/ducc/Uima_Test/config/names.txt. This happens in this
version of DUCC, and only during Job Driver initialization: a service created
with the same initialization method works without issue, and adding
*--all_in_one local* in the props file also runs the job successfully.

Hope this will help.

Thanks in advance.

Reshu.


On 01/06/2016 11:11 AM, reshu.agarwal wrote:

Dear Lou,

Sorry for the delay. I have tried this with DUCC 2.0.1: the job runs
successfully with this flag but shows "Job -1 submitted". If I remove this
additional flag, it still shows DriverProcessFailed.

Reshu.
On 10/06/2015 01:56 AM, Lou DeGenaro wrote:

Reshu,

Have you tried ducc_submit with the additional flag:

 *--all_in_one local*

?

Lou.

On Mon, Oct 5, 2015 at 12:25 AM, reshu.agarwal <
reshu.agar...@orkash.com
wrote:

Lou,


My job picks up the CR from test.jar, but it also uses other third-party
libraries that are in the lib folder. For example, suppose you are using a
MySQL database for getting data: the MySQL classes jar is in the lib folder,
not in test.jar.

I hope this clarifies the situation.

Reshu.


On 10/01/2015 06:46 PM, Lou DeGenaro wrote:

Reshu,


I have tried submitting jobs with the following problems:

1. user CP with missing UIMA jars
2. user CP with missing CR jar
3. user CP with CR xml file that specifies non-existent CR class
4. user CP with CR that throws NPE upon construction

In all cases an exception is logged that gives relevant information
to
solve the issue.  So far I'm unable to imagine how you are able to
create
java.lang.reflect.InvocationTargetException.

Lou.

On Thu, Oct 1, 2015 at 8:06 AM, Lou DeGenaro 
corresponding jar files that demonstrate the problem?

Lou.

On Thu, Oct 1, 2015 at 7:54 AM, reshu.agarwal <
reshu.agar...@orkash.com>
wrote:

Lou,

Yes, the classpath specification contains all classes needed to run my
Collection Reader.


On 10/01/2015 05:21 PM, Lou DeGenaro wrote:

Reshu,

In DUCC 2.0.0 we've introduced the idea of classpath separation so that the
user classpath is not contaminated by the DUCC classpath.  Thus, in the JD
there are 2 classpaths, one used by DUCC itself ("DUCC-side") and the other
used when running user code ("user-side").

When the JD is running on the DUCC-side it uses the classpath specified in
job.classpath.properties.  User code (e.g. your Collection Reader) does not
run under this path.

When the JD is running on the user-side, it uses the Java classloading
employing the classpath specified in your job specification.  If this
classpath is incomplete then needed classes will not be loadable.  So
everything needed to run your Collection Reader must be explicitly
specified in the Job specification's (user-side) classpath.

Does the user-side classpath (the one in your job specification) contain
all classes needed to run your Collection Reader?

Lou.
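
A minimal sketch of the classpath-separation idea described above (an
illustration only, not DUCC's actual implementation; the jar path and class
name are hypothetical): user code is loaded through its own URLClassLoader
whose parent is not the DUCC-side loader, so only the jars named in the job
specification are visible to it.

import java.net.URL;
import java.net.URLClassLoader;

public class UserSideLoaderSketch {
    public static void main(String[] args) throws Exception {
        URL[] userClasspath = {
            new URL("file:/home/ducc/Uima_Test/test.jar"),   // hypothetical user jar
        };
        // Parent null = bootstrap loader only; the DUCC-side classpath
        // cannot leak into user code, and vice versa.
        ClassLoader userLoader = new URLClassLoader(userClasspath, null);
        Class<?> cr = userLoader.loadClass("com.example.DBCollectionReader"); // hypothetical CR
        Object instance = cr.getDeclaredConstructor().newInstance();
        System.out.println("Loaded " + instance.getClass().getName());
    }
}

The flip side is exactly the failure in this thread: if a class is missing
from the user-side list, no amount of editing the DUCC-side classpath will
make it loadable.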


On Thu, Oct 1, 2015 at 12:52 AM, reshu.agarwal <
reshu.agar...@orkash.com
wrote:

Dear Lou,

I have also tried by specifying the complete path of test.j

DUCC 2.0.1 : JP Http Client Unable to Communicate with JD

2016-01-11 Thread reshu.agarwal


Hi,

I was getting this error after 17 out of 200 documents were processed. I 
am unable to find any reason for it. Please see the error below:


INFO: Asynchronous Client Has Been Initialized. Serialization Strategy: 
[SerializationStrategy] Ready To Process.
DuccAbstractProcessContainer.deploy() <<<<<<<< User Container deployed
 Deployed Processing Container - Initialization Successful - Thread 32
DuccAbstractProcessContainer.deploy() >>>>>>>>> Deploying User Container
... UimaProcessContainer.doDeploy()
11 Jan 2016 17:18:36,969  INFO AgentSession - T[29] notifyAgentWithStatus  ... 
Job Process State Changed - PID:24790. Process State: Initializing. JMX Url:N/A 
Dispatched State Update Event to Agent with IP:192.168.10.126
DuccAbstractProcessContainer.deploy() <<<<<<<< User Container deployed
 Deployed Processing Container - Initialization Successful - Thread 34
DuccAbstractProcessContainer.deploy() >>>>>>>>> Deploying User Container
... UimaProcessContainer.doDeploy()
DuccAbstractProcessContainer.deploy() <<<<<<<< User Container deployed
 Deployed Processing Container - Initialization Successful - Thread 33
11 Jan 2016 17:18:38,277  INFO JobProcessComponent - T[33] setState  Notifying 
Agent New State:Running
11 Jan 2016 17:18:38,279  INFO AgentSession - T[1] notifyAgentWithStatus  ... 
Job Process State Changed - PID:24790. Process State: Running. JMX 
Url:service:jmx:rmi:///jndi/rmi://user:2106/jmxrmi Dispatched State Update 
Event to Agent with IP:192.168.10.126
11 Jan 2016 17:18:38,281  INFO AgentSession - T[33] notifyAgentWithStatus  ... 
Job Process State Changed - PID:24790. Process State: Running. JMX 
Url:service:jmx:rmi:///jndi/rmi://user:2106/jmxrmi Dispatched State Update 
Event to Agent with IP:192.168.10.126
11 Jan 2016 17:18:38,281  INFO HttpWorkerThread - T[33] HttpWorkerThread.run()  
Begin Processing Work Items - Thread Id:33
11 Jan 2016 17:18:38,285  INFO HttpWorkerThread - T[34] HttpWorkerThread.run()  
Begin Processing Work Items - Thread Id:34
11 Jan 2016 17:18:38,285  INFO HttpWorkerThread - T[32] HttpWorkerThread.run()  
Begin Processing Work Items - Thread Id:32
11 Jan 2016 17:18:38,458  INFO HttpWorkerThread - T[34] run  Thread:34 Recv'd 
WI:19
11 Jan 2016 17:18:38,468  INFO HttpWorkerThread - T[32] run  Thread:32 Recv'd 
WI:18
11 Jan 2016 17:18:38,478  INFO HttpWorkerThread - T[33] run  Thread:33 Recv'd 
WI:21
11 Jan 2016 17:18:38,515 ERROR DuccHttpClient - T[33] execute  Unable to Communicate with JD - 
Error:HTTP/1.1 500  : The element type 
"org.apache.uima.ducc.container.net.impl.MetaCasTransaction" must be terminated by the matching 
end-tag "</org.apache.uima.ducc.container.net.impl.MetaCasTransaction>".
11 Jan 2016 17:18:38,515 ERROR DuccHttpClient - T[33] execute  Content causing 
error:[B@3c0873f9
Thread::33 ERRR::Content causing error:[B@3c0873f9
11 Jan 2016 17:18:38,516 ERROR DuccHttpClient - T[33] run
java.lang.RuntimeException: JP Http Client Unable to Communicate with JD - Error:HTTP/1.1 500  : The 
element type "org.apache.uima.ducc.container.net.impl.MetaCasTransaction" must be terminated by 
the matching end-tag "</org.apache.uima.ducc.container.net.impl.MetaCasTransaction>".
at 
org.apache.uima.ducc.transport.configuration.jp.DuccHttpClient.execute(DuccHttpClient.java:226)
at 
org.apache.uima.ducc.transport.configuration.jp.HttpWorkerThread.run(HttpWorkerThread.java:178)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at 
org.apache.uima.ducc.transport.configuration.jp.UimaServiceThreadFactory$1.run(UimaServiceThreadFactory.java:85)
at java.lang.Thread.run(Thread.java:745)
11 Jan 2016 17:18:38,535 ERROR DuccHttpClient - T[33] execute  Unable to 
Communicate with JD - Error:HTTP/1.1 501 Method n>POST is not defined in RFC 
2068 and is not supported by the Servlet API
11 Jan 2016 17:18:38,535 ERROR DuccHttpClient - T[33] execute  Content causing 
error:[B@12e81893
Thread::33 ERRR::Content causing error:[B@12e81893
11 Jan 2016 17:18:38,535 ERROR DuccHttpClient - T[33] run
java.lang.RuntimeException: JP Http Client Unable to Communicate with JD - 
Error:HTTP/1.1 501 Method n>POST is not defined in RFC 2068 and is not 
supported by the Servlet API
at 
org.apache.uima.ducc.transport.configuration.jp.DuccHttpClient.execute(DuccHttpClient.java:226)
at 
org.apache.uima.ducc.transport.configuration.jp.HttpWorkerThread.run(HttpWorkerThread.java:178)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at 
org.apache.uima.ducc.transport.configurati
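
One note on reading these logs: "Content causing error:[B@3c0873f9" is Java's
default toString() for a byte[] (type signature plus identity hash), not the
request payload itself. A tiny sketch of the difference, and of how such a
buffer can be decoded for inspection:

import java.nio.charset.StandardCharsets;

public class ByteArrayLogDemo {
    public static void main(String[] args) {
        byte[] content = "<some-xml/>".getBytes(StandardCharsets.UTF_8);
        // Concatenating a byte[] prints "[B@<hash>", as in the JP log above.
        System.out.println("Content causing error:" + content);
        // Decoding the bytes shows the actual payload.
        System.out.println("Content causing error:" + new String(content, StandardCharsets.UTF_8));
    }
}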

Problem in DUCC 2.0.1 : 1100 Unable to initialize groups for ducc.: Operation not permitted

2016-01-11 Thread reshu.agarwal

Hi,

I am getting the message "1100 Unable to initialize groups for ducc.: 
Operation not permitted" when I click Stop on the DUCC Services page.


Please resolve.

Reshu.


Re: DUCC - Work Item Queue Time Management

2016-01-11 Thread reshu.agarwal

Lou,

The stack trace was not complete. There is a Caused by section; you can 
check it here:


Caused by: org.elasticsearch.env.FailedToResolveConfigException: Failed to 
resolve config path [names.txt], tried file path [names.txt], path file 
[/home/ducc/Uima_Test/config/names.txt], and classpath
at org.elasticsearch.env.Environment.resolveConfig(Environment.java:213)
at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareSettings(InternalSettingsPreparer.java:119)
at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:160)
at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:126)

Here is a similar kind of problem: 
http://stackoverflow.com/questions/21528766/error-while-using-elastic-search-while-creating-a-osgi-bundle.

As I do not want to change the code, I chose to create the file.

Reshu.

On 01/07/2016 02:45 AM, Lou DeGenaro wrote:

Reshu,

Going back through this thread I see that you posted a stack trace on
9/25/15.  Is that the entire trace?  Is there any CausedBy section?

Lou.

On Wed, Jan 6, 2016 at 1:33 AM, reshu.agarwal 
wrote:


Lou,

The problem is the inability to resolve the config path. While initializing the
Job Driver, a class uses the file names.txt from the jar's config/names.txt but
tries to find it at /home/ducc/Uima_Test/config/names.txt. This happens in this
version of DUCC, and only during Job Driver initialization: a service created
with the same initialization method works without issue, and adding
*--all_in_one local* in the props file also runs the job successfully.

Hope this will help.

Thanks in advance.

Reshu.


On 01/06/2016 11:11 AM, reshu.agarwal wrote:


Dear Lou,

Sorry for the delay. I have tried this with DUCC 2.0.1: the job runs
successfully with this flag but shows "Job -1 submitted". If I remove this
additional flag, it still shows DriverProcessFailed.

Reshu.
On 10/06/2015 01:56 AM, Lou DeGenaro wrote:


Reshu,

Have you tried ducc_submit with the additional flag:

*--all_in_one local*

?

Lou.

On Mon, Oct 5, 2015 at 12:25 AM, reshu.agarwal wrote:

My job picks up the CR from test.jar, but it also uses other third-party
libraries that are in the lib folder. For example, suppose you are using a
MySQL database for getting data: the MySQL classes jar is in the lib folder,
not in test.jar.

I hope this clarifies the situation.

Reshu.


On 10/01/2015 06:46 PM, Lou DeGenaro wrote:

Reshu,

I have tried submitting jobs with the following problems:

1. user CP with missing UIMA jars
2. user CP with missing CR jar
3. user CP with CR xml file that specifies non-existent CR class
4. user CP with CR that throws NPE upon construction

In all cases an exception is logged that gives relevant information to
solve the issue.  So far I'm unable to imagine how you are able to
create
java.lang.reflect.InvocationTargetException.

Lou.

On Thu, Oct 1, 2015 at 8:06 AM, Lou DeGenaro 
wrote:

Reshu,


Are you able to share your (non-confidential) Job specification and
corresponding jar files that demonstrate the problem?

Lou.

On Thu, Oct 1, 2015 at 7:54 AM, reshu.agarwal <
reshu.agar...@orkash.com>
wrote:

Lou,


Yes, the classpath specification contains all classes needed to run my
Collection Reader.


On 10/01/2015 05:21 PM, Lou DeGenaro wrote:

Reshu,


In DUCC 2.0.0 we've introduced the idea of classpath separation so that the
user classpath is not contaminated by the DUCC classpath.  Thus, in the JD
there are 2 classpaths, one used by DUCC itself ("DUCC-side") and the other
used when running user code ("user-side").

When the JD is running on the DUCC-side it uses the classpath specified in
job.classpath.properties.  User code (e.g. your Collection Reader) does not
run under this path.

When the JD is running on the user-side, it uses the Java classloading
employing the classpath specified in your job specification.  If this
classpath is incomplete then needed classes will not be loadable.  So
everything needed to run your Collection Reader must be explicitly
specified in the Job specification's (user-side) classpath.

Does the user-side classpath (the one in your job specification) contain
all classes needed to run your Collection Reader?

Lou.


On Thu, Oct 1, 2015 at 12:52 AM, reshu.agarwal <
reshu.agar...@orkash.com
wrote:

Dear Lou,

I have also tried specifying the complete path of test.jar, i.e.
/home/ducc/Uima_Test/test.jar. Yes, my job requires the directories and jar
in UserClasspath. The others are the UIMA and UIMA-AS jars. But the problem
is that these jars are not actually available in the JD classpath.

Reshu.

On 09/28/2015 05:51 PM, Lou DeGenaro wrote:

Reshu,

By my eye, the -classpath for the JD itself is correct, as yours seems to
exactly match mine.

With respect to the user specified code, your ducc.deploy.UserClasspath
differs from mine, as is expected.  However, I notice in your UserClasspath
the following:

Re: DUCC - Work Item Queue Time Management

2016-01-05 Thread reshu.agarwal

Lou,

The problem is the inability to resolve the config path. While initializing the 
Job Driver, a class uses the file names.txt from the jar's config/names.txt but 
tries to find it at /home/ducc/Uima_Test/config/names.txt. This happens in this 
version of DUCC, and only during Job Driver initialization: a service created 
with the same initialization method works without issue, and adding 
*--all_in_one local* in the props file also runs the job successfully.


Hope this will help.

Thanks in advance.

Reshu.

On 01/06/2016 11:11 AM, reshu.agarwal wrote:

Dear Lou,

Sorry for the delay. I have tried this with DUCC 2.0.1: the job runs 
successfully with this flag but shows "Job -1 submitted". If I remove this 
additional flag, it still shows DriverProcessFailed.


Reshu.
On 10/06/2015 01:56 AM, Lou DeGenaro wrote:

Reshu,

Have you tried ducc_submit with the additional flag:

   *--all_in_one local*

?

Lou.

On Mon, Oct 5, 2015 at 12:25 AM, reshu.agarwal 


wrote:


Lou,

My job picks up the CR from test.jar, but it also uses other third-party
libraries that are in the lib folder. For example, suppose you are using a
MySQL database for getting data: the MySQL classes jar is in the lib folder,
not in test.jar.

I hope this clarifies the situation.

Reshu.


On 10/01/2015 06:46 PM, Lou DeGenaro wrote:


Reshu,

I have tried submitting jobs with the following problems:

1. user CP with missing UIMA jars
2. user CP with missing CR jar
3. user CP with CR xml file that specifies non-existent CR class
4. user CP with CR that throws NPE upon construction

In all cases an exception is logged that gives relevant information to
solve the issue.  So far I'm unable to imagine how you are able to 
create

java.lang.reflect.InvocationTargetException.

Lou.

On Thu, Oct 1, 2015 at 8:06 AM, Lou DeGenaro 
wrote:

Reshu,

Are you able to share your (non-confidential) Job specification and
corresponding jar files that demonstrate the problem?

Lou.

On Thu, Oct 1, 2015 at 7:54 AM, reshu.agarwal 


wrote:

Lou,

Yes, the classpath specification contains all classes needed to run my
Collection Reader.


On 10/01/2015 05:21 PM, Lou DeGenaro wrote:

Reshu,
In DUCC 2.0.0 we've introduced the idea of classpath separation so that the
user classpath is not contaminated by the DUCC classpath.  Thus, in the JD
there are 2 classpaths, one used by DUCC itself ("DUCC-side") and the other
used when running user code ("user-side").

When the JD is running on the DUCC-side it uses the classpath specified in
job.classpath.properties.  User code (e.g. your Collection Reader) does not
run under this path.

When the JD is running on the user-side, it uses the Java classloading
employing the classpath specified in your job specification.  If this
classpath is incomplete then needed classes will not be loadable.  So
everything needed to run your Collection Reader must be explicitly
specified in the Job specification's (user-side) classpath.

Does the user-side classpath (the one in your job specification) contain
all classes needed to run your Collection Reader?

Lou.


On Thu, Oct 1, 2015 at 12:52 AM, reshu.agarwal <
reshu.agar...@orkash.com
wrote:

Dear Lou,


I have also tried specifying the complete path of test.jar, i.e.
/home/ducc/Uima_Test/test.jar. Yes, my job requires the directories and jar
in UserClasspath. The others are the UIMA and UIMA-AS jars. But the problem
is that these jars are not actually available in the JD classpath.

Reshu.

On 09/28/2015 05:51 PM, Lou DeGenaro wrote:

Reshu,

By my eye, the -classpath for the JD itself is correct, as yours seems to
exactly match mine.

With respect to the user specified code, your ducc.deploy.UserClasspath
differs from mine, as is expected.  However, I notice in your UserClasspath
the following: /home/ducc/Uima_Test/lib/*:test.jar:.  There is no path to
test.jar?  Also, does your Job really use the other directories & jars in
UserClasspath?

Lou.

On Mon, Sep 28, 2015 at 7:52 AM, reshu.agarwal <
reshu.agar...@orkash.com>
wrote:

The log is:

1000 Command to exec: /usr/java/jdk1.7.0_71/jre/bin/java

arg[1]:
-DDUCC_HOME=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT
arg[2]:



-Dducc.deploy.configuration=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/resources/ducc.properties 


arg[3]: -Dducc.agent.process.state.update.port=56622
arg[4]: -Dducc.process.log.dir=/home/ducc/ducc/logs/67/
arg[5]: -Dducc.process.log.basename=67-JD-S211
arg[6]: -Dducc.job.id=67
arg[7]:



-Dducc.deploy.configuration=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/resources/ducc.properties 


arg[8]: -Dducc.deploy.components=jd
arg[9]: -Dducc.job.id=67
arg[10]: -Xmx300M
arg[11]: -Dducc.deploy.JobId=67
arg[12]:



-Dducc.deploy.CollectionReaderXml=desc/collection_reader/DBCollectionReader 


arg[13]:



-Dducc.deploy.UserClasspath=/home/ducc/apac

Re: DUCC - Work Item Queue Time Management

2016-01-05 Thread reshu.agarwal

Dear Lou,

Sorry for the delay. I have tried this with DUCC 2.0.1: the job runs 
successfully with this flag but shows "Job -1 submitted". If I remove this 
additional flag, it still shows DriverProcessFailed.


Reshu.
On 10/06/2015 01:56 AM, Lou DeGenaro wrote:

Reshu,

Have you tried ducc_submit with the additional flag:

   *--all_in_one local*

?

Lou.

On Mon, Oct 5, 2015 at 12:25 AM, reshu.agarwal 
wrote:


Lou,

My job picks up the CR from test.jar, but it also uses other third-party
libraries that are in the lib folder. For example, suppose you are using a
MySQL database for getting data: the MySQL classes jar is in the lib folder,
not in test.jar.

I hope this clarifies the situation.

Reshu.


On 10/01/2015 06:46 PM, Lou DeGenaro wrote:


Reshu,

I have tried submitting jobs with the following problems:

1. user CP with missing UIMA jars
2. user CP with missing CR jar
3. user CP with CR xml file that specifies non-existent CR class
4. user CP with CR that throws NPE upon construction

In all cases an exception is logged that gives relevant information to
solve the issue.  So far I'm unable to imagine how you are able to create
java.lang.reflect.InvocationTargetException.

Lou.

On Thu, Oct 1, 2015 at 8:06 AM, Lou DeGenaro 
wrote:

Reshu,

Are you able to share your (non-confidential) Job specification and
corresponding jar files that demonstrate the problem?

Lou.

On Thu, Oct 1, 2015 at 7:54 AM, reshu.agarwal 
wrote:

Lou,

Yes, the classpath specification contains all classes needed to run my
Collection Reader.


On 10/01/2015 05:21 PM, Lou DeGenaro wrote:

Reshu,

In DUCC 2.0.0 we've introduced the idea of classpath separation so that the
user classpath is not contaminated by the DUCC classpath.  Thus, in the JD
there are 2 classpaths, one used by DUCC itself ("DUCC-side") and the other
used when running user code ("user-side").

When the JD is running on the DUCC-side it uses the classpath specified in
job.classpath.properties.  User code (e.g. your Collection Reader) does not
run under this path.

When the JD is running on the user-side, it uses the Java classloading
employing the classpath specified in your job specification.  If this
classpath is incomplete then needed classes will not be loadable.  So
everything needed to run your Collection Reader must be explicitly
specified in the Job specification's (user-side) classpath.

Does the user-side classpath (the one in your job specification) contain
all classes needed to run your Collection Reader?

Lou.


On Thu, Oct 1, 2015 at 12:52 AM, reshu.agarwal <
reshu.agar...@orkash.com
wrote:

Dear Lou,


I have also tried specifying the complete path of test.jar, i.e.
/home/ducc/Uima_Test/test.jar. Yes, my job requires the directories and jar
in UserClasspath. The others are the UIMA and UIMA-AS jars. But the problem
is that these jars are not actually available in the JD classpath.

Reshu.

On 09/28/2015 05:51 PM, Lou DeGenaro wrote:

Reshu,


By my eye, the -classpath for the JD itself is correct, as yours seems to
exactly match mine.

With respect to the user specified code, your ducc.deploy.UserClasspath
differs from mine, as is expected.  However, I notice in your UserClasspath
the following: /home/ducc/Uima_Test/lib/*:test.jar:.  There is no path to
test.jar?  Also, does your Job really use the other directories & jars in
UserClasspath?

Lou.

On Mon, Sep 28, 2015 at 7:52 AM, reshu.agarwal <
reshu.agar...@orkash.com>
wrote:

The log is:

1000 Command to exec: /usr/java/jdk1.7.0_71/jre/bin/java

arg[1]:
-DDUCC_HOME=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT
arg[2]:



-Dducc.deploy.configuration=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/resources/ducc.properties
arg[3]: -Dducc.agent.process.state.update.port=56622
arg[4]: -Dducc.process.log.dir=/home/ducc/ducc/logs/67/
arg[5]: -Dducc.process.log.basename=67-JD-S211
arg[6]: -Dducc.job.id=67
arg[7]:



-Dducc.deploy.configuration=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/resources/ducc.properties
arg[8]: -Dducc.deploy.components=jd
arg[9]: -Dducc.job.id=67
arg[10]: -Xmx300M
arg[11]: -Dducc.deploy.JobId=67
arg[12]:



-Dducc.deploy.CollectionReaderXml=desc/collection_reader/DBCollectionReader
arg[13]:



-Dducc.deploy.UserClasspath=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-ducc/examples/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/lib/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/optional/*:/home/ducc/Uima_Test/lib/*:test.jar:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-ducc/user/*
arg[14]: -Dducc.deploy.WorkItemTimeout=5
arg[15]: -Dducc.deploy.JobDirectory=/home/ducc/ducc/logs/
arg[16]:
-Dducc.deploy.JpFlowController=org.apache.uima.ducc.FlowController
arg[17]:
-Dducc.deploy.

Re: DUCC 1.1.0- Remain in Completing state.

2016-01-04 Thread reshu.agarwal
I forgot to mention one thing: after killing the job, the next job is unable 
to initialize and remains in the "WaitingForDriver" state. I have also checked 
sm.log, or.log, pm.log, etc. but failed to find anything. I have to restart 
DUCC to run jobs again.


Reshu.

On 01/05/2016 11:14 AM, reshu.agarwal wrote:

Hi,

I am using DUCC version 1.1.0. I am facing an issue with my job: it remains in 
Completing state even after the stop process has been initiated. My job used 
two processes. The Job Driver logs:


Jan 04, 2016 12:43:13 PM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl 
stop

INFO: Stopping Asynchronous Client.
Jan 04, 2016 12:43:13 PM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl 
stop

INFO: Asynchronous Client Has Stopped.
Jan 04, 2016 12:43:13 PM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl$SharedConnection 
destroy

INFO: UIMA AS Client Shared Connection Has Been Closed
Jan 04, 2016 12:43:13 PM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl stop

INFO: UIMA AS Client Undeployed All Containers

One process logs:

Jan 04, 2016 12:44:50 PM 
org.apache.uima.adapter.jms.activemq.JmsInputChannel stopChannel
INFO: Stopping Service JMS Transport. Service: ducc.jd.queue.87494 
ShutdownNow false
Jan 04, 2016 12:44:50 PM 
org.apache.uima.adapter.jms.activemq.JmsInputChannel stopChannel
INFO: Controller: ducc.jd.queue.87494 Stopped Listener on Endpoint: 
queue://ducc.jd.queue.87494 Selector: Selector:Command=2000 OR 
Command=2002.


But the other process does not have any log of stopping the process.

The problem is that the processes are not all completely undeployed. I have to 
use a command to cancel the process: /ducc_install/bin$ ./ducc_cancel --id 
87494 --dpid 4529.


Sometimes this cancels the process; otherwise I have to use "kill -9" to kill 
the job forcefully.


Kindly help.

Thanks in advance.

Reshu.






DUCC 1.1.0- Remain in Completing state.

2016-01-04 Thread reshu.agarwal

Hi,

I am using DUCC version 1.1.0. I am facing an issue with my job: it remains in 
Completing state even after the stop process has been initiated. My job used 
two processes. The Job Driver logs:


Jan 04, 2016 12:43:13 PM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl stop
INFO: Stopping Asynchronous Client.
Jan 04, 2016 12:43:13 PM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl stop
INFO: Asynchronous Client Has Stopped.
Jan 04, 2016 12:43:13 PM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl$SharedConnection
 destroy
INFO: UIMA AS Client Shared Connection Has Been Closed
Jan 04, 2016 12:43:13 PM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl stop
INFO: UIMA AS Client Undeployed All Containers

One process logs:

Jan 04, 2016 12:44:50 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel 
stopChannel
INFO: Stopping Service JMS Transport. Service: ducc.jd.queue.87494 ShutdownNow 
false
Jan 04, 2016 12:44:50 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel 
stopChannel
INFO: Controller: ducc.jd.queue.87494 Stopped Listener on Endpoint: 
queue://ducc.jd.queue.87494 Selector:  Selector:Command=2000 OR Command=2002.

But the other process does not have any log of stopping the process.

The problem is that the processes are not all completely undeployed. I have to 
use a command to cancel the process: /ducc_install/bin$ ./ducc_cancel --id 
87494 --dpid 4529.


Sometimes this cancels the process; otherwise I have to use "kill -9" to kill 
the job forcefully.


Kindly help.

Thanks in advance.

Reshu.



Re: DUCC - Work Item Queue Time Management

2015-10-04 Thread reshu.agarwal

Lou,

My job picks up the CR from test.jar, but it also uses other third-party 
libraries that are in the lib folder. For example, suppose you are using a 
MySQL database for getting data: the MySQL classes jar is in the lib folder, 
not in test.jar.

I hope this clarifies the situation.

Reshu.

On 10/01/2015 06:46 PM, Lou DeGenaro wrote:

Reshu,

I have tried submitting jobs with the following problems:

1. user CP with missing UIMA jars
2. user CP with missing CR jar
3. user CP with CR xml file that specifies non-existent CR class
4. user CP with CR that throws NPE upon construction

In all cases an exception is logged that gives relevant information to
solve the issue.  So far I'm unable to imagine how you are able to create
java.lang.reflect.InvocationTargetException.

Lou.

On Thu, Oct 1, 2015 at 8:06 AM, Lou DeGenaro  wrote:


Reshu,

Are you able to share your (non-confidential) Job specification and
corresponding jar files that demonstrate the problem?

Lou.

On Thu, Oct 1, 2015 at 7:54 AM, reshu.agarwal 
wrote:


Lou,

Yes, the classpath specification contains all classes needed to run my
Collection Reader.


On 10/01/2015 05:21 PM, Lou DeGenaro wrote:


Reshu,

In DUCC 2.0.0 we've introduced the idea of classpath separation so that the
user classpath is not contaminated by the DUCC classpath.  Thus, in the JD
there are 2 classpaths, one used by DUCC itself ("DUCC-side") and the other
used when running user code ("user-side").

When the JD is running on the DUCC-side it uses the classpath specified in
job.classpath.properties.  User code (e.g. your Collection Reader) does not
run under this path.

When the JD is running on the user-side, it uses the Java classloading
employing the classpath specified in your job specification.  If this
classpath is incomplete then needed classes will not be loadable.  So
everything needed to run your Collection Reader must be explicitly
specified in the Job specification's (user-side) classpath.

Does the user-side classpath (the one in your job specification) contain
all classes needed to run your Collection Reader?

Lou.


On Thu, Oct 1, 2015 at 12:52 AM, reshu.agarwal wrote:

I have also tried specifying the complete path of test.jar, i.e.
/home/ducc/Uima_Test/test.jar. Yes, my job requires the directories and jar
in UserClasspath. The others are the UIMA and UIMA-AS jars. But the problem
is that these jars are not actually available in the JD classpath.

Reshu.

On 09/28/2015 05:51 PM, Lou DeGenaro wrote:

Reshu,

By my eye, the -classpath for the JD itself is correct, as yours seems to
exactly match mine.

With respect to the user specified code, your ducc.deploy.UserClasspath
differs from mine, as is expected.  However, I notice in your UserClasspath
the following: /home/ducc/Uima_Test/lib/*:test.jar:.  There is no path to
test.jar?  Also, does your Job really use the other directories & jars in
UserClasspath?

Lou.

On Mon, Sep 28, 2015 at 7:52 AM, reshu.agarwal <
reshu.agar...@orkash.com>
wrote:

The log is:


1000 Command to exec: /usr/java/jdk1.7.0_71/jre/bin/java
   arg[1]: -DDUCC_HOME=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT
   arg[2]:


-Dducc.deploy.configuration=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/resources/ducc.properties
   arg[3]: -Dducc.agent.process.state.update.port=56622
   arg[4]: -Dducc.process.log.dir=/home/ducc/ducc/logs/67/
   arg[5]: -Dducc.process.log.basename=67-JD-S211
   arg[6]: -Dducc.job.id=67
   arg[7]:


-Dducc.deploy.configuration=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/resources/ducc.properties
   arg[8]: -Dducc.deploy.components=jd
   arg[9]: -Dducc.job.id=67
   arg[10]: -Xmx300M
   arg[11]: -Dducc.deploy.JobId=67
   arg[12]:


-Dducc.deploy.CollectionReaderXml=desc/collection_reader/DBCollectionReader
   arg[13]:


-Dducc.deploy.UserClasspath=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-ducc/examples/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/lib/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/optional/*:/home/ducc/Uima_Test/lib/*:test.jar:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-ducc/user/*
   arg[14]: -Dducc.deploy.WorkItemTimeout=5
   arg[15]: -Dducc.deploy.JobDirectory=/home/ducc/ducc/logs/
   arg[16]:
-Dducc.deploy.JpFlowController=org.apache.uima.ducc.FlowController
   arg[17]:
-Dducc.deploy.JpAeDescriptor=desc/ae/aggregate/AggDescriptor
   arg[18]:
-Dducc.deploy.JpCcDescriptor=desc/cas_consumer/CasConsumerDescriptor
   arg[19]: -Dducc.deploy.JpDdName=DUCC.Job
   arg[20]: -Dducc.deploy.JpDdDescription=DUCC.Generated
   arg[21]: -Dducc.deploy.JpThreadCount=3
   arg[22]: -Dducc.deploy.JpDdBrokerURL=${broker.name}
   arg[23]: -Dducc.deploy.JpDdBrokerEndpoint=${queue.name}
   arg[24]: -classpath
   arg[25]:


/home/ducc/apache-uima-ducc-2.1.0-SNAPSHO

Re: DUCC - Work Item Queue Time Management

2015-10-01 Thread reshu.agarwal

Lou,

Yes, the classpath specification contains all classes needed to run my 
Collection Reader.


On 10/01/2015 05:21 PM, Lou DeGenaro wrote:

Reshu,

In DUCC 2.0.0 we've introduced the idea of classpath separation so that the
user classpath is not contaminated by the DUCC classpath.  Thus, in the JD
there are 2 classpaths, one used by DUCC itself ("DUCC-side") and the other
used when running user code ("user-side").

When the JD is running on the DUCC-side it uses the classpath specified in
job.classpath.properties.  User code (e.g. your Collection Reader) does not
run under this path.

When the JD is running on the user-side, it uses the Java classloading
employing the classpath specified in your job specification.  If this
classpath is incomplete then needed classes will not be loadable.  So
everything needed to run your Collection Reader must be explicitly
specified in the Job specification's (user-side) classpath.

Does the user-side classpath (the one in your job specification) contain
all classes needed to run your Collection Reader?

Lou.


On Thu, Oct 1, 2015 at 12:52 AM, reshu.agarwal 
wrote:


Dear Lou,

I have also tried specifying the complete path of test.jar, i.e.
/home/ducc/Uima_Test/test.jar. Yes, my job requires the directories and jar
in UserClasspath. The others are the UIMA and UIMA-AS jars. But the problem
is that these jars are not actually available in the JD classpath.

Reshu.

On 09/28/2015 05:51 PM, Lou DeGenaro wrote:


Reshu,

By my eye, the -classpath for the JD itself is correct, as yours seems to
exactly match mine.

With respect to the user specified code, your ducc.deploy.UserClasspath
differs from mine, as is expected.  However, I notice in your
UserClasspath
the following: /home/ducc/Uima_Test/lib/*:test.jar:.  There is no path to
test.jar?  Also, does your Job really use the other directories & jars in
UserClasspath?

Lou.

On Mon, Sep 28, 2015 at 7:52 AM, reshu.agarwal 
wrote:

The log is:

1000 Command to exec: /usr/java/jdk1.7.0_71/jre/bin/java
  arg[1]: -DDUCC_HOME=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT
  arg[2]:

-Dducc.deploy.configuration=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/resources/ducc.properties
  arg[3]: -Dducc.agent.process.state.update.port=56622
  arg[4]: -Dducc.process.log.dir=/home/ducc/ducc/logs/67/
  arg[5]: -Dducc.process.log.basename=67-JD-S211
  arg[6]: -Dducc.job.id=67
  arg[7]:

-Dducc.deploy.configuration=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/resources/ducc.properties
  arg[8]: -Dducc.deploy.components=jd
  arg[9]: -Dducc.job.id=67
  arg[10]: -Xmx300M
  arg[11]: -Dducc.deploy.JobId=67
  arg[12]:

-Dducc.deploy.CollectionReaderXml=desc/collection_reader/DBCollectionReader
  arg[13]:

-Dducc.deploy.UserClasspath=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-ducc/examples/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/lib/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/optional/*:/home/ducc/Uima_Test/lib/*:test.jar:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-ducc/user/*
  arg[14]: -Dducc.deploy.WorkItemTimeout=5
  arg[15]: -Dducc.deploy.JobDirectory=/home/ducc/ducc/logs/
  arg[16]:
-Dducc.deploy.JpFlowController=org.apache.uima.ducc.FlowController
  arg[17]:
-Dducc.deploy.JpAeDescriptor=desc/ae/aggregate/AggDescriptor
  arg[18]:
-Dducc.deploy.JpCcDescriptor=desc/cas_consumer/CasConsumerDescriptor
  arg[19]: -Dducc.deploy.JpDdName=DUCC.Job
  arg[20]: -Dducc.deploy.JpDdDescription=DUCC.Generated
  arg[21]: -Dducc.deploy.JpThreadCount=3
  arg[22]: -Dducc.deploy.JpDdBrokerURL=${broker.name}
  arg[23]: -Dducc.deploy.JpDdBrokerEndpoint=${queue.name}
  arg[24]: -classpath
  arg[25]:

/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-ducc/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/lib/uima-core.jar:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/apache-log4j/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/webserver/lib/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/http-client/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/apache-camel/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/apache-commons/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/google-gson/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/springframework/*
  arg[26]: org.apache.uima.ducc.common.main.DuccService
Reshu.


1001 Command launching...

On 09/28/2015 05:11 PM, Lou DeGenaro wrote:

I take it you are getting the previously posted stack trace from the DUCC
Job's JD log file?  Near the top of that file should be something like:

1000 Command to exec: /home/degenaro/local/sun/jdk1.7.0_79/jre/bin/java
   arg[1]:
-DDUCC_HOME=/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT
   arg[2]:

-Dd

Re: DUCC - Work Item Queue Time Management

2015-09-30 Thread reshu.agarwal

Dear Lou,

I have also tried specifying the complete path of test.jar, i.e. 
/home/ducc/Uima_Test/test.jar. Yes, my job requires the directories and jar 
in UserClasspath. The others are the UIMA and UIMA-AS jars. But the problem 
is that these jars are not actually available in the JD classpath.


Reshu.
On 09/28/2015 05:51 PM, Lou DeGenaro wrote:

Reshu,

By my eye, the -classpath for the JD itself is correct, as yours seems to
exactly match mine.

With respect to the user specified code, your ducc.deploy.UserClasspath
differs from mine, as is expected.  However, I notice in your UserClasspath
the following: /home/ducc/Uima_Test/lib/*:test.jar:.  There is no path to
test.jar?  Also, does your Job really use the other directories & jars in
UserClasspath?

Lou.

On Mon, Sep 28, 2015 at 7:52 AM, reshu.agarwal 
wrote:


The log is:

1000 Command to exec: /usr/java/jdk1.7.0_71/jre/bin/java
 arg[1]: -DDUCC_HOME=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT
 arg[2]:
-Dducc.deploy.configuration=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/resources/ducc.properties
 arg[3]: -Dducc.agent.process.state.update.port=56622
 arg[4]: -Dducc.process.log.dir=/home/ducc/ducc/logs/67/
 arg[5]: -Dducc.process.log.basename=67-JD-S211
 arg[6]: -Dducc.job.id=67
 arg[7]:
-Dducc.deploy.configuration=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/resources/ducc.properties
 arg[8]: -Dducc.deploy.components=jd
 arg[9]: -Dducc.job.id=67
 arg[10]: -Xmx300M
 arg[11]: -Dducc.deploy.JobId=67
 arg[12]:
-Dducc.deploy.CollectionReaderXml=desc/collection_reader/DBCollectionReader
 arg[13]:
-Dducc.deploy.UserClasspath=/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-ducc/examples/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/lib/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/optional/*:/home/ducc/Uima_Test/lib/*:test.jar:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-ducc/user/*
 arg[14]: -Dducc.deploy.WorkItemTimeout=5
 arg[15]: -Dducc.deploy.JobDirectory=/home/ducc/ducc/logs/
 arg[16]:
-Dducc.deploy.JpFlowController=org.apache.uima.ducc.FlowController
 arg[17]: -Dducc.deploy.JpAeDescriptor=desc/ae/aggregate/AggDescriptor
 arg[18]:
-Dducc.deploy.JpCcDescriptor=desc/cas_consumer/CasConsumerDescriptor
 arg[19]: -Dducc.deploy.JpDdName=DUCC.Job
 arg[20]: -Dducc.deploy.JpDdDescription=DUCC.Generated
 arg[21]: -Dducc.deploy.JpThreadCount=3
 arg[22]: -Dducc.deploy.JpDdBrokerURL=${broker.name}
 arg[23]: -Dducc.deploy.JpDdBrokerEndpoint=${queue.name}
 arg[24]: -classpath
 arg[25]:
/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-ducc/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/lib/uima-core.jar:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/apache-log4j/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/webserver/lib/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/http-client/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/apache-camel/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/apache-commons/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/google-gson/*:/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/springframework/*
 arg[26]: org.apache.uima.ducc.common.main.DuccService
Reshu.


1001 Command launching...

On 09/28/2015 05:11 PM, Lou DeGenaro wrote:

I take it you are getting the previously posted stack trace from the DUCC
Job's JD log file?  Near the top of that file should be something like:

1000 Command to exec: /home/degenaro/local/sun/jdk1.7.0_79/jre/bin/java
  arg[1]:
-DDUCC_HOME=/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT
  arg[2]:
-Dducc.deploy.configuration=/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT/resources/ducc.properties
  arg[3]: -Dducc.agent.process.state.update.port=47941
  arg[4]:
-Dducc.process.log.dir=/tmp/ducc/driver/kiwi/ducc/logs/71370038/305/
  arg[5]: -Dducc.process.log.basename=305-JD-uima-ducc-demo-3
  arg[6]: -Dducc.job.id=305
  arg[7]:
-Dducc.deploy.configuration=/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT/resources/ducc.properties
  arg[8]: -Dducc.deploy.components=jd
  arg[9]: -Dducc.job.id=305
  arg[10]: -Xmx100M
  arg[11]: -Dducc.deploy.JobId=305
  arg[12]:
-Dducc.deploy.CollectionReaderXml=org.apache.uima.ducc.test.randomsleep.FixedSleepCR
  arg[13]:
-Dducc.deploy.CollectionReaderCfg=jobfile=/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT/examples/uima-ducc-vm/jobs/most.inputs
compression=1 error_rate=0.0
  arg[14]:
-Dducc.deploy.UserClasspath=/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-ducc/examples/*:/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/lib/*:/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-

Re: DUCC - Work Item Queue Time Management

2015-09-28 Thread reshu.agarwal
T/lib/apache-log4j/*:/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT/webserver/lib/*:/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT/lib/http-client/*:/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/*:/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT/lib/apache-camel/*:/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT/lib/apache-commons/*:/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT/lib/google-gson/*:/home/degenaro/ducc/versions/apache-uima-ducc-2.1.0-SNAPSHOT/lib/springframework/*
 arg[26]: org.apache.uima.ducc.common.main.DuccService
1001 Command launching...

Do the -Dducc.deploy.UserClasspath and -classpath look right in
yours?  Can you post yours so we can compare and contrast?

Lou.



On Mon, Sep 28, 2015 at 7:26 AM, reshu.agarwal 
wrote:


My CR is in test.jar and the third-party jars are in
/home/ducc/Uima_test/lib/*. The classpath correctly specifies the location of
the CR; otherwise it would throw a "class not found" exception, but it showed
an error in the initialization of a third-party class.

1.job runs perfectly, and the same classpath specification worked for
creating a DUCC service for the same project.

If the path were somehow incorrect, it would not work even when I defined the
same thing in jobclasspath.properties. I know I should not touch
it.

Thanks in advance.

Reshu.


On 09/25/2015 05:52 PM, Lou DeGenaro wrote:


Reshu,

Again, you should not be touching jobclasspath.properties.  Your
opportunity to specify classpath is in your DUCC Job submission itself via
the "classpath" keyword.

The exception you posted shows the Job Driver (JD) is attempting to create
an instance of your Collection Reader (CR) based on the classpath
specified
in your submitted DUCC Job, but is unable to do so.  I suspect the
classpath
in your DUCC Job is wrong or the jar files needed are somehow not
available
during runtime?

I presume that your CR is expected to be somewhere in

   /home/ducc/Uima_test/lib/*:
  test.jar

Does this correctly specify the location of your DUCC Job's CR?  (Do you
have extraneous white space in your DUCC Job's specified classpath?)

As a sanity check are you able to run, for example, 1.job?

degenaro@uima-ducc-vm:~/ducc/ducc_runtime/examples/simple$ ducc_submit
--specification 1.job --wait_for_completion --timestamp
Job 85 submitted
25/09/2015 12:03:29 id:85 location:29496@uima-ducc-vm
25/09/2015 12:03:39 id:85 state:WaitingForDriver
25/09/2015 12:03:59 id:85 state:WaitingForResources
25/09/2015 12:04:09 id:85 state:Initializing
25/09/2015 12:04:30 id:85 state:Running total:15 done:6 error:0 retry:0
procs:1
25/09/2015 12:04:40 id:85 state:Running total:15 done:11 error:0 retry:0
procs:1
25/09/2015 12:04:50 id:85 state:Running total:15 done:14 error:0 retry:0
procs:1
25/09/2015 12:05:00 id:85 state:Completing total:15 done:15 error:0
retry:0
procs:1
25/09/2015 12:05:10 id:85 state:Completed total:15 done:15 error:0 retry:0
procs:0
25/09/2015 12:05:10 id:85 rationale:state manager detected normal
completion
25/09/2015 12:05:10 id:85 rc:0


Lou.

On Fri, Sep 25, 2015 at 12:49 AM, reshu.agarwal wrote:
When I classified the required library in classpath like below, Job was
unsuccessful and Status is "DriverProcessFailed".

classpath
/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-ducc/examples/*:
/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/lib/*:


/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/*:


/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/optional/*:
  /home/ducc/Uima_test/lib/*:
  test.jar

As it said "Driver Process Failed" and the JD's log file showed an error
about not finding the classpath in the job driver, I tried adding my
library to jobclasspath.properties just to be sure of the problem.

25 Sep 2015 10:03:27,688  INFO JobDriverComponent - T[1]
verifySystemProperties  ducc.deploy.WorkItemTimeout=5
25 Sep 2015 10:03:27,716  INFO JobDriverStateExchanger - T[1]
initializeTarget  http://S211:19988/or
25 Sep 2015 10:03:27,725  INFO JobDriver - T[1] advanceJdState
current=Prelaunch request=Initializing result=Initializing
25 Sep 2015 10:03:32,158 ERROR ProxyLogger - T[1] loggifyUserException
java.lang.reflect.InvocationTargetException
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
  at

sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
  at

sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
  at

org.apache.uima.ducc.container.jd.classload.ProxyJobDriverCollectionReader.prepare(ProxyJobDriverCollectionReader.java:164)
  at

org.apache.uima.ducc.container.jd.classload.ProxyJobDriverCollec

Re: DUCC - Work Item Queue Time Management

2015-09-28 Thread reshu.agarwal
My CR is in test.jar and the third-party jars are in
/home/ducc/Uima_test/lib/*. The classpath correctly specifies the location
of the CR, otherwise it would throw a "class not found" exception; but it
showed an error during initialization of a third-party class.

1.job runs perfectly, and the same classpath specification worked for
creating a DUCC service for the same project.

If the path were somehow incorrect, it would not work even when I define
the same path in jobclasspath.properties. I know I should not touch that
file.


Thanks in advance.

Reshu.

On 09/25/2015 05:52 PM, Lou DeGenaro wrote:

Reshu,

Again, you should not be touching jobclasspath.properties.  Your
opportunity to specify classpath is in your DUCC Job submission itself via
the "classpath" keyword.

The exception you posted shows the Job Driver (JD) is attempting to create
an instance of your Collection Reader (CR) based on the classpath specified
in your submitted DUCC Job, but is unable to do so.  I suspect the classpath
in your DUCC Job is wrong or the jar files needed are somehow not available
during runtime?

I presume that your CR is expected to be somewhere in

  /home/ducc/Uima_test/lib/*:
 test.jar

Does this correctly specify the location of your DUCC Job's CR?  (Do you
have extraneous white space in your DUCC Job's specified classpath?)

As a sanity check are you able to run, for example, 1.job?

degenaro@uima-ducc-vm:~/ducc/ducc_runtime/examples/simple$ ducc_submit
--specification 1.job --wait_for_completion --timestamp
Job 85 submitted
25/09/2015 12:03:29 id:85 location:29496@uima-ducc-vm
25/09/2015 12:03:39 id:85 state:WaitingForDriver
25/09/2015 12:03:59 id:85 state:WaitingForResources
25/09/2015 12:04:09 id:85 state:Initializing
25/09/2015 12:04:30 id:85 state:Running total:15 done:6 error:0 retry:0
procs:1
25/09/2015 12:04:40 id:85 state:Running total:15 done:11 error:0 retry:0
procs:1
25/09/2015 12:04:50 id:85 state:Running total:15 done:14 error:0 retry:0
procs:1
25/09/2015 12:05:00 id:85 state:Completing total:15 done:15 error:0 retry:0
procs:1
25/09/2015 12:05:10 id:85 state:Completed total:15 done:15 error:0 retry:0
procs:0
25/09/2015 12:05:10 id:85 rationale:state manager detected normal completion
25/09/2015 12:05:10 id:85 rc:0


Lou.

On Fri, Sep 25, 2015 at 12:49 AM, reshu.agarwal 
wrote:


Lewis & Lou,

When I classified the required library in classpath like below, Job was
unsuccessful and Status is "DriverProcessFailed".

classpath
/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-ducc/examples/*:
/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/lib/*:

/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/*:

/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/optional/*:
 /home/ducc/Uima_test/lib/*:
 test.jar

As it said "Driver Process Failed" and the JD's log file showed an error
about not finding the classpath in the job driver, I tried adding my
library to jobclasspath.properties just to be sure of the problem.

25 Sep 2015 10:03:27,688  INFO JobDriverComponent - T[1]
verifySystemProperties  ducc.deploy.WorkItemTimeout=5
25 Sep 2015 10:03:27,716  INFO JobDriverStateExchanger - T[1]
initializeTarget  http://S211:19988/or
25 Sep 2015 10:03:27,725  INFO JobDriver - T[1] advanceJdState
current=Prelaunch request=Initializing result=Initializing
25 Sep 2015 10:03:32,158 ERROR ProxyLogger - T[1] loggifyUserException
java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
 at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 at
org.apache.uima.ducc.container.jd.classload.ProxyJobDriverCollectionReader.prepare(ProxyJobDriverCollectionReader.java:164)
 at
org.apache.uima.ducc.container.jd.classload.ProxyJobDriverCollectionReader.construct(ProxyJobDriverCollectionReader.java:135)
 at
org.apache.uima.ducc.container.jd.classload.ProxyJobDriverCollectionReader.initialize(ProxyJobDriverCollectionReader.java:86)
 at
org.apache.uima.ducc.container.jd.classload.ProxyJobDriverCollectionReader.(ProxyJobDriverCollectionReader.java:72)
 at
org.apache.uima.ducc.container.jd.cas.CasManager.initialize(CasManager.java:51)
 at
org.apache.uima.ducc.container.jd.cas.CasManager.(CasManager.java:45)
 at
org.apache.uima.ducc.container.jd.JobDriver.initialize(JobDriver.java:113)
 at
org.apache.uima.ducc.container.jd.JobDriver.(JobDriver.java:96)
 at
org.apache.uima.ducc.container.jd.JobDriver.getInstance(JobDriver.java:61)
 at
org.apache.uima.ducc.transport.configuration.jd.JobDriverCompo

Re: DUCC - Work Item Queue Time Management

2015-09-28 Thread reshu.agarwal
My CR is in test.jar and the third-party jars are in
/home/ducc/Uima_test/lib/*. The classpath correctly specifies the location
of the CR, otherwise it would throw a "class not found" exception; but it
showed an error during initialization of a third-party class.

1.job runs perfectly, and the same classpath specification worked for
creating a DUCC service for the same project.

If the path were somehow incorrect, it would not work even when I define
the same path in jobclasspath.properties. I know I should not touch that
file.


Thanks in advance.

Reshu.

On 09/25/2015 05:52 PM, Lou DeGenaro wrote:

Reshu,

Again, you should not be touching jobclasspath.properties.  Your
opportunity to specify classpath is in your DUCC Job submission itself via
the "classpath" keyword.

The exception you posted shows the Job Driver (JD) is attempting to create
an instance of your Collection Reader (CR) based on the classpath specified
in your submitted DUCC Job, but is unable to do so.  I suspect the classpath
in your DUCC Job is wrong or the jar files needed are somehow not available
during runtime?

I presume that your CR is expected to be somewhere in

  /home/ducc/Uima_test/lib/*:
 test.jar

Does this correctly specify the location of your DUCC Job's CR?  (Do you
have extraneous white space in your DUCC Job's specified classpath?)

As a sanity check are you able to run, for example, 1.job?

degenaro@uima-ducc-vm:~/ducc/ducc_runtime/examples/simple$ ducc_submit
--specification 1.job --wait_for_completion --timestamp
Job 85 submitted
25/09/2015 12:03:29 id:85 location:29496@uima-ducc-vm
25/09/2015 12:03:39 id:85 state:WaitingForDriver
25/09/2015 12:03:59 id:85 state:WaitingForResources
25/09/2015 12:04:09 id:85 state:Initializing
25/09/2015 12:04:30 id:85 state:Running total:15 done:6 error:0 retry:0
procs:1
25/09/2015 12:04:40 id:85 state:Running total:15 done:11 error:0 retry:0
procs:1
25/09/2015 12:04:50 id:85 state:Running total:15 done:14 error:0 retry:0
procs:1
25/09/2015 12:05:00 id:85 state:Completing total:15 done:15 error:0 retry:0
procs:1
25/09/2015 12:05:10 id:85 state:Completed total:15 done:15 error:0 retry:0
procs:0
25/09/2015 12:05:10 id:85 rationale:state manager detected normal completion
25/09/2015 12:05:10 id:85 rc:0


Lou.

On Fri, Sep 25, 2015 at 12:49 AM, reshu.agarwal 
wrote:


Lewis & Lou,

When I classified the required library in classpath like below, Job was
unsuccessful and Status is "DriverProcessFailed".

classpath
/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/lib/uima-ducc/examples/*:
/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/lib/*:

/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/*:

/home/ducc/apache-uima-ducc-2.1.0-SNAPSHOT/apache-uima/apache-activemq/lib/optional/*:
 /home/ducc/Uima_test/lib/*:
 test.jar

As it said "Driver Process Failed" and the JD's log file showed an error
about not finding the classpath in the job driver, I tried adding my
library to jobclasspath.properties just to be sure of the problem.

25 Sep 2015 10:03:27,688  INFO JobDriverComponent - T[1]
verifySystemProperties  ducc.deploy.WorkItemTimeout=5
25 Sep 2015 10:03:27,716  INFO JobDriverStateExchanger - T[1]
initializeTarget  http://S211:19988/or
25 Sep 2015 10:03:27,725  INFO JobDriver - T[1] advanceJdState
current=Prelaunch request=Initializing result=Initializing
25 Sep 2015 10:03:32,158 ERROR ProxyLogger - T[1] loggifyUserException
java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
 at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 at
org.apache.uima.ducc.container.jd.classload.ProxyJobDriverCollectionReader.prepare(ProxyJobDriverCollectionReader.java:164)
 at
org.apache.uima.ducc.container.jd.classload.ProxyJobDriverCollectionReader.construct(ProxyJobDriverCollectionReader.java:135)
 at
org.apache.uima.ducc.container.jd.classload.ProxyJobDriverCollectionReader.initialize(ProxyJobDriverCollectionReader.java:86)
 at
org.apache.uima.ducc.container.jd.classload.ProxyJobDriverCollectionReader.(ProxyJobDriverCollectionReader.java:72)
 at
org.apache.uima.ducc.container.jd.cas.CasManager.initialize(CasManager.java:51)
 at
org.apache.uima.ducc.container.jd.cas.CasManager.(CasManager.java:45)
 at
org.apache.uima.ducc.container.jd.JobDriver.initialize(JobDriver.java:113)
 at
org.apache.uima.ducc.container.jd.JobDriver.(JobDriver.java:96)
 at
org.apache.uima.ducc.container.jd.JobDriver.getInstance(JobDriver.java:61)
 at
org.apache.uima.ducc.transport.configuration.jd.JobDriverComponent.creat

Re: DUCC - Work Item Queue Time Management

2015-09-24 Thread reshu.agarwal
Registry.java:222)
at 
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:290)
at 
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:192)
at 
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585)
at 
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895)
at 
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425)
at 
org.springframework.context.annotation.AnnotationConfigApplicationContext.(AnnotationConfigApplicationContext.java:65)
at 
org.apache.uima.ducc.common.main.DuccService.boot(DuccService.java:160)
at 
org.apache.uima.ducc.common.main.DuccService.main(DuccService.java:289)



Hope this makes my problem clear.

Thanks in advance.

Reshu


On 09/24/2015 06:28 PM, Burn Lewis wrote:

For DUCC 2.x the jobclasspath.properties file defines the JD & JP
classpaths for JUST the ducc code in the JD & JP.  The user code in the JD
(your collection reader) and in the JP (your annotator pipeline) uses ONLY
the classpath you provide plus one ducc jar.

Adding UIMA and application jars to the jobclasspath.properties file should
not help your user code (it does help in 1.x which uses a combined
ducc+user classpath.)

So the major change for DUCC 2.0 is that you must specify a complete
classpath for your application.

~Burn
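
To see quickly whether a submitted classpath is complete before handing it
to DUCC, one can resolve the CR against that classpath in isolation. The
sketch below is illustrative only; the entries and the
com.example.MyCollectionReader class name are placeholders, not anything
confirmed in this thread:

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

public class ClasspathCheck {
    public static void main(String[] args) throws Exception {
        // Mirror the entries from the job's "classpath" keyword; a
        // directory here stands in for a "dir/*" wildcard entry.
        String[] entries = { "/home/ducc/Uima_test/test.jar",
                             "/home/ducc/Uima_test/lib" };
        List<URL> urls = new ArrayList<URL>();
        for (String e : entries) {
            File f = new File(e);
            File[] jars = f.isDirectory() ? f.listFiles() : new File[] { f };
            if (jars == null) continue;
            for (File jar : jars) {
                urls.add(jar.toURI().toURL());
            }
        }
        // Parent null: only the listed jars are visible, so a missing
        // third-party dependency fails here just as it would in the JD.
        URLClassLoader cl = new URLClassLoader(urls.toArray(new URL[0]), null);
        Class<?> cr = Class.forName("com.example.MyCollectionReader", true, cl);
        System.out.println("Resolved: " + cr.getName());
    }
}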

On Thu, Sep 24, 2015 at 7:59 AM, Lou DeGenaro 
wrote:


Reshu,

Absent some extraordinary circumstance, you should not be touching
jobclasspath.properties file.

Specify your classpath requirement using --classpath when you submit your
job or register your service.  This is where you'd add UIMA jars, for
example.

Lou.

On Tue, Sep 22, 2015 at 12:38 AM, reshu.agarwal 
wrote:


Hi,

Thanks for replying. I have downloaded the latest code from github and
built it. Now, the problem of "Missing the -Dducc.deploy.JdURL property"
is resolved.

*Lewis:* I used the resources/jobdriver.classpath file provided with DUCC
2.0.0, and had to do the same in DUCC 2.1.0's
resources/jobclasspath.properties file.

I added the required lib folder for my job to this file. Now the file
looks like:

ducc.jobdriver.classpath = \
   ${DUCC_HOME}/lib/uima-ducc/*:\
   ${DUCC_HOME}/apache-uima/lib/uima-core.jar:\
   ${DUCC_HOME}/lib/apache-log4j/*:\
   ${DUCC_HOME}/webserver/lib/*:\
   ${DUCC_HOME}/lib/http-client/*:\
   ${DUCC_HOME}/apache-uima/apache-activemq/lib/*:\
   ${DUCC_HOME}/lib/apache-camel/*:\
   ${DUCC_HOME}/lib/apache-commons/*:\
   ${DUCC_HOME}/lib/google-gson/*:\
   ${DUCC_HOME}/lib/springframework/*:\
   /home/ducc/Uima_pipeline/lib/*   <- (I changed here for my job.)

ducc.jobprocess.classpath = \
   ${DUCC_HOME}/lib/uima-ducc/*:\
   ${DUCC_HOME}/apache-uima/lib/uima-core.jar:\
   ${DUCC_HOME}/lib/apache-log4j/*:\
   ${DUCC_HOME}/webserver/lib/*:\
   ${DUCC_HOME}/lib/http-client/*:\
   ${DUCC_HOME}/apache-uima/apache-activemq/lib/*:\
   ${DUCC_HOME}/apache-uima/apache-activemq/lib/optional/*:\
   ${DUCC_HOME}/lib/apache-camel/*:\
   ${DUCC_HOME}/lib/apache-commons/*:\
   ${DUCC_HOME}/lib/springframework/*

This change works in the DUCC 2.1.0 version and my job completed
successfully. But this is not a solution, as all these jars will be added
to each job even when not necessary. This lib folder contains third-party
jars as well as the UIMA and UIMA AS jars.


On 09/22/2015 01:56 AM, Burn Lewis wrote:


re your original problem of a missing UIMA class:

It should not be necessary to modify resources/jobdriver.classpath ... were
you using the one provided with 2.0 or do you have a locally modified one?
Please let us know what changes to the 2.0 one you had to make.

You should just add the required UIMA jars to the classpath you provide
when you submit the job.  If you provide a deployment descriptor you'll
need to supply all the UIMA-AS jars, e.g.



${DUCC_HOME}/apache-uima/lib/*:${DUCC_HOME}/apache-uima/apache-activemq/lib/*:${DUCC_HOME}/apache-uima/apache-activemq/lib/optional/*

otherwise you probably need only 1 jar, e.g.
*${DUCC_HOME}/apache-uima/lib/uima-core.jar*

Note that in these examples I've used the UIMA jars that are included with
DUCC, but in general it would be better if you used your own copy of UIMA,
at whatever level is best for your application.

In DUCC 1.x the DUCC jars and their dependencies were added to the user's
classpath, but this often caused problems when DUCC code and user code
used different versions of a 3rd party jar, so in DUCC 2.0 we use a
different classloader for DUCC & user code, and add only one DUCC jar to
the user's classpath.

~Burn


On Mon, Sep 21, 2015 at 9:18 AM, Jaroslaw Cwiklik 
wrote:

Reshu, if you have maven and svn installed on your machine you can

checkout the latest code from the svn:

svn co h

Re: DUCC - Work Item Queue Time Management

2015-09-23 Thread reshu.agarwal

Hi,

The problem was solved by this workaround, but it degrades the performance
of other projects' jobs, since classes will load static code in each job
even when not in use. Please tell me the solution for this so that I can
use DUCC 2.0.0.


Thanks in advance.

Reshu.

On 09/22/2015 10:08 AM, reshu.agarwal wrote:

Hi,

Thanks for replying. I have downloaded the latest code from github and
built it. Now, the problem of "Missing the -Dducc.deploy.JdURL
property" is resolved.


*Lewis:* I used the resources/jobdriver.classpath file provided with DUCC
2.0.0, and had to do the same in DUCC 2.1.0's
resources/jobclasspath.properties file.


I added the required lib folder for my job to this file. Now the file
looks like:


ducc.jobdriver.classpath = \
  ${DUCC_HOME}/lib/uima-ducc/*:\
  ${DUCC_HOME}/apache-uima/lib/uima-core.jar:\
  ${DUCC_HOME}/lib/apache-log4j/*:\
  ${DUCC_HOME}/webserver/lib/*:\
  ${DUCC_HOME}/lib/http-client/*:\
  ${DUCC_HOME}/apache-uima/apache-activemq/lib/*:\
  ${DUCC_HOME}/lib/apache-camel/*:\
  ${DUCC_HOME}/lib/apache-commons/*:\
  ${DUCC_HOME}/lib/google-gson/*:\
  ${DUCC_HOME}/lib/springframework/*:\
  /home/ducc/Uima_pipeline/lib/*   <- (I changed here for my job.)

ducc.jobprocess.classpath = \
  ${DUCC_HOME}/lib/uima-ducc/*:\
  ${DUCC_HOME}/apache-uima/lib/uima-core.jar:\
  ${DUCC_HOME}/lib/apache-log4j/*:\
  ${DUCC_HOME}/webserver/lib/*:\
  ${DUCC_HOME}/lib/http-client/*:\
  ${DUCC_HOME}/apache-uima/apache-activemq/lib/*:\
  ${DUCC_HOME}/apache-uima/apache-activemq/lib/optional/*:\
  ${DUCC_HOME}/lib/apache-camel/*:\
  ${DUCC_HOME}/lib/apache-commons/*:\
  ${DUCC_HOME}/lib/springframework/*

This change works in the DUCC 2.1.0 version and my job completed
successfully. But this is not a solution, as all these jars will be added
to each job even when not necessary. This lib folder contains third-party
jars as well as the UIMA and UIMA AS jars.


On 09/22/2015 01:56 AM, Burn Lewis wrote:

re your original problem of a missing UIMA class:

It should not be necessary to modify resources/jobdriver.classpath ... were
you using the one provided with 2.0 or do you have a locally modified one?

Please let us know what changes to the 2.0 one you had to make.

You should just add the required UIMA jars to the classpath you provide
when you submit the job.  If you provide a deployment descriptor you'll
need to supply all the UIMA-AS jars, e.g.
${DUCC_HOME}/apache-uima/lib/*:${DUCC_HOME}/apache-uima/apache-activemq/lib/*:${DUCC_HOME}/apache-uima/apache-activemq/lib/optional/* 



otherwise you probably need only 1 jar, e.g.
*${DUCC_HOME}/apache-uima/lib/uima-core.jar*

Note that in these examples I've used the UIMA jars that are included with
DUCC, but in general it would be better if you used your own copy of UIMA,
at whatever level is best for your application.

In DUCC 1.x the DUCC jars and their dependencies were added to the user's
classpath, but this often caused problems when DUCC code and user code
used different versions of a 3rd party jar, so in DUCC 2.0 we use a
different classloader for DUCC & user code, and add only one DUCC jar to
the user's classpath.

~Burn


On Mon, Sep 21, 2015 at 9:18 AM, Jaroslaw Cwiklik 
wrote:


Reshu, if you have maven and svn installed on your machine you can
checkout the latest code from the svn:

svn co https://svn.apache.org/repos/asf/uima/sandbox/uima-ducc/trunk/ .

and  build it with: mvn clean install
You'll get a new ducc tarball in target dir

  Jerry Cwiklik
IBM Watson RTP North Carolina
UIMA Extensions
4205 S MIAMI BLVD
DURHAM , NC , 27703-9141
United States
Building: 502  |  Floor: 02  |  Office: M210
Tel: 919-254-6641  TL:444-6641
Email: cwik...@us.ibm.com


From: Lou DeGenaro 
To: user@uima.apache.org
Date: 09/21/2015 08:44 AM
Subject: Re: DUCC - Work Item Queue Time Management
--



Reshu,

This is a bug in DUCC 2.0.0.  See
https://issues.apache.org/jira/browse/UIMA-4576?jql=project%20%3D%20UIMA.

Presently, you would need to download the current DUCC source and build a
new tarball to get the fix.

In the meantime, I'll investigate how interim DUCC releases (tarballs) are
posted to the Apache website.

Lou.

On Mon, Sep 21, 2015 at 7:25 AM, reshu.agarwal 


wrote:


Hi,

As you said: "In DUCC 2.0 you must explicitly supply UIMA in the
classpath of your submission. This was not the case in DUCC 1.x where
UIMA was added by DUCC under the covers."

I defined the same but am still facing the error. In JD initialization, I
defined the required java class library path in the classpath parameter of
the job specification. But it was showing the error until I added the same 

Re: DUCC - Work Item Queue Time Management

2015-09-21 Thread reshu.agarwal

Hi,

Thanks for replying. I have downloaded the latest code from github and
built it. Now, the problem of "Missing the -Dducc.deploy.JdURL property"
is resolved.


*Lewis:* I used the resources/jobdriver.classpath file provided with DUCC
2.0.0, and had to do the same in DUCC 2.1.0's
resources/jobclasspath.properties file.


I added the required lib folder for my job to this file. Now the file
looks like:


ducc.jobdriver.classpath = \
  ${DUCC_HOME}/lib/uima-ducc/*:\
  ${DUCC_HOME}/apache-uima/lib/uima-core.jar:\
  ${DUCC_HOME}/lib/apache-log4j/*:\
  ${DUCC_HOME}/webserver/lib/*:\
  ${DUCC_HOME}/lib/http-client/*:\
  ${DUCC_HOME}/apache-uima/apache-activemq/lib/*:\
  ${DUCC_HOME}/lib/apache-camel/*:\
  ${DUCC_HOME}/lib/apache-commons/*:\
  ${DUCC_HOME}/lib/google-gson/*:\
  ${DUCC_HOME}/lib/springframework/*:\
  /home/ducc/Uima_pipeline/lib/*   <- (I changed here for my job.)

ducc.jobprocess.classpath = \
  ${DUCC_HOME}/lib/uima-ducc/*:\
  ${DUCC_HOME}/apache-uima/lib/uima-core.jar:\
  ${DUCC_HOME}/lib/apache-log4j/*:\
  ${DUCC_HOME}/webserver/lib/*:\
  ${DUCC_HOME}/lib/http-client/*:\
  ${DUCC_HOME}/apache-uima/apache-activemq/lib/*:\
  ${DUCC_HOME}/apache-uima/apache-activemq/lib/optional/*:\
  ${DUCC_HOME}/lib/apache-camel/*:\
  ${DUCC_HOME}/lib/apache-commons/*:\
  ${DUCC_HOME}/lib/springframework/*

This change works in the DUCC 2.1.0 version and my job completed
successfully. But this is not a solution, as all these jars will be added
to each job even when not necessary. This lib folder contains third-party
jars as well as the UIMA and UIMA AS jars.


On 09/22/2015 01:56 AM, Burn Lewis wrote:

re your original problem of a missing UIMA class:

It should not be necessary to modify resources/jobdriver.classpath ... were
you using the one provided with 2.0 or do you have a locally modified one?
Please let us know what changes to the 2.0 one you had to make.

You should just add the required UIMA jars to the classpath you provide
when you submit the job.  If you provide a deployment descriptor you'll
need to supply all the UIMA-AS jars, e.g.
${DUCC_HOME}/apache-uima/lib/*:${DUCC_HOME}/apache-uima/apache-activemq/lib/*:${DUCC_HOME}/apache-uima/apache-activemq/lib/optional/*

otherwise you probably need only 1 jar, e.g.
*${DUCC_HOME}/apache-uima/lib/uima-core.jar*

Note that in these examples I've used the UIMA jars that are included with
DUCC, but in general it would be better if you used your own copy of UIMA,
at whatever level is best for your application.

In DUCC 1.x the DUCC jars and their dependencies were added to the user's
classpath, but this often caused problems when DUCC code and user code used
different versions of a 3rd party jar, so in DUCC 2.0 we use a different
classloader for DUCC & user code, and add only one DUCC jar to the user's
classpath.

~Burn
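
For readers wondering how two classloaders avoid the version clashes Burn
mentions: the following is a generic Java illustration of that isolation,
not DUCC's actual code, and it assumes the two (hypothetical) gson jars
exist at the paths shown:

import java.net.URL;
import java.net.URLClassLoader;

public class IsolationSketch {
    public static void main(String[] args) throws Exception {
        URL[] frameworkJars = { new URL("file:/opt/framework/gson-1.7.jar") };
        URL[] userJars      = { new URL("file:/home/user/lib/gson-2.3.jar") };

        // With a null parent, each loader resolves class names only
        // against its own jars, so the same fully-qualified name can
        // bind to two different versions without conflict.
        ClassLoader framework = new URLClassLoader(frameworkJars, null);
        ClassLoader user      = new URLClassLoader(userJars, null);

        Class<?> a = framework.loadClass("com.google.gson.Gson");
        Class<?> b = user.loadClass("com.google.gson.Gson");
        System.out.println(a == b);   // false: two distinct Class objects
    }
}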


On Mon, Sep 21, 2015 at 9:18 AM, Jaroslaw Cwiklik 
wrote:


Reshu, if you have maven and svn installed on your machine you can
checkout the latest code from the svn:

svn co https://svn.apache.org/repos/asf/uima/sandbox/uima-ducc/trunk/ .

and  build it with: mvn clean install
You'll get a new ducc tarball in target dir

  Jerry Cwiklik
IBM Watson RTP North Carolina
UIMA Extensions
4205 S MIAMI BLVD
DURHAM , NC , 27703-9141
United States
Building: 502  |  Floor: 02  |  Office: M210
Tel: 919-254-6641  TL:444-6641
Email: cwik...@us.ibm.com


From: Lou DeGenaro 
To: user@uima.apache.org
Date: 09/21/2015 08:44 AM
Subject: Re: DUCC - Work Item Queue Time Management
--



Reshu,

This is a bug in DUCC 2.0.0.  See
https://issues.apache.org/jira/browse/UIMA-4576?jql=project%20%3D%20UIMA.

Presently, you would need to download the current DUCC source and build a
new tarball to get the fix.

In the meantime, I'll investigate how interim DUCC releases (tarballs) are
posted to the Apache website.

Lou.

On Mon, Sep 21, 2015 at 7:25 AM, reshu.agarwal 
wrote:


Hi,

As you said: "In DUCC 2.0 you must explicitly supply UIMA in the
classpath of your submission. This was not the case in DUCC 1.x where
UIMA was added by DUCC under the covers."

I defined the same but am still facing the error. In JD initialization, I
defined the required java class library path in the classpath parameter of
the job specification. But it was showing the error until I added the same
in resources/jobdriver.classpath. After this it initialized and then
started showing the error "Missing the -Dducc.deploy.JdURL property".

I was getting java.lang.RuntimeException: Missing the -Dducc.deploy.JdURL
property even with 1.job. Why is this error coming?

Thanks in Advance.

Reshu.


On 09/18/2015 02:47 PM, Lou DeGenaro wrote:


Re: DUCC - Work Item Queue Time Management

2015-09-21 Thread reshu.agarwal
The problem is not about explicitly supplying UIMA in the classpath;
rather, DUCC's JD is not able to load the required jars from the path
defined in the job descriptor file using "classpath".

It shows the lib folder in the userClasspath variable in the JD's log but
does not actually load its jars.


Thanks,
Reshu.

On 09/21/2015 04:55 PM, reshu.agarwal wrote:


Hi,

As you said: "In DUCC 2.0 you must explicitly supply UIMA in the
classpath of your submission. This was not the case in DUCC 1.x where
UIMA was added by DUCC under the covers."

I defined the same but am still facing the error. In JD initialization, I
defined the required java class library path in the classpath parameter of
the job specification. But it was showing the error until I added the same
in resources/jobdriver.classpath. After this it initialized and then
started showing the error "Missing the -Dducc.deploy.JdURL property".

I was getting java.lang.RuntimeException: Missing the
-Dducc.deploy.JdURL property even with 1.job. Why is this error coming?


Thanks in Advance.

Reshu.

On 09/18/2015 02:47 PM, Lou DeGenaro wrote:

Reshu,

In DUCC 2.0 you must explicitly supply UIMA in the classpath of your
submission.  This was not the case in DUCC 1.x where UIMA was added by
DUCC under the covers.

In fact this gives you more flexibility in that you are no longer tied to
using a particular version of UIMA.

Lou.

On Fri, Sep 18, 2015 at 12:24 AM, reshu.agarwal 


wrote:


Jerry,

I have tried DUCC 2.0.0 to run the same job on it. I don't know why, but
the same job descriptor didn't work. It showed an exception at
initialization time that did not occur with 1.1.0.

Are there any changes regarding the job descriptor or service descriptor?
Both worked in my case for DUCC 1.0.0 and DUCC 1.1.0, but not for DUCC
2.0.0.

In the service descriptor case it shows a Spring Framework class-not-found
exception. See below:

*java.lang.NoClassDefFoundError:
org/springframework/context/ApplicationListener*

Thanks in advance.

Reshu.


On 09/17/2015 08:15 PM, Jaroslaw Cwiklik wrote:

Hi, can you try Ducc 2.0.0? It was recently released into Apache. One of
the key changes was to remove queues as means of transport between JD (Job
Driver) and JP (Job Process). Instead, each JP uses HTTP to request a Work
Item (CAS) from a JD.

DUCC 1.1.0 has a concept of a WI timeout which I think is 24 hours by
default. A timer is started in a JD when each WI is dispatched to a JP. If
the WI does not come back for whatever reason, the timer pops and a JD
will attempt to retry that WI.

To debug your problem with DUCC 1.1.0 I suggest attaching JMX console to a
running JP to see where its threads are. Before doing this, check JP logs
to see if there is an exception.

Jerry



On Thu, Sep 17, 2015 at 4:32 AM, reshu.agarwal 


wrote:

My DUCC version is 1.1.0.


On 09/17/2015 11:35 AM, reshu.agarwal wrote:

Hi,
I am facing a problem in DUCC: some documents were shown in the queue but
did not get processed. In the Job, the work item list shows a particular
work item's status as "queued" and a queueing time of "4115 seconds".

I want the queueing time of a work item to be not more than 1 minute.
What is the reason for this? Is there any method to solve it? How can I
set a maximum queueing time for a work item?

Thanks in advance.

Reshu.









Re: DUCC - Work Item Queue Time Management

2015-09-21 Thread reshu.agarwal


Hi,

As you said: "In DUCC 2.0 you must explicitly supply UIMA in the
classpath of your submission. This was not the case in DUCC 1.x where
UIMA was added by DUCC under the covers."

I defined the same but am still facing the error. In JD initialization, I
defined the required java class library path in the classpath parameter of
the job specification. But it was showing the error until I added the same
in resources/jobdriver.classpath. After this it initialized and then
started showing the error "Missing the -Dducc.deploy.JdURL property".

I was getting java.lang.RuntimeException: Missing the
-Dducc.deploy.JdURL property even with 1.job. Why is this error coming?


Thanks in Advance.

Reshu.

On 09/18/2015 02:47 PM, Lou DeGenaro wrote:

Reshu,

In DUCC 2.0 you must explicitly supply UIMA in the classpath of your
submission.  This was not the case in DUCC 1.x where UIMA was added by DUCC
under the covers.

In fact this gives you more flexibility in that you are no longer tied to
using a particular version of UIMA.

Lou.

On Fri, Sep 18, 2015 at 12:24 AM, reshu.agarwal 
wrote:


Jerry,

I have tried DUCC 2.0.0 to run the same job on it. I don't know why, but
the same job descriptor didn't work. It showed an exception at
initialization time that did not occur with 1.1.0.

Are there any changes regarding the job descriptor or service descriptor?
Both worked in my case for DUCC 1.0.0 and DUCC 1.1.0, but not for DUCC
2.0.0.

In the service descriptor case it shows a Spring Framework class-not-found
exception. See below:

*java.lang.NoClassDefFoundError:
org/springframework/context/ApplicationListener*

Thanks in advance.

Reshu.


On 09/17/2015 08:15 PM, Jaroslaw Cwiklik wrote:


Hi, can you try Ducc 2.0.0? It was recently released into Apache. One of
the key changes was to remove queues as means of transport between JD (Job
Driver) and JP (Job Process). Instead, each JP uses HTTP to request a Work
Item (CAS) from a JD.

DUCC 1.1.0 has a concept of a WI timeout which I think is 24 hours by
default. A timer is started in a JD when each WI is dispatched to a JP. If
the WI does not come back for whatever reason, the timer pops and a JD
will
attempt to retry that WI.

To debug your problem with DUCC 1.1.0 I suggest attaching JMX console to a
running JP to see where its threads are. Before doing this, check JP logs
to see if there is an exception.

Jerry
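
The dispatch-timer pattern Jerry describes can be sketched in plain Java.
This is only an illustration of the idea, not DUCC's implementation:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class WorkItemTimer {
    private final ScheduledExecutorService timers =
            Executors.newSingleThreadScheduledExecutor();
    private final Map<Integer, ScheduledFuture<?>> pending =
            new ConcurrentHashMap<Integer, ScheduledFuture<?>>();

    // Called when a WI (CAS) is dispatched to a JP: arm a timer.
    public void dispatched(final int wiId, final Runnable retry,
                           long timeout, TimeUnit unit) {
        pending.put(wiId, timers.schedule(new Runnable() {
            public void run() {
                pending.remove(wiId);
                retry.run();           // timer popped: retry the work item
            }
        }, timeout, unit));
    }

    // Called when the WI comes back from the JP: cancel its timer.
    public void completed(int wiId) {
        ScheduledFuture<?> t = pending.remove(wiId);
        if (t != null) {
            t.cancel(false);
        }
    }
}

The sketch only shows why an unreturned CAS eventually triggers a retry
rather than sitting queued forever.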



On Thu, Sep 17, 2015 at 4:32 AM, reshu.agarwal 
wrote:

My DUCC version is 1.1.0.


On 09/17/2015 11:35 AM, reshu.agarwal wrote:

Hi,

I am facing a problem in DUCC: some documents were shown in the queue but
did not get processed. In the Job, the work item list shows a particular
work item's status as "queued" and a queueing time of "4115 seconds".

I want the queueing time of a work item to be not more than 1 minute.
What is the reason for this? Is there any method to solve it? How can I
set a maximum queueing time for a work item?

Thanks in advance.

Reshu.






Re: DUCC - Work Item Queue Time Management

2015-09-17 Thread reshu.agarwal

It looks like the classpath of the job/service is not being added successfully.

On 09/18/2015 09:54 AM, reshu.agarwal wrote:

Jerry,

I have tried DUCC 2.0.0 to run the same job on it. I don't know why, but
the same job descriptor didn't work. It showed an exception at
initialization time that did not occur with 1.1.0.

Are there any changes regarding the job descriptor or service descriptor?
Both worked in my case for DUCC 1.0.0 and DUCC 1.1.0, but not for DUCC
2.0.0.


In the service descriptor case it shows a Spring Framework class-not-found
exception. See below:


*java.lang.NoClassDefFoundError: 
org/springframework/context/ApplicationListener*


Thanks in advance.

Reshu.

On 09/17/2015 08:15 PM, Jaroslaw Cwiklik wrote:

Hi, can you try Ducc 2.0.0? It was recently released into Apache. One of
the key changes was to remove queues as means of transport between JD (Job
Driver) and JP (Job Process). Instead, each JP uses HTTP to request a Work
Item (CAS) from a JD.

DUCC 1.1.0 has a concept of a WI timeout which I think is 24 hours by
default. A timer is started in a JD when each WI is dispatched to a JP. If
the WI does not come back for whatever reason, the timer pops and a JD
will attempt to retry that WI.

To debug your problem with DUCC 1.1.0 I suggest attaching JMX console to a
running JP to see where its threads are. Before doing this, check JP logs
to see if there is an exception.

Jerry



On Thu, Sep 17, 2015 at 4:32 AM, reshu.agarwal 


wrote:


My DUCC version is 1.1.0.


On 09/17/2015 11:35 AM, reshu.agarwal wrote:


Hi,

I am facing a problem in DUCC: some documents were shown in the queue but
did not get processed. In the Job, the work item list shows a particular
work item's status as "queued" and a queueing time of "4115 seconds".

I want the queueing time of a work item to be not more than 1 minute.
What is the reason for this? Is there any method to solve it? How can I
set a maximum queueing time for a work item?

Thanks in advance.

Reshu.










Re: DUCC - Work Item Queue Time Management

2015-09-17 Thread reshu.agarwal

Jerry,

I have tried DUCC 2.0.0 to run the same job on it. I don't know why, but
the same job descriptor didn't work. It showed an exception at
initialization time that did not occur with 1.1.0.

Are there any changes regarding the job descriptor or service descriptor?
Both worked in my case for DUCC 1.0.0 and DUCC 1.1.0, but not for DUCC
2.0.0.


In the service descriptor case it shows a Spring Framework class-not-found
exception. See below:


*java.lang.NoClassDefFoundError: 
org/springframework/context/ApplicationListener*


Thanks in advance.

Reshu.

On 09/17/2015 08:15 PM, Jaroslaw Cwiklik wrote:

Hi, can you try Ducc 2.0.0? It was recently released into Apache. One of
the key changes was to remove queues as means of transport between JD (Job
Driver) and JP (Job Process). Instead, each JP uses HTTP to request a Work
Item (CAS) from a JD.

DUCC 1.1.0 has a concept of a WI timeout which I think is 24 hours by
default. A timer is started in a JD when each WI is dispatched to a JP. If
the WI does not come back for whatever reason, the timer pops and a JD will
attempt to retry that WI.

To debug your problem with DUCC 1.1.0 I suggest attaching JMX console to a
running JP to see where its threads are. Before doing this, check JP logs
to see if there is an exception.

Jerry



On Thu, Sep 17, 2015 at 4:32 AM, reshu.agarwal 
wrote:


My DUCC version is 1.1.0.


On 09/17/2015 11:35 AM, reshu.agarwal wrote:


Hi,

I am facing a problem in DUCC: some documents were shown in the queue but
did not get processed. In the Job, the work item list shows a particular
work item's status as "queued" and a queueing time of "4115 seconds".

I want the queueing time of a work item to be not more than 1 minute.
What is the reason for this? Is there any method to solve it? How can I
set a maximum queueing time for a work item?

Thanks in advance.

Reshu.







Re: DUCC - Work Item Queue Time Management

2015-09-17 Thread reshu.agarwal

My DUCC version is 1.1.0.

On 09/17/2015 11:35 AM, reshu.agarwal wrote:


Hi,

I am facing a problem in DUCC: some documents were shown in the queue but
did not get processed. In the Job, the work item list shows a particular
work item's status as "queued" and a queueing time of "4115 seconds".

I want the queueing time of a work item to be not more than 1 minute.
What is the reason for this? Is there any method to solve it? How can I
set a maximum queueing time for a work item?


Thanks in advance.

Reshu.




DUCC - Work Item Queue Time Management

2015-09-16 Thread reshu.agarwal


Hi,

I am facing a problem in DUCC: some documents were shown in the queue but
did not get processed. In the Job, the work item list shows a particular
work item's status as "queued" and a queueing time of "4115 seconds".

I want the queueing time of a work item to be not more than 1 minute.
What is the reason for this? Is there any method to solve it? How can I
set a maximum queueing time for a work item?


Thanks in advance.

Reshu.


Re: DUCC- process_dd

2015-04-30 Thread reshu.agarwal

Eddie,

I was using this same scenario, experimenting by trial and error to
compare it with UIMA AS and get a more scaled pipeline, as I think UIMA AS
can also do this. But with UIMA AS I am unable to match the processing
time of DUCC's default configuration that you mentioned.

Can you help me with this? I just want to scale using the best
configuration of UIMA AS and DUCC, which can be done using process_dd.
But how?

Thanks in advance.

Reshu.

On 05/01/2015 03:28 AM, Eddie Epstein wrote:

The simplest way of vertically scaling a Job process is to specify the
analysis pipeline using core UIMA descriptors and then using
--process_thread_count to specify how many copies of the pipeline to
deploy, each in a different thread. No use of UIMA-AS at all. Please check
out the "Raw Text Processing" sample application that comes with DUCC.
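
As a rough sketch of what --process_thread_count amounts to (assumed
behavior, not DUCC's source; the descriptor path is a placeholder), N
copies of a plain-UIMA pipeline are instantiated from one core descriptor
and run one per thread:

import org.apache.uima.UIMAFramework;
import org.apache.uima.analysis_engine.AnalysisEngine;
import org.apache.uima.analysis_engine.AnalysisEngineDescription;
import org.apache.uima.cas.CAS;
import org.apache.uima.util.XMLInputSource;

public class VerticalScaleSketch {
    public static void main(String[] args) throws Exception {
        AnalysisEngineDescription desc = UIMAFramework.getXMLParser()
                .parseAnalysisEngineDescription(
                        new XMLInputSource("desc/MyAggregate.xml"));
        int threads = 5;   // plays the role of process_thread_count
        for (int i = 0; i < threads; i++) {
            // One AE instance per thread, all from the same descriptor.
            final AnalysisEngine ae = UIMAFramework.produceAnalysisEngine(desc);
            new Thread(new Runnable() {
                public void run() {
                    try {
                        CAS cas = ae.newCAS();
                        // Loop: fill the CAS with a work item, then
                        // ae.process(cas); cas.reset();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }).start();
        }
    }
}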

On Wed, Apr 29, 2015 at 12:30 AM, reshu.agarwal 
wrote:


Oh! I misunderstood this. I thought this would scale both my aggregate and
the AEs.

I want to scale the aggregate as well as the individual AEs. Is there any
way of doing this in UIMA AS/DUCC?



On 04/28/2015 07:14 PM, Jaroslaw Cwiklik wrote:


In async aggregate you scale individual AEs not the aggregate as a whole.
The below configuration should do that. Are there any warnings from
dd2spring at startup with your configuration?

[Jerry's example configuration XML was stripped by the archive.]

Jerry

On Tue, Apr 28, 2015 at 5:20 AM, reshu.agarwal 
wrote:

  Hi,

I was trying to scale my processing pipeline to run in the DUCC
environment with uima as process_dd. When I tried to scale using the below
configuration, the threads started were not as expected:


[deployment descriptor XML stripped by the archive; the full descriptor
appears in the original "DUCC- process_dd" message below.]

There should be 5 threads of FlowControllerAgg, where each thread will
have 5 more threads of each of ChunkerDescriptor, NEDescriptor,
StemmerDescriptor, and ConsumerDescriptor.

But I don't think this is actually happening in the case of DUCC.

Thanks in advance.

Reshu.








Re: DUCC- process_dd

2015-04-28 Thread reshu.agarwal


Oh! I misunderstood this. I thought this would scale both my aggregate and
the AEs.

I want to scale the aggregate as well as the individual AEs. Is there any
way of doing this in UIMA AS/DUCC?



On 04/28/2015 07:14 PM, Jaroslaw Cwiklik wrote:

In async aggregate you scale individual AEs not the aggregate as a whole.
The below configuration should do that. Are there any warnings from
dd2spring at startup with your configuration?

[Jerry's example configuration XML was stripped by the archive.]

Jerry

On Tue, Apr 28, 2015 at 5:20 AM, reshu.agarwal 
wrote:


Hi,

I was trying to scale my processing pipeline to run in the DUCC
environment with uima as process_dd. When I tried to scale using the below
configuration, the threads started were not as expected:


[deployment descriptor XML stripped by the archive; the full descriptor
appears in the original "DUCC- process_dd" message below.]

There should be 5 threads of FlowControllerAgg, where each thread will
have 5 more threads of each of ChunkerDescriptor, NEDescriptor,
StemmerDescriptor, and ConsumerDescriptor.

But I don't think this is actually happening in the case of DUCC.

Thanks in advance.

Reshu.







DUCC- process_dd

2015-04-28 Thread reshu.agarwal

Hi,

I was trying to scale my processing pipeline to run in the DUCC
environment with uima as process_dd. When I tried to scale using the below
configuration, the threads started were not as expected:



<?xml version="1.0" encoding="UTF-8"?>
<!-- Reconstructed: the archive stripped the XML markup from this message.
     Element names follow the standard UIMA-AS deployment descriptor
     schema; the attribute values below are the ones that survived in the
     archive. The input queue endpoint name did not survive and is a
     placeholder, and the NEDescriptor key is inferred from the prose. -->
<analysisEngineDeploymentDescription
    xmlns="http://uima.apache.org/resourceSpecifier">

  <name>Uima v3 Deployment Descripter</name>
  <description>Deploys Uima v3 Aggregate AE using the Advanced Fixed Flow
    Controller</description>

  <deployment protocol="jms" provider="activemq">
    <service>
      <inputQueue endpoint="INPUT_QUEUE_NAME"
          brokerURL="tcp://localhost:61617?jms.useCompression=true" prefetch="0" />
      <topDescriptor>
        <import
          location="../Uima_v3_test/desc/orkash/ae/aggregate/FlowController_Uima.xml" />
      </topDescriptor>
      <analysisEngine async="true" key="FlowControllerAgg"
          internalReplyQueueScaleout="10" inputQueueScaleout="10">
        <delegates>
          <analysisEngine key="ChunkerDescriptor">
            <scaleout numberOfInstances="5" />
          </analysisEngine>
          <analysisEngine key="NEDescriptor">
            <scaleout numberOfInstances="5" />
          </analysisEngine>
          <analysisEngine key="StemmerDescriptor">
            <scaleout numberOfInstances="5" />
          </analysisEngine>
          <analysisEngine key="ConsumerDescriptor">
            <scaleout numberOfInstances="5" />
          </analysisEngine>
        </delegates>
      </analysisEngine>
    </service>
  </deployment>
</analysisEngineDeploymentDescription>


There should be 5 threads of FlowControllerAgg, where each thread will
have 5 more threads of each of ChunkerDescriptor, NEDescriptor,
StemmerDescriptor, and ConsumerDescriptor.

But I don't think this is actually happening in the case of DUCC.

Thanks in advance.

Reshu.




Re: Ducc Problems

2015-02-22 Thread reshu.agarwal

I am running Uima-AS 2.6.0. Please have a look at the logs:

Feb 23, 2015 9:37:44 AM org.apache.uima.adapter.jms.service.UIMA_Service 
initialize(67)
INFO: UIMA-AS version 2.6.0


On 02/20/2015 08:20 PM, Jaroslaw Cwiklik wrote:

Reshu, can you confirm if you are running UIMA-AS 2.6.0 or earlier. You can
look at the log for something similar to this:

+--
+ Service Name:Person Title Annotator
+ Service Queue Name:PersonTitleAnnotatorQueue
+ Service Start Time:20 Feb 2015 09:25:43
+ UIMA AS Version:2.6.0   <---
+ UIMA Core Version:2.6.0
+ OS Name:Linux
+ OS Version:2.6.32-279.el6.x86_64
+ OS Architecture:amd64
+ OS CPU Count:4
+ JVM Vendor:IBM Corporation
+ JVM Name:IBM J9 VM
+ JVM Version:2.6

Jerry

On Thu, Feb 19, 2015 at 11:26 PM, reshu.agarwal 
wrote:


Dear Cwiklik,

There is only a 2-second delay between the last log message and
org.apache.uima.aae.controller.PrimitiveAnalysisEngineController_impl
quiesceAndStop.

Please have a look on the logs:

  Process Received a Message. Is Process target for message:true. Target

PID:22640


configFactory.stop() - stopped route:mina:tcp://localhost:

52449?transferExchange=true&sync=false


Feb 19, 2015 5:39:54 PM org.apache.uima.aae.controller.

PrimitiveAnalysisEngineController_impl quiesceAndStop
INFO: Stopping Controller: ducc.jd.queue.13202
Quiescing UIMA-AS Service. Remaining Number of CASes to Process:0
Feb 19, 2015 5:39:54 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Stopping Service JMS Transport. Service: ducc.jd.queue.13202
ShutdownNow false
Feb 19, 2015 5:39:54 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Controller: ducc.jd.queue.13202 Stopped Listener on Endpoint:
queue://ducc.jd.queue.13202 Selector:  Selector:Command=2000 OR
Command=2002.
Feb 19, 2015 5:39:54 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Stopping Service JMS Transport. Service: ducc.jd.queue.13202
ShutdownNow false
Feb 19, 2015 5:39:54 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Controller: ducc.jd.queue.13202 Stopped Listener on Endpoint:
queue://ducc.jd.queue.13202 Selector:  Selector:Command=2001.
Feb 19, 2015 5:39:56 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Stopping Service JMS Transport. Service: ducc.jd.queue.13202
ShutdownNow true
Feb 19, 2015 5:39:56 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Controller: ducc.jd.queue.13202 Stopped Listener on Endpoint:
queue://ducc.jd.queue.13202 Selector:  Selector:Command=2000 OR
Command=2002.
Feb 19, 2015 5:39:56 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Stopping Service JMS Transport. Service: ducc.jd.queue.13202
ShutdownNow true
Feb 19, 2015 5:39:56 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Controller: ducc.jd.queue.13202 Stopped Listener on Endpoint:
queue://ducc.jd.queue.13202 Selector:  Selector:Command=2001.
Feb 19, 2015 5:39:56 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Stopping Service JMS Transport. Service: ducc.jd.queue.13202
ShutdownNow false
Feb 19, 2015 5:39:56 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Controller: ducc.jd.queue.13202 Stopped Listener on Endpoint:
queue://ducc.jd.queue.13202 Selector:  Selector:Command=2000 OR
Command=2002.
Feb 19, 2015 5:39:56 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Stopping Service JMS Transport. Service: ducc.jd.queue.13202
ShutdownNow false
Feb 19, 2015 5:39:56 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Controller: ducc.jd.queue.13202 Stopped Listener on Endpoint:
queue://ducc.jd.queue.13202 Selector:  Selector:Command=2001.
Feb 19, 2015 5:39:56 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Stopping Service JMS Transport. Service: ducc.jd.queue.13202
ShutdownNow true
Feb 19, 2015 5:39:56 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Controller: ducc.jd.queue.13202 Stopped Listener on Endpoint:
queue://ducc.jd.queue.13202 Selector:  Selector:Command=2000 OR
Command=2002.
Feb 19, 2015 5:39:56 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Stopping Service JMS Transport. Service: ducc.jd.queue.13202
ShutdownNow true
Feb 19, 2015 5:39:56 PM org.apache.uima.adapter.jms.activemq.JmsInputChannel
stopChannel
INFO: Controller: ducc.jd.queue.13202 Stopped Listener on Endpoint:
queue://ducc.jd.queue.13202 Selector:  Selector:Command=2001.
UIMA-AS Service is Stopping, All CASes Have Been Processed
Feb 19, 2015 5:39:56 PM org.apache.uima.aae.controller.
PrimitiveAnalysisEngineController_impl stop
INFO: Stopping Controller: ducc.jd.queue.13202
Feb 19, 2015 5:39:56 PM org.apache.uima.adapter.jms.activemq.JmsInput

Re: Ducc Problems

2015-02-19 Thread reshu.agarwal
nnel Shutdown Completed

Thanks Reshu.


On 02/20/2015 12:40 AM, Jaroslaw Cwiklik wrote:

One possible explanation for destroy() not getting called is that a process
(JP) may be still working on a CAS when Ducc deallocates the process. Ducc
first asks the process to quiesce and stop and allows it 1 minute to
terminate on its own. If this does not happen, Ducc kills the process via
kill -9. In such case the process will be clobbered and destroy() methods
in UIMA-AS are not called.
There should be some evidence in JP logs at the very end. Look for
something like this:


Process Received a Message. Is Process target for message:true.

Target PID:27520

configFactory.stop() - stopped

route:mina:tcp://localhost:49338?transferExchange=true&sync=false
01:56:22.735 - 94:
org.apache.uima.aae.controller.PrimitiveAnalysisEngineController_impl.quiesceAndStop:
INFO: Stopping Controller: ducc.jd.queue.226091
Quiescing UIMA-AS Service. Remaining Number of CASes to Process:0

Look at the timestamp of >>>>>>>>> Process Received a Message. Is Process
target for message:true.
and compare it to a timestamp of the last log message. Does it look like
there is a long delay?


Jerry
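
The quiesce-then-kill contract described above maps onto a familiar Java
shutdown idiom. The sketch below is a generic illustration of that
contract, not DUCC's code:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class QuiesceSketch {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pipeline = Executors.newFixedThreadPool(4);
        // ... CAS-processing tasks are submitted to "pipeline" ...

        pipeline.shutdown();              // quiesce: accept no new work
        if (!pipeline.awaitTermination(1, TimeUnit.MINUTES)) {
            // In-flight work missed the deadline; force termination,
            // the analogue of DUCC's kill -9.
            pipeline.shutdownNow();
        }
    }
}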

On Wed, Feb 18, 2015 at 2:03 AM, reshu.agarwal 
wrote:


Dear Eddie,

This problem has been resolved by using the destroy method in DUCC version
1.0.0, but when I upgraded my DUCC version from 1.0.0 to 1.1.0, DUCC
didn't call the destroy method.

It also does not call the CollectionReader's stop method, the finalize
method of any java class, or the CAS consumer's
destroy/collectionProcessComplete methods.

I want to close my connection to the database after completion of the job,
and also want to use batch processing at the CAS consumer level, as in
PersonTitleDBWriterCasConsumer.

Thanks in advance.

Reshu.




On 03/31/2014 04:14 PM, reshu.agarwal wrote:


On 03/28/2014 05:28 PM, Eddie Epstein wrote:


Another alternative would be to do the final flush in the Cas consumer's
destroy method.

Another issue to be aware of, in order to balance resources between jobs,
DUCC uses preemption of job processes scheduled in a "fair-share" class.
This may not be acceptable for jobs which are doing incremental commits.
The solution is to schedule the job in a non-preemptable class.


On Fri, Mar 28, 2014 at 1:22 AM, reshu.agarwal 
wrote:

  On 03/28/2014 01:28 AM, Eddie Epstein wrote:

  Hi Reshu,

The Job model in DUCC is for the Collection Reader to send "work item
CASes", where a work item represents a collection of work to be done
by a
Job Process. For example, a work item could be a file or a subset of a
file
that contains many documents, where each document would be individually
put
into a CAS by the Cas Multiplier in the Job Process.

DUCC is designed so that after processing the "mini-collection"
represented
by the work item,  the Cas Consumer should flush any data. This is
done by
routing the "work item CAS" to the Cas Consumer, after all work item
documents are completed, at which point the CC does the flush.

The sample code described in
http://uima.apache.org/d/uima-ducc-1.0.0/duccbook.html#x1-1380009 uses
the
work item CAS to flush data in exactly this way.

Note that the PersonTitleDBWriterCasConsumer is doing a flush (a
commit)
in
the process method after every 50 documents.

Regards
Eddie



On Thu, Mar 27, 2014 at 1:35 AM, reshu.agarwal <
reshu.agar...@orkash.com>
wrote:

   On 03/26/2014 11:34 PM, Eddie Epstein wrote:


   Hi Reshu,


The collectionProcessingComplete() method in UIMA-AS has a
limitation: a
Collection Processing Complete request sent to the UIMA-AS Analysis
Service
is cascaded down to all delegates; however, if a particular delegate
is
scaled-out, only one of the instances of the delegate will get this
call.

Since DUCC is using UIMA-AS to scale out the Job processes, it has no
way
to deliver a CPC to all instances.

The applications we have been running on DUCC have used the Work Item
CAS
as a signal to CAS consumers to do CPC level processing. That is
discussed
in the first reference above, in the paragraph "Flushing Cached
Data".

Eddie



On Wed, Mar 26, 2014 at 9:48 AM, reshu.agarwal <
reshu.agar...@orkash.com>
wrote:

On 03/26/2014 06:43 PM, Eddie Epstein wrote:

 Are you using standard UIMA interface code to Solr? If so, which

Cas

  Consumer?

Taking at quick look at the source code for SolrCASConsumer, the
batch
and
collection process complete methods appear to do nothing.

Thanks,
Eddie


On Wed, Mar 26, 2014 at 6:08 AM, reshu.agarwal <
reshu.agar...@orkash.com>
wrote:

 On 03/21/2014 11:42 AM, reshu.agarwal wrote:

Hence we cannot attempt batch processing in the cas consumer, and it
increases our processing time. Is there any other option for that, or is
it a bug in DUCC?

Please reply on this problem: if I am sending documents to solr one by
one from the cas consumer without using batch process and

Re: Ducc Problems

2015-02-17 Thread reshu.agarwal

Dear Eddie,

This problem has been resolved by using the destroy method in DUCC version
1.0.0, but when I upgraded my DUCC version from 1.0.0 to 1.1.0, DUCC
didn't call the destroy method.

It also does not call the CollectionReader's stop method, the finalize
method of any java class, or the CAS consumer's
destroy/collectionProcessComplete methods.

I want to close my connection to the database after completion of the job,
and also want to use batch processing at the CAS consumer level, as in
PersonTitleDBWriterCasConsumer.

Thanks in advance.

Reshu.
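
A minimal sketch of the pattern under discussion: batch commits in
processCas() with a final flush and connection close in destroy(). This is
illustrative only; the connection field and the flush body are
placeholders, and it is not the actual PersonTitleDBWriterCasConsumer
source:

import java.util.ArrayList;
import java.util.List;

import org.apache.uima.cas.CAS;
import org.apache.uima.collection.CasConsumer_ImplBase;
import org.apache.uima.resource.ResourceInitializationException;
import org.apache.uima.resource.ResourceProcessException;

public class BatchingDbCasConsumer extends CasConsumer_ImplBase {

    private static final int BATCH_SIZE = 50;
    private final List<String> batch = new ArrayList<String>();
    private AutoCloseable connection;   // placeholder for a JDBC/Solr client

    public void initialize() throws ResourceInitializationException {
        // Open the database connection here (omitted).
    }

    public void processCas(CAS cas) throws ResourceProcessException {
        batch.add(cas.getDocumentText());
        if (batch.size() >= BATCH_SIZE) {
            flush();   // incremental commit every 50 documents
        }
    }

    public void destroy() {
        flush();       // final flush for a partial batch
        try {
            if (connection != null) {
                connection.close();   // release the DB connection
            }
        } catch (Exception e) {
            // Log and ignore during shutdown.
        }
    }

    private void flush() {
        // Write/commit the accumulated batch (omitted), then clear it.
        batch.clear();
    }
}

With this shape, the work-item-CAS routing Eddie describes elsewhere in
the thread can also call flush() when the work item CAS arrives, and
destroy() still covers the shutdown path.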



On 03/31/2014 04:14 PM, reshu.agarwal wrote:

On 03/28/2014 05:28 PM, Eddie Epstein wrote:

Another alternative would be to do the final flush in the Cas consumer's
destroy method.

Another issue to be aware of, in order to balance resources between 
jobs,

DUCC uses preemption of job processes scheduled in a "fair-share" class.
This may not be acceptable for jobs which are doing incremental commits.
The solution is to schedule the job in a non-preemptable class.


On Fri, Mar 28, 2014 at 1:22 AM, reshu.agarwal 
wrote:



On 03/28/2014 01:28 AM, Eddie Epstein wrote:


Hi Reshu,

The Job model in DUCC is for the Collection Reader to send "work item
CASes", where a work item represents a collection of work to be 
done by a

Job Process. For example, a work item could be a file or a subset of a
file
that contains many documents, where each document would be 
individually

put
into a CAS by the Cas Multiplier in the Job Process.

DUCC is designed so that after processing the "mini-collection"
represented
by the work item,  the Cas Consumer should flush any data. This is 
done by

routing the "work item CAS" to the Cas Consumer, after all work item
documents are completed, at which point the CC does the flush.

The sample code described in
http://uima.apache.org/d/uima-ducc-1.0.0/duccbook.html#x1-1380009 uses
the
work item CAS to flush data in exactly this way.

Note that the PersonTitleDBWriterCasConsumer is doing a flush (a 
commit)

in
the process method after every 50 documents.

Regards
Eddie



On Thu, Mar 27, 2014 at 1:35 AM, reshu.agarwal 


wrote:

  On 03/26/2014 11:34 PM, Eddie Epstein wrote:

  Hi Reshu,
The collectionProcessingComplete() method in UIMA-AS has a 
limitation: a

Collection Processing Complete request sent to the UIMA-AS Analysis
Service
is cascaded down to all delegates; however, if a particular 
delegate is

scaled-out, only one of the instances of the delegate will get this
call.

Since DUCC is using UIMA-AS to scale out the Job processes, it 
has no

way
to deliver a CPC to all instances.

The applications we have been running on DUCC have used the Work 
Item

CAS
as a signal to CAS consumers to do CPC level processing. That is
discussed
in the first reference above, in the paragraph "Flushing Cached 
Data".


Eddie



On Wed, Mar 26, 2014 at 9:48 AM, reshu.agarwal <
reshu.agar...@orkash.com>
wrote:

   On 03/26/2014 06:43 PM, Eddie Epstein wrote:

   Are you using standard UIMA interface code to Solr? If so, 
which Cas



Consumer?

Taking at quick look at the source code for SolrCASConsumer, 
the batch

and
collection process complete methods appear to do nothing.

Thanks,
Eddie


On Wed, Mar 26, 2014 at 6:08 AM, reshu.agarwal <
reshu.agar...@orkash.com>
wrote:

On 03/21/2014 11:42 AM, reshu.agarwal wrote:

 Hence we cannot attempt batch processing in the CAS consumer, and it increases our processing time. Is there any other option for that, or is it a bug in DUCC?

Please reply on this problem: I am sending documents to Solr one by one from the CAS consumer, without batch processing, and committing Solr each time. This is not an optimal way to use it. Why is DUCC not calling the collectionProcessComplete method of the CAS consumer? And if I want to do that, what is the way to do it?

I am not able to find anything about this in the DUCC book.

Thanks in advance.

--
Thanks,
Reshu Agarwal


Hi Eddie,

  I am not using the standard UIMA interface code to Solr; I created my own CAS consumer. I will take a look at that too. But the problem is not specific to Solr, as I can use any store for my output. I want to do batch processing and use collectionProcessComplete. Why is DUCC not calling it? I checked with UIMA-AS as well: my CAS consumer works fine there and performs batch processing.

--
Thanks,
Reshu Agarwal


   Hi Eddie,


I am using cas consumer similar to apache uima example:

   "apache-uima/examples/src/org/apache/uima/examples/cpe/
PersonTitleDBWriterCasConsumer.java"

--
Thanks,
Reshu Agarwal


  Hi Eddie,

You are right, I know this fact. PersonTitleDBWriterCasConsumer does a flush (a commit) in the process method after every 50 documents; if fewer than 50 documents remain in the batch, the commit/flush is done by the collectionProcessComplete method. So, if that is not called, then those docum

DUCC- Heartbeat Packets?

2015-02-10 Thread reshu.agarwal

Hi,

I read in DUCC book about:

Agents monitor nodes, sending heartbeat packets with node statistics to 
interested components (such as the RM and web-server).


Status

   This shows the current state of a machine. Values include:

   defined
   The node is in the DUCCnodes file
   ,
   but no DUCC process has been started there, or else there is a
   communication problem and the state messages are not being
   delivered.
   up
   The node has a DUCC Agent process running on it and the web
   server is receiving regular heartbeat packets from it.
   down
   The node had a healthy DUCC Agent on it at some point in the
   past (since the last DUCC boot), but the web server has stopped
   receiving heartbeats from it.

   The agent may have been manually shut down, may have crashed, or
   there may be a communication problem.

    Additionally, very heavy loads from jobs running on the node
    can cause the DUCC Agent's heartbeats to be delayed.

I have some questions in mind:

1. What are heartbeat packets?
2. Are they the same as described at this URL: http://250bpm.com/blog:22?
3. How do daemons broadcast a heartbeat?
4. How do Agent nodes send heartbeat packets?

My DUCC Agents were going down again and again during a particular time 
period, so:

5. How can I tell whether the Agents were going down due to a network issue?

Thanks in advance.

Reshu.
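
Conceptually, a heartbeat is just a small status message published on a fixed timer; the receiver marks a node "down" when nothing arrives within its tolerance window. A minimal illustration of the idea only (this is NOT DUCC's agent code; DUCC agents publish node metrics over ActiveMQ, not raw UDP, and the host and port below are invented):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal heartbeat illustration: publish a small status message on a
// fixed schedule; a receiver that stops hearing from us within its
// tolerance window would mark this node "down".
public class HeartbeatSender {
    public static void main(String[] args) throws Exception {
        final DatagramSocket socket = new DatagramSocket();
        final InetAddress monitor = InetAddress.getByName("monitor-host"); // illustrative host
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    String status = "node=S144 freeMemGB=20 time=" + System.currentTimeMillis();
                    byte[] payload = status.getBytes(StandardCharsets.UTF_8);
                    socket.send(new DatagramPacket(payload, payload.length, monitor, 9999));
                } catch (Exception e) {
                    // a missed beat is fine; the receiver's tolerance window absorbs it
                }
            }
        }, 0, 30, TimeUnit.SECONDS); // publish every 30 seconds
    }
}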


DUCC- Agent1 is on Physical and Agent2 is on virtual=Slow the job process timing

2014-12-18 Thread reshu.agarwal


Hi,

Is there any problem if one Agent node is on a physical machine (the master) 
and another Agent node is on a virtual machine?

I am running a job with an average processing time of 20 minutes, both when 
DUCC was configured on a single (physical) machine and when both nodes were 
physical machines.

When I moved one Agent node to a virtual machine, the job's average 
processing time increased to 1 hour. I noticed that my job driver was also 
running only on the virtual machine's Agent node.

Can we pin the job driver to a specific Agent node so that I can test other 
scenarios? I also tried running my job's processes on the physical machine's 
Agent node, but that didn't change the processing time much.

Thanks in advance.

Reshu.


Re: DUCC-unstable behaviour of ducc

2014-12-10 Thread reshu.agarwal

Dear Lou,

My problem has been resolved. I just increased the maximum time for 
receiving the agents' heartbeats.

The "unstable behavior" of DUCC 1.1.0 in my case was the node up/down 
problem, which occurred both with a single instance of DUCC 1.1.0 and when 
running both DUCC versions simultaneously.

Now I am able to run DUCC 1.1.0 alone, so only DUCC 1.1.0 is configured.


Thanks for your help. :-)

Reshu.



On 12/08/2014 04:24 PM, Lou DeGenaro wrote:

What is the "unstable behavior" of DUCC 1.1.0 when running it alone?

All kinds of bad things can happen if you run 2 DUCCs on the same set of
machines. I'm willing to help, but am not sure I can if you are running 2
DUCCs - that's fairly complex.  Instead I urge you to run a single DUCC
1.1.0 and let's try to fix what's wrong with running it alone.

Lou.

On Sun, Dec 7, 2014 at 11:40 PM, reshu.agarwal 
wrote:


Yes, I am running both at same time. But I tried only 1.1.0 version to
check the performance.But, due to unstable behaviour I had to run DUCC
1.0.0 and DUCC 1.1.0 at the same time.  I am running DUCC 1.0.0 for running
Jobs and DUCC 1.1.0 for testing purpose.

Do I need to increase heartbeats timing to greater than to 60 sec?
Reshu.


On 12/05/2014 05:57 PM, Lou DeGenaro wrote:


You can fetch the latest code containing the bug fix from SVN and build
your own snapshot.  However, this bug is of minimal impact so there is no
pressing need to do so.

Are you trying to run 1.0 and 1.1 at the same time?  This can be very
tricky.  You need to be sure of no overlaps.  I highly recommend that you
pick one or the other.

Lou.

On Fri, Dec 5, 2014 at 6:31 AM, reshu.agarwal 
wrote:

  Dear Lou,

Thanks for confirming this.

Is Bug fixing version available for use?

What can be the reason of delaying in heartbeats? Because machines were
not able to send heartbeats with in 60 seconds so node gets down in DUCC
1.1.0 but DUCC 1.0.0 is working fine on same machines.

My master node is physical and client is on virtual. Can this be a reason
for delaying in heartbeats as well as increase of processing time of job?

Thanks.

Reshu.


On 12/05/2014 04:45 PM, Lou DeGenaro wrote:

  Each node has a DUCC Agent daemon that sends heartbeats.

There was a bug discovered after the release of 1.1 whereby the share
calculation is incorrect (a rounding up problem that you observe).  The
impact of this bug should be minimal.  The bug has been fixed.

Lou.



On Fri, Dec 5, 2014 at 12:41 AM, reshu.agarwal <
reshu.agar...@orkash.com>
wrote:

   Lou,


How can a node send heartbeats in DUCC? If you can tell me this I will
be
able to identify problem of down in my nodes.

The other problem which I am facing is:

Memory(GB):total:   31
Memory(GB):usable :   16
Shares:total :8
Shares:inuse:   9


Means actual RAM which is available is 30 GB so shares available should
be
15(2GB per share) but it is showing Memory(GB):usable :   16 and
Shares:total :8.

In DUCC 1.0.0, I don't face this problem.

Please explain me its reason.

Reshu.



On 12/04/2014 06:42 PM, Lou DeGenaro wrote:

Which of these are not understandable?  If you hover over the column
heading, a little more explanation is given (though not much).

For example, if you hover over Heartbeat(last) you'll see "The elapsed
time (in seconds) since the last heartbeat".  This should usually be around
60 seconds.  On the system I'm looking at live presently, I see a range
from 9 to 66.  If the number gets too large, the DUCC system will consider
the node down.  As best as I can tell, it looks like your numbers are 58 &
59, which is perfect.

Lou.

On Thu, Dec 4, 2014 at 7:41 AM, reshu.agarwal <
reshu.agar...@orkash.com
wrote:

Hi,

Please look at these stats:

Status  Name  Memory(GB):usable  Memory(GB):total  Swap(GB):inuse  Swap(GB):free  Alien PIDs  Shares:total  Shares:inuse  Heartbeat(last)
Total         58                 70                0               29             9           29            13
up      S144  36                 39                0               20             8           18            2             59
down    S143  22                 31                0               9              1           11            11            58

I am not able to understand these stats.

Please help.

Reshu.









Re: DUCC-unstable behaviour of ducc

2014-12-07 Thread reshu.agarwal


Yes, I am running both at the same time. I initially tried only the 1.1.0 
version to check its performance, but due to the unstable behaviour I had to 
run DUCC 1.0.0 and DUCC 1.1.0 at the same time. I am running DUCC 1.0.0 for 
production jobs and DUCC 1.1.0 for testing purposes.

Do I need to increase the heartbeat timeout to more than 60 seconds?
Reshu.

On 12/05/2014 05:57 PM, Lou DeGenaro wrote:

You can fetch the latest code containing the bug fix from SVN and build
your own snapshot.  However, this bug is of minimal impact so there is no
pressing need to do so.

Are you trying to run 1.0 and 1.1 at the same time?  This can be very
tricky.  You need to be sure of no overlaps.  I highly recommend that you
pick one or the other.

Lou.

On Fri, Dec 5, 2014 at 6:31 AM, reshu.agarwal 
wrote:


Dear Lou,

Thanks for confirming this.

Is Bug fixing version available for use?

What can be the reason of delaying in heartbeats? Because machines were
not able to send heartbeats with in 60 seconds so node gets down in DUCC
1.1.0 but DUCC 1.0.0 is working fine on same machines.

My master node is physical and client is on virtual. Can this be a reason
for delaying in heartbeats as well as increase of processing time of job?

Thanks.

Reshu.


On 12/05/2014 04:45 PM, Lou DeGenaro wrote:


Each node has a DUCC Agent daemon that sends heartbeats.

There was a bug discovered after the release of 1.1 whereby the share
calculation is incorrect (a rounding up problem that you observe).  The
impact of this bug should be minimal.  The bug has been fixed.

Lou.



On Fri, Dec 5, 2014 at 12:41 AM, reshu.agarwal 
wrote:

  Lou,

How can a node send heartbeats in DUCC? If you can tell me this I will be
able to identify problem of down in my nodes.

The other problem which I am facing is:

Memory(GB):total:   31
Memory(GB):usable :   16
Shares:total :8
Shares:inuse:   9


Means actual RAM which is available is 30 GB so shares available should
be
15(2GB per share) but it is showing Memory(GB):usable :   16 and
Shares:total :8.

In DUCC 1.0.0, I don't face this problem.

Please explain me its reason.

Reshu.



On 12/04/2014 06:42 PM, Lou DeGenaro wrote:

  Which of these are not understandable?  If you hover over the column
heading, a little more explanation is given (though not much).

For example, if you hover over Heartbeat(last) you'll see "The elapsed
time (in seconds) since the last heartbeat".  This should usually be around
60 seconds.  On the system I'm looking at live presently, I see a range
from 9 to 66.  If the number gets too large, the DUCC system will consider
the node down.  As best as I can tell, it looks like your numbers are 58 &
59, which is perfect.

Lou.

On Thu, Dec 4, 2014 at 7:41 AM, reshu.agarwal wrote:

Please look at these stats:

Status  Name  Memory(GB):usable  Memory(GB):total  Swap(GB):inuse  Swap(GB):free  Alien PIDs  Shares:total  Shares:inuse  Heartbeat(last)
Total         58                 70                0               29             9           29            13
up      S144  36                 39                0               20             8           18            2             59
down    S143  22                 31                0               9              1           11            11            58

I am not able to understand these stats.

Please help.

Reshu.








Re: DUCC-unstable behaviour of ducc

2014-12-05 Thread reshu.agarwal

Dear Lou,

Thanks for confirming this.

Is a bug-fix version available for use?

What could be the reason for delayed heartbeats? The machines were not able 
to send heartbeats within 60 seconds, so nodes go down in DUCC 1.1.0, while 
DUCC 1.0.0 works fine on the same machines.

My master node is physical and the client is virtual. Could this be a 
reason for the delayed heartbeats as well as the increased job processing 
time?


Thanks.

Reshu.

On 12/05/2014 04:45 PM, Lou DeGenaro wrote:

Each node has a DUCC Agent daemon that sends heartbeats.

There was a bug discovered after the release of 1.1 whereby the share
calculation is incorrect (a rounding up problem that you observe).  The
impact of this bug should be minimal.  The bug has been fixed.

Lou.



On Fri, Dec 5, 2014 at 12:41 AM, reshu.agarwal 
wrote:


Lou,

How can a node send heartbeats in DUCC? If you can tell me this I will be
able to identify problem of down in my nodes.

The other problem which I am facing is:

Memory(GB):total:   31
Memory(GB):usable :   16
Shares:total :8
Shares:inuse:   9


Means actual RAM which is available is 30 GB so shares available should be
15(2GB per share) but it is showing Memory(GB):usable :   16 and
Shares:total :8.

In DUCC 1.0.0, I don't face this problem.

Please explain me its reason.

Reshu.



On 12/04/2014 06:42 PM, Lou DeGenaro wrote:


Which of these are not understandable?  If you hover over the column
heading, a little more explanation is given (though not much).

For example, if you hover over Heartbeat(last) you'll see "The elapsed
time (in seconds) since the last heartbeat".  This should usually be around
60 seconds.  On the system I'm looking at live presently, I see a range
from 9 to 66.  If the number gets too large, the DUCC system will consider
the node down.  As best as I can tell, it looks like your numbers are 58 &
59, which is perfect.

Lou.

On Thu, Dec 4, 2014 at 7:41 AM, reshu.agarwal 
wrote:

  Hi,

Please look at these stats:

Status  Name  Memory(GB):usable  Memory(GB):total  Swap(GB):inuse  Swap(GB):free  Alien PIDs  Shares:total  Shares:inuse  Heartbeat(last)
Total         58                 70                0               29             9           29            13
up      S144  36                 39                0               20             8           18            2             59
down    S143  22                 31                0               9              1           11            11            58

I am not able to understand these stats.

Please help.

Reshu.







Re: DUCC-unstable behaviour of ducc

2014-12-04 Thread reshu.agarwal

Lou,

How does a node send heartbeats in DUCC? If you can tell me this, I will be 
able to identify the problem causing my nodes to go down.

The other problem which I am facing is:

Memory(GB):total  : 31
Memory(GB):usable : 16
Shares:total      : 8
Shares:inuse      : 9

This means the actual available RAM is 30 GB, so the available shares should 
be 15 (2 GB per share), but it is showing Memory(GB):usable: 16 and 
Shares:total: 8.

In DUCC 1.0.0 I don't face this problem.

Please explain the reason.

Reshu.


On 12/04/2014 06:42 PM, Lou DeGenaro wrote:

Which of these are not understandable?  If you hover over the column heading,
a little more explanation is given (though not much).

For example, If you hover over Heartbeat(last) you'll see "The elapsed time
(in seconds) since the last heartbeat".  This should usually be around 60
seconds.  On the system I'm looking at live presently, I see a range from 9
to 66.  If the number gets too large, the DUCC system will consider the
node down.  As best as I can tell, it looks like your numbers are 58 & 59
which is perfect.

Lou.

On Thu, Dec 4, 2014 at 7:41 AM, reshu.agarwal 
wrote:


Hi,

Please look at these stats:

Status  Name  Memory(GB):usable  Memory(GB):total  Swap(GB):inuse  Swap(GB):free  Alien PIDs  Shares:total  Shares:inuse  Heartbeat(last)
Total         58                 70                0               29             9           29            13
up      S144  36                 39                0               20             8           18            2             59
down    S143  22                 31                0               9              1           11            11            58

I am not able to understand these stats.

Please help.

Reshu.






DUCC-unstable behaviour of ducc

2014-12-04 Thread reshu.agarwal

Hi,

Please look at these stats:

Status  Name  Memory(GB):usable  Memory(GB):total  Swap(GB):inuse  Swap(GB):free  Alien PIDs  Shares:total  Shares:inuse  Heartbeat(last)
Total         58                 70                0               29             9           29            13
up      S144  36                 39                0               20             8           18            2             59
down    S143  22                 31                0               9              1           11            11            58

I am not able to understand these stats.

Please help.

Reshu.



DUCC- unstable behaviour of DUCC 1.1.0

2014-12-04 Thread reshu.agarwal


Hi,

Please look at these stats:

Status  Name  Memory(GB):usable  Memory(GB):total  Swap(GB):inuse  Swap(GB):free  Alien PIDs  Shares:total  Shares:inuse  Heartbeat(last)
Total         58                 70                0               29             9           29            13
up      S144  36                 39                0               20             8           18            2             59
up      S143  22                 31                0               9              1           11            11            58

I am not able to understand these stats.

Please help.

Reshu.


Re: DUCC 1.1.0- How to Run two DUCC version on same machines with different user

2014-11-17 Thread reshu.agarwal

Hi,

I am getting this error in the job process as well as on login:

 arg[14]: org.apache.uima.ducc.common.main.DuccService
1001 Command launching...
/usr/java/jdk1.7.0_17/jre/bin/java: error while loading shared libraries: 
libjli.so: cannot open shared object file: No such file or directory

How do I resolve this?

Thanks in advance.
Reshu.

On 11/18/2014 10:10 AM, reshu.agarwal wrote:

Dear Jim,

When I was trying DUCC 1.1.0 on the nodes on which DUCC 1.0.0 was 
running perfectly, I first stopped DUCC 1.0.0 using ./check_ducc -k . 
My Broker ports were different that time as well as I made changes in 
duccling path. When I started DUCC 1.1.0. It looks like working fine. 
But Then I faced agent instability problem so I re- configured DUCC 
1.0.0.


Then I tried to configure DUCC 1.0.0 and DUCC 1.1.0. My Broker ports 
were different and I made all possible changes in ports so, that it 
wouldn't conflict with other ducc's ports. Right Now, I am still 
working to configure both on same nodes.


I will try what you suggested to me. I will let you know if I succeed.

If I missed something please reply me.

Thanks.

Reshu.

On 11/18/2014 04:06 AM, Jim Challenger wrote:

An excellent question by Lou.

Been wracking my brains over what would cause the agent instability 
and this would absolutely do it.


What will likely happen if you try to run 2 DUCCs on the same 
machines if you don't change the broker
port, is the second DUCC will see the broker alive and figure all is 
well.  There are a number of
use-cases where this is acceptable so we don't throw an alert, e.g. 
if you choose to use a non-DUCC

managed broker (as we do here).

To add to the confusion, sometimes the 'activemq stop' that DUCC 
issues doesn't work, for reasons out

of DUCC control, so when you think the broker is down, it isn't.

Try this:
1.  stop all the DUCCS, then use the ps command to make sure there is 
no errant broker. I use this:

   ps auxw | grep DUCC_AMQ_PORT
 and kill -9 any process that it shows.
2.  Now start JUST ONE DUCC, I suggest the 1.1.0, and see if life 
gets better.  1.1.0 has some nice
 things so you'll be better with that if we can make it work for 
you.


Jim

On 11/17/14, 8:03 AM, Lou DeGenaro wrote:
Are these problems related?  That is, are you having the node down 
problem

and the multiple DUCC's problem together on the same set of nodes?

Can you run the either configuration alone without issue?

Lou.

On Mon, Nov 17, 2014 at 7:41 AM, reshu.agarwal 


wrote:


Lou,

I have changed the broker port and ws port too but still faced a 
problem

in starting the ducc1.1.0 version simultaneously.

Reshu.


On 11/17/2014 05:34 PM, Lou DeGenaro wrote:


The broker port is specifiable in ducc.properties.  The default is
ducc.broker.port = 61617.

Lou.

On Mon, Nov 17, 2014 at 5:29 AM, Simon Hafner 
wrote:

  2014-11-17 0:00 GMT-06:00 reshu.agarwal :
I want to run two DUCC version i.e. 1.0.0 and 1.1.0 on same 
machines

with
different user. Can this be possible?


Yes, that should be possible. You'll have to make sure there are no
port conflicts, I'd guess the ActiveMQ port is hardcoded, the rest
might be randomly assigned. Just set that port manually and watch 
out

for any errors during the start to see which other components have
hardcoded port numbers.

Personally, I'd just fire up a VM with qemu or VirtualBox.











DUCC-Un-managed Reservation??

2014-11-17 Thread reshu.agarwal


Hi,

I am a bit confused: why do we need an un-managed reservation? Suppose we 
give 5 GB of memory to such a reservation; can this RAM be consumed by any 
other process if required?

In my scenario, when all the RAM of the nodes was consumed by jobs, all 
processes went into a waiting state. I need to reserve some RAM so that it 
cannot be consumed by job-process shares, but can still be used internally 
if required.

Can an un-managed reservation be used for this?

Thanks in advance.

Reshu.




Re: DUCC 1.1.0- How to Run two DUCC version on same machines with different user

2014-11-17 Thread reshu.agarwal

Dear Jim,

When I was trying DUCC 1.1.0 on the nodes on which DUCC 1.0.0 was running 
perfectly, I first stopped DUCC 1.0.0 using ./check_ducc -k. My broker ports 
were different at that time, and I had also changed the duccling path. When 
I started DUCC 1.1.0 it looked like it was working fine, but then I faced 
the agent instability problem, so I re-configured DUCC 1.0.0.

Then I tried to configure DUCC 1.0.0 and DUCC 1.1.0 together. The broker 
ports were different, and I made all possible changes to the ports so that 
they wouldn't conflict with the other DUCC's ports. Right now I am still 
working on configuring both on the same nodes.

I will try what you suggested and will let you know if I succeed.

If I missed something, please tell me.

Thanks.

Reshu.

On 11/18/2014 04:06 AM, Jim Challenger wrote:

An excellent question by Lou.

Been wracking my brains over what would cause the agent instability 
and this would absolutely do it.


What will likely happen if you try to run 2 DUCCs on the same machines 
if you don't change the broker
port, is the second DUCC will see the broker alive and figure all is 
well.  There are a number of
use-cases where this is acceptable so we don't throw an alert, e.g. if 
you choose to use a non-DUCC

managed broker (as we do here).

To add to the confusion, sometimes the 'activemq stop' that DUCC 
issues doesn't work, for reasons out

of DUCC control, so when you think the broker is down, it isn't.

Try this:
1.  stop all the DUCCS, then use the ps command to make sure there is 
no errant broker. I use this:

   ps auxw | grep DUCC_AMQ_PORT
 and kill -9 any process that it shows.
2.  Now start JUST ONE DUCC, I suggest the 1.1.0, and see if life gets 
better.  1.1.0 has some nice

 things so you'll be better with that if we can make it work for you.

Jim

On 11/17/14, 8:03 AM, Lou DeGenaro wrote:
Are these problems related?  That is, are you having the node down 
problem

and the multiple DUCC's problem together on the same set of nodes?

Can you run the either configuration alone without issue?

Lou.

On Mon, Nov 17, 2014 at 7:41 AM, reshu.agarwal 


wrote:


Lou,

I have changed the broker port and ws port too but still faced a 
problem

in starting the ducc1.1.0 version simultaneously.

Reshu.


On 11/17/2014 05:34 PM, Lou DeGenaro wrote:


The broker port is specifiable in ducc.properties.  The default is
ducc.broker.port = 61617.

Lou.

On Mon, Nov 17, 2014 at 5:29 AM, Simon Hafner 
wrote:

  2014-11-17 0:00 GMT-06:00 reshu.agarwal :

I want to run two DUCC version i.e. 1.0.0 and 1.1.0 on same machines
with
different user. Can this be possible?


Yes, that should be possible. You'll have to make sure there are no
port conflicts, I'd guess the ActiveMQ port is hardcoded, the rest
might be randomly assigned. Just set that port manually and watch out
for any errors during the start to see which other components have
hardcoded port numbers.

Personally, I'd just fire up a VM with qemu or VirtualBox.








Re: DUCC 1.1.0- How to Run two DUCC version on same machines with different user

2014-11-17 Thread reshu.agarwal


Dear Lou,

These are two different problems, but the nodes are the same. DUCC 1.0.0 
runs perfectly on the same nodes.

When I was trying to configure DUCC 1.1.0 by itself, I faced the node-down 
problem. Because of this, I thought I would try running DUCC 1.0.0 and DUCC 
1.1.0 simultaneously on the same nodes to compare their behaviour. But now 
I am facing problems with this too.


Reshu.


On 11/17/2014 06:33 PM, Lou DeGenaro wrote:

Are these problems related?  That is, are you having the node down problem
and the multiple DUCC's problem together on the same set of nodes?

Can you run the either configuration alone without issue?

Lou.

On Mon, Nov 17, 2014 at 7:41 AM, reshu.agarwal 
wrote:


Lou,

I have changed the broker port and ws port too but still faced a problem
in starting the ducc1.1.0 version simultaneously.

Reshu.


On 11/17/2014 05:34 PM, Lou DeGenaro wrote:


The broker port is specifiable in ducc.properties.  The default is
ducc.broker.port = 61617.

Lou.

On Mon, Nov 17, 2014 at 5:29 AM, Simon Hafner 
wrote:

  2014-11-17 0:00 GMT-06:00 reshu.agarwal :

I want to run two DUCC version i.e. 1.0.0 and 1.1.0 on same machines
with
different user. Can this be possible?


Yes, that should be possible. You'll have to make sure there are no
port conflicts, I'd guess the ActiveMQ port is hardcoded, the rest
might be randomly assigned. Just set that port manually and watch out
for any errors during the start to see which other components have
hardcoded port numbers.

Personally, I'd just fire up a VM with qemu or VirtualBox.






Re: DUCC 1.1.0- How to Run two DUCC version on same machines with different user

2014-11-17 Thread reshu.agarwal

Lou,

I have changed the broker port and the ws port too, but I still faced a 
problem when starting the DUCC 1.1.0 version simultaneously.


Reshu.

On 11/17/2014 05:34 PM, Lou DeGenaro wrote:

The broker port is specifiable in ducc.properties.  The default is
ducc.broker.port = 61617.

Lou.

On Mon, Nov 17, 2014 at 5:29 AM, Simon Hafner  wrote:


2014-11-17 0:00 GMT-06:00 reshu.agarwal :

I want to run two DUCC version i.e. 1.0.0 and 1.1.0 on same machines with
different user. Can this be possible?

Yes, that should be possible. You'll have to make sure there are no
port conflicts, I'd guess the ActiveMQ port is hardcoded, the rest
might be randomly assigned. Just set that port manually and watch out
for any errors during the start to see which other components have
hardcoded port numbers.

Personally, I'd just fire up a VM with qemu or VirtualBox.
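
For example, a second DUCC instance's ducc.properties would need its own ports; something like the following (61617 is the documented broker default; the other values, and the exact web-server port key name, should be checked against your release's ducc.properties):

ducc.broker.port = 61618
ducc.ws.port     = 42134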





Re: DUCC-1.1.0: Machines are going down very frequently

2014-11-17 Thread reshu.agarwal

Lou,

I looked for any signs of errors or exceptions but didn't find any.

Reshu.
On 11/17/2014 05:18 PM, Lou DeGenaro wrote:

Reshu,

Have you tried looking at the log files in DUCC's log directory for signs
of errors or exceptions?  Are any daemons producing core dumps?

Lou.

On Mon, Nov 17, 2014 at 1:21 AM, reshu.agarwal 
wrote:


Dear Lou,

I am using default configuration:

ducc.agent.node.metrics.publish.rate=3
ducc.rm.node.stability = 5

Reshu.


On 11/12/2014 05:03 PM, Lou DeGenaro wrote:


What do you have defined in your ducc.properties for
ducc.rm.node.stability and ducc.agent.node.metrics.publish.rate?  The
Web Server considers a node down according to the following
calculation:

private long getAgentMillisMIA() {
  String location = "getAgentMillisMIA";
  long secondsMIA = DOWN_AFTER_SECONDS*SECONDS_PER_MILLI;
  Properties properties = DuccWebProperties.get();
  String s_tolerance = properties.getProperty("ducc.
rm.node.stability");
  String s_rate =
properties.getProperty("ducc.agent.node.metrics.publish.rate");
  try {
  long tolerance = Long.parseLong(s_tolerance.trim());
  long rate = Long.parseLong(s_rate.trim());
  secondsMIA = (tolerance * rate) / 1000;
  }
  catch(Throwable t) {
  logger.warn(location, jobid, t);
  }
  return secondsMIA;
  }

The default is 65 seconds. Note that the Web Server has no effect on
actual operations in this case.  It is just a reporter of information.

Lou.

On Wed, Nov 12, 2014 at 12:45 AM, reshu.agarwal
 wrote:


Hi,

When I was trying DUCC-1.1.0 on multi machine, I have faced an up-down
status problem in machines. I have configured two machines and these
machines are going down one by one. This makes the DUCC Services disable
and
Jobs to be initialize again and again.

DUCC 1.0.0 was working fine on same machines.

How can I fix this problem? I have also compared ducc.properties file for
both versions. Both are using same configuration to check heartbeats.

Re-Initialization of Jobs are increasing the processing time. Can I
change
or re-configure this process?

Services are getting disabled automatically and showing excessive
Initialization error status on mark over on disabled status but logs are
not
showing any error.

I have to use DUCC 1.0.0 instead of DUCC 1.1.0.

Thanks in Advance.

--
*Reshu Agarwal*






Re: DUCC-1.1.0: Machines are going down very frequently

2014-11-16 Thread reshu.agarwal


Dear Lou,

I am using default configuration:

ducc.agent.node.metrics.publish.rate=3
ducc.rm.node.stability = 5

Reshu.

On 11/12/2014 05:03 PM, Lou DeGenaro wrote:

What do you have defined in your ducc.properties for
ducc.rm.node.stability and ducc.agent.node.metrics.publish.rate?  The
Web Server considers a node down according to the following
calculation:

private long getAgentMillisMIA() {
 String location = "getAgentMillisMIA";
 long secondsMIA = DOWN_AFTER_SECONDS*SECONDS_PER_MILLI;
 Properties properties = DuccWebProperties.get();
 String s_tolerance = properties.getProperty("ducc.rm.node.stability");
 String s_rate =
properties.getProperty("ducc.agent.node.metrics.publish.rate");
 try {
 long tolerance = Long.parseLong(s_tolerance.trim());
 long rate = Long.parseLong(s_rate.trim());
 secondsMIA = (tolerance * rate) / 1000;
 }
 catch(Throwable t) {
 logger.warn(location, jobid, t);
 }
 return secondsMIA;
 }

The default is 65 seconds. Note that the Web Server has no effect on
actual operations in this case.  It is just a reporter of information.
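
To make the arithmetic concrete: if ducc.agent.node.metrics.publish.rate were 30000 (milliseconds) and ducc.rm.node.stability were 5 (values purely illustrative), secondsMIA would be (5 * 30000) / 1000 = 150, i.e. the node is reported down after 150 seconds without a heartbeat.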

Lou.

On Wed, Nov 12, 2014 at 12:45 AM, reshu.agarwal
 wrote:

Hi,

When I was trying DUCC-1.1.0 on multi machine, I have faced an up-down
status problem in machines. I have configured two machines and these
machines are going down one by one. This makes the DUCC Services disable and
Jobs to be initialize again and again.

DUCC 1.0.0 was working fine on same machines.

How can I fix this problem? I have also compared ducc.properties file for
both versions. Both are using same configuration to check heartbeats.

Re-Initialization of Jobs are increasing the processing time. Can I change
or re-configure this process?

Services are getting disabled automatically and showing excessive
Initialization error status on mark over on disabled status but logs are not
showing any error.

I have to use DUCC 1.0.0 instead of DUCC 1.1.0.

Thanks in Advance.

--
*Reshu Agarwal*





DUCC 1.1.0- How to Run two DUCC version on same machines with different user

2014-11-16 Thread reshu.agarwal

Hi,

I want to run two DUCC versions, i.e. 1.0.0 and 1.1.0, on the same machines 
with different users. Is this possible?

Thanks in advance.

Reshu.



DUCC-1.1.0: Machines are going down very frequently

2014-11-11 Thread reshu.agarwal


Hi,

When I was trying DUCC 1.1.0 on multiple machines, I faced an up/down status 
problem with the machines. I have configured two machines, and they keep 
going down one by one. This disables the DUCC services and causes jobs to be 
initialized again and again.

DUCC 1.0.0 was working fine on the same machines.

How can I fix this problem? I have also compared the ducc.properties files 
of both versions; both use the same configuration to check heartbeats.

The re-initialization of jobs is increasing the processing time. Can I 
change or re-configure this behavior?

Services are getting disabled automatically, showing an "excessive 
initialization" error status when hovering over the disabled status, but the 
logs show no error.

For now I have to use DUCC 1.0.0 instead of DUCC 1.1.0.

Thanks in Advance.

--
*Reshu Agarwal*



Re: is DUCC having UIMA AS remote Analysis Engine functionality for Deployment descriptor?

2014-08-01 Thread reshu.agarwal


Hi Lou,

Thanks for the reply. I haven't gone through Managed and Unmanaged 
Reservations yet. Maybe these can be useful for my scenario.





On 07/31/2014 07:00 PM, Lou DeGenaro wrote:

There are several paradigms provided by DUCC:

1. Jobs

For these you use ducc_submit to tell DUCC about your UIMA CR and AE.

2. Services

For these you use ducc_services to register and deploy your UIMA AE.

3. Managed Reservations

For these you use ducc_process_submit to tell DUCC about your arbitrary
process (UIMA, java, C++, whatever...)

4. Unmanaged Reservations

For these you use ducc_reserve to have DUCC find an empty slot on a
machine, but it's up to you to deploy your application there.

Hope this short survey is helpful.

Lou.
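
For reference, each paradigm has its own CLI entry point; a sketch of typical invocations (specification file names are illustrative, and the exact options accepted by each command are in the DuccBook):

ducc_submit --specification job.properties
ducc_services --register service.properties
ducc_process_submit --specification process.properties
ducc_reserve --specification reservation.properties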






On Thu, Jul 31, 2014 at 8:47 AM, Burn Lewis  wrote:


If you independently deploy a UIMA-AS AE without using DUCC, then you can
access that as a service from a DUCC job just the same way you access a
DUCC-deployed service, i.e. with a JMS service descriptor in your job's
AE,  The difference is that you wouldn't declare it as a dependency of your
job since DUCC is not managing the service.

When you submit a DUCC job you can specify all the components (CR CM AE CC)
and DUCC will package the last 3 into a DD, or you can provide an already
created DD along with the CR.

I hope I interpreted your questions correctly.

~Burn


On Thu, Jul 31, 2014 at 6:47 AM, reshu.agarwal 
wrote:


Hi,

Can we deploy a analysis Engine separately and use UIMA AS remote

Analysis

Engine functionality for Deployment Descripter(DD) in Submitting a job?

As there is a functionality in UIMA AS that we can deploy analysis engine
remotely and then use it into DD. I want to have same functionality in
DUCC.\

Please help me. If this work it will help me a lot.

Thanks in advance

--

Reshu Agarwal





--
Thanks,
Reshu Agarwal



Re: is DUCC having UIMA AS remote Analysis Engine functionality for Deployment descriptor?

2014-07-31 Thread reshu.agarwal

On 07/31/2014 06:17 PM, Burn Lewis wrote:

i.e. with a JMS service descriptor in your job's
AE,  The difference is that you wouldn't declare it as a dependency of your
job since DUCC is not managing the service.

Hi Lewis,

Yes, you understood my question correctly; that is what I wanted to know. 
Thank you for your reply.

So, there are only two ways:

1. I use a DUCC service and call it in my AE using the UIMA-AS client. 
This way I can also check the availability of the service.
2. I deploy the particular AE using UIMA-AS on some machine and then 
create a DD using this remote analysis engine. This way I have no 
knowledge of the availability of the remote analysis engine.


--
Thanks,
Reshu Agarwal



Re: problem in calling DUCC Service with ducc_submit

2014-07-31 Thread reshu.agarwal


Dear Burn,

Thanks for the information. But the problem is, as Eddie said before, that 
we cannot invoke the DUCC service via ducc_submit; we can only check the 
availability of the service by declaring a service_dependency in the job.

The only way is for me to create an analysis engine that calls the DUCC 
service through the UIMA-AS client.

If there is another way to use a DUCC service in a job, so that I can use 
the DUCC service as my AE or DD, please tell me.



On 07/31/2014 06:05 PM, Burn Lewis wrote:

You can register a DUCC service with --autostart set to true and the
service will start when it is registered and stay running.  If your job has
a dependency on that service DUCC will verify that it is running before
starting your job.
If instead you register the service without autostart then DUCC will start
the service (if it is not already running) before starting your job.  When
the last job that references that service ends DUCC will stop the service
after an idle delay defined by the service's --linger value.  If you make
linger very large the service can be made to stay running, even if it is
idle, for many hours between jobs.

~Burn
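
For reference, registration along the lines Burn describes might look like this (service name, properties file, broker URL, and queue are illustrative; the exact dependency string format is documented in the DuccBook):

ducc_services --register myservice.properties --autostart true

and in the job specification:

service_dependency = UIMA-AS:myServiceQueue:tcp://broker-host:61617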


On Thu, Jul 31, 2014 at 2:41 AM, reshu.agarwal 
wrote:


Hi,

Today, I got understand the reason of why DUCC Job is not using DUCC
Service. But All I want that my initialization Part of a Process_DD should
occur once. I don't need to wait every time for initialization. So, I want
to use DUCC Service in my DUCC JOB.

Is there any way to configure in DUCC that Process_DD initialized once and
Job will start again and again?

On 04/01/2014 05:21 PM, Eddie Epstein wrote:


Declaring a service dependency does not affect application code paths. The
job still needs to connect to the service in the normal way.

DUCC uses services dependency for several reasons: to automatically start
services when needed by a job; to not give resources to a job or service
for which a dependent service is not running; and to post a warning on
running jobs when a dependent service goes "bad".

Eddie


On Tue, Apr 1, 2014 at 1:27 AM, reshu.agarwal 
wrote:

  Hi,

I am again in a problem. I have successfully deployed DUCC UIMA AS
Service
using ducc_service. The service status is available with good health. If
I
try to use my this service using parameter service_dependency to my Job
in
ducc_submit then it is not showing any error but executes only the DB
Collection Reader not this service.

--
Thanks,
Reshu Agarwal




--
Thanks,
Reshu Agarwal





--
Thanks,
Reshu Agarwal



is DUCC having UIMA AS remote Analysis Engine functionality for Deployment descriptor?

2014-07-31 Thread reshu.agarwal


Hi,

Can we deploy an analysis engine separately and use the UIMA-AS remote 
Analysis Engine functionality for the Deployment Descriptor (DD) when 
submitting a job?

UIMA-AS has the functionality to deploy an analysis engine remotely and 
then use it in a DD; I want the same functionality in DUCC.

Please help me. If this works, it will help me a lot.

Thanks in advance

--

Reshu Agarwal



Re: problem in calling DUCC Service with ducc_submit

2014-07-30 Thread reshu.agarwal


Hi,

Today I came to understand why a DUCC job does not use a DUCC service. But 
all I want is for the initialization part of a Process_DD to occur once; I 
don't want to wait for initialization every time. That is why I want to use 
a DUCC service in my DUCC job.

Is there any way to configure DUCC so that a Process_DD is initialized once, 
and jobs can then be run against it again and again?


On 04/01/2014 05:21 PM, Eddie Epstein wrote:

Declaring a service dependency does not affect application code paths. The
job still needs to connect to the service in the normal way.

DUCC uses services dependency for several reasons: to automatically start
services when needed by a job; to not give resources to a job or service
for which a dependent service is not running; and to post a warning on
running jobs when a dependent service goes "bad".

Eddie


On Tue, Apr 1, 2014 at 1:27 AM, reshu.agarwal wrote:


Hi,

I am again in a problem. I have successfully deployed DUCC UIMA AS Service
using ducc_service. The service status is available with good health. If I
try to use my this service using parameter service_dependency to my Job in
ducc_submit then it is not showing any error but executes only the DB
Collection Reader not this service.

--
Thanks,
Reshu Agarwal





--
Thanks,
Reshu Agarwal



Re: Infinite initialization of a process even after restarting DUCC

2014-07-09 Thread reshu.agarwal

On 07/09/2014 03:25 PM, Lou DeGenaro wrote:

Upon re-start (presuming you
used the default "warm" start) all previous running jobs are marked as
Completed.  If the job itself is Completed yet the job-processes
continue to show an active state then this is erroneous information.

Dear Lou,

The job status under "Jobs" is Completed, but under "Job Processes" there 
are 3 processes: one shows status "Stopped"; one shows status "Stopped" but 
its "Time Run" keeps increasing (2:15:59:40); and one shows status 
"Starting" with "Time Init" increasing (2:15:29:40).


So,

Job State    Time Init     Time Run
Starting     2:15:29:40    00
Stopped      00            2:15:59:40

So the job is completed but still running. This is a bug which needs to be 
resolved.



--
Thanks,
Reshu Agarwal



Re: Infinite initialization of a process even after restarting DUCC

2014-07-08 Thread reshu.agarwal

On 07/08/2014 03:44 PM, Jim Challenger wrote:
I like to stop ducc by issuing check_ducc -k a few times after 
stop_ducc.  This sends kill -9 to any ducc components that couldn't 
stop for some reason.  Unfortunately it can't kill zombies but once 
you have done check_ducc -k it should not matter. As Lou mentioned, 
the 1.1.0 release will make some of this situation better but I've 
seen intense analytics leave hardware and software on hosts in states 
that only kill -9 can effectively handle. 

Dear Jim and Lou,

I have tried both check_ducc -k and ./stop_ducc, but the job still shows 
increasing times, as given below:


Id     Log                     Size  Host  PID    State (Scheduler)  Reason (Scheduler)  State (Agent)  Reason (Agent)  Exit        Time Init   Time Run
0      jd.out.log              0.14  S144  8408   Deallocated        Voluntary           Stopped                        ExitCode=0  00          2:15:59:40
10849  696-UIMA-S1-8962.log    0.02  S144  8962   Deallocated                            Starting                                   2:15:57:46  00
10848  696-UIMA-S1-8503.log    0.02  S144  8503   Deallocated        Purged              Stopped                        ExitCode=0
10852  696-UIMA-S2-11649.log   0.04  S143  11649  Deallocated        Voluntary           Stopped        Discontinued    ExitCode=0

(The duplicated first row and the remaining statistics columns of the web 
page did not survive extraction; the notable entry is process 10849, still 
"Starting" with its Time Init increasing.)

--
Thanks,
Reshu Agarwal



Infinite initialization of a process even after restarting DUCC

2014-07-07 Thread reshu.agarwal


Hi,

I have faced a problem in DUCC: after continuous processing, the job 
initialization or end-of-job processing goes into an infinite loop. A new 
job cannot be started even after restarting DUCC, and while the job shows 
an end-of-job status, internally its initialization waiting time keeps 
increasing.

I have read an issue on JIRA where you talk about this same problem, i.e. 
https://issues.apache.org/jira/browse/UIMA-3645.

When will you release UIMA DUCC 1.1.0, where this issue is fixed? At the 
moment we have to restart DUCC after a certain period of time.



--
Thanks,
Reshu Agarwal



Re: CanceledByDriver status in DUCC

2014-06-08 Thread reshu.agarwal

On 06/09/2014 09:50 AM, reshu.agarwal wrote:

On 06/06/2014 06:03 PM, Lou DeGenaro wrote:

files for ERROR or WARN messages especially
relative to the MqReaper.

Hi Lou,

I have found an error in DUCC logs or.log:

java.lang.reflect.UndeclaredThrowableException
at com.sun.proxy.$Proxy19.getQueues(Unknown Source)
at 
org.apache.uima.ducc.common.mq.MqHelper.getQueueList(MqHelper.java:152)
at 
org.apache.uima.ducc.orchestrator.maintenance.MqReaper.getJdQueues(MqReaper.java:121)
at 
org.apache.uima.ducc.orchestrator.maintenance.MqReaper.removeUnusedJdQueues(MqReaper.java:159)
at 
org.apache.uima.ducc.orchestrator.maintenance.MaintenanceThread.run(MaintenanceThread.java:106)
Caused by: javax.management.InstanceNotFoundException: 
org.apache.activemq:BrokerName=S1,Type=Broker
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:669)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1463)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
at 
javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:656)

at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:601)
at 
sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)

at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:722)
at 
sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:273)
at 
sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:251)

at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:160)
at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
at 
javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown Source) 

at 
javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:901)
at 
javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:280)

... 5 more


Hi Lou,

This problem has been resolved. I just set ducc.broker.name=localhost; it 
was previously set to S1.



--
Thanks,
Reshu Agarwal



Re: CanceledByDriver status in DUCC

2014-06-08 Thread reshu.agarwal

On 06/06/2014 06:03 PM, Lou DeGenaro wrote:

files for ERROR or WARN messages especially
relative to the MqReaper.

Hi Lou,

I have found an error in DUCC logs or.log:

java.lang.reflect.UndeclaredThrowableException
at com.sun.proxy.$Proxy19.getQueues(Unknown Source)
at 
org.apache.uima.ducc.common.mq.MqHelper.getQueueList(MqHelper.java:152)
at 
org.apache.uima.ducc.orchestrator.maintenance.MqReaper.getJdQueues(MqReaper.java:121)
at 
org.apache.uima.ducc.orchestrator.maintenance.MqReaper.removeUnusedJdQueues(MqReaper.java:159)
at 
org.apache.uima.ducc.orchestrator.maintenance.MaintenanceThread.run(MaintenanceThread.java:106)
Caused by: javax.management.InstanceNotFoundException: 
org.apache.activemq:BrokerName=S1,Type=Broker
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:669)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1463)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
at 
javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:656)

at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:601)
at 
sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)

at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:722)
at 
sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:273)
at 
sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:251)

at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:160)
at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
at 
javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown 
Source)
at 
javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:901)
at 
javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:280)

... 5 more

--
Thanks,
Reshu Agarwal



Re: CanceledByDriver status in DUCC

2014-06-06 Thread reshu.agarwal

On 06/05/2014 06:30 PM, Lou DeGenaro wrote:

How long does it take to process your longest work item through your

Hi Lou,

It was 3 minutes at that time. We have now increased it to 10 minutes for 
testing, as you suggested. I have also restarted DUCC. So far I haven't 
faced a similar problem, but if I do, I will update you in this same mail 
trail. If you see any other possible reason for this, please let me know.

I have one more question. I read in the DUCC documentation that the 
Orchestrator automatically deletes a job's queue from the broker when the 
job is over. But when I monitor the broker using jconsole, I see that all 
the job queues are still there in the broker.

Could this create a problem in DUCC over time?

Isn't this something the Orchestrator is supposed to ensure?

Thanks in advance.

--
Reshu Agarwal



Re: Can we call Ducc Service using UIMA AS client API?

2014-06-04 Thread reshu.agarwal

On 05/29/2014 06:37 PM, Jaroslaw Cwiklik wrote:

You can use UIMA-AS client API. All you need to know is the endpoint and
the broker url to communicate.

Jerry C


On Wed, May 28, 2014 at 8:53 AM, reshu.agarwal wrote:


Hi,

I am just curious about DUCC Service uses. Can we call Ducc Service using
UIMA AS client API. Or there is any API is available to call DUCC Service
like UIMA AS client API is for UIMA AS Service.

I want to process a text from my Java class which add text in jCas of UIMA
Cas and send this cas to process in DUCC by DUCC service.

Is there any other method to do this?

--
Thanks,
Reshu Agarwal



Thanks, it's working. :-)
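
For reference, a minimal UIMA-AS client sketch along these lines (broker URL, queue name, and timeout values are illustrative):

import java.util.HashMap;
import java.util.Map;

import org.apache.uima.aae.client.UimaAsynchronousEngine;
import org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl;
import org.apache.uima.cas.CAS;

// Minimal UIMA-AS client sketch: connect to a (DUCC-managed) UIMA-AS
// service by broker URL and endpoint, send one CAS, and shut down.
public class DuccServiceClient {
    public static void main(String[] args) throws Exception {
        UimaAsynchronousEngine engine = new BaseUIMAAsynchronousEngine_impl();
        Map<String, Object> appCtx = new HashMap<String, Object>();
        appCtx.put(UimaAsynchronousEngine.ServerUri, "tcp://broker-host:61617");
        appCtx.put(UimaAsynchronousEngine.ENDPOINT, "myServiceQueue");
        appCtx.put(UimaAsynchronousEngine.Timeout, 60000);        // process timeout (ms)
        appCtx.put(UimaAsynchronousEngine.GetMetaTimeout, 60000); // getMeta timeout (ms)
        engine.initialize(appCtx);

        CAS cas = engine.getCAS();          // take a CAS from the client's pool
        cas.setDocumentText("Some text to analyze.");
        engine.sendAndReceiveCAS(cas);      // synchronous request/reply
        // ... read analysis results from the CAS here ...
        cas.release();
        engine.stop();
    }
}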

--

Reshu Agarwal



Re: CanceledByDriver status in DUCC

2014-06-04 Thread reshu.agarwal

Hi Lou,

We have debugged our pipeline. If this were a problem in the pipeline or the 
code, then re-running the same batch should produce the same errors in the 
error log. But the same batch processes successfully without any error, so 
this is not an error at the code level.

The error log is as given below:

org.apache.uima.resource.ResourceProcessException: Request To Process 
Cas Has Timed-out. Service Queue:ducc.jd.queue.57. Broker: 
tcp://S1:61616?wireFormat.maxInactivityDuration=0&jms.useCompression=true&closeAsync=false 
Cas Timed-out on host: 192.168.xx.xxx
at 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl.sendAndReceiveCAS(BaseUIMAAsynchronousEngineCommon_impl.java:2207)
at 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl.sendAndReceiveCAS(BaseUIMAAsynchronousEngineCommon_impl.java:2042)

at org.apache.uima.ducc.jd.client.WorkItem.run(WorkItem.java:142)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:722)
Caused by: org.apache.uima.aae.error.UimaASProcessCasTimeout: UIMA AS 
Client Timed Out Waiting for Reply From Service:ducc.jd.queue.57 
Broker:tcp://S1:61616?wireFormat.maxInactivityDuration=0&jms.useCompression=true&closeAsync=false

... 9 more

and after 5 similar errors: node:null, PID:null, and Cas Timed-out 
on host: null,


node:null PID:null directive:ProcessContinue_CasNoRetry
04 Jun 2014 15:36:25,592 83 ERROR user.err workItemError 57 N/A
org.apache.uima.resource.ResourceProcessException: Request To Process 
Cas Has Timed-out. Service Queue:ducc.jd.queue.57. Broker: 
tcp://S1:61616?wireFormat.maxInactivityDuration=0&jms.useCompression=true&closeAsync=false 
Cas Timed-out on host: null



Please suggest a solution for this problem. Thanks in advance.

--

Reshu Agarwal



CanceledByDriver status in DUCC

2014-06-03 Thread reshu.agarwal


Hi,

I sometimes see the "CanceledByDriver" status on a job in DUCC when the 
error count reaches 14. All the errors are due to CAS timed-out exceptions, 
and after 5 consecutive errors the "Cas timed-out on host" value is null. 
The whole job is cancelled and its documents are skipped.

Can I configure DUCC to reprocess such a job as a new job if it is 
cancelled by the driver?



--
Thanks,
Reshu Agarwal



Can we call Ducc Service using UIMA AS client API?

2014-05-28 Thread reshu.agarwal


Hi,

I am just curious about DUCC service usage. Can we call a DUCC service 
using the UIMA-AS client API? Or is there another API available for calling 
a DUCC service, the way the UIMA-AS client API is for a UIMA-AS service?

I want to process text from my Java class, which adds the text to the JCas 
of a UIMA CAS and sends the CAS to be processed in DUCC by the DUCC service.

Is there any other method to do this?

--
Thanks,
Reshu Agarwal



Re: problem in calling DUCC Service with ducc_submit

2014-04-07 Thread reshu.agarwal

On 04/04/2014 05:39 PM, Eddie Epstein wrote:

Given an aggregate AE, a UIMA-AS service is accessed by defining a delegate
as a JMS service descriptor.
See
http://uima.apache.org/d/uima-as-2.4.2/uima_async_scaleout.html#ugr.async.ov.concepts.jms_descriptor
and the example from the uima-as SDK in
apache-uima-as-2.4.2/examples/descriptors/as

So the AE in a DUCC job would have such a delegate.

Eddie
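
Concretely, such a delegate is an XML customResourceSpecifier; the SDK example boils down to essentially this (broker URL and endpoint are illustrative):

<customResourceSpecifier xmlns="http://uima.apache.org/resourceSpecifier">
  <resourceClassName>org.apache.uima.aae.jms_adapter.JmsAnalysisEngineServiceAdapter</resourceClassName>
  <parameters>
    <parameter name="brokerURL" value="tcp://broker-host:61617"/>
    <parameter name="endpoint" value="myServiceQueue"/>
    <parameter name="timeout" value="60000"/>
  </parameters>
</customResourceSpecifier>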


On Thu, Apr 3, 2014 at 1:00 AM, reshu.agarwal wrote:


On 04/01/2014 05:21 PM, Eddie Epstein wrote:


   The
job still needs to connect to the service in the normal way.

DUCC uses services dependency for several reasons: to automatically start
services when needed by a job; to not give resources to a job or service
for which a dependent service is not running; and to post a warning on
running jobs when a dependent service goes "bad".


Hi Eddie,

I have tried many times but my job could not be connected to DUCC Service.
So, Can you please define a sample job with all parameters which you use to
connect to service.

--
Thanks,
Reshu Agarwal



Dear Eddie,

It is not helping me, as I can deploy and use a UIMA-AS service without any 
problem; it is in DUCC that I am facing the problem. So I would like a 
sample of a DUCC job that uses a DUCC service. If you can provide that, it 
will be very helpful.


--
Thanks,
Reshu Agarwal



Re: Cas Timeout Exception in DUCC

2014-04-02 Thread reshu.agarwal

On 03/31/2014 04:37 PM, Eddie Epstein wrote:

Reshu,

Please look in the logfile of the job process. Maybe 10 minutes is still
not enough?

Eddie


On Mon, Mar 31, 2014 at 2:42 AM, reshu.agarwal wrote:


On 03/28/2014 05:36 PM, Eddie Epstein wrote:


There is a job specification parameter:
--process_per_item_time_max 
Maximum elapsed time (in minutes) for processing one CAS.

Try setting that to something big enough.

Eddie
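
For example, in the job specification file (the value is in minutes; 10 is illustrative):

process_per_item_time_max = 10

or on the command line:

ducc_submit --specification myjob.job --process_per_item_time_max 10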


On Fri, Mar 28, 2014 at 6:18 AM, reshu.agarwal 
wrote:

  On 03/28/2014 03:23 PM, reshu.agarwal wrote:

  Cas Timed-out on hos

  Hi,

JD.log contains this:

Mar 28, 2014 3:07:10 PM org.apache.uima.aae.delegate.Delegate$1
Delegate.TimerTask.run
WARNING: Timeout While Waiting For Reply From Delegate:ducc.jd.queue.1887
Process CAS Request Timed Out. Configured Reply Window Of 180,000. Cas
Reference Id:-139073d5:145080818b3:-7f7c
Mar 28, 2014 3:07:10 PM org.apache.uima.adapter.jms.
client.ActiveMQMessageSender
run
INFO: UIMA AS Client Message Dispatcher Sending GetMeta Ping To the
Service
Mar 28, 2014 3:07:10 PM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngineCommon_impl sendCAS
INFO: Uima AS Client Sent PING Message To Service: ducc.jd.queue.1887
Mar 28, 2014 3:07:10 PM org.apache.uima.adapter.jms.
client.ClientServiceDelegate
handleError
WARNING: Process Timeout - Uima AS Client Didn't Receive Process Reply
Within Configured Window Of:180,000 millis
Mar 28, 2014 3:07:10 PM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngineCommon_impl notifyOnTimout
WARNING: Request To Process Cas Has Timed-out. Service
Queue:ducc.jd.queue.1887. Broker: tcp://S1:61616?wireFormat.
maxInactivityDuration=0&jms.useCompression=true&closeAsync=false Cas
Timed-out on host: 192.168...
Mar 28, 2014 3:07:10 PM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngineCommon_impl sendAndReceiveCAS
INFO: UIMA AS Handling Exception in sendAndReceive(). CAS
hashcode:1432981672. ThreadMonitor Released Semaphore For Thread ID:42
org.apache.uima.ducc.common.jd.plugin.AbstractJdProcessExceptionHandler:
org.apache.uima.aae.error.UimaASProcessCasTimeout: UIMA AS Client Timed
Out Waiting for Reply From Service:ducc.jd.queue.1887
Broker:tcp://S1:61616?
wireFormat.maxInactivityDuration=0&jms.useCompression=true&
closeAsync=false
org.apache.uima.ducc.common.jd.plugin.AbstractJdProcessExceptionHandler:
directive=ProcessContinue_CasNoRetry, [192.168.:5729]=1, [total]=1

--
Thanks,
Reshu Agarwal


  Dear Eddie,

If I set it to greater than 3 minutes, e.g. 10 minutes, it still showed the timed-out error and did not terminate the job by itself. So it did not work for me.

--process_per_item_time_max
   Maximum elapsed time (in minutes) for processing one CAS.


--
Thanks,
Reshu Agarwal



Hi Eddie,

If I increase process_per_item_time_max, the job keeps waiting for that time and still terminates some jobs with the same error. It is not that a particular job really takes that much time: if I run the same job again, it runs without any error.


--
Thanks,
Reshu Agarwal



Re: problem in calling DUCC Service with ducc_submit

2014-04-02 Thread reshu.agarwal

On 04/01/2014 05:21 PM, Eddie Epstein wrote:

The job still needs to connect to the service in the normal way.

DUCC uses services dependency for several reasons: to automatically start
services when needed by a job; to not give resources to a job or service
for which a dependent service is not running; and to post a warning on
running jobs when a dependent service goes "bad".

Hi Eddie,

I have tried many times, but my job could not connect to the DUCC service. Can you please provide a sample job, with all the parameters you use, that connects to a service?


--
Thanks,
Reshu Agarwal



Re: What if head node fails in DUCC

2014-04-01 Thread reshu.agarwal

On 04/01/2014 05:28 PM, Eddie Epstein wrote:

Correct. Most DUCC daemons running on the head node are restartable. We
expect to complete this work so that in the case of head node failure a new
head node can automatically be started.

Currently DUCC can be configured such that no active user work is affected
if a head node goes down. However, without the head node no new user
processes are created.

Eddie


On Tue, Apr 1, 2014 at 1:42 AM, reshu.agarwal wrote:


Hi,

I have a question. If the head node fails, we are no longer able to do UIMA processing. Can I define multiple head nodes in DUCC, so that if one head node fails, a second node takes over as the head node? Is this possible? What is the backup strategy of DUCC?

--
Thanks,
Reshu Agarwal



Hi Eddie,

Sorry, I did not get you. What do you mean by "Currently DUCC can be configured such that no active user work is affected if a head node goes down"?

As I understand it, if a node goes down or crashes, then all the processes running on that node terminate. So the big question is: how does DUCC ensure that no active user work is affected if the head node goes down?


--
Thanks,
Reshu Agarwal



What if head node fails in DUCC

2014-03-31 Thread reshu.agarwal


Hi,

I have a question. If the head node fails, we are no longer able to do UIMA processing. Can I define multiple head nodes in DUCC, so that if one head node fails, a second node takes over as the head node? Is this possible? What is the backup strategy of DUCC?


--
Thanks,
Reshu Agarwal



problem in calling DUCC Service with ducc_submit

2014-03-31 Thread reshu.agarwal


Hi,

I am stuck on a problem again. I have successfully deployed a DUCC UIMA-AS service using ducc_service; the service status is available with good health. If I attach this service to my job with the service_dependency parameter in ducc_submit, it shows no error, but the job executes only the DB collection reader, not this service.


--
Thanks,
Reshu Agarwal



Re: Ducc Problems

2014-03-31 Thread reshu.agarwal

On 03/28/2014 05:28 PM, Eddie Epstein wrote:

Another alternative would be to do the final flush in the Cas consumer's
destroy method.

Another issue to be aware of, in order to balance resources between jobs,
DUCC uses preemption of job processes scheduled in a "fair-share" class.
This may not be acceptable for jobs which are doing incremental commits.
The solution is to schedule the job in a non-preemptable class.


On Fri, Mar 28, 2014 at 1:22 AM, reshu.agarwal wrote:


On 03/28/2014 01:28 AM, Eddie Epstein wrote:


Hi Reshu,

The Job model in DUCC is for the Collection Reader to send "work item CASes", where a work item represents a collection of work to be done by a Job Process. For example, a work item could be a file or a subset of a file that contains many documents, where each document would be individually put into a CAS by the Cas Multiplier in the Job Process.

DUCC is designed so that after processing the "mini-collection" represented by the work item, the Cas Consumer should flush any data. This is done by routing the "work item CAS" to the Cas Consumer, after all work item documents are completed, at which point the CC does the flush.

The sample code described in
http://uima.apache.org/d/uima-ducc-1.0.0/duccbook.html#x1-1380009 uses the work item CAS to flush data in exactly this way.

Note that the PersonTitleDBWriterCasConsumer is doing a flush (a commit) in the process method after every 50 documents.

Regards
Eddie



On Thu, Mar 27, 2014 at 1:35 AM, reshu.agarwal wrote:

  On 03/26/2014 11:34 PM, Eddie Epstein wrote:

  Hi Reshu,

The collectionProcessingComplete() method in UIMA-AS has a limitation: a Collection Processing Complete request sent to the UIMA-AS Analysis Service is cascaded down to all delegates; however, if a particular delegate is scaled-out, only one of the instances of the delegate will get this call.

Since DUCC is using UIMA-AS to scale out the Job processes, it has no way to deliver a CPC to all instances.

The applications we have been running on DUCC have used the Work Item CAS as a signal to CAS consumers to do CPC level processing. That is discussed in the first reference above, in the paragraph "Flushing Cached Data".

Eddie



On Wed, Mar 26, 2014 at 9:48 AM, reshu.agarwal <reshu.agar...@orkash.com> wrote:

   On 03/26/2014 06:43 PM, Eddie Epstein wrote:

   Are you using standard UIMA interface code to Solr? If so, which Cas Consumer?

Taking a quick look at the source code for SolrCASConsumer, the batch and collection process complete methods appear to do nothing.

Thanks,
Eddie

On Wed, Mar 26, 2014 at 6:08 AM, reshu.agarwal <reshu.agar...@orkash.com> wrote:

On 03/21/2014 11:42 AM, reshu.agarwal wrote:

Hence we cannot attempt batch processing in the CAS consumer, and it increases our processing time. Is there any other option for that, or is it a bug in DUCC?

Please reply on this problem: right now I am sending documents to Solr one by one from the CAS consumer, without batch processing, and committing Solr each time. That is not an optimal way to use it. Why is DUCC not calling the collectionProcessComplete method of the CAS consumer? And if I want it to, what is the way to do this?

I am not able to find anything about this in the DUCC book.

Thanks in advance.

--
Thanks,
Reshu Agarwal


Hi Eddie,

I am not using the standard UIMA interface code to Solr; I created my own CAS consumer. I will take a look at that too. But the problem is not specific to Solr: I could use any sink to store my output. I want to do batch processing and want to use collectionProcessComplete. Why is DUCC not calling it? I checked with UIMA-AS as well, and there my CAS consumer works fine, including batch processing.

--
Thanks,
Reshu Agarwal


   Hi Eddie,

I am using a CAS consumer similar to the Apache UIMA example:

   "apache-uima/examples/src/org/apache/uima/examples/cpe/PersonTitleDBWriterCasConsumer.java"

--
Thanks,
Reshu Agarwal


  Hi Eddie,

You are right; I know this fact. PersonTitleDBWriterCasConsumer does a flush (a commit) in the process method after every 50 documents, and if fewer than 50 documents remain, it commits them in the collectionProcessComplete method. So if that method is not called, those documents cannot be committed. That is why I want DUCC to call this method.

--
Thanks,
Reshu Agarwal



Hi,

The destroy method worked for me. It did exactly what I wanted from the collectionProcessComplete method.


--
Thanks,
Reshu Agarwal
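For reference, a minimal sketch of the pattern that resolved this thread, assuming a batching sink (the class name and the addToBatch/flush helpers below are illustrative, not from the original code): cache output in processCas(), commit every 50 documents as PersonTitleDBWriterCasConsumer does, and commit the remainder in destroy(), since DUCC does not deliver collectionProcessComplete to scaled-out instances.

import org.apache.uima.cas.CAS;
import org.apache.uima.collection.CasConsumer_ImplBase;
import org.apache.uima.resource.ResourceProcessException;

// Sketch only: batch in processCas(), final flush in destroy().
public class BatchingCasConsumer extends CasConsumer_ImplBase {
  private static final int BATCH_SIZE = 50;
  private int cached = 0;

  @Override
  public void processCas(CAS cas) throws ResourceProcessException {
    addToBatch(cas);            // stage the output for this CAS
    if (++cached >= BATCH_SIZE) {
      flush();                  // periodic commit, every 50 documents
      cached = 0;
    }
  }

  @Override
  public void destroy() {
    if (cached > 0) {
      flush();                  // final commit, in place of collectionProcessComplete
    }
    super.destroy();
  }

  private void addToBatch(CAS cas) { /* illustrative: stage CAS content */ }
  private void flush()             { /* illustrative: commit staged items */ }
}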



Re: Cas Timeout Exception in DUCC

2014-03-30 Thread reshu.agarwal

On 03/28/2014 05:36 PM, Eddie Epstein wrote:

There is a job specification parameter:
--process_per_item_time_max 
   Maximum elapsed time (in minutes) for processing one CAS.

Try setting that to something big enough.

Eddie


On Fri, Mar 28, 2014 at 6:18 AM, reshu.agarwal wrote:


On 03/28/2014 03:23 PM, reshu.agarwal wrote:


Cas Timed-out on host


Hi,

JD.log contains this:

Mar 28, 2014 3:07:10 PM org.apache.uima.aae.delegate.Delegate$1
Delegate.TimerTask.run
WARNING: Timeout While Waiting For Reply From Delegate:ducc.jd.queue.1887
Process CAS Request Timed Out. Configured Reply Window Of 180,000. Cas
Reference Id:-139073d5:145080818b3:-7f7c
Mar 28, 2014 3:07:10 PM org.apache.uima.adapter.jms.client.ActiveMQMessageSender
run
INFO: UIMA AS Client Message Dispatcher Sending GetMeta Ping To the Service
Mar 28, 2014 3:07:10 PM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngineCommon_impl sendCAS
INFO: Uima AS Client Sent PING Message To Service: ducc.jd.queue.1887
Mar 28, 2014 3:07:10 PM org.apache.uima.adapter.jms.client.ClientServiceDelegate
handleError
WARNING: Process Timeout - Uima AS Client Didn't Receive Process Reply
Within Configured Window Of:180,000 millis
Mar 28, 2014 3:07:10 PM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngineCommon_impl notifyOnTimout
WARNING: Request To Process Cas Has Timed-out. Service
Queue:ducc.jd.queue.1887. Broker: tcp://S1:61616?wireFormat.
maxInactivityDuration=0&jms.useCompression=true&closeAsync=false Cas
Timed-out on host: 192.168...
Mar 28, 2014 3:07:10 PM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngineCommon_impl sendAndReceiveCAS
INFO: UIMA AS Handling Exception in sendAndReceive(). CAS
hashcode:1432981672. ThreadMonitor Released Semaphore For Thread ID:42
org.apache.uima.ducc.common.jd.plugin.AbstractJdProcessExceptionHandler:
org.apache.uima.aae.error.UimaASProcessCasTimeout: UIMA AS Client Timed
Out Waiting for Reply From Service:ducc.jd.queue.1887 Broker:tcp://S1:61616?
wireFormat.maxInactivityDuration=0&jms.useCompression=true&
closeAsync=false
org.apache.uima.ducc.common.jd.plugin.AbstractJdProcessExceptionHandler:
directive=ProcessContinue_CasNoRetry, [192.168.:5729]=1, [total]=1

--
Thanks,
Reshu Agarwal



Dear Eddie,

If I set it to greater than 3 minutes, e.g. 10 minutes, it still showed the timed-out error and did not terminate the job by itself. So it did not work for me.

--process_per_item_time_max
  Maximum elapsed time (in minutes) for processing one CAS.


--
Thanks,
Reshu Agarwal



Re: status Lost=1 in DUCC

2014-03-28 Thread reshu.agarwal

On 03/28/2014 05:54 PM, Lou DeGenaro wrote:

Hi Reshu,

Very good.  It would be helpful if you could supply a small sample data
comprising "invalid XML characters" as a test case, to motivate DUCC to
detect and handle this situation more elegantly in terms of allowing the
user to recognize what's wrong.

Lou.


On Fri, Mar 28, 2014 at 12:00 AM, reshu.agarwal wrote:


On 03/27/2014 08:13 PM, Lou DeGenaro wrote:


The data being sent are "values" rather than "keys" in your CAS?  If so, this is not really a "best practice" for DUCC use.


Hi Lou,

This is not a problem with how I send the data. My document contains some invalid XML characters, so the problem was resolved after I applied a filter for them.

Reshu.


Yes, sure.

Here is a sample document:

"About the Human Rights House Network ( www.humanrightshouse.org ) The 
Human Rights House Network (HRHN) unites 87 human rights NGOs joining 
forces in 18 independent Human Rights Houses in 15 countries in Western 
Balkans, Eastern Europe and South Caucasus, East and Horn of Africa, and 
Western Europe. HRHN???s aim is to protect, empower and support human 
rights organisations locally and unite them in an international network 
of Human Rights Houses. The Human Rights House Foundation (HRHF), based 
in Oslo (Norway) with an office in Geneva (Switzerland), is HRHN???s 
secretariat. HRHF is international partner of the South Caucasus Network 
of Human Rights Defenders and the emerging Balkan Network of Human 
Rights Defenders. HRHF has consultative status with the United Nations 
and HRHN has participatory status with the Council of Europe.
All applicants are requested to e-mail a motivation letter and 
curriculum vitae to: Anna Innocenti, International Advocacy Officer at 
the Human Rights House Foundation (HRHF), at 
ae;e;a.innf;centi@humae;rightshouse.f;rg ."



Specific this line contains some invalid characters:

"All applicants are requested to e-mail a motivation letter and 
curriculum vitae to: Anna Innocenti, International Advocacy Officer at 
the Human Rights House Foundation (HRHF), at 
ae;e;a.innf;centi@humae;rightshouse.f;rg .""


You can reproduce the problem by trying the same document in UIMA-AS. This invalid-character problem also occurred in objects other than the document text passed in the CAS.


--
Thanks,
Reshu Agarwal
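A minimal sketch of such a filter, assuming the goal is to keep only characters that the XML 1.0 specification allows (the ranges below come from that spec; the method name is illustrative):

// Drop characters that are not legal in an XML 1.0 document before
// setting text (or any other string value) into the CAS.
public static String stripInvalidXmlChars(String in) {
  StringBuilder out = new StringBuilder(in.length());
  for (int i = 0; i < in.length(); ) {
    int cp = in.codePointAt(i);
    i += Character.charCount(cp);
    boolean valid = cp == 0x9 || cp == 0xA || cp == 0xD
        || (cp >= 0x20 && cp <= 0xD7FF)
        || (cp >= 0xE000 && cp <= 0xFFFD)
        || (cp >= 0x10000 && cp <= 0x10FFFF);
    if (valid) {
      out.appendCodePoint(cp);
    }
  }
  return out.toString();
}

Applying it as jcas.setDocumentText(stripInvalidXmlChars(document)) in the CR, and to any other string values put into the CAS, keeps the XMI serialization between the JD and the JPs valid.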



Re: Cas Timeout Exception in DUCC

2014-03-28 Thread reshu.agarwal

On 03/28/2014 03:23 PM, reshu.agarwal wrote:

Cas Timed-out on host

Hi,

JD.log contains this:

Mar 28, 2014 3:07:10 PM org.apache.uima.aae.delegate.Delegate$1 
Delegate.TimerTask.run
WARNING: Timeout While Waiting For Reply From 
Delegate:ducc.jd.queue.1887 Process CAS Request Timed Out. Configured 
Reply Window Of 180,000. Cas Reference Id:-139073d5:145080818b3:-7f7c
Mar 28, 2014 3:07:10 PM 
org.apache.uima.adapter.jms.client.ActiveMQMessageSender run

INFO: UIMA AS Client Message Dispatcher Sending GetMeta Ping To the Service
Mar 28, 2014 3:07:10 PM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl 
sendCAS

INFO: Uima AS Client Sent PING Message To Service: ducc.jd.queue.1887
Mar 28, 2014 3:07:10 PM 
org.apache.uima.adapter.jms.client.ClientServiceDelegate handleError
WARNING: Process Timeout - Uima AS Client Didn't Receive Process Reply 
Within Configured Window Of:180,000 millis
Mar 28, 2014 3:07:10 PM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl 
notifyOnTimout
WARNING: Request To Process Cas Has Timed-out. Service 
Queue:ducc.jd.queue.1887. Broker: 
tcp://S1:61616?wireFormat.maxInactivityDuration=0&jms.useCompression=true&closeAsync=false 
Cas Timed-out on host: 192.168...
Mar 28, 2014 3:07:10 PM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl 
sendAndReceiveCAS
INFO: UIMA AS Handling Exception in sendAndReceive(). CAS 
hashcode:1432981672. ThreadMonitor Released Semaphore For Thread ID:42
org.apache.uima.ducc.common.jd.plugin.AbstractJdProcessExceptionHandler: 
org.apache.uima.aae.error.UimaASProcessCasTimeout: UIMA AS Client Timed 
Out Waiting for Reply From Service:ducc.jd.queue.1887 
Broker:tcp://S1:61616?wireFormat.maxInactivityDuration=0&jms.useCompression=true&closeAsync=false
org.apache.uima.ducc.common.jd.plugin.AbstractJdProcessExceptionHandler: 
directive=ProcessContinue_CasNoRetry, [192.168.:5729]=1, [total]=1


--
Thanks,
Reshu Agarwal



Cas Timeout Exception in DUCC

2014-03-28 Thread reshu.agarwal


Hi,

I am getting this particular error in some DUCC jobs:

node:192.168.. PID:5729:33 directive:ProcessContinue_CasNoRetry
28 Mar 2014 15:10:10,395 45 ERROR user.err workItemError 1887 1781
org.apache.uima.resource.ResourceProcessException: Request To Process 
Cas Has Timed-out. Service Queue:ducc.jd.queue.1887. Broker: 
tcp://S1:61616?wireFormat.maxInactivityDuration=0&jms.useCompression=true&closeAsync=false 
Cas Timed-out on host: 192.168
at org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl.sendAndReceiveCAS(BaseUIMAAsynchronousEngineCommon_impl.java:2207)
at org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl.sendAndReceiveCAS(BaseUIMAAsynchronousEngineCommon_impl.java:2042)
at org.apache.uima.ducc.jd.client.WorkItem.run(WorkItem.java:142)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: org.apache.uima.aae.error.UimaASProcessCasTimeout: UIMA AS 
Client Timed Out Waiting for Reply From Service:ducc.jd.queue.1887 
Broker:tcp://S1:61616?wireFormat.maxInactivityDuration=0&jms.useCompression=true&closeAsync=false


How should this be handled? And how can we make the process retry these particular documents?


--
Thanks,
Reshu Agarwal



Re: Ducc Problems

2014-03-27 Thread reshu.agarwal

On 03/28/2014 01:28 AM, Eddie Epstein wrote:

Hi Reshu,

The Job model in DUCC is for the Collection Reader to send "work item
CASes", where a work item represents a collection of work to be done by a
Job Process. For example, a work item could be a file or a subset of a file
that contains many documents, where each document would be individually put
into a CAS by the Cas Multiplier in the Job Process.

DUCC is designed so that after processing the "mini-collection" represented
by the work item,  the Cas Consumer should flush any data. This is done by
routing the "work item CAS" to the Cas Consumer, after all work item
documents are completed, at which point the CC does the flush.

The sample code described in
http://uima.apache.org/d/uima-ducc-1.0.0/duccbook.html#x1-1380009 uses the
work item CAS to flush data in exactly this way.

Note that the PersonTitleDBWriterCasConsumer is doing a flush (a commit) in
the process method after every 50 documents.

Regards
Eddie



On Thu, Mar 27, 2014 at 1:35 AM, reshu.agarwal wrote:


On 03/26/2014 11:34 PM, Eddie Epstein wrote:


Hi Reshu,

The collectionProcessingComplete() method in UIMA-AS has a limitation: a
Collection Processing Complete request sent to the UIMA-AS Analysis
Service
is cascaded down to all delegates; however, if a particular delegate is
scaled-out, only one of the instances of the delegate will get this call.

Since DUCC is using UIMA-AS to scale out the Job processes, it has no way
to deliver a CPC to all instances.

The applications we have been running on DUCC have used the Work Item CAS
as a signal to CAS consumers to do CPC level processing. That is discussed
in the first reference above, in the paragraph "Flushing Cached Data".

Eddie



On Wed, Mar 26, 2014 at 9:48 AM, reshu.agarwal wrote:

  On 03/26/2014 06:43 PM, Eddie Epstein wrote:

  Are you using standard UIMA interface code to Solr? If so, which Cas Consumer?

Taking a quick look at the source code for SolrCASConsumer, the batch and collection process complete methods appear to do nothing.

Thanks,
Eddie


On Wed, Mar 26, 2014 at 6:08 AM, reshu.agarwal <reshu.agar...@orkash.com> wrote:

   On 03/21/2014 11:42 AM, reshu.agarwal wrote:

Hence we cannot attempt batch processing in the CAS consumer, and it increases our processing time. Is there any other option for that, or is it a bug in DUCC?

   Please reply on this problem: right now I am sending documents to Solr one by one from the CAS consumer, without batch processing, and committing Solr each time. That is not an optimal way to use it. Why is DUCC not calling the collectionProcessComplete method of the CAS consumer? And if I want it to, what is the way to do this?

I am not able to find anything about this in the DUCC book.

Thanks in advance.

--
Thanks,
Reshu Agarwal


   Hi Eddie,

I am not using the standard UIMA interface code to Solr; I created my own CAS consumer. I will take a look at that too. But the problem is not specific to Solr: I could use any sink to store my output. I want to do batch processing and want to use collectionProcessComplete. Why is DUCC not calling it? I checked with UIMA-AS as well, and there my CAS consumer works fine, including batch processing.

--
Thanks,
Reshu Agarwal


  Hi Eddie,

I am using a CAS consumer similar to the Apache UIMA example:

  "apache-uima/examples/src/org/apache/uima/examples/cpe/PersonTitleDBWriterCasConsumer.java"

--
Thanks,
Reshu Agarwal



Hi Eddie,

You are right; I know this fact. PersonTitleDBWriterCasConsumer does a flush (a commit) in the process method after every 50 documents, and if fewer than 50 documents remain, it commits them in the collectionProcessComplete method. So if that method is not called, those documents cannot be committed. That is why I want DUCC to call this method.


--
Thanks,
Reshu Agarwal
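A minimal sketch of the work-item-CAS signal Eddie describes, assuming the CC can recognize the returning work item CAS by a Workitem feature structure as in the duccbook sample apps (the type name below is taken from that sample and should be checked against your DUCC version; addToBatch/flush are illustrative):

import org.apache.uima.cas.CAS;
import org.apache.uima.cas.Type;
import org.apache.uima.collection.CasConsumer_ImplBase;
import org.apache.uima.resource.ResourceProcessException;

// Sketch only: treat arrival of the work-item CAS as the
// "mini-collection complete" signal and flush cached output then.
public class WorkItemAwareCasConsumer extends CasConsumer_ImplBase {
  private static final String WORKITEM_TYPE = "org.apache.uima.ducc.Workitem"; // assumed

  @Override
  public void processCas(CAS cas) throws ResourceProcessException {
    Type wiType = cas.getTypeSystem().getType(WORKITEM_TYPE);
    if (wiType != null && cas.getIndexRepository().getAllIndexedFS(wiType).hasNext()) {
      flush();           // all documents of this work item are done: commit now
      return;
    }
    addToBatch(cas);     // a regular document CAS produced by the CAS multiplier
  }

  private void addToBatch(CAS cas) { /* illustrative: stage CAS content */ }
  private void flush()             { /* illustrative: commit staged items */ }
}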



Re: status Lost=1 in DUCC

2014-03-27 Thread reshu.agarwal

On 03/27/2014 08:13 PM, Lou DeGenaro wrote:

The data being sent are "values" rather than "keys" in your CAS?  If so, this is not really a "best practice" for DUCC use.

Hi Lou,

This is not a problem with how I send the data. My document contains some invalid XML characters, so the problem was resolved after I applied a filter for them.


Reshu.


Re: Ducc Problems

2014-03-26 Thread reshu.agarwal

On 03/26/2014 11:34 PM, Eddie Epstein wrote:

Hi Reshu,

The collectionProcessingComplete() method in UIMA-AS has a limitation: a
Collection Processing Complete request sent to the UIMA-AS Analysis Service
is cascaded down to all delegates; however, if a particular delegate is
scaled-out, only one of the instances of the delegate will get this call.

Since DUCC is using UIMA-AS to scale out the Job processes, it has no way
to deliver a CPC to all instances.

The applications we have been running on DUCC have used the Work Item CAS
as a signal to CAS consumers to do CPC level processing. That is discussed
in the first reference above, in the paragraph "Flushing Cached Data".

Eddie



On Wed, Mar 26, 2014 at 9:48 AM, reshu.agarwal wrote:


On 03/26/2014 06:43 PM, Eddie Epstein wrote:


Are you using standard UIMA interface code to Solr? If so, which Cas Consumer?

Taking a quick look at the source code for SolrCASConsumer, the batch and collection process complete methods appear to do nothing.

Thanks,
Eddie


On Wed, Mar 26, 2014 at 6:08 AM, reshu.agarwal <reshu.agar...@orkash.com> wrote:

  On 03/21/2014 11:42 AM, reshu.agarwal wrote:

  Hence we cannot attempt batch processing in the CAS consumer, and it increases our processing time. Is there any other option for that, or is it a bug in DUCC?

  Please reply on this problem: right now I am sending documents to Solr one by one from the CAS consumer, without batch processing, and committing Solr each time. That is not an optimal way to use it. Why is DUCC not calling the collectionProcessComplete method of the CAS consumer? And if I want it to, what is the way to do this?

I am not able to find anything about this in the DUCC book.

Thanks in advance.

--
Thanks,
Reshu Agarwal


  Hi Eddie,

I am not using the standard UIMA interface code to Solr; I created my own CAS consumer. I will take a look at that too. But the problem is not specific to Solr: I could use any sink to store my output. I want to do batch processing and want to use collectionProcessComplete. Why is DUCC not calling it? I checked with UIMA-AS as well, and there my CAS consumer works fine, including batch processing.

--
Thanks,
Reshu Agarwal



Hi Eddie,

I am using a CAS consumer similar to the Apache UIMA example:

"apache-uima/examples/src/org/apache/uima/examples/cpe/PersonTitleDBWriterCasConsumer.java"

--
Thanks,
Reshu Agarwal



Re: status Lost=1 in DUCC

2014-03-26 Thread reshu.agarwal

On 03/26/2014 10:06 PM, Lou DeGenaro wrote:

Hi Reshu,

re: your answers to 5 & 6

6a. Is the data that populates the CAS the "name" of a document or the document itself?  (The expected use of DUCC is to *not* pass the document contents, which may, for example, be very large.)

6b.  If it is a "name" or the like, is that something you can share so I
can try to reproduce here?

Lou.


On Wed, Mar 26, 2014 at 9:20 AM, reshu.agarwal wrote:


Hi Lou,


On 03/26/2014 04:27 PM, Lou DeGenaro wrote:


Hi Reshu,

The good news is that DUCC is functional since 1.job works.  So we need to
find out why your particular job fails.

A few more questions:

5. Does your job consist of multiple work items (CASes), and do any of
them
succeed?


My job consists of multiple work items, and I have also tried a job with a single document. Both types of jobs have succeeded many times, but I get a problem like this on one particular document within a job; if I exclude this document, my job succeeds.


  6. DUCC has a Job Driver (JD) that employs your CollectionReader (CR) to fetch CASes that are sent via a broker for processing by one of the distributed Job Processes (JPs), each of which runs a copy of your AnalysisEngine (AE).  Normally, as Eddie points out, these CASes comprise some index that's interpreted by the assigned JP to know which data is to be worked on.  For example, say you have 100 documents, each 5GB in size, named doc.1, doc.2, ..., doc.100.  Your CR should not pass the actual 5GB document, but rather "doc.1".  Is that the kind of scheme you are employing?


Lou, I am fetching batches of data from a database and sending references from the result set to the CAS. I am not using file processing.

  7. Do you have a small test case that you can share that reliably

demonstrates the problem?


Test Case:

I have two systems within the DUCC cluster, with 20 GB RAM each.
I have defined the job with these configurations:

classpath_order             ducc-before-user
driver_descriptor_CR        ../collection_reader/DBCollectionReader.xml
process_deployments_max     6
process_descriptor_AE       ../aeAggregate
process_descriptor_CC       ../cas_consumer/CASConsumer
process_failures_limit      50
process_memory_size         4
process_per_item_time_max   3
process_thread_count        3
specification               22.job
working_directory           ../ducc/Uima_ducc



I am fetching data from a database in the CR. After the CR's getNext() method executes for the particular document, a warning message like this prints in JD.log:

Mar 26, 2014 9:40:25 AM org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl sendAndReceiveCAS
WARNING:

The document remains in the queue for 5 minutes, i.e. equal to the queue waiting time.

Then, if the batch size is 100 it shows lost=1; if it is 200, the document remains in the queue until I forcefully terminate the job.




Lou.




On Wed, Mar 26, 2014 at 5:31 AM, reshu.agarwal wrote:

  On 03/20/2014 06:35 PM, Lou DeGenaro wrote:

  Where does the warning appear, in a log file in the job's log

directory?  Is there any other information related to that warning?

  Hi Lou,

Answers to your questions are given below; I hope they help:


1. Are you able to run a simple job, such as 1.job from the examples
directory successfully?

Yes, I am able to run that simple job successfully.


2. Where does the warning appear, in a log file in the job's log
directory?  Is there any other information related to that warning?

This warning appears in JD.log file.

After all initialization messages and these messages come:

Mar 26, 2014 9:49:04 AM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngine_impl setupConnection
INFO: UIMA AS Client Created Shared Connection To Broker:
tcp://S1:61616?wireFormat.maxInactivityDuration=0&jms.
useCompression=true&
closeAsync=false
Mar 26, 2014 9:49:04 AM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngine_impl initializeProducer
INFO: Initializing JMS Message Producer. Broker:
tcp://S1:61616?wireFormat.
maxInactivityDuration=0&jms.useCompression=true&closeAsync=false Queue
Name: ducc.jd.queue.1317
Mar 26, 2014 9:49:04 AM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngine_impl initializeConsumer
INFO: Initializing JMS Message Consumer. Broker:
tcp://S1:61616?wireFormat.
maxInactivityDuration=0&jms.useCompression=true&closeAsync=false Queue
Name: ID:S144-36678-1395807465286-7:1:1
Mar 26, 2014 9:49:04 AM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngine_impl initialize
INFO: Asynchronous Client Has Been Initialized. Serialization Strategy:
[SerializationStrategy] Ready To Process.

and then only this warning message comes:

Mar 26, 2014 9:49:27 AM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngineCommon_impl sendAndReceiveCAS
WARNING:
then these messages come:

Mar 26, 2014 9:59:45 AM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngineCommon_impl stop
INFO: Stopping Asynch

Re: Ducc Problems

2014-03-26 Thread reshu.agarwal

On 03/26/2014 06:43 PM, Eddie Epstein wrote:

Are you using standard UIMA interface code to Solr? If so, which Cas
Consumer?

Taking a quick look at the source code for SolrCASConsumer, the batch and collection process complete methods appear to do nothing.

Thanks,
Eddie


On Wed, Mar 26, 2014 at 6:08 AM, reshu.agarwal wrote:


On 03/21/2014 11:42 AM, reshu.agarwal wrote:


Hence we cannot attempt batch processing in the CAS consumer, and it increases our processing time. Is there any other option for that, or is it a bug in DUCC?

Please reply on this problem: right now I am sending documents to Solr one by one from the CAS consumer, without batch processing, and committing Solr each time. That is not an optimal way to use it. Why is DUCC not calling the collectionProcessComplete method of the CAS consumer? And if I want it to, what is the way to do this?

I am not able to find anything about this in the DUCC book.

Thanks in advance.

--
Thanks,
Reshu Agarwal



Hi Eddie,

I am not using the standard UIMA interface code to Solr; I created my own CAS consumer. I will take a look at that too. But the problem is not specific to Solr: I could use any sink to store my output. I want to do batch processing and want to use collectionProcessComplete. Why is DUCC not calling it? I checked with UIMA-AS as well, and there my CAS consumer works fine, including batch processing.


--
Thanks,
Reshu Agarwal



Re: status Lost=1 in DUCC

2014-03-26 Thread reshu.agarwal


Hi Lou,

On 03/26/2014 04:27 PM, Lou DeGenaro wrote:

Hi Reshu,

The good news is that DUCC is functional since 1.job works.  So we need to
find out why your particular job fails.

A few more questions:

5. Does your job consist of multiple work items (CASes), and do any of them
succeed?
My job consists of multiple work items, and I have also tried a job with a single document. Both types of jobs have succeeded many times, but I get a problem like this on one particular document within a job; if I exclude this document, my job succeeds.



6. DUCC has a Job Driver (JD) that employs your CollectionReader (CR) to fetch CASes that are sent via a broker for processing by one of the distributed Job Processes (JPs), each of which runs a copy of your AnalysisEngine (AE).  Normally, as Eddie points out, these CASes comprise some index that's interpreted by the assigned JP to know which data is to be worked on.  For example, say you have 100 documents, each 5GB in size, named doc.1, doc.2, ..., doc.100.  Your CR should not pass the actual 5GB document, but rather "doc.1".  Is that the kind of scheme you are employing?

Lou, I am fetching batches of data from a database and sending references from the result set to the CAS. I am not using file processing.

7. Do you have a small test case that you can share that reliably
demonstrates the problem?

Test Case:

I have two systems within the DUCC cluster, with 20 GB RAM each.
I have defined the job with these configurations:

classpath_order             ducc-before-user
driver_descriptor_CR        ../collection_reader/DBCollectionReader.xml
process_deployments_max     6
process_descriptor_AE       ../aeAggregate
process_descriptor_CC       ../cas_consumer/CASConsumer
process_failures_limit      50
process_memory_size         4
process_per_item_time_max   3
process_thread_count        3
specification               22.job
working_directory           ../ducc/Uima_ducc



I am fetching data from a database in the CR. After the CR's getNext() method executes for the particular document, a warning message like this prints in JD.log:

Mar 26, 2014 9:40:25 AM org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl sendAndReceiveCAS
WARNING:

The document remains in the queue for 5 minutes, i.e. equal to the queue waiting time.

Then, if the batch size is 100 it shows lost=1; if it is 200, the document remains in the queue until I forcefully terminate the job.





Lou.




On Wed, Mar 26, 2014 at 5:31 AM, reshu.agarwal wrote:


On 03/20/2014 06:35 PM, Lou DeGenaro wrote:


Where does the warning appear, in a log file in the job's log
directory?  Is there any other information related to that warning?


Hi Lou,

Answers to your questions are given below; I hope they help:


1. Are you able to run a simple job, such as 1.job from the examples
directory successfully?

Yes, I am able to run that simple job successfully.


2. Where does the warning appear, in a log file in the job's log
directory?  Is there any other information related to that warning?

This warning appears in JD.log file.

After all initialization messages and these messages come:

Mar 26, 2014 9:49:04 AM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngine_impl setupConnection
INFO: UIMA AS Client Created Shared Connection To Broker:
tcp://S1:61616?wireFormat.maxInactivityDuration=0&jms.useCompression=true&
closeAsync=false
Mar 26, 2014 9:49:04 AM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngine_impl initializeProducer
INFO: Initializing JMS Message Producer. Broker: tcp://S1:61616?wireFormat.
maxInactivityDuration=0&jms.useCompression=true&closeAsync=false Queue
Name: ducc.jd.queue.1317
Mar 26, 2014 9:49:04 AM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngine_impl initializeConsumer
INFO: Initializing JMS Message Consumer. Broker: tcp://S1:61616?wireFormat.
maxInactivityDuration=0&jms.useCompression=true&closeAsync=false Queue
Name: ID:S144-36678-1395807465286-7:1:1
Mar 26, 2014 9:49:04 AM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngine_impl initialize
INFO: Asynchronous Client Has Been Initialized. Serialization Strategy:
[SerializationStrategy] Ready To Process.

and then only this warning message comes:

Mar 26, 2014 9:49:27 AM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngineCommon_impl sendAndReceiveCAS
WARNING:
then these messages come:

Mar 26, 2014 9:59:45 AM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngineCommon_impl stop
INFO: Stopping Asynchronous Client.
Mar 26, 2014 9:59:45 AM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngineCommon_impl stop
INFO: Asynchronous Client Has Stopped.
Mar 26, 2014 9:59:45 AM org.apache.uima.adapter.jms.client.
BaseUIMAAsynchronousEngineCommon_impl$SharedConnection destroy
INFO: UIMA AS Client Shared Connection Has Been Closed  Mar 26, 2014
9:59:45 AM org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl
stop



3. Are there any exceptions in any of the logs in the job's log directory?

Yes, When this warning message co

Re: Ducc Problems

2014-03-26 Thread reshu.agarwal

On 03/21/2014 11:42 AM, reshu.agarwal wrote:
Hence we can not attempt batch processing in cas consumer and it 
increases our process timing. Is there any other option for that or is 
it a bug in DUCC? 
Please reply on this problem as if I am sending document in solr one by 
one by cas consumer without using batch process and committing solr. It 
is not optimum way to use this. Why ducc is not calling collection 
Process Complete method of Cas Consumer? And If I want to do that then 
What is the way to do this?


I am not able to find any thing about this in DUCC book.

Thanks in Advanced.

--
Thanks,
Reshu Agarwal



Re: status Lost=1 in DUCC

2014-03-26 Thread reshu.agarwal

On 03/20/2014 06:35 PM, Lou DeGenaro wrote:

Where does the warning appear, in a log file in the job's log
directory?  Is there any other information related to that warning?

Hi Lou,

Answers to your questions are given below; I hope they help:

1. Are you able to run a simple job, such as 1.job from the examples
directory successfully?

Yes, I am able to run that simple job successfully.

2. Where does the warning appear, in a log file in the job's log
directory?  Is there any other information related to that warning?

This warning appears in JD.log file.

After all initialization messages and these messages come:

Mar 26, 2014 9:49:04 AM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl 
setupConnection
INFO: UIMA AS Client Created Shared Connection To Broker: 
tcp://S1:61616?wireFormat.maxInactivityDuration=0&jms.useCompression=true&closeAsync=false
Mar 26, 2014 9:49:04 AM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl 
initializeProducer
INFO: Initializing JMS Message Producer. Broker: 
tcp://S1:61616?wireFormat.maxInactivityDuration=0&jms.useCompression=true&closeAsync=false
 Queue Name: ducc.jd.queue.1317
Mar 26, 2014 9:49:04 AM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl 
initializeConsumer
INFO: Initializing JMS Message Consumer. Broker: 
tcp://S1:61616?wireFormat.maxInactivityDuration=0&jms.useCompression=true&closeAsync=false
 Queue Name: ID:S144-36678-1395807465286-7:1:1
Mar 26, 2014 9:49:04 AM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl initialize
INFO: Asynchronous Client Has Been Initialized. Serialization Strategy: 
[SerializationStrategy] Ready To Process.

and then only this warning message comes:

Mar 26, 2014 9:49:27 AM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl 
sendAndReceiveCAS
WARNING:  


then these messages come:

Mar 26, 2014 9:59:45 AM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl stop
INFO: Stopping Asynchronous Client.
Mar 26, 2014 9:59:45 AM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl stop
INFO: Asynchronous Client Has Stopped.
Mar 26, 2014 9:59:45 AM 
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl$SharedConnection
 destroy
INFO: UIMA AS Client Shared Connection Has Been Closed  
Mar 26, 2014 9:59:45 AM org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl stop



3. Are there any exceptions in any of the logs in the job's log directory?

Yes. When this warning message comes, all documents from the DB collection reader are processed successfully except this particular document, and this message shows in one of the process's log files:

Mar 26, 2014 9:54:04 AM 
org.apache.uima.adapter.jms.activemq.JmsOutputChannel$ConnectionTimer 
startSessionReaperTimer.run
INFO: Thread: 210 Component: CorefernceAggDescriptor Jms Session Inactivity 
Timeout: 5 Minutes on Broker: 
tcp://S1:61616?wireFormat.maxInactivityDuration=0&closeAsync=false

I think this is due to that warning.

4. Does your job use a version of UIMA/UIMA-AS that is different than the
one used by DUCC?

I am using DUCC version 1.0.0 and UIMA version 2.4.2. I am not able to determine which UIMA version DUCC itself uses.


--
Thanks and Regards,
Reshu Agarwal
Software Engineer
Orkash Services Pvt Ltd



Re: status Lost=1 in DUCC

2014-03-21 Thread reshu.agarwal

On 03/21/2014 05:06 PM, Eddie Epstein wrote:

Hi Reshu,

Attachments are not delivered to this mailing list.
Given that your application CR is following the guidelines,
please answer Lou's questions.

Eddie



On Fri, Mar 21, 2014 at 12:13 AM, reshu.agarwal wrote:


On 03/21/2014 01:39 AM, Eddie Epstein wrote:


The CR should not send documents, only references to documents, or preferably references to a set of documents. Please see


Hi Eddie,

I know this fact, but after the CR's getNext() runs, the CAS does not reach the AE aggregate for processing. The queueing time keeps increasing, but the processing time stays at 0.0 sec. So the job driver is not de-queueing this particular document, and it shows the warning message:

org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl sendAndReceiveCAS.

Why is this message coming, and how do I resolve it? I don't get the reason behind it. Please have a look at the attached file.

--
Thanks,
Reshu Agarwal



Thanks for the reply, Eddie.

I found the problem: there are some invisible characters within the document. I removed those characters and the problem got resolved.

Is the problem really these invisible characters? If yes, then why?

--
Reshu Agarwal



Ducc Problems

2014-03-20 Thread reshu.agarwal


Hi all,

DUCC does not call the collectionProcessComplete method of the CAS consumer. Hence we cannot attempt batch processing in the CAS consumer, and that increases our processing time. Is there any other option for this, or is it a bug in DUCC?


Thanks in advance

--
Reshu Agarwal



Re: status Lost=1 in DUCC

2014-03-20 Thread reshu.agarwal

On 03/21/2014 01:39 AM, Eddie Epstein wrote:

The CR should not send documents, only references to documents, or preferably references to a set of documents. Please see

Hi Eddie,

I know this fact, but after the CR's getNext() runs, the CAS does not reach the AE aggregate for processing. The queueing time keeps increasing, but the processing time stays at 0.0 sec. So the job driver is not de-queueing this particular document, and it shows the warning message:

org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl sendAndReceiveCAS.

Why is this message coming, and how do I resolve it? I don't get the reason behind it. Please have a look at the attached file.

--
Thanks,
Reshu Agarwal



status Lost=1 in DUCC

2014-03-20 Thread reshu.agarwal


Hi,

I am trying to work with DUCC and am facing a problem: on a single document it shows a warning message from org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl sendAndReceiveCAS. Then, after remaining in the queue for 500 seconds, it shows status lost=1.

The same document is processed in UIMA without any exceptions.

I got stuck here. Please help me to get out of it.

--
Thanks,
Reshu Agarwal



Re: Exception found in resource specifier for Soap Service

2013-11-11 Thread reshu.agarwal

Hi,

Marshall, thanks for the response. The problem has been fixed; a single analysis engine is working properly.

Now I want to run a UIMA SOAP service for an aggregate descriptor. Is that possible? Can I run a CPM type of service using the SOAP service?


Thanks in Advance.

On 11/11/2013 07:32 PM, Marshall Schor wrote:

On 11/11/2013 4:48 AM, reshu.agarwal wrote:

Hi,

Yes, Marshall, I used the same, but this error is still coming.

Please post the client's (the UIMA application that is going to be using the web
service) xml descriptor.
This error occurs when UIMA doesn't recognize that descriptor.

-Marshall


On 11/08/2013 12:23 AM, Marshall Schor wrote:

Hi,

The documentation and the examples all have the namespace specified with the
uriSpecifier - can you try adding this?

e.g.

<uriSpecifier xmlns="http://uima.apache.org/resourceSpecifier">

-Marshall
On 11/1/2013 1:13 AM, reshu.agarwal wrote:

Hi,

I am trying to use UIMA as a web service. I have successfully deployed a UIMA analysis engine SOAP service using Axis and Tomcat, but when I tried to use a resource specifier to call the deployed UIMA web service, it gave the following exception:

org.apache.uima.resource.ResourceInitializationException: The Resource Factory
does not know how to create a resource of class
org.apache.uima.analysis_engine.AnalysisEngine from the given
ResourceSpecifier. (Descriptor: file:/media/.../rr.xml)
  at org.apache.uima.UIMAFramework.produceResource(UIMAFramework.java:261)
  at org.apache.uima.UIMAFramework.produceAnalysisEngine(UIMAFramework.java:326)
  at org.orkash.java_applications.RunWebService.main(RunWebService.java:23)


The descriptor used to call the UIMA SOAP service is given below (rr.xml):

<uriSpecifier xmlns="http://uima.apache.org/resourceSpecifier">
  <resourceType>AnalysisEngine</resourceType>
  <uri>http://localhost:8080/axis/services/urn:PersonTitleAnnotator</uri>
  <protocol>SOAP</protocol>
  <timeout>6</timeout>
</uriSpecifier>



And the resource specifier client code used to call the deployed UIMA SOAP service:

  File taeDescriptor = new File("/media/.../rr.xml");
  File inputFile = new File("/home/user/38434924.txt");

  XMLInputSource in = new XMLInputSource(taeDescriptor);
  ResourceSpecifier specifier = UIMAFramework.getXMLParser().parseResourceSpecifier(in);

  // create Analysis Engine and JCas
  AnalysisEngine tae = UIMAFramework.produceAnalysisEngine(specifier);
  JCas jcas = tae.newJCas();

  // read contents of file
  FileInputStream fis = new FileInputStream(inputFile);
  byte[] contents = new byte[(int) inputFile.length()];
  fis.read(contents);
  fis.close();
  String document = new String(contents);

  // send doc through jcas
  jcas.setDocumentText(document);
  tae.process(jcas);



Please Help me out with this.

Thanks in Advance.






--
Thanks and Regards,
Reshu Agarwal
Software Engineer
Orkash Services Pvt Ltd



Re: Exception found in resource specifier for Soap Service

2013-11-11 Thread reshu.agarwal


Hi,

Yes, Marshall, I used the same, but this error is still coming.

On 11/08/2013 12:23 AM, Marshall Schor wrote:

Hi,

The documentation and the examples all have the namespace specified with the
uriSpecifier - can you try adding this?

e.g.

<uriSpecifier xmlns="http://uima.apache.org/resourceSpecifier">

-Marshall
On 11/1/2013 1:13 AM, reshu.agarwal wrote:

Hi,

I am trying to use UIMA as a web service. I have successfully deployed a UIMA analysis engine SOAP service using Axis and Tomcat, but when I tried to use a resource specifier to call the deployed UIMA web service, it gave the following exception:

org.apache.uima.resource.ResourceInitializationException: The Resource Factory
does not know how to create a resource of class
org.apache.uima.analysis_engine.AnalysisEngine from the given
ResourceSpecifier. (Descriptor: file:/media/.../rr.xml)
 at org.apache.uima.UIMAFramework.produceResource(UIMAFramework.java:261)
 at org.apache.uima.UIMAFramework.produceAnalysisEngine(UIMAFramework.java:326)
 at org.orkash.java_applications.RunWebService.main(RunWebService.java:23)


The descriptor used to call the UIMA SOAP service is given below (rr.xml):

<uriSpecifier xmlns="http://uima.apache.org/resourceSpecifier">
  <resourceType>AnalysisEngine</resourceType>
  <uri>http://localhost:8080/axis/services/urn:PersonTitleAnnotator</uri>
  <protocol>SOAP</protocol>
  <timeout>6</timeout>
</uriSpecifier>



And the resource specifier client code used to call the deployed UIMA SOAP service:

  File taeDescriptor = new File("/media/.../rr.xml");
  File inputFile = new File("/home/user/38434924.txt");

  XMLInputSource in = new XMLInputSource(taeDescriptor);
  ResourceSpecifier specifier = UIMAFramework.getXMLParser().parseResourceSpecifier(in);

  // create Analysis Engine and JCas
  AnalysisEngine tae = UIMAFramework.produceAnalysisEngine(specifier);
  JCas jcas = tae.newJCas();

  // read contents of file
  FileInputStream fis = new FileInputStream(inputFile);
  byte[] contents = new byte[(int) inputFile.length()];
  fis.read(contents);
  fis.close();
  String document = new String(contents);

  // send doc through jcas
  jcas.setDocumentText(document);
  tae.process(jcas);



Please Help me out with this.

Thanks in Advance.




--
Thanks and Regards,
Reshu Agarwal
Software Engineer
Orkash Services Pvt Ltd



Exception found in resource specifier for Soap Service

2013-10-31 Thread reshu.agarwal


Hi,

I am trying to use UIMA as a web service. I have successfully deployed a UIMA analysis engine SOAP service using Axis and Tomcat, but when I tried to use a resource specifier to call the deployed UIMA web service, it gave the following exception:


org.apache.uima.resource.ResourceInitializationException: The Resource 
Factory does not know how to create a resource of class 
org.apache.uima.analysis_engine.AnalysisEngine from the given 
ResourceSpecifier. (Descriptor: file:/media/.../rr.xml)
at org.apache.uima.UIMAFramework.produceResource(UIMAFramework.java:261)
at org.apache.uima.UIMAFramework.produceAnalysisEngine(UIMAFramework.java:326)
at org.orkash.java_applications.RunWebService.main(RunWebService.java:23)



The descriptor used to call the UIMA SOAP service is given below (rr.xml):

<uriSpecifier xmlns="http://uima.apache.org/resourceSpecifier">
  <resourceType>AnalysisEngine</resourceType>
  <uri>http://localhost:8080/axis/services/urn:PersonTitleAnnotator</uri>
  <protocol>SOAP</protocol>
  <timeout>6</timeout>
</uriSpecifier>



And the resource specifier client code used to call the deployed UIMA SOAP service:

  import java.io.File;
  import java.io.FileInputStream;
  import org.apache.uima.UIMAFramework;
  import org.apache.uima.analysis_engine.AnalysisEngine;
  import org.apache.uima.jcas.JCas;
  import org.apache.uima.resource.ResourceSpecifier;
  import org.apache.uima.util.XMLInputSource;

  File taeDescriptor = new File("/media/.../rr.xml");
  File inputFile = new File("/home/user/38434924.txt");

  XMLInputSource in = new XMLInputSource(taeDescriptor);
  ResourceSpecifier specifier = UIMAFramework.getXMLParser().parseResourceSpecifier(in);

  // create Analysis Engine and JCas
  AnalysisEngine tae = UIMAFramework.produceAnalysisEngine(specifier);
  JCas jcas = tae.newJCas();

  // read contents of file
  FileInputStream fis = new FileInputStream(inputFile);
  byte[] contents = new byte[(int) inputFile.length()];
  fis.read(contents);
  fis.close();
  String document = new String(contents);

  // send doc through jcas
  jcas.setDocumentText(document);
  tae.process(jcas);



Please Help me out with this.

Thanks in Advance.

--
Reshu Agarwal



Camel configuration in UIMA-AS

2013-07-28 Thread reshu.agarwal


Hi,

I am not familiar with Apache Camel and have some questions:

1. What is Apache Camel used for in UIMA-AS?
2. How is Apache Camel configured?
3. Is there any documentation that covers everything about Apache Camel in UIMA-AS?
4. Is Apache Camel used for the distributed Apache UIMA feature? If not, what is?


Thanks in Advance.

--
Thanks and Regards,
Reshu Agarwal



Re: single broker and multiple remote system

2013-07-28 Thread reshu.agarwal

Hi,

My main problem is that I want to use a single broker for message transport, but for processing I want to use the resources of different systems.

As you (JC) said: "If I understand your scenario, you only need one broker. Services can be deployed on different machines and point to the same queue (endpoint) and broker (brokerURL) defined in the deployment descriptor."

So how can I configure this scenario?



On 07/26/2013 06:42 PM, Jaroslaw Cwiklik wrote:

Do you want to share your code across nodes? If so, use Network File System
(NFS)

JC


On Fri, Jul 26, 2013 at 12:02 AM, reshu.agarwal wrote:


Yes, JC, you have understood my scenario correctly, and I also want to keep my code on only one machine. But how can I implement that?

--
Reshu Agarwal




On 07/25/2013 08:02 PM, Jaroslaw Cwiklik wrote:


If I understand your scenario, you only need one broker. Services can be deployed on different machines and point to the same queue (endpoint) and broker (brokerURL) defined in the deployment descriptor.

JC


On Thu, Jul 25, 2013 at 7:49 AM, reshu.agarwal wrote:

  Hi,

I want to consume the memory of more than one machine, but start the broker service on only one machine.

Is it possible to start the system-1 broker and, from system-1, deploy 5 remote services on 5 different machines (system-2, system-3, ..., system-6) that create queues on the system-1 broker but use the memory of their own machines (system-2, ..., system-6) for processing?

As I understand it, the decision about where a remote service is deployed is based on the broker URL. So if I deploy services from system-1 using the system-1 broker URL, they will consume only system-1's memory.

But if I deploy services from system-1 using the system-2 broker URL, then I must start the system-2 broker to consume system-2's memory and to deploy the service.

Am I right? So how is this possible?

Is it possible to make the system-1 broker work as the system-2 broker too? How?
How?


Thanks in advance.

--
Reshu Agarwal





--
Thanks and Regards,
Reshu Agarwal
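For what it's worth, a minimal client-side sketch of JC's scenario (host and queue names assumed, not from the thread): the client and every remote service reference the single broker on system-1, so only that machine runs a broker, while processing happens wherever the services were deployed.

import java.util.HashMap;
import java.util.Map;
import org.apache.uima.aae.client.UimaAsynchronousEngine;
import org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl;

// Sketch only: a client targeting the one broker on system-1. Services
// deployed on system-2..system-6 put the same brokerURL and endpoint in
// their deployment descriptors, so they consume from this queue remotely
// while using their own machine's memory and CPU.
public class SingleBrokerClient {
  public static void main(String[] args) throws Exception {
    UimaAsynchronousEngine client = new BaseUIMAAsynchronousEngine_impl();
    Map<String, Object> ctx = new HashMap<String, Object>();
    ctx.put(UimaAsynchronousEngine.ServerUri, "tcp://system-1:61616"); // the single broker
    ctx.put(UimaAsynchronousEngine.ENDPOINT, "MyServiceQueue");        // shared queue name
    client.initialize(ctx);
    // ... sendAndReceiveCAS(...) as usual, then:
    client.stop();
  }
}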



Re: single broker and multiple remote system

2013-07-25 Thread reshu.agarwal


Yes, JC, you have understood my scenario correctly, and I also want to keep my code on only one machine. But how can I implement that?


--
Reshu Agarwal



On 07/25/2013 08:02 PM, Jaroslaw Cwiklik wrote:

If I understand your scenario, you only need one broker. Services can be deployed on different machines and point to the same queue (endpoint) and broker (brokerURL) defined in the deployment descriptor.

JC


On Thu, Jul 25, 2013 at 7:49 AM, reshu.agarwal wrote:


Hi,

I want to consume the memory of more than one machine, but start the broker service on only one machine.

Is it possible to start the system-1 broker and, from system-1, deploy 5 remote services on 5 different machines (system-2, system-3, ..., system-6) that create queues on the system-1 broker but use the memory of their own machines (system-2, ..., system-6) for processing?

As I understand it, the decision about where a remote service is deployed is based on the broker URL. So if I deploy services from system-1 using the system-1 broker URL, they will consume only system-1's memory.

But if I deploy services from system-1 using the system-2 broker URL, then I must start the system-2 broker to consume system-2's memory and to deploy the service.

Am I right? So how is this possible?

Is it possible to make the system-1 broker work as the system-2 broker too? How?


Thanks in advance.

--
Reshu Agarwal





single broker and multiple remote system

2013-07-25 Thread reshu.agarwal


Hi,

I want to consume the memory of more than one machine, but start the broker service on only one machine.

Is it possible to start the system-1 broker and, from system-1, deploy 5 remote services on 5 different machines (system-2, system-3, ..., system-6) that create queues on the system-1 broker but use the memory of their own machines (system-2, ..., system-6) for processing?

As I understand it, the decision about where a remote service is deployed is based on the broker URL. So if I deploy services from system-1 using the system-1 broker URL, they will consume only system-1's memory.

But if I deploy services from system-1 using the system-2 broker URL, then I must start the system-2 broker to consume system-2's memory and to deploy the service.

Am I right? So how is this possible?

Is it possible to make the system-1 broker work as the system-2 broker too? How?


Thanks in advance.

--
Reshu Agarwal


Re: use of remoteAnalysisEngine cause failure

2013-07-24 Thread reshu.agarwal

On 07/24/2013 06:13 PM, Jaroslaw Cwiklik wrote:

The "...Cannot publish to a deleted Destination.." is caused by a broker
not being able to deliver a message to a
given queue. It looks like the client has terminated and the broker deleted
its temp reply queue while the service
was processing a CAS.


JC


On Wed, Jul 24, 2013 at 12:11 AM, reshu.agarwal wrote:


On 07/23/2013 05:56 PM, reshu.agarwal wrote:

the heap space for the Memory Pool PS Survivor space.

Hi,

In the JConsole memory column I noticed that the memory pool PS Survivor Space gets completely filled and the processing stops with the error:

Error on process CAS call to remote service:
org.apache.uima.aae.error.UimaEEServiceException:
org.apache.uima.cas.CASRuntimeException: "Preexisting FS encountered but not allowed. "xmi:id=1797"

and in the uima.log I found this error:

org.apache.uima.adapter.jms.activemq.JmsEndpointConnection_impl.send:
WARNING:
javax.jms.InvalidDestinationException: Cannot publish to a deleted Destination: temp-queue://ID:user-55759-1374578217010-0:0:1

I think it is because of the memory pool PS Survivor Space. If I am right, please tell me the solution; if I am wrong, tell me the reason so that I can resolve this.

Thanks in advance.

--
Reshu Agarwal



Hi,

Thanks for the reply, JC. I monitored the console and found that this is because of the queue deletion, but the queue deletion is itself caused by the first exception:

Error on process CAS call to remote service:
org.apache.uima.aae.error.UimaEEServiceException:
org.apache.uima.cas.CASRuntimeException: "Preexisting FS encountered but not allowed. "xmi:id=1797"

I think that when the above exception occurs, the client started by runRemoteAsync.sh stops, so the temporary reply queue it created is deleted from the broker. But the remote analysis engine services were still sending messages to that queue; hence the deleted-destination exception occurs.

I do not understand why the first exception occurs, or how to resolve it, as none of my parallel classes update any preexisting feature structure; they only create new feature structures.

As I mentioned before, I get the exception when I use more documents. On the same set of documents (say 250) it does not come on one particular document: sometimes it appears after 50 documents have been processed, sometimes after 100, and sometimes after 150 or 170.

Please tell me why this is happening.

--
Thanks,
Reshu Agarwal
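As background for the "Preexisting FS encountered but not allowed" error discussed here, a minimal sketch of the restriction on parallel-flow delegates (the annotator class and what it does are illustrative): a delegate running in a parallel step may only add new feature structures; it must not modify ones that already existed when it received the CAS.

import java.util.ArrayList;
import java.util.List;
import org.apache.uima.analysis_component.JCasAnnotator_ImplBase;
import org.apache.uima.jcas.JCas;
import org.apache.uima.jcas.tcas.Annotation;

// Sketch only: what a parallel-flow delegate may and may not do.
public class SafeParallelAnnotator extends JCasAnnotator_ImplBase {
  @Override
  public void process(JCas jcas) {
    List<Annotation> created = new ArrayList<Annotation>();
    for (Annotation existing : jcas.getAnnotationIndex()) {
      // NOT allowed in a parallel flow: mutating a pre-existing FS, e.g.
      // existing.setBegin(0); -- when the aggregate merges the parallel
      // results it fails with "Preexisting FS encountered but not allowed".

      // Allowed: creating brand-new feature structures.
      created.add(new Annotation(jcas, existing.getBegin(), existing.getEnd()));
    }
    for (Annotation a : created) {
      a.addToIndexes(); // indexed after iteration to avoid invalidating it
    }
  }
}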



Re: use of remoteAnalysisEngine cause failure

2013-07-23 Thread reshu.agarwal

On 07/23/2013 05:56 PM, reshu.agarwal wrote:

the heap space for the Memory Pool PS Survivor space.

Hi,

In Jconsole memory column I noticed that the memory Pool PS Survivor 
Space is get fully filled and the processing stop with the error.


Error on process CAS call to remote service:
org.apache.uima.aae.error.UimaEEServiceException:
org.apache.uima.cas.CASRuntimeException: "Preexisting FS encountered but not allowed. 
"xmi:id=1797"

and in the uima.log I found this error:

org.apache.uima.adapter.jms.activemq.JmsEndpointConnection_impl.send: 
WARNING:
javax.jms.InvalidDestinationException: Cannot publish to a deleted 
Destination: temp-queue://ID:user-55759-1374578217010-0:0:1


I think it is because of the Memory Pool PS Survivor Space. If I am right, 
please tell me the solution, and if I am wrong, please tell me the reason 
so that I can resolve this.


Thanks in advance.

--
Reshu Agarwal



Re: use of remoteAnalysisEngine cause failure

2013-07-23 Thread reshu.agarwal


Hi Marshall,

The issue was the same as the one you mentioned here, and it is resolved. 
But I get the same error when I use more documents: the error occurs on 
the same set of documents (say 250), but not on a particular document; 
sometimes it appears after 50 documents are processed, sometimes after 
100, and sometimes after 150 or 170. So it is not an issue with the 
pipeline; rather, I noticed that the heap space for the Memory Pool PS 
Survivor space fills up.



On 07/19/2013 03:11 PM, reshu.agarwal wrote:

On 07/18/2013 10:35 PM, Marshall Schor wrote:

See
http://uima.apache.org/d/uima-as-2.4.0/uima_async_scaleout.html#ugr.async.ov.concepts.parallelFlows 


-Marshall

On 7/18/2013 11:29 AM, Marshall Schor wrote:
When you run in parallel, there's a restriction on what the remote 
annotators can modify in the CAS: the annotators are not allowed to modify 
feature structures that already exist; they can only add new feature 
structures.


The error reported below occurs when the CAS being returned from a remote 
annotator has modified some feature structure in the CAS that existed when 
it received the CAS.

-Marshall
On 7/17/2013 4:36 AM, reshu.agarwal wrote:

Hi JC,

Thanks for replying. This works, and I have also applied it to my own 
descriptors. If I use a serial flow it works perfectly, but if I use a 
parallel flow and deploy all the descriptor services that run in parallel, 
it shows the following error:

Error on process CAS call to remote service:
org.apache.uima.aae.error.UimaEEServiceException:
org.apache.uima.cas.CASRuntimeException: "Preexisting FS encountered but not
allowed. "xmi:id=17"
 at org.apache.uima.adapter.jms.activemq.JmsOutputChannel.sendReply(JmsOutputChannel.java:871)
 at org.apache.uima.aae.controller.AggregateAnalysisEngineController_impl.sendReplyWithException(AggregateAnalysisEngineController_impl.java:1967)
 at org.apache.uima.aae.controller.AggregateAnalysisEngineController_impl.sendReplyToRemoteClient(AggregateAnalysisEngineController_impl.java:2054)
 at org.apache.uima.aae.controller.AggregateAnalysisEngineController_impl.replyToClient(AggregateAnalysisEngineController_impl.java:2195)
 at org.apache.uima.aae.controller.AggregateAnalysisEngineController_impl.finalStep(AggregateAnalysisEngineController_impl.java:1758)
 at org.apache.uima.aae.controller.AggregateAnalysisEngineController_impl.abortProcessingCas(AggregateAnalysisEngineController_impl.java:1057)
 at org.apache.uima.aae.controller.AggregateAnalysisEngineController_impl.process(AggregateAnalysisEngineController_impl.java:1099)
 at org.apache.uima.aae.error.handler.ProcessCasErrorHandler.handleError(ProcessCasErrorHandler.java:565)
 at org.apache.uima.aae.error.ErrorHandlerChain.handle(ErrorHandlerChain.java:57)
 at org.apache.uima.aae.handler.input.ProcessResponseHandler.handleProcessResponseFromRemote(ProcessResponseHandler.java:360)
 at org.apache.uima.aae.handler.input.ProcessResponseHandler.handle(ProcessResponseHandler.java:704)
 at org.apache.uima.adapter.jms.activemq.JmsInputChannel.onMessage(JmsInputChannel.java:702)
 at org.apache.uima.adapter.jms.activemq.ConcurrentMessageListener$1.run(ConcurrentMessageListener.java:211)
 at org.apache.uima.aae.UimaBlockingExecutor$1.run(UimaBlockingExecutor.java:106)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at org.apache.uima.aae.UimaAsThreadFactory$1.run(UimaAsThreadFactory.java:118)
 at java.lang.Thread.run(Thread.java:722)
Caused by: org.apache.uima.cas.CASRuntimeException: "Preexisting FS
encountered but not allowed. "xmi:id=17"
 at org.apache.uima.cas.impl.XmiCasDeserializer$XmiCasDeserializerHandler.startElement(XmiCasDeserializer.java:362)
 at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:506)
 at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:182)
 at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:353)
 at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2717)
 at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607)
 at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:116)
 at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:489)
 at com.sun.org.apache.xerces.internal.parsers.XML11Configura

Re: use of remoteAnalysisEngine cause failure

2013-07-19 Thread reshu.agarwal

On 07/18/2013 10:35 PM, Marshall Schor wrote:

See
http://uima.apache.org/d/uima-as-2.4.0/uima_async_scaleout.html#ugr.async.ov.concepts.parallelFlows
-Marshall

On 7/18/2013 11:29 AM, Marshall Schor wrote:

When you run in parallel, there's a restriction on what the remote annotators
can modify in the CAS: the annotators are not allowed to modify feature
structures that already exist; they can only add new feature structures.

The error reported below occurs when the CAS being returned from a remote
annotator has modified some feature structure in the CAS that existed when it
received the CAS.

-Marshall
On 7/17/2013 4:36 AM, reshu.agarwal wrote:

Hi JC,

Thanks for replying. This works, and I have also applied it to my own descriptors.
If I use a serial flow it works perfectly, but if I use a parallel flow and
deploy all the descriptor services that run in parallel,
it shows the following error:

Error on process CAS call to remote service:
org.apache.uima.aae.error.UimaEEServiceException:
org.apache.uima.cas.CASRuntimeException: "Preexisting FS encountered but not
allowed. "xmi:id=17"
 at
org.apache.uima.adapter.jms.activemq.JmsOutputChannel.sendReply(JmsOutputChannel.java:871)
 at
org.apache.uima.aae.controller.AggregateAnalysisEngineController_impl.sendReplyWithException(AggregateAnalysisEngineController_impl.java:1967)
 at
org.apache.uima.aae.controller.AggregateAnalysisEngineController_impl.sendReplyToRemoteClient(AggregateAnalysisEngineController_impl.java:2054)
 at
org.apache.uima.aae.controller.AggregateAnalysisEngineController_impl.replyToClient(AggregateAnalysisEngineController_impl.java:2195)
 at
org.apache.uima.aae.controller.AggregateAnalysisEngineController_impl.finalStep(AggregateAnalysisEngineController_impl.java:1758)
 at
org.apache.uima.aae.controller.AggregateAnalysisEngineController_impl.abortProcessingCas(AggregateAnalysisEngineController_impl.java:1057)
 at
org.apache.uima.aae.controller.AggregateAnalysisEngineController_impl.process(AggregateAnalysisEngineController_impl.java:1099)
 at
org.apache.uima.aae.error.handler.ProcessCasErrorHandler.handleError(ProcessCasErrorHandler.java:565)
 at
org.apache.uima.aae.error.ErrorHandlerChain.handle(ErrorHandlerChain.java:57)
 at
org.apache.uima.aae.handler.input.ProcessResponseHandler.handleProcessResponseFromRemote(ProcessResponseHandler.java:360)
 at
org.apache.uima.aae.handler.input.ProcessResponseHandler.handle(ProcessResponseHandler.java:704)
 at
org.apache.uima.adapter.jms.activemq.JmsInputChannel.onMessage(JmsInputChannel.java:702)
 at
org.apache.uima.adapter.jms.activemq.ConcurrentMessageListener$1.run(ConcurrentMessageListener.java:211)
 at
org.apache.uima.aae.UimaBlockingExecutor$1.run(UimaBlockingExecutor.java:106)
 at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at 
org.apache.uima.aae.UimaAsThreadFactory$1.run(UimaAsThreadFactory.java:118)
 at java.lang.Thread.run(Thread.java:722)
Caused by: org.apache.uima.cas.CASRuntimeException: "Preexisting FS
encountered but not allowed. "xmi:id=17"
 at
org.apache.uima.cas.impl.XmiCasDeserializer$XmiCasDeserializerHandler.startElement(XmiCasDeserializer.java:362)
 at
com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:506)
 at
com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:182)
 at
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:353)
 at
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2717)
 at
com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607)
 at
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:116)
 at
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:489)
 at
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:835)
 at
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764)
 at
com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123)
 at
com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1210)
 at
org.apache.uima.aae.UimaSerializer.deserializeCasFromXmi(UimaSerializer.java:197)
 at
org.apache.uima.aae.handler.input.ProcessResponseHandler.deserialize(ProcessResponseHandler.java:382)
 at
org.apache.uima.aae.handler.input.ProcessResponseHandler.handleProcessResponseFromRemote(ProcessResponseHandler.java:274)
 ... 8 more
Terminating Client
