Hi,
In DUCC 1.x we were able to reserve a fixed portion of a node's memory,
but in DUCC 2.x we are restricted to the "reserve" type of reservation.
I would like to know the reason for this change.
I am using Ubuntu for the DUCC installation and have not been able to
configure cgroups on it, so I have
ature" in this same case.
After removing that code from hasNext() and making both counts equal, the
job completed successfully.
Reshu.
On 03/26/2016 02:27 PM, Lou DeGenaro wrote:
Does the job's JD log file have any exceptions or errors?
Lou.
On Fri, Mar 25, 2016 at 2:49 AM, reshu.agarwal &
Hi,
I am facing a problem in DUCC 2.0.1: a job does not complete even
after the CR's hasNext() has returned false. I tested my job with
both "all_in_one=local" and "all_in_one=remote", and it completed
successfully. But in the running cluster environment it failed to stop
and was continuously going to
But I did not need to do this with DUCC 1.1.0 or 1.0.0.
Reshu.
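For what it's worth, the hasNext()/getNext() contract at issue here can be sketched in plain Java. This is an illustrative stand-in, not the real UIMA CollectionReader API: the point is that hasNext() must be a pure check against the same counter getNext() advances, or the Job Driver never sees the end of the collection.

```java
import java.util.Arrays;
import java.util.List;

/**
 * Illustrative stand-in for a CollectionReader (not the real UIMA API).
 * hasNext() must agree exactly with the number of work items getNext()
 * will deliver; extra logic in hasNext() that re-reads or filters can
 * make the two counts diverge, which is the bug described above.
 */
public class SketchReader {
    private final List<String> workItems;
    private int next = 0;

    public SketchReader(List<String> workItems) {
        this.workItems = workItems;
    }

    // A pure check against the same counter getNext() uses.
    public boolean hasNext() {
        return next < workItems.size();
    }

    public String getNext() {
        return workItems.get(next++);
    }

    public static void main(String[] args) {
        SketchReader r = new SketchReader(Arrays.asList("a", "b", "c"));
        int delivered = 0;
        while (r.hasNext()) {
            r.getNext();
            delivered++;
        }
        System.out.println(delivered); // 3: counts agree, so the job can end
    }
}
```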
On 01/12/2016 06:35 PM, Lou DeGenaro wrote:
Reshu,
Very good. Looks to me like no DUCC changes are needed with respect to
this issue.
Lou.
On Tue, Jan 12, 2016 at 12:07 AM, reshu.agarwal <reshu.a
.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:347)
at
com.thoughtworks.xstream.io.xml.DomDriver.createReader(DomDriver.java:79)
... 26 more
On 01/12/2016 10:06 AM, reshu.agarwal wrote:
Hi,
I was getting this error after 17 out of 200 documents were processed. I
am unable to find any reason for it. Please see the error below:
INFO: Asynchronous Client Has Been Initialized. Serialization Strategy:
[SerializationStrategy] Ready To Process.
Reshu.
On 01/07/2016 02:45 AM, Lou DeGenaro wrote:
Reshu,
Going back through this thread I see that you posted a stack trace on
9/25/15. Is that the entire trace? Is there any CausedBy section?
Lou.
On Wed, Jan 6, 2016 at 1:33 AM, reshu.agarwal <reshu.agar...@orkash.com>
wrote:
Hi,
I am getting the message "1100 Unable to initialize groups for ducc.:
Operation not permitted" when I click Stop in DUCC Services.
Please help me resolve this.
Reshu.
Have you tried ducc_submit with the additional flag:
*--all_in_one local*
?
Lou.
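As a sketch, the flag can be given on the ducc_submit command line or placed in the job properties file (the file name here is hypothetical):

```
# on the command line:
#   ducc_submit -f myjob.job --all_in_one local
# or as a line in myjob.job:
all_in_one = local
```

With all_in_one the CR, AE, and CC run in a single process (locally on the submitting machine for "local"), which makes classpath and initialization problems much easier to debug.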
On Mon, Oct 5, 2015 at 12:25 AM, reshu.agarwal <reshu.agar...@orkash.com>
wrote:
Lou,
My job identifies the CR from test.jar, but it also uses other 3rd-party
libraries which are in the lib folder; suppose if you
initialization. The service was created without issue even though it uses
the same initialization method, and adding *--all_in_one local* to the
props file also runs the job successfully.
Hope this will help.
Thanks in advance.
Reshu.
On 01/06/2016 11:11 AM, reshu.agarwal wrote:
Dear Lou,
Sorry
Hi,
I am using DUCC version 1.1.0. I am facing an issue with my job: it
remains in Completing state even after the stop process has been initiated.
My job used two processes. The Job Driver logs:
Jan 04, 2016 12:43:13 PM
org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngineCommon_impl
specified in the Job specification's (user-side) classpath.
Does the user-side classpath (the one in your job specification) contain
all classes needed to run your Collection Reader?
Lou.
On Thu, Oct 1, 2015 at 12:52 AM, reshu.agarwal <reshu.agar...@orkash.com>
wrote:
Dear Lou,
I hav
UserClasspath
the following: /home/ducc/Uima_Test/lib/*:test.jar:. There is no path to
test.jar? Also, does your Job really use the other directories & jars in
UserClasspath?
Lou.
On Mon, Sep 28, 2015 at 7:52 AM, reshu.agarwal <reshu.agar...@orkash.com>
wrote:
The log is:
1000 Command to
procs:1
25/09/2015 12:05:10 id:85 state:Completed total:15 done:15 error:0 retry:0
procs:0
25/09/2015 12:05:10 id:85 rationale:state manager detected normal completion
25/09/2015 12:05:10 id:85 rc:0
Lou.
On Fri, Sep 25, 2015 at 12:49 AM, reshu.agarwal <reshu.agar...@orkash.com>
wrote:
Lewis & Lou,
When I cla
l.com>
wrote:
Reshu,
Absent some extraordinary circumstance, you should not be touching
jobclasspath.properties file.
Specify your classpath requirement using --classpath when you submit your
job or register your service. This is where you'd add UIMA jars, for
example.
Lou.
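Concretely, the user-side classpath can be supplied at submission time. A rough sketch of a job specification file, reusing the lib path mentioned later in this thread (the descriptor names are hypothetical; check ducc_submit's help for the exact parameter names in your DUCC version):

```
# myjob.job -- submit with: ducc_submit -f myjob.job
classpath             = /home/ducc/Uima_Test/lib/*:/home/ducc/Uima_Test/test.jar
driver_descriptor_CR  = MyCollectionReader
process_descriptor_AE = MyAggregate
```

Everything the CR and AEs need, including the UIMA jars themselves in DUCC 2.x, must be reachable from this classpath.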
On Tue, S
Mon, Sep 21, 2015 at 7:25 AM, reshu.agarwal <reshu.agar...@orkash.com>
wrote:
Hi,
As you said:"In DUCC 2.0 you must explicitly supply UIMA in the
classpath of your submission. This was not the case in DUCC 1.x where
UIMA
was added by DUCC under the covers."
I defin
where UIMA was added by DUCC
under the covers.
In fact this gives you more flexibility in that you are no longer tied to
using a particular version of UIMA.
Lou.
On Fri, Sep 18, 2015 at 12:24 AM, reshu.agarwal <reshu.agar...@orkash.com>
wrote:
Jerry,
I have tried DUCC 2.0.0 to run same j
Hi,
I am facing a problem in DUCC: some documents are shown in the queue
but do not get processed. In the job, the work-item list shows a particular
work item's status as "queued" with a queueing time of "4115 seconds".
I want the queueing time of a work item to be no more than 1 minute. What is
the
where its threads are. Before doing this, check JP logs
to see if there is an exception.
Jerry
On Thu, Sep 17, 2015 at 4:32 AM, reshu.agarwal <reshu.agar...@orkash.com>
wrote:
My DUCC version is 1.1.0.
On 09/17/2015 11:35 AM, reshu.agarwal wrote:
Hi,
I am facing a problem in DUCC tha
, reshu.agarwal reshu.agar...@orkash.com
wrote:
Oh, I misunderstood this. I thought this would scale both my aggregate and
its AEs.
I want to scale the aggregate as well as the individual AEs. Is there any
way of doing this in UIMA-AS/DUCC?
On 04/28/2015 07:14 PM, Jaroslaw Cwiklik wrote:
In async aggregate you
<analysisEngine key="ConsumerDescriptor">
  <scaleout numberOfInstances="5"/>
</analysisEngine>
</delegates>
</analysisEngine>
Jerry
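For context, a fragment of that shape sits inside a full UIMA-AS deployment descriptor roughly like the sketch below (queue, broker, and descriptor names are hypothetical); the inner scaleout element controls how many instances of that one delegate are started:

```
<analysisEngineDeploymentDescription xmlns="http://uima.apache.org/resourceSpecifier">
  <deployment protocol="jms" provider="activemq">
    <casPool numberOfCASes="5"/>
    <service>
      <inputQueue endpoint="MyAggregateQueue" brokerURL="tcp://localhost:61617"/>
      <topDescriptor>
        <import location="MyAggregate.xml"/>
      </topDescriptor>
      <analysisEngine async="true">
        <delegates>
          <analysisEngine key="ConsumerDescriptor">
            <scaleout numberOfInstances="5"/>
          </analysisEngine>
        </delegates>
      </analysisEngine>
    </service>
  </deployment>
</analysisEngineDeploymentDescription>
```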
On Tue, Apr 28, 2015 at 5:20 AM, reshu.agarwal
Hi,
I was trying to scale my processing pipeline to run in the DUCC
environment with UIMA-AS and process_DD. When I tried to scale using the
configuration below, the threads started were not as expected:
<analysisEngineDeploymentDescription>
Received a Message. Is Process
target for message:true.
and compare it to a timestamp of the last log message. Does it look like
there is a long delay?
Jerry
On Wed, Feb 18, 2015 at 2:03 AM, reshu.agarwal reshu.agar...@orkash.com
wrote:
Dear Eddie,
This problem has been resolved by using the
destroy()/collectionProcessComplete() method of the CAS consumer.
I want to close my database connection after completion of the job, and
also to use batch processing at the CAS-consumer level, as
PersonTitleDBWriterCasConsumer does.
Thanks in advance.
Reshu.
On 03/31/2014 04:14 PM, reshu.agarwal
Hi,
I read in DUCC book about:
Agents monitor nodes, sending heartbeat packets with node statistics to
interested components (such as the RM and web-server).
Status
This shows the current state of a machine. Values include:
defined
The node is in the DUCCnodes file
Hi,
Is there any problem if one agent node is on a physical (master) machine
and one agent node is on a virtual machine?
I am running a job with an average processing time of 20 minutes, both when
I configured a single-machine DUCC (physical machine) and
when both nodes were on physical machines only.
and 1.1 at the same time? This can be very
tricky. You need to be sure of no overlaps. I highly recommend that you
pick one or the other.
Lou.
On Fri, Dec 5, 2014 at 6:31 AM, reshu.agarwal reshu.agar...@orkash.com
wrote:
Dear Lou,
Thanks for confirming this.
Is a bug-fix version available
looking at live presently, I see a range from 9
to 66. If the number gets too large, the DUCC system will consider the
node down. As best as I can tell, it looks like your numbers are 58 59
which is perfect.
Lou.
On Thu, Dec 4, 2014 at 7:41 AM, reshu.agarwal reshu.agar...@orkash.com
wrote
, 2014 at 1:21 AM, reshu.agarwal reshu.agar...@orkash.com
wrote:
Dear Lou,
I am using default configuration:
ducc.agent.node.metrics.publish.rate=3
ducc.rm.node.stability = 5
Reshu.
On 11/12/2014 05:03 PM, Lou DeGenaro wrote:
What do you have defined in your ducc.properties
17, 2014 at 5:29 AM, Simon Hafner reactorm...@gmail.com wrote:
2014-11-17 0:00 GMT-06:00 reshu.agarwal reshu.agar...@orkash.com:
I want to run two DUCC versions, 1.0.0 and 1.1.0, on the same machines with
different users. Is this possible?
Yes, that should be possible. You'll have to make sure
the either configuration alone without issue?
Lou.
On Mon, Nov 17, 2014 at 7:41 AM, reshu.agarwal reshu.agar...@orkash.com
wrote:
Lou,
I have changed the broker port and the ws port too, but still faced a problem
starting the DUCC 1.1.0 version simultaneously.
Reshu.
On 11/17/2014 05:34 PM, Lou DeGenaro wrote:
The broker port is specifiable in ducc.properties. The default
Hi,
I am a bit confused. Why do we need an unmanaged reservation? Suppose we
give 5 GB of memory to this reservation; can this RAM be consumed by any
process if required?
In my scenario, when all the nodes' RAM was consumed by jobs, all
processes went into a waiting state. I need some reservation of
to resolve this?
Thanks in advance.
Reshu.
On 11/18/2014 10:10 AM, reshu.agarwal wrote:
Dear Jim,
When I was trying DUCC 1.1.0 on the nodes on which DUCC 1.0.0 was
running perfectly, I first stopped DUCC 1.0.0 using ./check_ducc -k.
My broker ports were different at that time, and I made
Hi,
I want to run two DUCC versions, 1.0.0 and 1.1.0, on the same machines
with different users. Is this possible?
Thanks in advance.
Reshu.
of information.
Lou.
On Wed, Nov 12, 2014 at 12:45 AM, reshu.agarwal
reshu.agar...@orkash.com wrote:
Hi,
When I was trying DUCC 1.1.0 on multiple machines, I faced an up-down
status problem with the machines. I configured two machines and they
keep going down one by one. This makes the DUCC services disabled
and jobs initialize again and again.
DUCC 1.0.0 was working fine on
the components (CR CM AE CC)
and DUCC will package the last 3 into a DD, or you can provide an already
created DD along with the CR.
I hope I interpreted your questions correctly.
~Burn
On Thu, Jul 31, 2014 at 6:47 AM, reshu.agarwal reshu.agar...@orkash.com
wrote:
Hi,
Can we deploy an analysis
for several reasons: to automatically start
services when needed by a job; to not give resources to a job or service
for which a dependent service is not running; and to post a warning on
running jobs when a dependent service goes bad.
Eddie
On Tue, Apr 1, 2014 at 1:27 AM, reshu.agarwal reshu.agar
that references that service ends DUCC will stop the service
after an idle delay defined by the service's --linger value. If you make
linger very large the service can be made to stay running, even if it is
idle, for many hours between jobs.
~Burn
On Thu, Jul 31, 2014 at 2:41 AM, reshu.agarwal reshu.agar
On 07/09/2014 03:25 PM, Lou DeGenaro wrote:
Upon re-start (presuming you
used the default warm start) all previously running jobs are marked as
Completed. If the job itself is Completed yet the job processes
continue to show an active state, then this is erroneous information.
Dear Lou,
The
Hi,
I have faced a problem in DUCC: after continuous processing, job
initialization or end-of-job processing goes into an infinite loop. So a
new job cannot be started even after restarting DUCC, and the job is
showing end-of-job status internally while the initialization waiting time is
On 06/09/2014 09:50 AM, reshu.agarwal wrote:
On 06/06/2014 06:03 PM, Lou DeGenaro wrote:
files for ERROR or WARN messages especially
relative to the MqReaper.
Hi Lou,
I have found an error in DUCC logs or.log:
java.lang.reflect.UndeclaredThrowableException
at com.sun.proxy.$Proxy19
On 06/05/2014 06:30 PM, Lou DeGenaro wrote:
How long does it take to process your longest work item through your
Hi Lou,
It was 3 minutes at that time. We have now increased it to 10 minutes for
testing, as you suggested. I have also restarted DUCC. So far
I haven't faced a similar
Hi Lou,
We have debugged our pipeline. If this were a problem in the pipeline or
the code, then when we ran the batch again the same errors would be
displayed in the error log. But the same batch processes successfully
without any error, so this is not an error at the code level.
And the error log
On 05/29/2014 06:37 PM, Jaroslaw Cwiklik wrote:
You can use UIMA-AS client API. All you need to know is the endpoint and
the broker url to communicate.
Jerry C
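To make that concrete, a minimal client along these lines should work (the uima-as client jars must be on the classpath; the broker URL and queue name below are placeholders for your service's actual values):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.uima.aae.client.UimaAsynchronousEngine;
import org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl;
import org.apache.uima.cas.CAS;

public class DuccServiceClient {
    public static void main(String[] args) throws Exception {
        UimaAsynchronousEngine engine = new BaseUIMAAsynchronousEngine_impl();

        Map<String, Object> ctx = new HashMap<String, Object>();
        ctx.put(UimaAsynchronousEngine.ServerUri, "tcp://broker-host:61617"); // placeholder
        ctx.put(UimaAsynchronousEngine.ENDPOINT, "MyServiceQueue");           // placeholder
        engine.initialize(ctx);

        CAS cas = engine.getCAS();                 // draw a CAS from the client pool
        cas.setDocumentText("some text to analyze");
        engine.sendAndReceiveCAS(cas);             // synchronous round trip
        // ... read annotations from the returned CAS here ...
        cas.release();
        engine.stop();
    }
}
```

The same API applies whether the queue is hosted by a plain UIMA-AS service or one managed by DUCC; as noted above, the client only needs the broker URL and the endpoint.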
On Wed, May 28, 2014 at 8:53 AM, reshu.agarwal reshu.agar...@orkash.comwrote:
Hi,
I am just curious about DUCC Service uses. Can
Hi,
I sometimes get the CanceledByDriver status on a job in DUCC if the
error count reaches 14. All the errors are due to a CAS Timed Out
exception, and after 5 consecutive errors the CAS timed out on the server is
null. The whole job is cancelled, and documents get skipped because of this.
Can I set some
Hi,
I am just curious about DUCC service uses. Can we call a DUCC service
using the UIMA-AS client API? Or is there any API available to call a DUCC
service, the way the UIMA-AS client API is for a UIMA-AS service?
I want to process text from my Java class, which adds the text to the JCas
of a UIMA CAS and sends
On 04/01/2014 05:21 PM, Eddie Epstein wrote:
The
job still needs to connect to the service in the normal way.
DUCC uses services dependency for several reasons: to automatically start
services when needed by a job; to not give resources to a job or service
for which a dependent service is not
On 03/31/2014 04:37 PM, Eddie Epstein wrote:
Reshu,
Please look in the logfile of the job process. Maybe 10 minutes is still
not enough?
Eddie
On Mon, Mar 31, 2014 at 2:42 AM, reshu.agarwal reshu.agar...@orkash.comwrote:
On 03/28/2014 05:36 PM, Eddie Epstein wrote:
There is a job
Hi,
I have a question. If the head node fails, then we are no longer able to do
UIMA processing. Can I define multiple head nodes in DUCC, so that if one
head node fails the second node takes over as the head node? Is this
possible? What is the backup strategy of DUCC?
--
Thanks,
Reshu Agarwal
Hi,
I am getting this particular error in some DUCC jobs:
node:192.168.. PID:5729:33 directive:ProcessContinue_CasNoRetry
28 Mar 2014 15:10:10,395 45 ERROR user.err workItemError 1887 1781
org.apache.uima.resource.ResourceProcessException: Request To Process
Cas Has Timed-out. Service
wrong.
Lou.
On Fri, Mar 28, 2014 at 12:00 AM, reshu.agarwal reshu.agar...@orkash.comwrote:
On 03/27/2014 08:13 PM, Lou DeGenaro wrote:
he data being sent are values rather than keys in your
CAS? If so, this is not really a best practice for DUCC use.
Hi Lou,
This is not a problem with how I send the data. My document contains
some invalid XML characters, so the problem was resolved after
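The kind of fix being described, dropping characters that are not legal in XML 1.0 before the text goes into the CAS and gets serialized, can be sketched as follows (the class and method names are hypothetical; the allowed ranges are the XML 1.0 valid character set):

```java
public class XmlSanitizer {

    /** Removes characters that are not legal in an XML 1.0 document:
     *  everything outside #x9, #xA, #xD, #x20-#xD7FF, #xE000-#xFFFD,
     *  and #x10000-#x10FFFF. */
    public static String stripInvalidXmlChars(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); ) {
            int cp = s.codePointAt(i);
            boolean valid = cp == 0x9 || cp == 0xA || cp == 0xD
                    || (cp >= 0x20 && cp <= 0xD7FF)
                    || (cp >= 0xE000 && cp <= 0xFFFD)
                    || (cp >= 0x10000 && cp <= 0x10FFFF);
            if (valid) {
                out.appendCodePoint(cp);
            }
            i += Character.charCount(cp);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Control characters like NUL and BS are dropped; tabs and newlines stay.
        System.out.println(stripInvalidXmlChars("a\u0000b\u0008c")); // abc
    }
}
```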
On 03/21/2014 11:42 AM, reshu.agarwal wrote:
Hence we cannot attempt batch processing in the CAS consumer, and this
increases our processing time. Is there any other option for that, or is
it a bug in DUCC?
Please reply about this problem, as I am sending documents to Solr one by
one from the CAS consumer
Hi Lou,
On 03/26/2014 04:27 PM, Lou DeGenaro wrote:
Hi Reshu,
The good news is that DUCC is functional since 1.job works. So we need to
find out why your particular job fails.
A few more questions:
5. Does your job consist of multiple work items (CASes), and do any of them
succeed?
My job
. If it is a name or the like, is that something you can share so I
can try to reproduce here?
Lou.
On Wed, Mar 26, 2014 at 9:20 AM, reshu.agarwal reshu.agar...@orkash.comwrote:
Hi Lou,
On 03/26/2014 04:27 PM, Lou DeGenaro wrote:
Hi Reshu,
The good news is that DUCC is functional since 1.job works
. That is discussed
in the first reference above, in the paragraph Flushing Cached Data.
Eddie
On Wed, Mar 26, 2014 at 9:48 AM, reshu.agarwal reshu.agar...@orkash.comwrote:
On 03/26/2014 06:43 PM, Eddie Epstein wrote:
Are you using standard UIMA interface code to Solr? If so, which Cas
Consumer
Hi all,
DUCC does not call the collectionProcessComplete method of the CAS consumer.
Hence we cannot attempt batch processing in the CAS consumer, and this
increases our processing time. Is there any other option for that, or is
it a bug in DUCC?
Thanks in advance
--
Reshu Agarwal
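Since (per this thread) collectionProcessComplete() may never be driven, one workaround hinted at elsewhere in the thread is to flush the pending batch from destroy() as well, which is called at component shutdown. A plain-Java sketch of the pattern — BatchWriter is a hypothetical stand-in for a CAS consumer's database writer, not a UIMA class:

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical stand-in for a CAS consumer that writes to a DB in batches. */
public class BatchWriter {
    private final List<String> buffer = new ArrayList<String>();
    private final List<String> written = new ArrayList<String>();
    private final int batchSize;

    public BatchWriter(int batchSize) {
        this.batchSize = batchSize;
    }

    /** Called once per CAS. */
    public void process(String doc) {
        buffer.add(doc);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    /** One DB round trip per full batch (simulated here as a list append). */
    private void flush() {
        written.addAll(buffer);
        buffer.clear();
    }

    /** Flushing here too means the last partial batch is not lost even
     *  when collectionProcessComplete() is never called. */
    public void destroy() {
        flush();
    }

    public int writtenCount() {
        return written.size();
    }

    public static void main(String[] args) {
        BatchWriter w = new BatchWriter(2);
        for (int i = 0; i < 5; i++) {
            w.process("doc" + i);
        }
        System.out.println(w.writtenCount()); // 4: two full batches written
        w.destroy();
        System.out.println(w.writtenCount()); // 5: final partial batch flushed
    }
}
```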
On 03/21/2014 05:06 PM, Eddie Epstein wrote:
Hi Reshu,
Attachments are not delivered to this mailing list.
Given that your application CR is following the guidelines,
please answer Lou's questions.
Eddie
On Fri, Mar 21, 2014 at 12:13 AM, reshu.agarwal reshu.agar...@orkash.comwrote:
On 03
On 03/21/2014 01:39 AM, Eddie Epstein wrote:
A CR should not send documents, only
references to documents, or preferably references to a set of documents.
Please see
Hi Eddie,
I know this fact, but after the CR's getNext() runs, the CAS does not reach
the AE aggregate for processing. The queueing time
</resourceSpecifier>
-Marshall
On 11/1/2013 1:13 AM, reshu.agarwal wrote:
Hi,
I am trying to use UIMA as a web service. I have successfully deployed a
UIMA analysis engine SOAP service using Axis and Tomcat, but when I tried to
use a resource specifier to call the deployed UIMA web service, it gave an
exception
, Marshall Schor wrote:
On 11/11/2013 4:48 AM, reshu.agarwal wrote:
Hi,
yes Marshall I used the same but still this error is coming.
Please post the client's (the UIMA application that is going to be using the web
service) xml descriptor.
This error occurs when UIMA doesn't recognize
Hi,
I am not familiar with Apache Camel and have some questions:
1. What is the use of Apache Camel in UIMA-AS?
2. How do I configure Apache Camel?
3. Is there any documentation which covers everything about
Apache Camel in UIMA-AS?
4. Is Apache Camel used for the distributed
machines and point to the same
queue (endpoint) and broker (brokerURL) defined in the deployment
descriptor.
JC
On Thu, Jul 25, 2013 at 7:49 AM, reshu.agarwal reshu.agar...@orkash.comwrote:
Hi,
I want to consume the memory of more than one machine but start the broker
service on only one machine
.
JC
On Wed, Jul 24, 2013 at 12:11 AM, reshu.agarwal reshu.agar...@orkash.comwrote:
On 07/23/2013 05:56 PM, reshu.agarwal wrote:
the heap space for the Memory Pool PS Survivor Space.
Hi,
In the JConsole memory column I noticed that the Memory Pool PS Survivor
Space gets completely full
documents'
processing, and sometimes after 150 or 170 documents' processing. So it
is not an issue with the pipeline, but I noticed the heap space for
the Memory Pool PS Survivor Space.
On 07/19/2013 03:11 PM, reshu.agarwal wrote:
On 07/18/2013 10:35 PM, Marshall Schor wrote:
See
http
On 07/23/2013 05:56 PM, reshu.agarwal wrote:
the heap space for the Memory Pool PS Survivor Space.
Hi,
In the JConsole memory column I noticed that the Memory Pool PS Survivor
Space gets completely full and processing stops with the error:
Error on process CAS call to remote service
the CAS.
-Marshall
On 7/17/2013 4:36 AM, reshu.agarwal wrote:
Hi JC,
Thanks for replying. This works, and I have also applied it to my own
descriptors.
If I use serial flow it works perfectly, but if I use parallel flow,
deploying all the descriptor services which are in parallel,
it shows
On Thu, Jul 18, 2013 at 12:22 AM, reshu.agarwal reshu.agar...@orkash.comwrote:
On 07/17/2013 07:30 PM, Eddie Epstein wrote:
Component Descriptor Editor
Hi,
It is not a flow controller; it is an aggregate descriptor which uses the
flow controller. I just put it into the flow controller folder
Hi,
I tried deploying the services with the other example, which uses the
flow controller, and the services start but with these errors:
Deployment xml:
<deployment protocol="jms" provider="activemq">
  <casPool numberOfCASes="20"/>
  <service>
    <inputQueue endpoint="CorefDetectorTaeFixedQueue"
Hi,
I am trying to use a remote analysis engine as given below:
<deployment protocol="jms" provider="activemq">
  <casPool numberOfCASes="3"/>
  <service>
    <inputQueue endpoint="CQueue" brokerURL="${defaultBrokerURL}"/>
    <topDescriptor>
      <import location="UimaC.xml"/>