Hi,
I have deployed an AE onto a Uima AS node.
But when I use it to analyse some documents, I got an OutOfMemoryError: Java heap
space.
I know that the AE takes a large amount of memory because it loads many
resources.
How can I increase the memory allocated to it in the Uima AS so that I can
a
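For the heap question above, a minimal sketch, assuming the service is started with the standard UIMA AS scripts (e.g. deployAsyncService.sh), which to my understanding pass the UIMA_JVM_OPTS environment variable to the JVM; verify the exact variable name against your installation's runUimaClass.sh:

```shell
# Hypothetical example: raise the heap of the UIMA AS service JVM.
# UIMA_JVM_OPTS is read by the standard UIMA launch scripts
# (check your local runUimaClass.sh to confirm).
export UIMA_JVM_OPTS="-Xms512m -Xmx4g"
# then start the service as usual, e.g.:
#   deployAsyncService.sh myDeploymentDescriptor.xml
echo "$UIMA_JVM_OPTS"
```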
Swirl writes:
>
> Hi,
> I am trying to use Uima AS to deploy one of my UIMA Aggregate AE.
> I wish to deploy it as a Uima AS Primitive, i.e. I do not need the delegates
> of this AE to be scaled out. Instead I just want to scale out the Uima
> aggregate AE only. I want to deploy the AE in multiple nodes.
Hi,
I am trying to use Uima AS to deploy one of my UIMA Aggregate AE.
I wish to deploy it as a Uima AS Primitive, i.e. I do not need the delegates
of this AE to be scaled out. Instead I just want to scale out the Uima
aggregate AE only. I want to deploy the AE in multiple nodes.
So far the docum
Richard Eckart de Castilho writes:
>
> Xalan: Something like this (see source code of EnvironmentCheck):
>
> PrintWriter sendOutputTo = new PrintWriter(System.out, true);
> EnvironmentCheck app = new EnvironmentCheck();
> app.checkEnvironment(sendOutputTo);
>
> You could also try
Richard Eckart de Castilho writes:
>
> You may experience UIMA-2155 [1].
>
> Please check what version of Xalan is used at runtime - it should not be
> 2.6.0 or earlier.
>
Thanks for your fast response.
How do I check the Xalan version at runtime?
BTW, I tried putting Xalan 2.7.0 and 2.7.1
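One way to see which XSLT implementation (and thus which Xalan, if any) is actually picked up at runtime is to ask the JAXP factory which class it resolves to and where that class was loaded from. A JDK-only sketch, not UIMA-specific:

```java
import javax.xml.transform.TransformerFactory;

public class XsltImplCheck {
    public static void main(String[] args) {
        TransformerFactory tf = TransformerFactory.newInstance();
        // The concrete class tells you which XSLT engine won the JAXP lookup
        // (e.g. an org.apache.xalan.* class when a Xalan jar is on the classpath).
        System.out.println("Implementation: " + tf.getClass().getName());
        // The code source points at the jar the class came from; its file name
        // or manifest usually carries the version. For JDK-internal classes
        // this prints null.
        System.out.println("Loaded from:    "
                + tf.getClass().getProtectionDomain().getCodeSource());
    }
}
```

If a Xalan jar is on the classpath, Xalan's own `org.apache.xalan.Version.getVersion()` reports the exact version string as well.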
I have a tomcat web application that uses Uima for text processing.
I used uimaFIT's automatic type detection to load the types.
With uimafit 2.4.0 there was no problem,
but after I switched to 2.4.2 with uimafit legacy support,
I am getting the error below when running the web application:
2014-04-28 1
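For reference, uimaFIT's automatic type detection works by scanning the classpath for META-INF/org.uimafit/types.txt files (META-INF/org.apache.uima.fit/types.txt in the non-legacy 2.x layout). A minimal sketch of such a file, with a hypothetical package name:

```
# META-INF/org.uimafit/types.txt
# Each line is a classpath pattern pointing at type system descriptor XMLs.
classpath*:com/example/types/**/*.xml
```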
Richard Eckart de Castilho writes:
>
> If you follow the rules of the Apache Software License, you can copy the
> code.
>
> Can you point out to us which issues exactly force you to copy code? Possibly
> we can up those on our priority lists.
>
> Cheers,
>
Thanks, this being my first projec
Currently there are some issues in some UIMA
and uimaFIT classes that I cannot use as-is in
my application.
If I copy and paste the source code for these
classes and use them in my application, does
it violate the UIMA and uimaFIT licenses?
swirl writes:
>
> Richard Eckart de Castilho writes:
>
> >
> > Thanks for the report. This hasn't been noticed so far.
> >
> > I am afraid that SimplePipeline currently does not support sharing
> > a resource between the reader and the components.
Richard Eckart de Castilho writes:
>
> Thanks for the report. This hasn't been noticed so far.
>
> I am afraid that SimplePipeline currently does not support sharing
> a resource between the reader and the components.
>
> Internally, SimplePipeline instantiates the reader and the components
>
I am creating a pipeline as follows:
a. CollectionReader
b. AnnotatorA
c. AnnotatorB
All the above (including the CollectionReader and the 2 annotators) have a
dependency on an ExternalResource.
Here's a shortened version of the code I used:
// create the external resource desc
ExternalResourceDescriptio
> Option 2 - let UIMA do the heavy lifting
>
> An alternative and much simpler approach might be to create an aggregate which
> does not only contain the engines, but also the reader. Then you don't have
> to
> worry about the reader anymore at all. Just create a UIMA JCasIterator and
> poll C
Richard Eckart de Castilho writes:
>
> For further reference:
>
> https://issues.apache.org/jira/browse/UIMA-3470
>
Thanks for raising the Jira.
I tried looking at the source code, but I think I am not able to come up with
a solution for this.
Do you have any pointers to get me started?
I have successfully used a CasMultiplier to split up a document into segments
for further processing using SimplePipeline.runPipeline().
I did this by wrapping the CasMultiplier and the succeeding Annotator within an
aggregate.
But by simply changing the usage of SimplePipeline.runPipeline() to usi
Richard Eckart de Castilho writes:
> In ClearTK, in particular in the machine learning module, there were
> analysis engines for learning classifiers. However, there wasn't one
> AE per training algorithm; rather, the name of a class which
> implemented such an algorithm was passed in.
According to the documentation for UimaFit 2.0 (http://uima.apache.org/d/uimafit-
current/tools.uimafit.book.html), the method
ConfigurationParameterFactory.createConfigurationParameterName() that was used
to generate the prefixed name has been removed.
Does that mean the recommendation for naming p
>
> For part c:
>
> I imagine an algorithm that can scan the main XML file and find the "sections".
> For each section it finds, it can produce a CAS and initialize that CAS with the
> section's information.
>
> If this algorithm lives inside an analysis component, then it can use the "CAS
>
Hi,
I am wondering if anyone has a better idea.
Requirement:
a. I have a pipeline that needs to process a bunch of XML files.
b. The XML files could be on the disk, or from a remote location (available
via an HTTP GET call, e.g. http://example.com/inputFiles/001.xml)
c. Each XML file contains multi
Richard Eckart de Castilho writes:
>
> I'm using the Cobertura Plugin in Maven and that works just nice.
>
> uimaFIT loads the META-INF/org.uimafit/types.txt files by scanning the
> classpath. So either the file has not been copied from the source folder
> to the classpath when eCobertura is ex
Has anyone tried to run eCobertura's "Cover As" with uimaFIT's automatic type
loading?
My types are defined in a Maven module inside the src/main/resources folder and
I have a META-INF/org.uimafit/types.txt.
In my main app, I tried to run JUnit for my unit tests and it ran fine. But
when I used
Hi,
I am running UIMA inside a Maven-built Tomcat application, but I could not figure
out how to capture the logs generated by the UIMA analysis engines and
collection readers.
What is the configuration in tomcat that will let me specify the location of
the uima logs to be stored?
I tried putting lo
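By default, core UIMA logs through the JDK's java.util.logging. One way to direct those logs to a file under Tomcat is a logging.properties file passed to the JVM; a sketch, with hypothetical paths:

```
# logging.properties (hypothetical path: /etc/tomcat/uima-logging.properties)
# Send java.util.logging output - UIMA's default backend - to a file.
handlers = java.util.logging.FileHandler
java.util.logging.FileHandler.pattern = /var/log/tomcat/uima-%u.log
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
# Tune verbosity for the UIMA framework loggers.
org.apache.uima.level = INFO
```

Activate it via CATALINA_OPTS, e.g. -Djava.util.logging.config.file=/etc/tomcat/uima-logging.properties. Note that stock Tomcat ships its own JULI logging.properties in $CATALINA_BASE/conf, which can be extended instead.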
Richard Eckart de Castilho writes:
> That said, the obfuscator may have "optimized"
> > private String outputDirectory
> away, because the value is never changed in the code known
> to the obfuscator. It cannot know that uimaFIT changes this
> field using reflection.
>
Thanks for the info.
Hi!
I was trying to create an obfuscated API library based on Uima/Uimafit.
My project was in Maven so I used the Proguard plugin:
http://pastebin.com/T3N3JgVv
When I tried to use the obfuscated jar in a separate application, a Uima
parameter "PARAM_OUTDIR" was strangely nullified.
For an un-o
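The reflection problem described above is commonly handled with ProGuard keep rules. A sketch, assuming uimaFIT 2.x annotation names (org.apache.uima.fit.descriptor.*; legacy uimaFIT uses org.uimafit.descriptor.* instead):

```
# proguard.conf fragment (sketch): keep fields that uimaFIT injects via
# reflection, so the obfuscator neither renames nor "optimizes" them away.
-keepclassmembers class * {
    @org.apache.uima.fit.descriptor.ConfigurationParameter <fields>;
    @org.apache.uima.fit.descriptor.ExternalResource <fields>;
}
# Also keep the PARAM_* name constants if other code looks them up by value.
-keepclassmembers class * {
    public static final java.lang.String PARAM_*;
}
```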
Richard Eckart de Castilho writes:
>
> You should take a look at the JCasIterable (cf. [1] - Example in Groovy, but
> JCasIterable is a Java class and works nicely in Java too, just I have no
> example in Java).
>
> JCasIterable basically allows you to iterate over the CASes produced by you
Marshall Schor writes:
>
>
> On 7/16/2013 10:38 PM, swirl wrote:
> > I am wrapping a Uima analysis engine in a Tomcat.
> >
> > This AE loads and parses a large model file (300Mb).
> > The loading time of this model takes 3min. This is unacceptable if users
Hi,
I have this particular requirement for an API that we wrap around a Uima
pipeline:
public List analyse(String inputFolderPath, String modelName);
This method is supposed to accept a collection of files (residing in the
inputFolderPath), run the files (as CAS) through a pipeline of UIMA AEs,
Richard Eckart de Castilho writes:
>
> On 17.07.2013 at 05:11, swirl wrote:
>
> > I am wrapping a Uima analysis engine in a Tomcat JSF webapp.
> >
> > This AE loads and parses a large model file (300Mb).
> >
> > I can call the AE and run it usi
I am wrapping a Uima analysis engine in a Tomcat JSF webapp.
This AE loads and parses a large model file (300Mb).
I can call the AE and run it using SimplePipeline.runPipeline() via the
webapp UI.
However, the large model took up a large memory chunk that won't go away even
after the AE is run
I am wrapping a Uima analysis engine in a Tomcat.
This AE loads and parses a large model file (300Mb).
The loading time of this model takes 3min. This is unacceptable if users have
to wait so long to do analysis on one document.
What are the possible ways to reduce the loading time?
One soluti
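One solution often used for this: build the expensive object once per JVM and cache it, so only the first request pays the 3-minute load. A generic sketch (not UIMA-specific) using a double-checked-locking lazy holder; in a Tomcat app the same idea applies to the AnalysisEngine holding the 300Mb model, created at startup (e.g. in a ServletContextListener) or on first use, never per request:

```java
import java.util.function.Supplier;

// Sketch: thread-safe, load-once cache for an expensive resource
// (stands in for an AnalysisEngine that parses a large model file).
final class Lazy<T> {
    private final Supplier<T> supplier;
    private volatile T value;

    Lazy(Supplier<T> supplier) {
        this.supplier = supplier;
    }

    T get() {
        T v = value;
        if (v == null) {                     // first check, no lock
            synchronized (this) {
                if (value == null) {         // second check, under the lock
                    value = supplier.get();  // expensive load runs exactly once
                }
                v = value;
            }
        }
        return v;
    }
}
```

The first get() pays the full loading cost; every later call returns the cached instance immediately.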
Richard Eckart de Castilho writes:
>
> Hello Greg,
>
> > It's sort of a "maven-like" model (i.e. when using a Nexus server). Or
> > maybe I should just actually use maven and nexus?
> >
> > Has anyone out there tried to create a "UIMA Repository" that can be
> > directly referenced from a compone
Richard Eckart de Castilho writes:
>
> > Erhmmm, has anybody do something like this before?
> > I really am interested to know how you can do it.
> >
> > To clarify, I am very interested in how you can mix-match different PEARs,
> > possibly from different open source projects, with different
swirl writes:
>
> I am currently developing a Tomcat application that wraps around Uima to run
> text mining processes.
> I am confused over what PEAR can be used for and how it can be used in a
> Uima-wrapped application.
>
> The application is to be deploy
I am currently developing a Tomcat application that wraps around Uima to run
text mining processes.
I am confused over what PEAR can be used for and how it can be used in a
Uima-wrapped application.
The application is to be deployed as an installed web application at our
client's location and i
34 matches