Hi Jens,
I just want to confirm your information. As you said, the query gets slower the
larger start is, even when using filters. The best solution is to get all ids
first (which may take some time), and then to fetch each document by id
successively. There is a request handler (get) and a Java API method
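The two-phase approach described above (collect all ids once, then do cheap per-id lookups instead of paging with a growing start offset) could look roughly like this. This is a self-contained sketch: `fetchAllIds` and `fetchById` are illustrative stand-ins for the real Solr calls (an id-only query, and the /get request handler or its SolrJ equivalent), backed here by an in-memory map.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the two-phase fetch discussed above. "fetchAllIds" and
// "fetchById" are hypothetical names standing in for the actual Solr
// requests; the in-memory map simulates the index.
public class TwoPhaseFetch {

    // Simulated index: id -> document body.
    private final Map<String, String> index = new LinkedHashMap<>();

    public TwoPhaseFetch(Map<String, String> docs) {
        index.putAll(docs);
    }

    // Phase 1: one pass that returns ids only (cheap, no stored fields).
    public List<String> fetchAllIds() {
        return new ArrayList<>(index.keySet());
    }

    // Phase 2: point lookup per id -- constant cost per document,
    // unlike deep paging where cost grows with the start offset.
    public String fetchById(String id) {
        return index.get(id);
    }

    public List<String> readAll() {
        List<String> docs = new ArrayList<>();
        for (String id : fetchAllIds()) {
            docs.add(fetchById(id));
        }
        return docs;
    }
}
```

In real SolrJ code, phase 1 would restrict the field list to the id field so the id scan stays light.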
Hi Jens,
nice tips. I will try the one with the filters first. I just need to make a
few changes.
Thank you,
Armin
-Original Message-
From: j...@grivolla.net [mailto:j...@grivolla.net] On Behalf Of Jens Grivolla
Sent: Tuesday, August 16, 2016 1:34 PM
To: user@uima.apache.o
Hi!
It finally looks like Solr is causing the high memory consumption. The
SolrClient isn't meant to be used the way I was using it, but that isn't
documented either. The Solr documentation is very poor; I only found a
solution on the web by accident.
Thanks,
Armin
-Original Message-
Hi Richard!
I've changed the document reader to a kind of no-op reader that always sets
the document text to an empty string: same behavior, but a much slower
increase in memory usage.
Cheers,
Armin
-Original Message-
From: Richard Eckart de Castilho [mailto:r...@apache.org]
Sent
Hello Richard!
No, I can't change the reader; it reads from Solr. The response documents
are put into a queue, and the querying logic lives in hasNext(): hasNext()
returns true if the queue is not empty. If the queue is empty, hasNext() sends
a request to Solr and puts the response documents in