ulimit -v unlimited
might help, see
http://stackoverflow.com/questions/8892143/error-when-opening-a-lucene-index-map-failed
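As a quick way to see what `ulimit -v` actually governs, here is a small Python sketch (my own illustration, not from the original mails) that reads the same address-space limit on Unix-like systems:

```python
import resource

# RLIMIT_AS is the address-space limit that `ulimit -v` reports;
# RLIM_INFINITY corresponds to "unlimited", which mmap-heavy Lucene/Solr
# processes usually need. (The resource module is Unix-only.)
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print("unlimited" if soft == resource.RLIM_INFINITY else soft)
```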
Harald.
On 18.08.2014 13:10, Shlomit Rosen wrote:
Hi all,
Using Lucene 3.6.2, we are trying to search a pretty small collection.
To open the directory we use Mmap
Hi,
below is an exception I get from one Solr core. According to
https://issues.apache.org/jira/browse/LUCENE-5617 the check that leads
to the exception was introduced recently.
Two things are worth mentioning:
a) contrary to the expectation expressed in the message (file
truncated?), the
Hello Robert,
thanks for showing interest in this case. Find my answer below.
On 23.07.2014 12:58, Robert Muir wrote:
On Wed, Jul 23, 2014 at 6:03 AM, Harald Kirsch
harald.kir...@raytion.com wrote:
On 23.07.2014 13:38, Robert Muir wrote:
On Wed, Jul 23, 2014 at 7:29 AM, Harald Kirsch
harald.kir...@raytion.com wrote:
(As a side note: after truncating the file to the expected size+16, at least
the core starts up again. Have not tested anything else yet.)
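For illustration, the kind of truncation described above can be sketched like this in Python (file name and lengths are invented for the example; the real case concerns a Lucene segment file, and the 16 bytes correspond to the codec footer length):

```python
import os

# Pretend a file grew beyond its expected length:
path = "example.bin"
expected = 1024
with open(path, "wb") as f:
    f.write(b"\0" * 2000)

# Cut it back to the expected size plus the 16-byte footer:
os.truncate(path, expected + 16)
print(os.path.getsize(path))  # 1040
```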
After applying your truncation
instead.
On 13.11.2013 16:03, Harald Kirsch wrote:
Hello all,
I wonder if a query according to the following rules is possible.
We have several fields with increasing hierarchy, say f_0 to f_{2n}. The
rule to search for a term is that, starting with index 0, the first field
to contain a hit defines whether to return the document or not, i.e.:
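The message is cut off in the archive, but the "first field with a hit decides" rule can be sketched outside Lucene like this (the even/odd include-exclude semantics are my assumption, not stated in the original mail):

```python
# Sketch of a "first field with a hit decides" selection rule, independent
# of Lucene. Fields are named f_0 .. f_{2n} and scanned in index order.
def first_hit_decides(doc, term, n):
    for i in range(2 * n + 1):
        field = doc.get("f_%d" % i, "")
        if term in field.split():
            # Assumption: an even-indexed field means "return the document",
            # an odd-indexed field means "do not return it".
            return i % 2 == 0
    return False  # no field contains the term at all

doc = {"f_0": "alpha beta", "f_1": "gamma"}
print(first_hit_decides(doc, "gamma", 1))  # f_1 is the first hit -> odd -> False
```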
Zipf, i.e. a small number of event types occurs rather frequently, while
other types of events may appear just once.
What is the most efficient sequence of Lucene operations for such a
scenario?
Harald.
On 07.08.2012 15:39, Harald Kirsch wrote:
Hello Simon,
ok, I'll try this out. Just
It was exactly this additional caching that I hoped to avoid. :-(
Harald.
On 06.08.2012 13:55, Simon Willnauer wrote:
hey harald,
On Mon, Aug 6, 2012 at 1:22 PM, Harald Kirsch harald.kir...@raytion.com wrote:
Hi,
in my application I have to write tons of small documents to the index,
but with a twist. Many of the documents are actually aggregations of
pieces of information that appear in a data stream, usually close
together, but nevertheless merged with information for other documents.
When
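The snippet breaks off here, but the aggregation scenario can be sketched independently of Lucene (document keys and field contents are invented): pieces arriving interleaved in the stream are buffered per document, and each document is indexed once, after its pieces have been merged.

```python
from collections import defaultdict

# Buffer interleaved stream pieces per document key, then merge each
# document's pieces once, instead of updating the index piece by piece.
buffers = defaultdict(list)
stream = [("doc1", "piece-a"), ("doc2", "piece-x"), ("doc1", "piece-b")]
for doc_id, piece in stream:
    buffers[doc_id].append(piece)

# Once the stream segment is done, build one merged text per document
# (in the real application this merged text would be indexed):
merged = {doc_id: " ".join(pieces) for doc_id, pieces in buffers.items()}
print(merged["doc1"])  # piece-a piece-b
```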
for some time I doubt it is a
bug in Lucene, but I don't see what I am doing wrong. It might be
connected to trying to get the freshest IndexReader for retrieving
documents.
Any better ideas or explanations?
Harald.
--
Harald Kirsch
SearcherManager will do the job for you.
simon
On Fri, Aug 3, 2012 at 3:41 PM, Harald Kirsch harald.kir...@raytion.com wrote:
I am trying to (mis)use Lucene a bit like a NoSQL database or, rather, a
persistent map. I am entering 38000 documents at a rate of 1000/s to the
index. Because each item add
Hello Simon,
now that I knew what to search for I found
http://wiki.apache.org/lucene-java/LuceneFAQ#When_is_it_possible_for_document_IDs_to_change.3F
So that clearly explains this issue for me.
Many thanks for your help.
Harald
On 04.08.2012 07:38, Harald Kirsch wrote:
Hello Simon