Hi all!
I have a Lucene 4.8 index and want to modify a DocValue of a single document. I
tried to perform indexWriter.updateDocument(term, doc), but it had no effect on
the index.
Could you please point me to the relevant information on how to modify a
Document's BinaryDocValues or SortedDocValues fields?
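One likely cause: updateDocument() deletes the old document and re-adds the new one, so it only appears to "modify" anything if the new Document carries every field again. For changing a single DocValues field without reindexing, Lucene offers in-place updates: updateNumericDocValue() since 4.6 and updateBinaryDocValue() since 4.8; SortedDocValues cannot be updated in place in 4.8. A minimal sketch, assuming made-up field names and Lucene 4.8 (lucene-core plus lucene-analyzers-common) on the classpath:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.BinaryDocValuesField;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.BinaryDocValues;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.MultiDocValues;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.Version;

public class DocValuesUpdateDemo {
  static String run() throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriterConfig cfg = new IndexWriterConfig(
        Version.LUCENE_48, new StandardAnalyzer(Version.LUCENE_48));
    IndexWriter writer = new IndexWriter(dir, cfg);

    Document doc = new Document();
    doc.add(new StringField("id", "doc-42", Field.Store.NO));
    doc.add(new BinaryDocValuesField("payload", new BytesRef("old")));
    writer.addDocument(doc);
    writer.commit();

    // In-place DocValues update: selects documents by term, like
    // updateDocument(), but rewrites only the DocValues field.
    writer.updateBinaryDocValue(new Term("id", "doc-42"),
                                "payload", new BytesRef("new"));
    writer.close();

    DirectoryReader reader = DirectoryReader.open(dir);
    BinaryDocValues dv = MultiDocValues.getBinaryValues(reader, "payload");
    BytesRef result = new BytesRef();
    dv.get(0, result); // Lucene 4.x signature: get(docID, BytesRef)
    reader.close();
    return result.utf8ToString();
  }

  public static void main(String[] args) throws Exception {
    System.out.println(run()); // expected: "new"
  }
}
```

Note the update only applies to a field that already exists as a BinaryDocValues field in the document.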
Hi,
Thank you for sharing the blog. I am using FSDirectory.open() in my
program, so I guess I am using MMapDirectory. It takes about 3 minutes when
I search for a key (which is actually present in 80% of the total data)
across all 1,000 fields of the 1 million documents.
Best Regards,
Sreedevi S
On
Hi Rob,
Maybe you can wrap your query in a ConstantScoreQuery?
ahmet
On Thursday, February 5, 2015 9:17 AM, Rob Audenaerde
rob.audenae...@gmail.com wrote:
Hi all,
I'm doing some analytics with a custom Collector on a fairly large number
of search results (±100,000, all the hits that return from
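The suggestion above can be sketched as follows. Wrapping the query in a ConstantScoreQuery gives every hit the same constant score, so Lucene skips per-hit scoring work, which can matter when a custom Collector consumes ~100,000 hits and never uses the score. The field name here is made up:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class NoScoringQuery {
  // Wrap any query so every hit receives the same constant score;
  // the wrapped query is still used for matching, not for scoring.
  static Query wrap(Query original) {
    return new ConstantScoreQuery(original);
  }

  public static void main(String[] args) {
    Query q = wrap(new TermQuery(new Term("body", "lucene")));
    // Pass q to searcher.search(q, myCustomCollector) as usual.
    System.out.println(q);
  }
}
```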
Hello Lucene Users,
I am traversing all documents that contain a given term with the following code:
Term term = new Term(field, word);
Bits bits = MultiFields.getLiveDocs(reader);
DocsEnum docsEnum = MultiFields.getTermDocsEnum(reader, bits, field,
term.bytes());
if (docsEnum != null) { // null when no document contains the term
  while (docsEnum.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
    int docId = docsEnum.docID();
    // process the matching document here
  }
}
I've run into an exception, and I'm trying to understand whether it is
something that can just happen if the index doesn't conform to the
expectations of the TPBJQ, or if I've somehow messed things up in my
extension of that query.
The exception I'm seeing is in BlockJoinScorer.nextDoc().
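For what it's worth, ToParentBlockJoinQuery assumes two invariants that, when broken, surface exactly as exceptions inside BlockJoinScorer.nextDoc(): each parent must be indexed immediately after its children as one atomic block via IndexWriter.addDocuments(), and the parents filter must match parent documents only. A sketch of both sides under those assumptions (field names "skill" and "type" are made up), using the Lucene 4.x join API:

```java
import java.util.Arrays;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.join.FixedBitSetCachingWrapperFilter;
import org.apache.lucene.search.join.ScoreMode;
import org.apache.lucene.search.join.ToParentBlockJoinQuery;

public class BlockJoinSketch {
  // Index a parent with its children as a single atomic block:
  // children first, parent LAST. TPBJQ relies on this docID layout.
  static void indexBlock(IndexWriter writer) throws Exception {
    Document child = new Document();
    child.add(new StringField("skill", "java", Field.Store.NO));
    Document parent = new Document();
    parent.add(new StringField("type", "resume", Field.Store.NO)); // parent marker
    writer.addDocuments(Arrays.asList(child, parent));
  }

  // The parents filter must match exactly the parent documents and
  // nothing else; a filter that also matches a child is a classic
  // cause of failures in BlockJoinScorer.nextDoc().
  static Query join() {
    Filter parents = new FixedBitSetCachingWrapperFilter(
        new QueryWrapperFilter(new TermQuery(new Term("type", "resume"))));
    Query childQuery = new TermQuery(new Term("skill", "java"));
    return new ToParentBlockJoinQuery(childQuery, parents, ScoreMode.Avg);
  }
}
```

If your index was built with individual addDocument() calls, the block structure is simply absent and the exception would be expected rather than a bug in your extension.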
On Thu, 2015-02-05 at 04:00 +0100, Heeheeya wrote:
I have recently been puzzled by a performance problem when using Lucene
with a large result set. Do you have any advice?
Without any information, how are we to help you?
Start by reading
https://wiki.apache.org/solr/SolrPerformanceProblems
Hi,
If you use FSDirectory.open(), it will automatically choose MMapDirectory on
64-bit systems. Please note that virtual memory is not the same as physical
RAM. A 64-bit machine *always* has 1 terabyte of virtual address space
available; this is unrelated to physical memory (a common misunderstanding about
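You can verify which implementation FSDirectory.open() picked, or force one explicitly. A small sketch with a hypothetical index path:

```java
import java.io.File;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.MMapDirectory;

public class DirCheck {
  public static void main(String[] args) throws Exception {
    File path = new File("/path/to/index"); // hypothetical path
    Directory dir = FSDirectory.open(path);
    // On a typical 64-bit JVM this prints "MMapDirectory"; 32-bit
    // platforms fall back to SimpleFSDirectory or NIOFSDirectory.
    System.out.println(dir.getClass().getSimpleName());
    dir.close();

    Directory forced = new MMapDirectory(path); // force mmap explicitly
    forced.close();
  }
}
```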
Hi,
I am doing some performance analysis with Lucene. I have 1 million
resources with 1,000 attributes each.
Given how I index them, I end up with 1 million documents with 1,000 fields.
For me the total data was about 100 GB, and while using FSDirectory to store
my indices, the index size was almost 6 GB.
I