Karl, I'm aware of IndexReader.getTermFreqVector; with this I can get all
terms of a document, but I want all the terms of a document that matched a
query.
Grant,
Yes, I think I understand. You want to know what terms from your
query matched in a given document.
Yep, that's what I want. In the
a thought.
Hope this helps,
Grant
On Sep 4, 2007, at 5:01 PM, Rafael Rossini wrote:
Hi all,
In some custom highlighting, I often write a code like this:
Set<Term> matchedTerms = new HashSet<Term>();
query.rewrite(reader).extractTerms(matchedTerms);
With this code, the Term set gets populated with the query terms that matched
across your whole index. Is it possible to do this with
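One way to narrow those terms to a single document is to intersect the extracted query terms with that document's term vector. A minimal plain-Java sketch of just the intersection step (in real code the inputs would come from query.rewrite(reader).extractTerms(...) and IndexReader.getTermFreqVector(...); the string sets below are hypothetical stand-ins):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class MatchedTermsSketch {
    // Intersect the query's terms with the terms actually present in one
    // document's term vector, leaving only the terms that could have
    // matched in that document.
    static Set<String> matchedInDoc(Set<String> queryTerms, Set<String> docTerms) {
        Set<String> matched = new HashSet<String>(queryTerms);
        matched.retainAll(docTerms);
        return matched;
    }

    public static void main(String[] args) {
        // Hypothetical data standing in for extractTerms() / getTermFreqVector() output.
        Set<String> queryTerms = new HashSet<String>(Arrays.asList("brasil", "rio"));
        Set<String> docTerms = new HashSet<String>(Arrays.asList("rio", "janeiro", "city"));
        System.out.println(matchedInDoc(queryTerms, docTerms)); // prints [rio]
    }
}
```

Note this only tells you which query terms occur in the document, not where they matched; for positions you would still need the term vector offsets.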
Hey Jeff, I didn't have any luck. I don't think your approach is going to
help me, but thanks for the reply. I'll try a solution that does not run into
this kind of problem.
[]s
Rossini
On 7/29/07, Jeff French [EMAIL PROTECTED] wrote:
Rossini, have you had any luck with this? I don't know if
Actually no,
Because I'd like to retrieve terms that were computed on the same
instance of Field. Taking your example to illustrate better: I have 2
documents. In documentA I structured one field, Field(fieldA, termA
termB, customAnalyzer). In documentB I structured 2 fields, Field(fieldA,
Well... thanks for the help. This was really my last resort (rebuilding), but
I think I have no other choice... I really can't tell whether this
corruption was caused by bad hardware or not, but do you guys have any
idea what might have happened here? Could I have generated this
corruption
Hi guys,
I have a problem that is kind of tricky:
I have a set of documents that I enrich with dynamic metadata. The
metadata name is the fieldName in Lucene and the value is the text. For
example:
"Rio de Janeiro is a beautiful city." would be indexed in one field called
text, and on
I see, thanks.
On 7/26/07, Mike Klaas [EMAIL PROTECTED] wrote:
On 26-Jul-07, at 10:18 AM, Rafael Rossini wrote:
Yes, I optimized, but with Solr. I don't know why, but when I
optimize
an index with Solr, it leaves you with about 15 files, instead of
the 3...
You are probably
Miller [EMAIL PROTECTED] wrote:
You know, on second thought, a merge shouldn't even try to access a doc >
maxdoc (I think). Have you just tried an optimize?
On 7/25/07, Rafael Rossini [EMAIL PROTECTED] wrote:
Hi guys,
Is there a way of deleting a document that, because of some corruption,
got a docID larger than maxDoc()? I'm trying to do this but I get
this exception:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Array
index out of range: 106577
at
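A defensive bounds check avoids triggering that out-of-range access in the first place. A minimal sketch, where canDelete is a hypothetical helper and maxDoc stands in for the value of IndexReader.maxDoc():

```java
public class DeleteGuardSketch {
    // Reject docIDs outside [0, maxDoc) before attempting a delete,
    // instead of letting an ArrayIndexOutOfBoundsException surface later.
    static boolean canDelete(int docId, int maxDoc) {
        return docId >= 0 && docId < maxDoc;
    }

    public static void main(String[] args) {
        // The docID from the exception above, against a hypothetical index size.
        System.out.println(canDelete(106577, 100000)); // prints false
    }
}
```

This only guards the call site; a docID beyond maxDoc() already indicates index corruption, so the underlying segment still needs repair.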
Hello all,
I'm using Solr in an app, but I'm getting an error that might be a Lucene
problem. When I perform a simple query like q=brasil I get this
exception:
java.lang.ArrayIndexOutOfBoundsException: 1226511
at org.apache.lucene.search.TermScorer.score(TermScorer.java:74)
at
that this introduced a bug. Is
the build you are using after July 4?
Mike
Rafael Rossini [EMAIL PROTECTED] wrote:
Hello all,
I'm using Solr in an app, but I'm getting an error that might be a
Lucene
problem. When I perform a simple query like q=brasil I get this
exception
a java.io.IOException: read past EOF.
Any ideas how to fix or delete this document?
On 7/24/07, Rafael Rossini [EMAIL PROTECTED] wrote:
I don't know the exact date of the build, but it is certainly before July
4, and before the LUCENE-843 patch was committed. My index has 1.119.934 docs.
Is there any way of fixing my index without having to rebuild it all from the
ground up? It takes many hours to re-index my whole collection.
On 7/24/07, Yonik Seeley [EMAIL PROTECTED] wrote:
On 7/24/07, Rafael Rossini [EMAIL PROTECTED] wrote:
I did a little debugging and found that in the TermScorer, the byte[] norms
has
Hi Tanya,
I think one option is to index each log file with 2 fields: the name of
the log file and a line of your log. This way you can do a query like
this: +log_file_name:log1 +line:word1 -(+line:word1 +line:word2)
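The boolean semantics of that query (keep lines with word1 but exclude lines that also have word2) can be sketched in plain Java; this models only the matching logic over a list of hypothetical line strings, not Lucene's analysis or scoring:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class LogQuerySketch {
    // Model +line:word1 -(+line:word1 +line:word2): a line matches when it
    // contains word1 but not both word1 and word2, i.e. word1 without word2.
    static List<String> filter(List<String> lines, String word1, String word2) {
        List<String> hits = new ArrayList<String>();
        for (String line : lines) {
            boolean hasW1 = line.contains(word1);
            boolean hasW2 = line.contains(word2);
            if (hasW1 && !(hasW1 && hasW2)) {
                hits.add(line);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("word1 only", "word1 and word2", "neither");
        System.out.println(filter(lines, "word1", "word2")); // prints [word1 only]
    }
}
```

In the real index the +log_file_name:log1 clause would additionally restrict the candidate lines to one file before this per-line test applies.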
Hope it helps,
Rossini
On 6/13/07, Tanya Levshina [EMAIL PROTECTED]
----- Original Message -----
From: Rafael Rossini [EMAIL PROTECTED]
To: java-user@lucene.apache.org; Otis Gospodnetic
[EMAIL PROTECTED]
Sent: Thursday, July 27, 2006 4:23:56 PM
Subject: Re: Indexing large sets of documents?
Otis,
You mentioned the Hadoop project. I checked it out not long ago and
I read something saying it did not support the Lucene index. Is it possible to
index and then search in HDFS?
[]s
Rossini
On 7/27/06, Otis Gospodnetic [EMAIL PROTECTED] wrote:
Michael,
Certainly