Since no one else is jumping in, I'll say that I suspect the span
query code does not bother to check whether two of the terms are the
same. I think that would account for the behavior you are seeing,
since the second SpanTermQuery would match the same term the first
one did.
Note that I'm
Hi Sariny,
What Uwe was saying is that the definition for hashCode is found in
the docs for Object, and it applies to all implementations of
hashCode.
It says:
"It is not required that if two objects are unequal according to the
equals(java.lang.Object) method, then calling the hashCode method on
each of the two objects must produce distinct integer results."
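To illustrate that clause of the contract, here is a minimal, self-contained sketch (the class and its fields are hypothetical, not from the thread): two objects that are unequal by equals() may still legally return the same hashCode; only the converse (equal objects must share a hash code) is required.

```java
public class HashDemo {
    // Hypothetical value class whose hashCode deliberately collides.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }

        @Override public boolean equals(Object o) {
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return x == p.x && y == p.y;
        }

        // Legal but low-quality: every Point gets the same hash code.
        // The contract only forbids EQUAL objects having DIFFERENT codes.
        @Override public int hashCode() { return 42; }
    }

    public static void main(String[] args) {
        Point a = new Point(1, 2);
        Point b = new Point(3, 4);
        System.out.println(a.equals(b));                  // false: unequal objects
        System.out.println(a.hashCode() == b.hashCode()); // true: same hash, still legal
    }
}
```

A constant hashCode like this ruins HashMap performance (everything lands in one bucket), but it does not violate the contract quoted above.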
ause.Occur.MUST);
bq.add(trq, BooleanClause.Occur.MUST_NOT);
Tom
On Wed, Mar 10, 2010 at 2:11 PM, Tom Hill wrote:
> Try
>
> -fieldname:[* TO *]
>
> as in
>
>
> http://localhost:8983/solr/select/?q=-weight%3A[*+TO+*]&version=2.2&start=0&rows=10&indent=on
Try
-fieldname:[* TO *]
as in
http://localhost:8983/solr/select/?q=-weight%3A[*+TO+*]&version=2.2&start=0&rows=10&indent=on
Tom
On Wed, Mar 10, 2010 at 1:48 PM, bgd wrote:
> Hi,
> I have a bunch of documents which do not have a particular field defined.
> How can I define a query to retrieve on
Hi -
One thing to consider is field norms. If your fields aren't analyzed, this
doesn't apply to you.
But if you do have norms, I believe it's one byte per field with norms x
number of documents. It doesn't matter whether the field occurs in a
document or not; it's nTotalFields x nDocs bytes.
So, an ind
The docBoost, IIRC, is stored in a single byte, which combines the doc
boost, the field boost, and the length norm.
(
http://lucene.apache.org/java/2_4_1/api/core/org/apache/lucene/search/Similarity.html#formula_norm
)
Are the lengths of your documents the same? If not, this could be affecting
you.
If you tell us WHY you want to do this, rather than HOW you want to do it,
the chances are much better that someone can help.
What's the business motivation here? What does the end user want to
achieve?
Tom
On Tue, Dec 8, 2009 at 8:16 AM, Phanindra Reva wrote:
> Hello,
>Thanks for the
gant way than Java's
> WeakHashMap?
>
> On Mon, Dec 7, 2009 at 4:38 PM, Tom Hill wrote:
> > Hi -
> >
> > If I understand correctly, WeakHashMap does not free the memory for the
> > value (cached data) when the key is nulled, or even when the key is
> garbag
Hi -
If I understand correctly, WeakHashMap does not free the memory for the
value (cached data) when the key is nulled, or even when the key is garbage
collected.
It requires one more step: a method on WeakHashMap must be called to allow
it to release its hard reference to the cached data. It ap
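A minimal sketch of the behavior being described, using nothing Lucene-specific, just a plain JDK WeakHashMap: after the key's last strong reference is dropped and the key is garbage-collected, the hard reference to the value is released only when the map is subsequently touched (size(), get(), put(), ...), which is when stale entries get expunged.

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakHashMapDemo {
    public static void main(String[] args) {
        Map<Object, byte[]> cache = new WeakHashMap<>();
        Object key = new Object();
        cache.put(key, new byte[1024 * 1024]); // 1 MB cached value

        key = null; // drop the only strong reference to the key

        // The value is not freed immediately; the entry goes away only
        // after the key is GC'd AND the map is touched again.
        for (int i = 0; i < 1000 && !cache.isEmpty(); i++) {
            System.gc();     // request collection of the weakly reachable key
            cache.size();    // touching the map expunges stale entries
        }
        System.out.println("entries left: " + cache.size());
    }
}
```

Note that the expunging happens as a side effect of ordinary map operations; there is no public "purge now" method, so a WeakHashMap that is never touched again can hold its values until the next access.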
On 3/29/07, Otis Gospodnetic <[EMAIL PROTECTED]> wrote:
Hm, removing duplicates (as determined by a value of a specified document
field) from the results would be nice.
How would your addition affect performance, considering it has to check
the PQ for a previous value for every candidate hit?
Hi -
Thanks, Yonik, Chris and Doron for the quick responses.
Doron's comment about combining the queries was the key to what was
causing me problems. I had indeed been combining with other queries,
which results in 'extra' results being returned.
I've attached a sample program below that ill
Hi -
I'm having a bit of trouble building a query to match a range of
values in a field that is not continuous.
For an example, say I want to find all people with last names
starting with A-C, and G-K.
If I use MUST on each element of the range, then I get nothing. This
I think I understan
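The reason MUST on each sub-range matches nothing is that no single document can fall in both A-C and G-K at once; disjoint ranges need SHOULD (OR) clauses, one per sub-range. In query-parser syntax that would look something like this (the field name is hypothetical):

```
lastname:[a TO c] OR lastname:[g TO k]
```

Programmatically, that's a BooleanQuery with each range query added as a SHOULD clause, so a document matches if it falls in either range.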
Is there a maximum length to a string that is analyzed and put into a field?
I.e., if the string is 1 billion characters and is analyzed and tokenized, and
the last word in the string appears only once at the end, would searching for
that last word against that field end with a hit for that document?
No. By default, IndexWriter stops indexing a field after maxFieldLength
tokens (10,000 by default), so a word appearing only at the end of a
billion-character string would never make it into the index unless you
raise that limit.
Hi -
Is there a fast way (not easy, but speedy) of getting the count of
documents that match a query?
I need the count, and don't need the docs at this point. If I had a
simple query, (e.g. "book") I can use docFreq(), and it's lightning
fast. If I just run it as a query it's much slower. I'
On Saturday, 25 March 2006 00:39, Tom Hill wrote:
> IndexModifier won't work
> in a multithreaded scenario, at least as far as I can tell.
Yes it does, but you need to use one IndexModifier object from all classes
(see the javadoc).
Regards
Daniel
I stand corrected (after goi
Hi Thomas,
> > Is it possible to write into the index and delete some documents in the
> > same time?
> Yes, have a look at the IndexModifier class.
If by "the same time" you mean "in one session", or something like
that, then yes, IndexModifier will help.
But if you mean from multiple threa
Hi -
I have an application where I'm using Lucene to index the contents of
a database. That's working fine.
But I have a problem where I'd like to retrieve a subset of the
documents that match a search, based on a join table in the database.
How do people typically handle combining the resu