Thanks, Uwe. I've found the problem: the updateTime field was lost when I
converted my index from an older version.
Another question: is there any detailed tutorial on Lucene 3.0.0?
2009/12/9 Uwe Schindler
> How did you index your date?
>
> I would suggest to reindex the date using NumericFie
How did you index your date?
I would suggest to reindex the date using NumericField! And then query using
NumericRangeQuery. If reindexing is not possible the Query like you have
done, should work. Please give us examples of how you indexed and how you
query.
Uwe
-
Uwe Schindler
H.-H.-Meier-
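A rough sketch of Uwe's suggestion against the Lucene 2.9/3.0 API; the field name updateTime comes from the earlier message, while updateDate and cutoffMillis are assumed to exist in the surrounding code:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericField;
import org.apache.lucene.search.NumericRangeQuery;

// Indexing: store the date as a long (epoch millis) in a NumericField.
Document doc = new Document();
doc.add(new NumericField("updateTime", Field.Store.YES, true)
        .setLongValue(updateDate.getTime()));

// Searching: an open-ended range "from cutoff until now"; passing null as
// the upper bound leaves the range unbounded on that side.
NumericRangeQuery<Long> query = NumericRangeQuery.newLongRange(
        "updateTime", cutoffMillis, null, true, true);
```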
Hi Mike,
Missed your response on this.
What I was doing was physically removing index/write.lock if it was older
than 8 hours, allowing another process of my indexer to run. I realize in
hindsight that there is no reason why I should be doing this and it was
really stupid. I think I was under the impre
Hi, all
I need to do a date range search like date:[a previous time to null]
I used a filter to do this job, the code is shown below:
Calendar c = Calendar.getInstance();
c.setTimeInMillis(c.getTimeInMillis() -
parameter.getRecentUpdateConstraint()
* RosaCrawlerConstants.ONE_D
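The truncated snippet above appears to compute a cutoff timestamp ("now minus N days") as the lower bound of the range. A minimal self-contained sketch of that computation, where ONE_DAY_MILLIS is a stand-in for the RosaCrawlerConstants.ONE_D... constant (assumed to be one day in milliseconds):

```java
import java.util.Calendar;

public class RecentCutoff {
    // Assumed stand-in for the truncated RosaCrawlerConstants value.
    static final long ONE_DAY_MILLIS = 24L * 60 * 60 * 1000;

    // Returns "now minus daysBack days" in epoch milliseconds, usable as
    // the lower bound of an open-ended date range.
    static long cutoffMillis(int daysBack) {
        Calendar c = Calendar.getInstance();
        return c.getTimeInMillis() - daysBack * ONE_DAY_MILLIS;
    }

    public static void main(String[] args) {
        System.out.println(cutoffMillis(7) < System.currentTimeMillis());
    }
}
```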
Howdy,
I am wondering if anyone has seen
NearSpansUnordered.getPayload() not return payloads that are
verifiably accessible via IR.termPositions? It's a bit confusing
because most of the time they're returned properly.
I suspect the payload logic gets tripped up in
NearSpansUnordered. I'll put to
Sorry, wrong word; Germans often have problems with the English "must". It
should have been "but you do not have to".
Uwe
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -----Original Message-----
> From: Steven A Rowe [mailto:sar...@syr.edu]
> S
Hello Tom and Erick,
I am really sorry for posting such a dull question. Meanwhile I have
explored a few other parts of the API, and fortunately I have found a place
which exactly fits my case.
Thanks for patiently trying to understand my question, and for warning me.
Bye.
If you tell us WHY you want to do this, rather than HOW you want to do it,
the chances are much better that someone can help.
What's the business motivation here? What does the end user want to
achieve?
Tom
On Tue, Dec 8, 2009 at 8:16 AM, Phanindra Reva wrote:
> Hello,
>Thanks for the
Hi Uwe,
On 12/08/2009 at 9:40 AM, Uwe Schindler wrote:
> After the move to 3.0, you can (but you must not) further update
> your code to use generics, which is not really needed but will
> remove all compiler warnings.
This sounds like you're telling people that although they are able to update
If you store the field unanalyzed it will be indexed as is. You can
then search for it via a TermQuery, or use QueryParser with
PerFieldAnalyzerWrapper specifying KeywordAnalyzer for the field
containing this character.
Another approach is to replace the % with something easier to work
with. You
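Both of Erick's approaches, sketched against the Lucene 3.0 API (the field names sym and contents are made up, and parse() would need its ParseException handled in real code):

```java
import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.util.Version;

// Option 1: bypass analysis entirely with a TermQuery.
Query direct = new TermQuery(new Term("sym", "%"));

// Option 2: let QueryParser use KeywordAnalyzer for just this field,
// so "%" is not stripped the way StandardAnalyzer would strip it.
PerFieldAnalyzerWrapper analyzer =
    new PerFieldAnalyzerWrapper(new StandardAnalyzer(Version.LUCENE_30));
analyzer.addAnalyzer("sym", new KeywordAnalyzer());
QueryParser parser = new QueryParser(Version.LUCENE_30, "contents", analyzer);
Query parsed = parser.parse("sym:%");
```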
Thanks for your reply Erick.
In Luke, it's also not working. I tried retrieving values from the field
by specifying the field as the search field and then specifying % as the
search parameter while using StandardAnalyzer, but nothing is displayed.
Also, while Luke shows the query details for other s
Try printing out query.toString() to see what's actually being
sent to the searcher.
You can try the same thing in Luke, specifying StandardAnalyzer
to parse queries.
Are you sure you're specifying the fields in the query and not just the
'%'? That would go against your default field.
When you s
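Erick's debugging suggestion in sketch form (field and query strings are illustrative; parse() throws ParseException in real code):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

QueryParser parser = new QueryParser(Version.LUCENE_30, "contents",
        new StandardAnalyzer(Version.LUCENE_30));
Query q = parser.parse("myField:%");
// StandardAnalyzer drops "%" during analysis, so the parsed query is
// typically empty here, exactly the kind of surprise toString() exposes.
System.out.println(q.toString());
```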
Hello,
Thanks for the reply. *strange* was expected. I am trying to
store field names as payloads, so I need unedited field names during the
analysis part. And later my plan is to replace all the field names
with a default value and then store the document in the index. So, if
it's possible to g
Hi,
I am a newbie to Lucene. I am using StandardAnalyzer in my Lucene project.
I am indexing some fields which may contain only "%" as a field value; it
indexes fine and I can view the value against the field in the index using
Luke.
However, when I try to retrieve the same field using index
You're right, it *does* seem strange
I'm having a really hard time imagining a use-case
for this capability, so it's hard to suggest
an approach. Perhaps you could supply
an outline of your use-case? This may be
an XY problem.
Best
Erick
On Tue, Dec 8, 2009 at 10:12 AM, Phanindra Reva wrote
Hello All,
I am a newbie using Lucene. To be brief, I am just
wondering whether there is a point where we get access to the
org.apache.lucene.document.Document (which is being indexed at the
moment) after the analysis part is completed but exactly before it
is added to the index
The only difference with 3.0 is that, after moving to 3.0, you can remove lots
of unsafe casts and use generics (which do not work in 2.9, as it is Java
1.4 only). So this is the good thing about moving directly to 3.0.
But as the release notes for 3.0 denote, for new users that want to start
new p
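A small before/after of the generics cleanup being described, assuming a Document named doc with at least one field:

```java
import java.util.List;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Fieldable;

// Lucene 2.9 (compiled for Java 1.4): raw List, unchecked cast needed.
List fields = doc.getFields();
Fieldable first = (Fieldable) fields.get(0);

// Lucene 3.0: getFields() returns List<Fieldable>, no cast, no warning.
List<Fieldable> typed = doc.getFields();
Fieldable first3 = typed.get(0);
```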
Visiting all the index terms, which must be done for any divisor !=
-1, generates a good amount of garbage. So if you're including that
garbage in your memory measurement, that would explain what you're
seeing, and switching to a memory profiler should show the true RAM
usage.
Mike
On Tue, Dec 8
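For reference, the divisor being discussed is typically supplied when opening the reader; a sketch against the Lucene 3.0 API (the path and divisor value are illustrative):

```java
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

Directory dir = FSDirectory.open(new java.io.File("/path/to/index"));
// Load only every 4th indexed term into RAM; readOnly=true, no custom
// deletion policy. Larger divisors trade term-lookup speed for less RAM.
IndexReader reader = IndexReader.open(dir, null, true, 4);
```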
If you're using reopen, be sure to close the old reader
if the new one isn't identical, something like:
IndexReader newReader = reader.reopen();
if (newReader != reader) {
    // reader was reopened
    reader.close();
}
reader = newReader;
Erick
On Tue, Dec 8, 2009 at 6:13 AM, Cool The Breezer
wrote:
> Th
You might want to move to 2.9.1 first, find and fix all the deprecations
and *then* move to 3.x.
It seems like more work, but it's actually not, especially if you have
reasonable unit tests. Since lots of effort has been put into maintaining
backwards compatibility in the 2.X versions, 2.9.1 shoul
Thanks, so many changes in 3.0.0
On Tue, Dec 8, 2009 at 8:32 PM, Mark Miller wrote:
> Weiwei Wang wrote:
> > Hi,all,
> > I can't find this class in the downloaded jar and I can't figure out
> > what's wrong.
> > Does anybody here know how to fix it?
> >
> >
> It's now in the remote Cont
Weiwei Wang wrote:
> Hi,all,
> I can't find this class in the downloaded jar and I can't figure out
> what's wrong.
> Does anybody here know how to fix it?
>
>
It's now in the remote Contrib.
--
- Mark
http://www.lucidimagination.com
Hi,all,
I can't find this class in the downloaded jar and I can't figure out
what's wrong.
Does anybody here know how to fix it?
--
Weiwei Wang
Alex Wang
王巍巍
Room 403, Mengmin Wei Building
Computer Science Department
Gulou Campus of Nanjing University
Nanjing, P.R.China, 210093
Homepage:
Thanks Mike...
As I explained before, I created a small app which loads the whole DB, does a
term search (using TermQuery) and calculates the memory consumption. I tried
this with different divisor values, but beyond 100 there seems to be no
difference.
Just load the database with different divisor values and
Thanks Mike for your timely suggestion. Somehow readers are not reopened
properly.
----- Original Message -----
From: Michael McCandless
To: java-user@lucene.apache.org
Sent: Tue, December 8, 2009 3:31:22 PM
Subject: Re: IndexWriter creates multiple .cfs files
IndexWriter takes care of merg
I've opened LUCENE-2135.
Mike
On Tue, Dec 8, 2009 at 5:36 AM, Michael McCandless
wrote:
> This is a rather disturbing implementation detail of WeakHashMap, that
> it needs the one extra step (invoking one of its methods) for its weak
> keys to be reclaimable.
>
> Maybe on IndexReader.close(), Lu
This is a rather disturbing implementation detail of WeakHashMap, that
it needs the one extra step (invoking one of its methods) for its weak
keys to be reclaimable.
Maybe on IndexReader.close(), Lucene should go and evict all entries
in the FieldCache associated with that reader. Ie, step throug
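Mike's point about WeakHashMap can be seen in plain Java (this is not Lucene code): a weakly-referenced key becomes reclaimable once the last strong reference is dropped, but the map only physically expunges the stale entry when one of its own methods is next invoked.

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object, String> cache = new WeakHashMap<Object, String>();
        Object key = new Object();
        cache.put(key, "per-reader data");
        System.out.println(cache.size()); // 1: key is strongly reachable

        key = null;   // drop the only strong reference to the key
        System.gc();  // a hint; the weak key may now be cleared
        Thread.sleep(100);

        // The entry is reclaimable, but it is only expunged when a map
        // method (size, get, put, ...) runs. Usually 0 after GC, though
        // the JVM does not guarantee collection happened.
        System.out.println(cache.size());
    }
}
```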
IndexWriter takes care of merging the CFSs down, over time. Have you
changed your mergeFactor? It's odd to see 100s of CFSs.
Or maybe you're not closing the old reader on reopening a new one?
That would prevent deletion of the files.
Mike
On Tue, Dec 8, 2009 at 1:43 AM, Cool The Breezer
wrote