Hey
Have a look at "org.apache.lucene.document.DateTools" class
I think you need not edit Document.java. You should be able to use the existing
classes (at least judging from the email content so far).
Also, take a look at org.apache.lucene.queryParser.QueryParser for the
DateTools usage
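A minimal sketch of the DateTools round-trip (assuming a Lucene 2.x-era classpath; the date value is taken from the sample data later in this thread):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;
import org.apache.lucene.document.DateTools;

public class DateToolsSketch {
    public static void main(String[] args) throws Exception {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy/MM/dd");
        // DateTools formats in GMT, so parse in GMT to avoid a day shift.
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        Date pubdate = fmt.parse("1963/01/04");
        // Day resolution yields a lexicographically sortable yyyyMMdd string.
        String indexed = DateTools.dateToString(pubdate, DateTools.Resolution.DAY);
        System.out.println(indexed); // "19630104"
        // Decode back to a Date when displaying search results.
        Date roundTrip = DateTools.stringToDate(indexed);
    }
}
```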
- Sagar Naik
Blueyben,
What you describe is a general Java time conversion problem, not a Lucene
related one.
You will need to do a search on "Java time format" which should bring you
amongst other links to
http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html.
You might also be interested in the related java.text.DateFormat classes.
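For the sample dates in this thread, a minimal pure-JDK sketch (the 1900 century pivot is an assumption about what a two-digit year like "63" means):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateParseSketch {
    static String toSortable(String raw) throws Exception {
        SimpleDateFormat in = new SimpleDateFormat("yy/MM/dd");
        // Pin two-digit years to 1900-1999; otherwise "63" parses relative to
        // the current date and could become 2063 (an assumption about the data).
        in.set2DigitYearStart(new SimpleDateFormat("yyyy").parse("1900"));
        Date d = in.parse(raw);
        // Emit a zero-padded, lexicographically sortable form for indexing.
        return new SimpleDateFormat("yyyyMMdd").format(d);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(toSortable("63/01/04")); // prints "19630104"
    }
}
```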
Can anyone provide me with some guidance?
--
View this message in context:
http://www.nabble.com/FileDocument---Confused-and-Need-Help.-tf4373049.html#a12468920
Sent from the Lucene - Java Users mailing list archive at Nabble.com.
See below..
On 8/31/07, Berlin Brown <[EMAIL PROTECTED]> wrote:
>
> So I am assuming that is not just a matter of "indexing" to that same
> directory as you "indexed" before.
No, that's all it is. When you open an index for writing, there
is a flag indicating "overwrite or append". So if
you open the existing index with that flag set to append, new documents are simply added to it.
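A sketch of that flag in use (Lucene 2.x-style constructor; the path and analyzer choice are placeholders, not from the thread):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class AppendSketch {
    public static void main(String[] args) throws Exception {
        // create=false appends to the existing index instead of overwriting it.
        IndexWriter writer = new IndexWriter("/path/to/index",
                                             new StandardAnalyzer(), false);
        Document doc = new Document();
        doc.add(new Field("id", "42", Field.Store.YES, Field.Index.UN_TOKENIZED));
        writer.addDocument(doc);
        writer.close();
    }
}
```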
Storing the data in the index, mainly for non-structured data.
We plan to implement something like ThingDB from
http://demo.openlibrary.org/about/tech, and thought that maybe Lucene +
JdbcDirectory could act as a backend.
On Sep 3, 2007, at 2:34 PM, Askar Zaidi wrote:
Yes. Every time a user updates a piece of information, you do the update in
the DB as well as the Index. If you are using Hibernate, they have an API
that does this mapping. I am not sure why you plan to store data in the
index? Storing data is the DB's job; searching is the index's job. I would
suggest keeping the data itself in the database and indexing only what you need to search.
: Setting writer.setMaxFieldLength(5000) (default is 10,000)
: seems to eliminate the risk for an OutOfMemoryError,
That's because it now gives up after parsing 5000 tokens.
: To me, it appears that simply calling
: new Field("content", new InputStreamReader(in, "ISO-8859-1"))
: on a plain text ...
1) I don't understand why the index would get corrupted. We store
huge data
and meta-data using Lucene.
I got that information when Lucene 1.4 was the latest version; it may
have changed. I'll trust you.
2) For this, I synced Lucene with the DB operations. If you use
Hibernate,
there's an API for that.
Dear all, I am new to Lucene, am trying with the basics.
Basically I created sample text files with fields as follows:
textid 17
pubdate 63/01/04
pageid 20
I have been trying to edit FileDocument.java to read the fields above and
1. Index “textid” with value “17”
2. Index “pubdate” with value “63/01/04”
3. Index “pageid” with value “20”
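One way to sketch building a document for those fields without editing FileDocument.java itself (Lucene 2.x-style Field flags; the field names mirror the sample file, everything else is illustrative):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class SampleDoc {
    static Document build(String textid, String pubdate, String pageid) {
        Document doc = new Document();
        // Identifiers are stored and left untokenized so exact lookups work.
        doc.add(new Field("textid", textid, Field.Store.YES, Field.Index.UN_TOKENIZED));
        doc.add(new Field("pageid", pageid, Field.Store.YES, Field.Index.UN_TOKENIZED));
        // The date should first be converted to a sortable form, e.g. with
        // DateTools as mentioned earlier in the thread.
        doc.add(new Field("pubdate", pubdate, Field.Store.YES, Field.Index.UN_TOKENIZED));
        return doc;
    }
}
```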
1) I don't understand why the index would get corrupted. We store huge data
and meta-data using Lucene.
2) For this, I synced Lucene with the DB operations. If you use Hibernate,
there's an API for that. Or, you could just write your own factory methods to
add/delete/edit index documents when a DB operation touches them.
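A hedged sketch of such a factory method (assuming Lucene 2.1+, where IndexWriter.updateDocument deletes and re-adds in one call; the field names and path are illustrative):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

public class IndexSync {
    // Called after the corresponding DB row has been committed.
    static void onDbUpdate(String indexPath, String id, String body) throws Exception {
        IndexWriter writer = new IndexWriter(indexPath, new StandardAnalyzer(), false);
        Document doc = new Document();
        doc.add(new Field("id", id, Field.Store.YES, Field.Index.UN_TOKENIZED));
        doc.add(new Field("content", body, Field.Store.NO, Field.Index.TOKENIZED));
        // Replaces any existing document with the same id, or adds a new one.
        writer.updateDocument(new Term("id", id), doc);
        writer.close();
    }
}
```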
Hello,
We're starting a new project, which basically catalogs everything we
have in the department (different objects with different metadata),
and as I used Lucene before, I'm preparing a presentation to the
team, as I think it would really simplify the storage of metadata and
documents.
Aha, that's interesting. However...
Setting writer.setMaxFieldLength(5000) (default is 10,000)
seems to eliminate the risk for an OutOfMemoryError,
even with a JVM with only 64 MB max memory.
(I have tried larger values for JVM max memory, too.)
(The name is imho slightly misleading, I would have ...
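The pattern under discussion, as a sketch (the Reader-valued Field constructor tokenizes the stream and is never stored; the 5000 cap and the charset are from the quoted mail, the file name is a placeholder):

```java
import java.io.FileInputStream;
import java.io.InputStreamReader;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class ReaderFieldSketch {
    static void indexFile(IndexWriter writer, String path) throws Exception {
        // Stop tokenizing any one field after 5000 terms, bounding memory use.
        writer.setMaxFieldLength(5000);
        Document doc = new Document();
        doc.add(new Field("content",
                new InputStreamReader(new FileInputStream(path), "ISO-8859-1")));
        writer.addDocument(doc);
    }
}
```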