Hi;
I recently upgraded Lucene to the current release, 2.0, in a Java
application.
As everyone knows, the way indexed data is written has changed with the
introduction of Field.Store and Field.Index on the Lucene Document.
Everything I read seems confusing.
Can anyone quickly help me with the best options to use to i
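For what it's worth, in Lucene 2.0 the Field constructor takes the store and index policy explicitly. A minimal sketch (the index path, field names, and values below are just examples, not anything prescribed by Lucene):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class IndexExample {
    public static void main(String[] args) throws Exception {
        // true = create a new index at this path
        IndexWriter writer = new IndexWriter("/tmp/index", new StandardAnalyzer(), true);

        Document doc = new Document();
        // Tokenized and stored: full-text searchable, and retrievable from hits.
        doc.add(new Field("contents", "some body text",
                          Field.Store.YES, Field.Index.TOKENIZED));
        // Stored and indexed as a single token: good for identifiers like paths.
        doc.add(new Field("path", "/docs/a.txt",
                          Field.Store.YES, Field.Index.UN_TOKENIZED));

        writer.addDocument(doc);
        writer.optimize();
        writer.close();
    }
}
```

Roughly: Field.Store decides whether the original value is kept in the index for display, Field.Index decides whether (and how) it is searchable, so you pick the combination per field rather than per document.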
Hello,
I use the POI API to parse MS Word files in order to index their content
and enable Lucene search.
For that I downloaded the latest jars from POI (including the scratchpad
one) and use the parser from the Lucene book called POIWordDocHandler.
It works quite well, but for some files the parser does
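For reference, a sketch of such a handler, assuming a scratchpad jar that includes org.apache.poi.hwpf.extractor.WordExtractor (older scratchpad builds only offer HWPFDocument.getRange().text(), which yields the text the same way); the class and field names here just mirror the book's handler and are not fixed by either library:

```java
import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.poi.hwpf.extractor.WordExtractor;

public class WordDocHandler {
    // Extract the text of a .doc file and wrap it in a Lucene Document.
    public Document getDocument(String path) throws Exception {
        InputStream in = new FileInputStream(path);
        try {
            WordExtractor extractor = new WordExtractor(in);
            String body = extractor.getText();

            Document doc = new Document();
            // Searchable but not stored: the original .doc stays on disk.
            doc.add(new Field("contents", body,
                              Field.Store.NO, Field.Index.TOKENIZED));
            // Stored path so a hit can point back to the file.
            doc.add(new Field("path", path,
                              Field.Store.YES, Field.Index.UN_TOKENIZED));
            return doc;
        } finally {
            in.close();
        }
    }
}
```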
[EMAIL PROTECTED]
Sent: Thursday, 26 January 2006 03:01
To: java-user@lucene.apache.org
Subject: Re: encoding
arnaudbuffet wrote:
>For text files, the data could be in different languages, so different
>encodings. If the data are in Turkish, for example, all special
>characters and accents are not recognized
Hello,
I have a problem with data I try to index with Lucene. I browse a
directory and index text from different types of files through parsers.
For text files, the data could be in different languages, so different
encodings. If the data are in Turkish, for example, all special
characters and accents are no
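One common cause of this symptom is reading the files with the platform default encoding instead of the file's actual one. A minimal stdlib sketch, assuming the Turkish files are in windows-1254 (a common Turkish code page; substitute whatever encoding your files really use):

```java
import java.io.*;
import java.nio.charset.Charset;

public class EncodingDemo {
    // Read an entire text file, decoding with an explicit charset instead
    // of relying on the platform default (which is what mangles Turkish).
    static String readAll(File f, Charset cs) throws IOException {
        StringBuilder sb = new StringBuilder();
        Reader r = new InputStreamReader(new FileInputStream(f), cs);
        try {
            int c;
            while ((c = r.read()) != -1) sb.append((char) c);
        } finally {
            r.close();
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // "ğşİı" as Unicode escapes, written out as windows-1254 bytes.
        String turkish = "\u011F\u015F\u0130\u0131";
        Charset cs = Charset.forName("windows-1254");

        File tmp = File.createTempFile("enc", ".txt");
        tmp.deleteOnExit();
        OutputStream out = new FileOutputStream(tmp);
        out.write(turkish.getBytes(cs));
        out.close();

        // Decoding with the matching charset round-trips the text; a
        // mismatched default (e.g. ISO-8859-1) would corrupt ğ, ş, İ, ı.
        System.out.println(readAll(tmp, cs).equals(turkish)); // prints true
    }
}
```

Once the text is decoded into a correct Java String this way, Lucene itself is encoding-agnostic; the fix belongs in the reader, not in the analyzer.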
Hi,
I am beginning to work with Lucene and need a few explanations to do
what I want; thanks in advance for your helpful answers.
I have to add Lucene to a Java application and I have two targets:
- to enable search through different types of files, like MS Word, PDF
or Excel files.
I read that each type of docume
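A common pattern here is one extractor per file type, chosen by extension, with all of them feeding the same index. A small stdlib sketch of such a dispatcher (the handler names are illustrative: POI for Word/Excel, PDFBox for PDF, none of them mandated by Lucene):

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class HandlerRegistry {
    // Map file extensions to the parser responsible for them.
    private static final Map<String, String> HANDLERS = new HashMap<String, String>();
    static {
        HANDLERS.put("doc", "POIWordDocHandler");   // POI HWPF (scratchpad)
        HANDLERS.put("xls", "POIExcelDocHandler");  // POI HSSF
        HANDLERS.put("pdf", "PDFBoxHandler");       // PDFBox
        HANDLERS.put("txt", "PlainTextHandler");
    }

    // Pick a handler by extension; unknown types fall back to plain text.
    static String handlerFor(String filename) {
        int dot = filename.lastIndexOf('.');
        String ext = dot < 0 ? ""
                : filename.substring(dot + 1).toLowerCase(Locale.ENGLISH);
        return HANDLERS.containsKey(ext) ? HANDLERS.get(ext) : "PlainTextHandler";
    }

    public static void main(String[] args) {
        System.out.println(handlerFor("report.DOC")); // prints POIWordDocHandler
    }
}
```

Whatever the source format, each handler ends by producing a Lucene Document with the same field names, so one query searches everything.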
Hello,
Today I need a few explanations about the possibilities of Lucene in
order to implement it in an application.
First, I have to index different things, document files and database
tables, to make a search on all these elements. Is it possible? How can
I index both elements? For the sea
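It is possible: Lucene does not care where the text comes from, so a database row can become a Document exactly like a file does, and both go into the same index. A sketch under that assumption (the table, column names, and "type" field below are hypothetical, just to show the shape):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class TableIndexer {
    // Turn each row of a (hypothetical) articles table into a Document.
    public void indexTable(IndexWriter writer, Connection conn) throws Exception {
        Statement st = conn.createStatement();
        ResultSet rs = st.executeQuery("SELECT id, title, body FROM articles");
        while (rs.next()) {
            Document doc = new Document();
            // Primary key, stored untokenized so a hit maps back to the row.
            doc.add(new Field("id", rs.getString("id"),
                              Field.Store.YES, Field.Index.UN_TOKENIZED));
            // Searchable text; not stored, since the row stays in the DB.
            doc.add(new Field("contents", rs.getString("title") + " " + rs.getString("body"),
                              Field.Store.NO, Field.Index.TOKENIZED));
            // A "type" field lets a search distinguish DB hits from file hits.
            doc.add(new Field("type", "database",
                              Field.Store.YES, Field.Index.UN_TOKENIZED));
            writer.addDocument(doc);
        }
        rs.close();
        st.close();
    }
}
```

File handlers would add the same "contents" field (with type "file"), so one query covers both, and the stored "type" and "id"/"path" fields tell you where each hit came from.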