is a duplicate?
Best
Erick
On Mon, Jul 21, 2008 at 9:40 AM, Sebastin [EMAIL PROTECTED] wrote:
at the time of search, while querying the data
markrmiller wrote:
Sebastin wrote:
Hi All,
Is there any possibility to avoid duplicate records in lucene 2.3.1?
I don't believe that there is a very high performance way to do this.
You are basically going to have to query the index
Hi All,
Is there any possibility to avoid duplicate records in lucene 2.3.1?
--
View this message in context:
http://www.nabble.com/How-to-avoid-duplicate-records-in-lucene-tp18543588p18543588.html
Sent from the Lucene - Java Users mailing list archive at Nabble.com.
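[One common way to avoid duplicates, which (if I recall correctly) Lucene 2.1+ exposes as IndexWriter.updateDocument(Term, Document), is to give every record a unique key and delete any existing document with that key before adding. A plain-Java sketch of those delete-then-add semantics, with the map standing in for the index and "uid" a hypothetical key field:]

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Stand-in for an index with a unique "uid" key field: adding a record whose
// uid already exists replaces the old record instead of duplicating it, which
// mirrors IndexWriter.updateDocument(new Term("uid", uid), doc).
public class DedupIndex {
    private final Map<String, String> docsByUid = new LinkedHashMap<>();

    public void updateDocument(String uid, String doc) {
        docsByUid.put(uid, doc); // delete-then-add by key: never a duplicate
    }

    public int size() {
        return docsByUid.size();
    }

    public static void main(String[] args) {
        DedupIndex idx = new DedupIndex();
        idx.updateDocument("rec-1", "first version");
        idx.updateDocument("rec-1", "second version"); // replaces rec-1
        System.out.println(idx.size()); // prints 1
    }
}
```

The cost in real Lucene is a term lookup plus a delete per add, which is why there is no truly free way to do this at index time.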
Sebastin wrote:
Hi All,
I am facing this error while indexing text files. Can anyone guide me
on how to resolve this issue?
--
View this message in context:
http://www.nabble.com/java.io.Ioexception-cannot-overwrite-fdt-tp18079321p18079321.html
much data. You want to keep your IndexReaders open for
a while. Multiple requests/threads can share them.
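[Otis's advice above (open readers once, share them) can be sketched as a lazily initialized shared holder; the Object here is just a stand-in for an expensive IndexReader/IndexSearcher, and the class name is illustrative:]

```java
// Cache one expensive "reader" and hand the same instance to every
// request/thread, instead of opening a fresh one per search.
public class ReaderHolder {
    private static Object reader; // stand-in for a shared IndexReader

    public static synchronized Object get() {
        if (reader == null) {
            reader = new Object(); // the expensive open happens only once
        }
        return reader;
    }
}
```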
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Sebastin [EMAIL PROTECTED]
To: java-user@lucene.apache.org
Sent: Friday, June 20, 2008 2:04:12 AM
Subject: creating Array of IndexReaders
Hi All,
I need to create IndexReaders dynamically based on user input.
For example,
if the user needs to see the records from June 17 to June 20
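[One way to build that reader array dynamically is to name each daily index directory after its date and derive the directory list from the user's range, then open one IndexReader per directory. A sketch under the assumption of a "yyyyMMdd" directory naming scheme (the scheme and class name are illustrative):]

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;

// Turn a user-supplied date range into the per-day index directory names
// to open; each name would back one IndexReader in the array.
public class IndexDirs {
    static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyyMMdd");

    public static List<String> dirsFor(LocalDate from, LocalDate to) {
        List<String> dirs = new ArrayList<>();
        for (LocalDate d = from; !d.isAfter(to); d = d.plusDays(1)) {
            dirs.add(d.format(FMT));
        }
        return dirs;
    }
}
```

For June 17 to June 20 this yields four directory names, hence four readers to wrap in a MultiReader.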
Hi All,
I need to fetch records from approximately 225 GB of index stores in a web
page. The total time to fetch the records and display them to the user takes
10 minutes. Is it possible to reduce the time to milliseconds?
sample code snippet:
IndexReader[] readArray =
{ indexIR1,
Hi All,
Does Lucene support billions of records in a single 14 GB index store
for every search? I have 3 index stores of 14 GB each; I need to
search these index stores and retrieve the results. It throws an out-of-memory
problem while searching these index stores.
Hi All,
is there any possibility to create a compressed store for the
following type of string in a Lucene index store?
String str = II0264.D05|00022745|ABCDE|03/01/2008 00:23:12|00035|
9840836588| 129382152520| 04F4243B600408|04F4243B600408|
|11919898456123|354943011025810L| CPTBS2I|
Hi All,
I try to store a String variable with Field.Store.COMPRESS. During
search, is there any inbuilt method to uncompress these records, or should we
go for some other algorithm to retrieve these records?
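[If memory serves, Field.Store.COMPRESS in the 2.x line compresses the stored bytes with java.util.zip and decompresses them transparently when the document is retrieved, so no extra algorithm is needed at search time. A stdlib sketch of that round trip:]

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Deflate/inflate round trip, roughly what a compressed stored field does
// under the hood: compress on write, decompress on retrieval.
public class ZipRoundTrip {
    public static byte[] compress(byte[] input) {
        Deflater def = new Deflater();
        def.setInput(input);
        def.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        while (!def.finished()) {
            out.write(buf, 0, def.deflate(buf));
        }
        def.end();
        return out.toByteArray();
    }

    public static byte[] decompress(byte[] input) {
        try {
            Inflater inf = new Inflater();
            inf.setInput(input);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[256];
            while (!inf.finished()) {
                out.write(buf, 0, inf.inflate(buf));
            }
            inf.end();
            return out.toByteArray();
        } catch (DataFormatException e) {
            throw new RuntimeException(e);
        }
    }
}
```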
a folder in the following format:
/20080301-20080316/26588
I index and store the records in that folder, so while searching I compute the
modulo and search the records only in that folder.
Is it a good way of indexing?
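[The partitioning described above can be sketched as follows; the bucket count and the folder layout are illustrative assumptions, not the poster's actual values. A search for a given key then only has to open the one small index under its bucket:]

```java
// Pick the index folder for a record from (numeric key mod bucket count),
// inside a date-range period folder.
public class Buckets {
    static final long NUM_BUCKETS = 100000L; // illustrative

    public static long bucketFor(long key) {
        return key % NUM_BUCKETS;
    }

    public static String folderFor(String period, long key) {
        return "/" + period + "/" + bucketFor(key);
    }
}
```

The trade-off: point lookups by key get much cheaper, but any query that does not supply the key must fan out across all buckets.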
Hi All,
I am going to create a Lucene index store of size 300 GB per month. I
read the Lucene index performance tips in the wiki. Can anyone suggest all
the steps that need to be followed while dealing with big indexes? My index
store gets updated every second. I usually search 15 days of records
Hi All,
Is there any possibility to kill the IndexSearcher object after every
search?
--
View this message in context:
http://www.nabble.com/how-to-kill-IndexSearcher-object-after-every-search-tf4897436.html#a14026451
the memory it uses when constructing the date-range query, and
it will improve search performance as well.
Sebastin wrote:
Hi All,
I search 3 Lucene index stores of size 6 GB, 10 GB, and 10 GB
using the MultiReader class.
Here is the code snippet:
not possible to see
the updated records.
Could you guide me on how to resolve this memory problem?
testn wrote:
As I mentioned, IndexReader is the one that holds the memory. You should
explicitly close the underlying IndexReader to make sure that the reader
releases the memory.
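[The explicit-close advice above can be sketched as a swap pattern: open the fresh reader first, then close the old one so its memory can be reclaimed. CloseableReader is a stand-in for IndexReader, not Lucene API:]

```java
// Open-new-then-close-old swap; the old reader's memory (norms, caches)
// is only reclaimable after close() is called on it.
public class CloseableReader {
    private boolean closed = false;

    public void close() { closed = true; }       // IndexReader.close() analogue
    public boolean isClosed() { return closed; }

    public static CloseableReader swap(CloseableReader old) {
        CloseableReader fresh = new CloseableReader(); // open the new reader first
        if (old != null) {
            old.close(); // explicitly release the old reader
        }
        return fresh;
    }
}
```

In real code the close would sit in a finally block once no search is still using the old reader.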
of indexes you need to search on,
you just need to open only the latest 15 indexes at a time, right? You can
simply create a wrapper that returns a MultiReader, which you can cache for a
while, and close the oldest index once the date rolls.
Sebastin wrote:
Hi testn,
it gives a performance improvement while optimizing the index.
Now I separate the index store on a daily basis, i.e.
for every day it creates a new index store, e.g. Sep-08
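[The wrapper suggested above (keep only the latest 15 daily indexes open, close the oldest when the date rolls) can be sketched with a simple bounded deque; the strings stand in for per-day readers:]

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Rolling window of the latest N daily indexes; adding a new day past the
// limit evicts the oldest, which the caller would then close.
public class RollingIndexes {
    private final Deque<String> open = new ArrayDeque<>();
    private final int limit;

    public RollingIndexes(int limit) {
        this.limit = limit;
    }

    // Called when the date rolls; returns the index to close, or null.
    public String addDay(String dayIndexName) {
        open.addLast(dayIndexName);
        if (open.size() > limit) {
            return open.removeFirst();
        }
        return null;
    }

    public int openCount() {
        return open.size();
    }
}
```

A MultiReader rebuilt over the current window can then be cached until the next roll.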
separating indices into separate storage and using
ParallelReader
Sebastin wrote:
The problems in my application are as follows:
1. I am not able to see the updated records in my index
store because I instantiate the
IndexReader and IndexSearcher classes once
java.io.IoException:File Not Found- Segments is the error message
testn wrote:
What is the error message? Probably Mike, Erick or Yonik can help you
better on this, since I'm not an expert in the index area.
Sebastin wrote:
Hi testn,
1. I optimize the large indexes of size 10 GB
Hi testn,
I wrote the case wrongly; actually the error is
java.io.IOException: file not found - segments
testn wrote:
Should the file be segments_8 and segments.gen? Why is it Segment?
The case is different.
Sebastin wrote:
java.io.IoException:File Not Found- Segments
the index?
Mike
I won't close the IndexReader after the first search. When I instantiate the
IndexSearcher object, will it display the updated records in those directories?
Sebastin wrote:
I set IndexSearcher as the application object after the first search.
Here is my code.
The problems in my application are as follows:
1. I am not able to see the updated records in my index
store because I instantiate the
IndexReader and IndexSearcher classes once, that is, in the first search. Further
searches use the same IndexReaders (5 directories) and IndexSearcher with
, fields and
what is the average document length?
Sebastin wrote:
Hi testn,
I index the dateSc in 070904 (2007/09/04) format. I am not using
any timestamp here. How can we effectively reopen the IndexSearcher every
hour and save memory, because my index gets updated every minute
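[Reopening hourly instead of per update can be sketched as an age check on the cached searcher; the Object is a stand-in for IndexSearcher and the class name is illustrative:]

```java
// Keep the cached searcher until it is older than maxAgeMillis, then
// replace it; updates arriving in between stay invisible until the reopen.
public class TimedReopen {
    private long openedAtMillis;
    private Object searcher; // stand-in for IndexSearcher

    public Object get(long nowMillis, long maxAgeMillis) {
        if (searcher == null || nowMillis - openedAtMillis > maxAgeMillis) {
            searcher = new Object(); // reopen over a fresh reader
            openedAtMillis = nowMillis;
        }
        return searcher;
    }
}
```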
records are there?
3. Could you also check the number of terms in your indices? If there are too
many terms, you could consider chopping values into smaller pieces; for
example, store the area code and phone number separately if the numbers are
pretty distributed.
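[The term-count argument above can be made concrete: indexing full phone numbers yields one unique term per number, while splitting into area code and subscriber number yields far fewer unique terms once prefixes and suffixes repeat. A sketch with illustrative split positions and field prefixes:]

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Compare unique-term counts for full numbers vs. split area code + number.
public class TermSplit {
    public static int uniqueFullTerms(String[] numbers) {
        return new HashSet<>(Arrays.asList(numbers)).size();
    }

    public static int uniqueSplitTerms(String[] numbers) {
        Set<String> terms = new HashSet<>();
        for (String n : numbers) {
            terms.add("area:" + n.substring(0, 4)); // e.g. "9840"
            terms.add("num:" + n.substring(4));     // remaining digits
        }
        return terms.size();
    }
}
```

With a areas and b suffixes fully crossed, full terms grow as a*b while split terms grow as a+b.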
Sebastin wrote:
Hi testn
I set IndexSearcher as the application Object after the first search.
here is my code:
if (searcherOne.isOpen()) {
Directory compressDir2 =
Hi All,
I search 3 Lucene index stores of size 6 GB, 10 GB, and 10 GB
using the MultiReader class.
Here is the code snippet:
Directory indexDir2 =
FSDirectory.getDirectory(indexSourceDir02,false);
Hi Erick,
help me to make this search time-efficient.
Erick Erickson wrote:
This topic has been discussed a number of times, I suggest you
search the mail archives as that will get you very complete answers
more quickly. See
http://www.gossamer-threads.com/lists/lucene/java-user/
http://wiki.apache.org/jakarta-lucene/LargeScaleDateRangeProcessing
Hi all,
Is there any possibility to display index values?
(i.e.) when we want to search a field we use:
String test = "9840836588";
Document doc = new Document();
doc.add(new Field("test", test, Field.Store.NO, Field.Index.NO_NORMS));
as well as your
stored records field, as the unique terms in the contents field will
effectively be stored.
Also don't forget to convert the terms when you search too, otherwise
you won't find anything ;)
Steve.
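[Steve's point about converting the terms at search time can be sketched in plain Java: whatever transformation the terms get at index time (here, lowercasing) must also be applied to the query term, or lookups miss. The in-memory set is a stand-in for the term dictionary:]

```java
import java.util.HashSet;
import java.util.Set;

// Apply the same normalization at index time and query time.
public class TermNormalizer {
    public static String normalize(String term) {
        return term.toLowerCase();
    }

    public static void main(String[] args) {
        Set<String> index = new HashSet<>();
        index.add(normalize("CPTBS2I")); // indexed form: "cptbs2i"

        System.out.println(index.contains("CPTBS2I"));            // false: raw query misses
        System.out.println(index.contains(normalize("CPTBS2I"))); // true: normalized query hits
    }
}
```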
Sebastin wrote:
When I use the StandardAnalyzer, the storage size
Document doc = new Document();
doc.add(new Field("contents", contents, Field.Store.NO, Field.Index.TOKENIZED));
doc.add(new Field("records", records, Field.Store.YES, Field.Index.NO));
indexWriter.addDocument(doc);
please help me to achieve that
Sebastin wrote:
Hi Steve,
thanks a lot for your reply. It now compresses up to 50
Hi Erick, do you have any idea on this?
jm-27 wrote:
Hi,
I want to make my index as small as possible. I noticed
field.setOmitNorms(true); I read on the list that the diff is 1 byte per
field per doc, not huge, but hey... is the only effect the score being
different? I hardly mind about the
Hi All,
I index my document using SimpleAnalyzer(). When I search the indexed
field in the searcher class it doesn't give me the results. Help me to sort
out this issue.
My code:
String test = "9840836598";
String test1 = "bch01";
String testRecords = test + " " + test1;
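[The likely cause: SimpleAnalyzer tokenizes on letters only (it is built on LowerCaseTokenizer), so a digit run like "9840836598" produces no token at all and "bch01" is reduced to "bch"; searching for the original values then finds nothing. A stdlib sketch of that letter-only tokenization:]

```java
import java.util.ArrayList;
import java.util.List;

// Letter-only, lowercasing tokenizer: digits and punctuation act as
// separators and are discarded, as in SimpleAnalyzer.
public class LetterTokens {
    public static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        for (char c : text.toCharArray()) {
            if (Character.isLetter(c)) {
                cur.append(Character.toLowerCase(c));
            } else if (cur.length() > 0) {
                tokens.add(cur.toString());
                cur.setLength(0);
            }
        }
        if (cur.length() > 0) tokens.add(cur.toString());
        return tokens;
    }
}
```

An analyzer that keeps digits (e.g. WhitespaceAnalyzer, or an untokenized keyword field) avoids this for numeric identifiers.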
could you briefly tell me how to write two analyzers for the two fields?
Paulo Silveira-3 wrote:
On 5/25/07, karl wettin [EMAIL PROTECTED] wrote:
PerFieldAnalyzerWrapper
that was fast! thanks!
http://lucene.zones.apache.org:8080/hudson/job/Lucene-Nightly/javadoc/
Hi, can anyone give me an idea to reduce the index size? Right now I am
getting 42% compression in my index store; I want to reduce it by up to 70%. I
use StandardAnalyzer to write the document. When I use SimpleAnalyzer it
reduces by up to 58%, but I couldn't search the document. Please help me to
achieve this.
anyone says would be a guess.
But at a guess, you may be having trouble with capitalization
in your query.
Also, query.toString() will show you what the actual Lucene
query looks like.
Best
Erick
On 6/19/07, Sebastin [EMAIL PROTECTED] wrote:
Hi All,
i index my document
help me to achieve the minimum size
Erick Erickson wrote:
Show us the code you use to index. Are you storing the fields?
omitting norms? Throwing out stop words?
Best
Erick
On 6/19/07, Sebastin [EMAIL PROTECTED] wrote:
Hi Does anyone give me an idea to reduce the Index size
When I use the StandardAnalyzer the storage size increases. How can I
minimize the index store?
Sebastin wrote:
String outgoingNumber = "9198408365809";
String incomingNumber = "9840861114";
String datesc = "070601";
String imsiNumber = "444021365987";
String callType = "1";
// Search Fields
Hi Hossman,
Thanks for your reply. When I index the search fields in my
Lucene document, they occupy 20% of the original size. How can I
reduce the index size?
hossman_lucene wrote:
: I need to store all the attributes of the document i index as part of
the
: index.