Hi,
I solved it with Erick's solution.
I added an index version marker to every document, and I delete the old
documents before indexing the new ones.
If documents are only updated or added, you can use the 'automatically
replaced' strategy with the document's unique key.
Thanks.
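The 'automatically replaced' strategy corresponds to Lucene's IndexWriter.updateDocument(Term, Document), which first deletes any existing document matching the unique-key term and then adds the new one. A minimal stdlib sketch of that replace-by-key behavior (the map stands in for the index; all names here are illustrative, not Lucene API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ReplaceByKey {
    // Stand-in for an index keyed by a unique "id" field.
    static final Map<String, String> index = new LinkedHashMap<>();

    /** Delete-before-add, mirroring IndexWriter.updateDocument semantics. */
    static void updateDocument(String uniqueKey, String doc) {
        index.remove(uniqueKey); // drop any old version with the same key
        index.put(uniqueKey, doc);
    }

    public static void main(String[] args) {
        updateDocument("42", "first version");
        updateDocument("42", "second version"); // replaces, no duplicate
        System.out.println(index.size());       // 1
        System.out.println(index.get("42"));    // second version
    }
}
```

The point is that with a unique key you never accumulate stale duplicates, so no separate delete pass is needed for plain updates.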
2013/12/2 revolutionizeit
> Did you ge
w why you try to index and search a binary field except for
> range searching.
>
> On Mon, Oct 7, 2013 at 11:23 PM, 장용석 wrote:
>
> > Dear,
> >
> > I have indexing integer field like this
> >
> > -
> > Document doc = new Document();
> >
be careful, the terms index contains more terms with lower precisions
> (bits stripped off), unless you use infinite precisionStep!
>
> Uwe
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
> >
Dear,
I am indexing an integer field like this:
-
Document doc = new Document();
FieldType fieldType = new FieldType();
fieldType.setIndexed(true);
fieldType.setStored(true);
fieldType.setTokenized(false);
fieldType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS);
fieldType.setStoreT
t and go. That would delete everything indexed before midnight.
> last night (NOW/DAY rounds down).
>
> Note, most of this would be already replaced if your new documents
> had the same value as the old ones, then the old ones
> would be automatically replaced.
>
> Best
>
Hi.
I want to index all documents once a day and, after indexing, delete the old
index files from the previous day.
My plan is to index all documents into a new directory, replace the old
IndexSearcher and IndexWriter with new ones, and then delete the old index
directory.
Is there a better indexing strategy?
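The rebuild-then-swap plan described above can be sketched with plain java.nio.file operations: build the new index in a fresh directory, rename it into place, and delete the retired one. This is only a sketch of the filesystem side (directory names are illustrative); in Lucene you would still need to reopen the IndexSearcher on the promoted directory afterwards.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.Comparator;
import java.util.stream.Stream;

public class IndexSwap {
    /** Promote a freshly built index directory and delete the old one. */
    static Path swapIn(Path liveDir, Path freshDir) throws IOException {
        Path oldDir = liveDir.resolveSibling(liveDir.getFileName() + ".old");
        if (Files.exists(liveDir)) {
            Files.move(liveDir, oldDir);   // retire the current index
        }
        Files.move(freshDir, liveDir);     // promote the new index
        if (Files.exists(oldDir)) {
            try (Stream<Path> walk = Files.walk(oldDir)) {
                walk.sorted(Comparator.reverseOrder())
                    .forEach(p -> p.toFile().delete()); // remove old files
            }
        }
        return liveDir;
    }

    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("idx");
        Path fresh = Files.createDirectory(base.resolve("fresh"));
        Files.writeString(fresh.resolve("seg"), "new-index");
        Path live = swapIn(base.resolve("live"), fresh);
        System.out.println(Files.readString(live.resolve("seg"))); // new-index
    }
}
```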
w it works and luckily don't need to. You can look at
> the source if you need to know.
>
>
> --
> Ian.
>
>
> On Tue, Jan 15, 2013 at 3:16 PM, 장용석 wrote:
> > Hi.
> >
> > What is the best way get highest frequency term from index?
> >
> > I
Hi.
What is the best way to get the highest-frequency term from an index?
I am thinking of using a PriorityQueue and cutting off the lower-frequency
terms, but that requires a loop over every term's count.
Is there a better way to get the highest-frequency term?
Thanks!
--
DEV용식
http://devyongsik.tistory.com
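The PriorityQueue idea from the question above can be sketched in plain Java: a size-bounded min-heap keeps only the k most frequent terms, so one pass over all term counts is still needed, but memory stays at O(k). The term/frequency map here is illustrative input, not Lucene API:

```java
import java.util.*;

public class TopTerms {
    /** Keep the k highest-frequency terms using a size-bounded min-heap. */
    static List<String> topK(Map<String, Integer> termFreqs, int k) {
        PriorityQueue<Map.Entry<String, Integer>> heap =
            new PriorityQueue<>(Map.Entry.comparingByValue()); // min on freq
        for (Map.Entry<String, Integer> e : termFreqs.entrySet()) {
            heap.offer(e);
            if (heap.size() > k) heap.poll(); // evict current lowest freq
        }
        List<String> result = new ArrayList<>();
        while (!heap.isEmpty()) result.add(heap.poll().getKey());
        Collections.reverse(result);          // highest frequency first
        return result;
    }

    public static void main(String[] args) {
        Map<String, Integer> freqs = Map.of("learning", 3, "perl", 2, "java", 1);
        System.out.println(topK(freqs, 2)); // [learning, perl]
    }
}
```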
s.com
>
> On Fri, Jan 4, 2013 at 7:59 PM, 장용석 wrote:
> > Hello Mike.
> > Thanks for your reply.
> >
> > It's not an important issue.
> > I'll wait for the next release that includes this patch.
> >
> > Thanks.
> >
> &g
statistic
> per-document, currently. We could in theory fix this ... maybe open
> an issue / make a patch if it's important?
>
> -1 return value is actually "valid": it means this statistic is not
> available.
>
> Mike McCandless
>
> http://blog.mikemcc
Hello.
I have some questions.
Document 1 : "learning perl learning java learning ruby"
Document 2 : "perl test"
I have indexed these documents with storeTermVectors(true) and
IndexOptions.DOCS_AND_FREQS.
Field name is "f".
And I executed this code.
IndexReader ir = IndexReader.open(dir);
Terms
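The expected term frequencies for the two sample documents can be checked without Lucene, assuming simple whitespace tokenization (this sketch only mimics what DOCS_AND_FREQS records per field):

```java
import java.util.*;

public class TermFreqs {
    /** Whitespace-tokenize and count term frequencies within one document. */
    static Map<String, Integer> freqs(String text) {
        Map<String, Integer> tf = new HashMap<>();
        for (String term : text.split("\\s+")) {
            tf.merge(term, 1, Integer::sum); // increment this term's count
        }
        return tf;
    }

    public static void main(String[] args) {
        Map<String, Integer> doc1 =
            freqs("learning perl learning java learning ruby");
        System.out.println(doc1.get("learning"));           // 3
        System.out.println(freqs("perl test").get("perl")); // 1
    }
}
```

So for Document 1 the term "learning" has frequency 3, while "perl" appears once in each document.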
Thanks, it's good for me.
Thank you. :-)
- Jang
09. 1. 6, Koji Sekiguchi wrote:
>
> That's correct!
>
> Koji
>
>
> 장용석 wrote:
> > Thanks for your advice.
> >
> > If I want to sort some field (for example name is "TITLE") and It must be
>
be tokenized, and
> does not need to be stored (unless you happen to want it back with the
> rest of your document data). In other words:
>
> document.add (new Field ("byNumber", Integer.toString(x),
> Field.Store.NO, Field.Index.NOT_ANALYZED));
>
> Koji
>
>
Hi.
I want to test sorting at search time, so I created a simple index like this.
String[] samples = {"duck dog","first dog","grammar dog","come dog","basic
dog","intro dog","lipton dog","search dog","servlet dog","jan dog"};
Directory dir = FSDirectory.getDirectory(path);
IndexWriter writer = new
Thanks for your help.
It was really helpful for me.
Thanks very much. :-)
-Jang.
--
DEV용식
http://devyongsik.tistory.com
Hi.. :)
I have a simple question..
I have two sample code.
1) TopDocCollector collector = new TopDocCollector(5 * hitsPerPage);
QueryParser parser = new QueryParser(fieldName, analyzer);
query = parser.parse("keyword");
searcher.search(query, collector);
ScoreDoc[] hits = collec
hi :)
First, I'm sorry for my bad English.
I have a question.
In Lucene 2.4.0, the Token class constructor public Token(String text, int
start, int end, int flags) is deprecated.
I want to know why, and which constructor is the substitution for this
deprecated one.
May I use it like this?
T
t; > i < hits.length
> >
> > Otherwise, looks good.
> >
> >>
> >>Document doc = searcher.doc(hits[i].doc);
> >> }
> >>
> >> Hope this helps.
> >>
> >> Todd
> >>
> >>
> >> 2008/11/5
hi.
I have a question :)
In Lucene 2.3.x I used the Sort class like this:
Sort sort = new Sort("FIELDNAME", true);
Hits hits = searcher.search(query, sort);
but in Lucene 2.4.0 the search(Query, Sort) method is deprecated. I searched
the API and found this method:
search(query, filter, n, sort)
C
re only working with a few thousand
> documents. Instead of delete/add you could use
> IndexWriter.updateDocument().
>
>
> --
> Ian.
>
>
> 2008/9/9 장용석 <[EMAIL PROTECTED]>:
> > Hi~.
> > I have a question about Lucene incremental indexing.
> >
>
Hi~.
I have a question about Lucene incremental indexing.
I want to incrementally index my product data.
For example, I have four product rows with
"GOOD_ID","NAME","PRICE","CREATEDATE","UPDATEDATE" columns.
1, ipod, 3, 2008-11-10:11:00, 2008-11-10:11:00
2, java book, 2, 2008-11-10:11:00
>if(!cvSearches.containsKey(directory))
>{
>cvSearches.put(directory, new IndexSearcher(directory));
>}
>}
>}
>
>return cvSearches.get(directory);
>}
>
> }
>
>
>
r is also closed. It looks like the only way I can reopen the
> IndexSearcher is to reopen the IndexReader and create a new IndexSearcher.
> This leads me back to my original problem.
>
> Is there a better way to handle this rather than keeping the IndexSeacher
> open for the life of t
I think that every time your doQuery method runs, new Directory and Analyzer
instances are created.
If the index files are very large, creating a new Directory instance each
time puts pressure on the JVM and takes a long time.
I suggest modifying the code so that the Analyzer class and D
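The caching idea behind the cvSearches map quoted earlier can be made thread-safe with ConcurrentHashMap.computeIfAbsent, so the expensive open happens at most once per directory even under concurrent queries. This is a stdlib sketch of the pattern only; the string "searcher" stands in for a real IndexSearcher, and all names are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SearcherCache {
    static final AtomicInteger opens = new AtomicInteger();
    // Stand-in for Map<Directory, IndexSearcher>.
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    /** Open at most one searcher per directory, even with concurrent calls. */
    static String searcherFor(String directory) {
        return cache.computeIfAbsent(directory, dir -> {
            opens.incrementAndGet();   // the expensive open runs only once
            return "searcher:" + dir;
        });
    }

    public static void main(String[] args) {
        searcherFor("d:/lucene_data/index");
        searcherFor("d:/lucene_data/index"); // cache hit, no second open
        System.out.println(cache.size());    // 1
    }
}
```

Compared with the if/containsKey/put version in the quoted code, computeIfAbsent avoids the race where two threads both miss and both open a searcher.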
right?
> but I'm using nutch this time.
> thank u all the same:)
>
> 2008/8/14 장용석 <[EMAIL PROTECTED]>
>
> > Hi. I was very happy that you love the Korean language a lot :)
> > So do you want search for special characters?
> >
> > If you want include s
08/8/14, Mr Shore <[EMAIL PROTECTED]>:
>
> Can Nutch or Lucene support searching for special characters like '.'?
> When I search for ".net", many results come back for "net";
> I want to exclude them.
> PS: I love the Korean language a lot.
>
> 2008/8/13 장용석 <[EMAIL PROTECTED]>
ntax.html#Range%20Searches
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
> > From: 장용석 <[EMAIL PROTECTED]>
> > To: java-user@lucene.apache.org
> > Sent: Tuesday, August 12, 2008 6:01:00 AM
>
hi.
I am looking for a Lucene API or function for a query like "FIELD > 1000".
For example, a user wants to search for products whose price is greater than
the user's input.
If the user's input is 1, the results should be the products in the index
matching "PRICE > 1".
Is there any way to search like that?
thanks.
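On Lucene versions of that era, the usual trick for numeric range queries was to index numbers zero-padded to a fixed width, so that lexicographic term order agrees with numeric order and a plain range query works. A stdlib sketch of why the padding matters (the linear scan stands in for a term-range scan; it also assumes non-negative values):

```java
import java.util.*;

public class PaddedRange {
    /** Zero-pad so that string (term) order matches numeric order. */
    static String pad(int value) {
        return String.format("%010d", value);
    }

    /** "PRICE > min" as a lexicographic comparison over padded terms. */
    static List<Integer> greaterThan(int min, List<Integer> prices) {
        String lower = pad(min);
        List<Integer> hits = new ArrayList<>();
        for (int p : prices) {
            if (pad(p).compareTo(lower) > 0) hits.add(p); // term-order test
        }
        return hits;
    }

    public static void main(String[] args) {
        List<Integer> prices = List.of(1, 5, 20, 1000, 3000);
        System.out.println(greaterThan(1000, prices)); // [3000]
    }
}
```

Without padding, "20" sorts after "1000" as a string; with padding, "0000000020" < "0000001000" as expected. Later Lucene releases added NumericField and NumericRangeQuery, which handle this natively.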
J
r you in the background. Just
> open the FSDirectory and do your indexing. If I had to guess, though, from
> a quick glance, I think you should do the addIndexes after the ramWriter
> close, but that's just a guess, as I haven't tried it.
>
> -Grant
>
> On Aug 7, 20
hi,
I am using a RAMDirectory and an FSDirectory for indexing documents.
I use the RAMDirectory as a buffer.
For example,
---
String indexDir = "d:/lucene_data/merge_test_index";
Analyzer analyzer = new StopAnalyzer();
RAMDirectory ramDir= new RAMDirectory();
IndexWriter