I'm looking for a tool to serialize and deserialize Lucene queries. We have
tried using Query.toString(), but some queries return strings that can't be
parsed by a QueryParser afterwards. An alternative would be to use the
standard Java serialization mechanism. The reason I'm trying to
Hi,
That's a very good point, I will test with a more realistic text
(like a novel).
Thanks very much for your help, Lisheng
-Original Message-
From: Michael McCandless [mailto:luc...@mikemccandless.com]
Sent: Saturday, July 27, 2013 3:42 AM
To: Lucene Users
Subject: Re: lucene 4.3 seems to
Hi Denis,
Indeed, Query.toString() only tries to give a human-understandable
representation of what the query searches for; it doesn't guarantee
that the output can be parsed again, or that reparsing would give back
the same query. We don't provide tools to serialize queries, but since
query parsing is usually lightweight
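Building on that point, since reparsing is cheap, one practical approach is to persist the raw query string together with the parser settings it needs, and reparse on load. A minimal stdlib sketch using plain Java serialization (the SavedQuery holder class is hypothetical, not a Lucene API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SavedQueryDemo {

    // Hypothetical holder: the raw query string plus the parser settings it
    // was written for -- enough to rebuild the Query by reparsing at load time.
    static class SavedQuery implements Serializable {
        private static final long serialVersionUID = 1L;
        final String defaultField;
        final String queryString;
        SavedQuery(String defaultField, String queryString) {
            this.defaultField = defaultField;
            this.queryString = queryString;
        }
    }

    static byte[] serialize(SavedQuery q) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(q);
        }
        return bos.toByteArray();
    }

    static SavedQuery deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (SavedQuery) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        SavedQuery q = new SavedQuery("text", "date:2013\\/07\\/2* text:lucene");
        SavedQuery back = deserialize(serialize(q));
        System.out.println(back.queryString.equals(q.queryString)); // prints "true"
    }
}
```

The caveat, as noted above, is that this only round-trips queries that the parser can produce; programmatically built Query trees with no parser syntax would still need a dedicated ser/deser module.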
Hi, all
I'm stuck on one simple question; as the title says, I think it should have
a simple solution.
Say I use StandardAnalyzer and have two fields in all documents,
StringField(date...) is not tokenized, format is 2013/07/28
TextField(text ...) is tokenized.
QueryParser parse
Sorry guys, I think I made a mistake.
The parse string I used was date:"2013/07/2*" text:...
The quotes make QueryParser ignore the trailing '*' and analyze the string.
I changed it to date:2013\/07\/2* text:... and it works fine.
Sorry for the disturbance :-)
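For reference, the fix above can be captured in a tiny helper (the class and method names are just illustrative): in the Lucene 4.x classic QueryParser, '/' is a special character (it delimits regex queries), so literal slashes must be backslash-escaped, whereas quoting the whole term would disable the trailing '*' wildcard.

```java
public class DateTermEscaper {
    // Backslash-escape literal '/' characters, which the Lucene 4.x classic
    // QueryParser would otherwise treat as regex-query delimiters.
    static String escapeSlashes(String term) {
        return term.replace("/", "\\/");
    }

    public static void main(String[] args) {
        // Escaping keeps the trailing '*' working as a wildcard; quoting would not.
        System.out.println("date:" + escapeSlashes("2013/07/2") + "*");
        // prints date:2013\/07\/2*
    }
}
```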
2013/7/28 Wenbo Zhao zha...@gmail.com
Hi, all
Yeah, it's a shame such a ser/deser feature isn't available in Lucene.
My idea is to have a separate module that the Query classes can delegate to
for serialization and deserialization, handling recursion for nested query
objects, and then have modules for XML, JSON, and a pseudo-Java
PerFieldAnalyzerWrapper
http://lucene.apache.org/core/4_4_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/PerFieldAnalyzerWrapper.html
This analyzer is used to facilitate scenarios where different fields
require different analysis techniques.
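For the date-vs-text setup described earlier in the thread, a minimal sketch of wiring it up (assuming Lucene 4.4 with lucene-core and lucene-analyzers-common on the classpath; the PerFieldSetup class name is just illustrative):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.util.Version;

public class PerFieldSetup {
    // Keep the "date" field as a single untokenized term while every other
    // field goes through StandardAnalyzer.
    public static Analyzer buildAnalyzer() {
        Map<String, Analyzer> perField = new HashMap<String, Analyzer>();
        perField.put("date", new KeywordAnalyzer());
        return new PerFieldAnalyzerWrapper(
                new StandardAnalyzer(Version.LUCENE_44), perField);
    }
}
```

Passing the same wrapper both to IndexWriterConfig and to QueryParser keeps indexing and query analysis consistent.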
-- Jack Krupansky
Yes, with the lookup API. It returns the token with <b> and </b> appended;
that's why it has to interact with the API. But how do I iterate over my real
index?
In Scala:
def infixSuggest() {
  val sourceindex = new File("/tmp/lucene/1374960475771")
  val reader =
You're calling .build once per suggestion, which is not right.
Instead you should call it once, overall, and pass it an iterator that
iterates over all the suggestions you pull from the index.
E.g. fill in TermFreqPayload[] up front by walking through your entire
index, then create
A full JSON query ser/deser would be an especially nice addition to Solr,
allowing direct access to all Lucene Query features even if they haven't been
integrated into the higher level query parsers.
There is nothing we could do, so we wrote one, in fact :) I'll try to elaborate
with the
Bingo!! Your solution worked for me.
Thanks a ton. I went through the query parser so many times and never
knew it could serve the purpose so easily.
I never figured out its true significance, as I thought I could always create
a normal PhraseQuery with PhraseQuery pq = new PhraseQuery() and