Interesting! If that supports straight Lucene syntax, then this is golden.
Our system must support full Lucene syntax along with "fuzzy" searches,
which is why I've been using query_string.
Thanks!
On Thursday, August 21, 2014 10:55:36 AM UTC-7, Ivan Brusic wrote:
One more thing! The match query does not go through the query parser phase.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-match-query.html#_comparison_to_query_string_field
curl -XPOST "http://localhost:9200/example/example/_search?pretty=true" -d '
{
  "query": {
    "match": {
      "name": "exampleof bug"
    }
  }
}
'
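As a toy illustration (not Elasticsearch code, and the function names here are made up for the sketch), this is the practical difference being described: query_string runs the text through the Lucene query parser, so `:` and `"` are treated as syntax, while match only sends the raw text through the analyzer.

```python
# Toy model of the two query types. Hypothetical helpers, not ES internals.
def naive_query_string_parse(text):
    """Sketch of the parser phase: field:"phrase" is interpreted as syntax."""
    field, _, rest = text.partition(":")
    return {"field": field, "phrase": rest.strip('"')}

def naive_match(field, text):
    """Sketch of match: the raw text goes straight to the analyzer."""
    return {"field": field, "analyzed_terms": text.lower().split()}

print(naive_query_string_parse('name:"exampleof bug"'))
# the ':' and the double quotes were consumed as query syntax
print(naive_match("name", "exampleof bug"))
# match never saw any syntax; the whole string was analyzed
```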
In the ES documentation it talks about escape characters, and space is one
of them. It seems like if you escaped the space in the query with "\ ", the
parser would ignore it during parsing.
Thanks for your help.
On Thursday, August 21, 2014 10:42:32 AM UTC-7, Ivan Brusic wrote:
In general, if you are using the keyword tokenizer or non analyzed fields,
then query string queries should probably not be used. Phrase queries and
the keyword tokenizer also do not mix well.
Your OR queries succeed because "bug" is a token in your index.
--
Ivan
On Thu, Aug 21, 2014 at 10:26
Any idea why single quotes work?
This works, but it doesn't match the Lucene query syntax.
curl -XPOST "$url/$defaultIndex/example/_search?pretty=true" -d '
{
"query": {
"query_string": {
"query": "name:''exampleof bug''"
}
}
}
'
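One likely reason the single quotes "work": in a POSIX shell, `''` inside an already single-quoted `-d` argument just closes and reopens the quote, so the inner single quotes never reach curl at all. The query Elasticsearch actually receives is `name:exampleof bug`, i.e. `name:exampleof` plus a bare `bug` against the default field. Python's `shlex` follows the same quoting rules and can show what the shell sends (a sketch; the URL is just a placeholder):

```python
import shlex

# The -d argument as typed in the thread, with '' around the phrase.
cmd = (
    "curl -XPOST http://localhost:9200/example/example/_search -d "
    "'{ \"query\": { \"query_string\": { \"query\": \"name:''exampleof bug''\" } } }'"
)
body = shlex.split(cmd)[-1]  # the request body the shell actually passes
print(body)
# the '' pairs have vanished: the query string is name:exampleof bug,
# with no quoting, so it is not a phrase query at all
```

This lines up with the earlier observation that the OR-style queries succeed because "bug" is a token in the index.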
On Thursday, August 21, 2014 10:09:29 AM UTC-7, Ivan Brusic wrote:
Well, crap. By creating tokens that match, it eliminates the exact match I'm
trying for, correct?
If I indexed two documents with each of the strings below... (assuming the
tokens are generated as you stated above)
exampleof bug
exampleof sample bug
Then ran a query:
name:"exampleof bug"
Would it return both documents?
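A toy sketch of how a phrase query answers this, under the assumption that each document was analyzed into the whitespace tokens listed above (this is an illustration, not the real Lucene implementation):

```python
# Hypothetical phrase-match check: the phrase tokens must appear
# adjacent and in order within the document's token stream.
def phrase_matches(doc_tokens, phrase_tokens):
    n = len(phrase_tokens)
    return any(doc_tokens[i:i + n] == phrase_tokens
               for i in range(len(doc_tokens) - n + 1))

doc1 = ["exampleof", "bug"]            # "exampleof bug"
doc2 = ["exampleof", "sample", "bug"]  # "exampleof sample bug"
phrase = ["exampleof", "bug"]

print(phrase_matches(doc1, phrase))  # True: the tokens are adjacent
print(phrase_matches(doc2, phrase))  # False: "sample" sits between them
```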
Here is the Lucene issue: https://issues.apache.org/jira/browse/LUCENE-2605
--
Ivan
On Thu, Aug 21, 2014 at 10:09 AM, Ivan Brusic wrote:
The query string query is a phrase query "\"exampleof bug\""
The term query is looking for a single token "exampleof bug"
The query parser will not use your tokenizer to parse the phrase. It will
tokenize based on whitespace and then apply the filters to each term. Your
index does not contain the individual tokens "exampleof" and "bug", only the
single token.
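A minimal sketch of that mismatch, assuming the index holds the single keyword-style token while the query parser pre-splits the phrase on whitespace (toy code, not Lucene internals):

```python
# Index side: a keyword-style analyzer emits the whole value as one token.
index_tokens = {"exampleof bug"}

# Query side: the parser splits the quoted phrase on whitespace first,
# then applies filters to each term separately.
phrase_terms = '"exampleof bug"'.strip('"').split()
print(phrase_terms)  # ['exampleof', 'bug']

# Neither per-term token exists in the index, so the phrase can't match.
print(all(t in index_tokens for t in phrase_terms))  # False
```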
But the query is this...
name:"exampleof bug"
This should find an exact match in the field name. That exact match token
exists.
The syntax for Lucene under the "Fields" section shows a double quote is the
correct character for this:
http://lucene.apache.org/core/2_9_4/queryparsersyntax.html
I suspect the issue is the way the query parser works. The query phrase
"exampleof bug" will be parsed into a query for the tokens "exampleof" and
"bug" that are adjacent to each other. The issue is that you do not have
two such tokens; instead you have a single token with the value
"exampleof bug", which the phrase query will never match.
Also meant to include this in the script.
echo "query_string query using single quote, which does not match the Lucene
query documentation"
curl -XPOST "$url/$defaultIndex/example/_search?pretty=true" -d '
{
"query": {
"query_string": {
"query": "name:''exampleof bug''"
}
}
}
'
I have attached a short bash script to recreate the situation. I have a
fairly simple custom analyzer that I want to break on camel case, with
lowercase applied last. Using the _analyze endpoint I can see the token I am
searching for is generated by the analyzer; however, searching for it with
query_string does not find it.