Hunspell inflection problem

2014-05-26 Thread Tomasz Romanczuk
Hi,
   we use the percolate feature with Hunspell dictionaries and have a problem 
with the inflection of some words (e.g. *"test"*, which is one of the percolated 
queries). When we send documents, this query is matched against some completely 
different words (e.g. *der*, *den*, which are Danish words). Is there any way to 
add a negative-rule dictionary, or to modify the existing Hunspell dictionary?
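
Before changing dictionaries, it can help to confirm that the Hunspell filter 
really stems *test* and *der* to the same token. A minimal sketch using the 
`_analyze` API, assuming a hypothetical index `my_index` with the Danish 
analyzer registered as `my_danish_analyzer` (both names are placeholders, not 
from the original post):

```
GET /my_index/_analyze?analyzer=my_danish_analyzer&text=der
GET /my_index/_analyze?analyzer=my_danish_analyzer&text=test
```

If the two requests return an overlapping token, the fix would be on the 
dictionary side: editing the da_DK .dic/.aff files in the Hunspell config 
directory, or protecting specific words from stemming before the hunspell 
filter runs (for example with a keyword-marker style filter, if your version 
provides one).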

thanks in advance for any help,
best regards,
Tomasz Romańczuk

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/d7d18b78-b4da-4ba7-b63b-960313c56c2f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Hunspell for the Danish language

2014-05-26 Thread Tomasz Romanczuk
Hi,
   I'm using Hunspell for Danish and am seeing a strange problem. I have 
percolated some queries. Now I'm sending documents to find the matching queries. 
The query *"test"* is matched by all documents that contain the word *"der"*. 
How can I avoid this? Is this a known bug?

regards,
Tomasz Romańczuk



Percolator sometimes doesn't refresh queries

2014-03-20 Thread Tomasz Romanczuk
I indexed 10 queries in the percolator index. Then 9 were deleted. 
Sometimes it looks like the index didn't refresh (I repeated the steps many 
times): the deleted queries are still matched and returned in the response. I 
tried to clear the cache and refresh the index, but sometimes it doesn't work. 
My code:

BulkRequestBuilder bulkRequest = client.prepareBulk();
while (some condition) {
    bulkRequest.add(client.prepareIndex("_percolator", INDEX_NAME, id).setSource(...));
}
while (some condition) {
    bulkRequest.add(client.prepareDelete("_percolator", INDEX_NAME, id));
}
BulkResponse response = bulkRequest.setRefresh(true).execute().actionGet();
client.admin().indices().prepareClearCache(INDEX_NAME).execute().actionGet();
client.admin().indices().prepareClearCache(PERCOLATOR).execute().actionGet();
client.admin().indices().prepareRefresh(INDEX_NAME).execute().actionGet();
client.admin().indices().prepareRefresh(PERCOLATOR).execute().actionGet();

How can I make sure that, after the bulk request (response.hasFailures() always 
returns false), the percolator index is refreshed?
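
One thing worth ruling out here: the bulk calls above write to the index 
literally named "_percolator" (the first argument of prepareIndex/prepareDelete), 
while the clear-cache and refresh calls target the INDEX_NAME and PERCOLATOR 
constants. If PERCOLATOR is not the literal string "_percolator", the queries 
index is never explicitly refreshed. A hedged sketch of a refresh aimed 
directly at the pre-1.0 percolator index:

```
POST /_percolator/_refresh
```

In the Java API this would be 
client.admin().indices().prepareRefresh("_percolator").execute().actionGet() 
instead of the constants.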



Re: How to delete queries from percolator?

2014-03-20 Thread Tomasz Romanczuk
Got it! It seems that clearing the cache is a workaround :)
client.admin().indices().prepareClearCache(INDEX_NAME).execute().actionGet();

On Thursday, 20 March 2014 at 11:17:02 UTC+1, Tomasz Romanczuk wrote:
>
> I have indexed 1 queries in the percolator. Next I want to update some of 
> them and delete 9000 queries. I use a bulk request, and the operation seems to 
> finish successfully (without failures). But afterwards the deleted queries are 
> still returned. Below is the code that refreshes the index:
>
> BulkRequestBuilder bulkRequest = client.prepareBulk();
> while (some condition) {
>     bulkRequest.add(client.prepareIndex("_percolator", INDEX_NAME, id).setSource(...));
> }
> while (some condition) {
>     bulkRequest.add(client.prepareDelete("_percolator", INDEX_NAME, id));
> }
> BulkResponse response = bulkRequest.setRefresh(true).execute().actionGet();
>
> response.hasFailures() returns *false*. Is this a bug in Elasticsearch, or 
> am I doing something wrong? Restarting the application helps (the index is 
> refreshed), but I want to do this online, without restarts.
>



How to delete queries from percolator?

2014-03-20 Thread Tomasz Romanczuk
I have indexed 1 queries in the percolator. Next I want to update some of 
them and delete 9000 queries. I use a bulk request, and the operation seems to 
finish successfully (without failures). But afterwards the deleted queries are 
still returned. Below is the code that refreshes the index:

BulkRequestBuilder bulkRequest = client.prepareBulk();
while (some condition) {
    bulkRequest.add(client.prepareIndex("_percolator", INDEX_NAME, id).setSource(...));
}
while (some condition) {
    bulkRequest.add(client.prepareDelete("_percolator", INDEX_NAME, id));
}
BulkResponse response = bulkRequest.setRefresh(true).execute().actionGet();

response.hasFailures() returns *false*. Is this a bug in Elasticsearch, or am I 
doing something wrong? Restarting the application helps (the index is 
refreshed), but I want to do this online, without restarts.



Re: Analyzer is closed - ERROR

2014-03-18 Thread Tomasz Romanczuk
I don't have any custom code. My analyzer uses only a tokenizer (*whitespace 
plus 3 special characters: ( ) -*) and Hunspell for the Danish language. 
Everything I do is in my previous post.

On Tuesday, 18 March 2014 at 13:29:04 UTC+1, Itamar Syn-Hershko wrote:
>
> Did you write the analyzer that gets run on the server, or are you simply 
> assembling an analysis chain from the client, without any custom coding on 
> the server side?
>
> --
>
> Itamar Syn-Hershko
> http://code972.com | @synhershko <https://twitter.com/synhershko>
> Freelance Developer & Consultant
> Author of RavenDB in Action <http://manning.com/synhershko/>
>
>
> On Tue, Mar 18, 2014 at 2:18 PM, Tomasz Romanczuk wrote:
>
>> It's quite simple class:
>> List filterNames = Lists.newArrayList();
>> builder.startObject(FILTER);
>> filterNames.add(FILTER_NAME_1);
>> builder.startObject(FILTER_NAME_1);
>> builder.field("type", "word_delimiter");
>> builder.array("type_table", );
>> builder.endObject();
>>
>> filterNames.add(FILTER_NAME_2);
>> builder.startObject(FILTER_NAME_2);
>> builder.field("type", "hunspell");
>> builder.field("ignoreCase", "false");
>> builder.field("locale", "da_DK");
>> builder.endObject();
>>
>> builder.endObject();
>>
>> builder.startObject("analyzer");
>> builder.startObject(NAME);
>> builder.field("type", "custom");
>> builder.field("tokenizer", "whitespace");
>> builder.array(FILTER, filterNames.toArray(new 
>> String[filterNames.size()]));
>>
>> builder.endObject();
>> builder.endObject();
>>
>> What can be faulty? It properly analyses the text. The problem occurs only 
>> when I restart the module and try to refresh the index settings (e.g. change 
>> the dictionary language).
>>
>> On Tuesday, 18 March 2014 at 12:51:28 UTC+1, Itamar Syn-Hershko wrote:
>>>
>>> Your analyzer implementation is probably faulty. Lucene 4.6 started 
>>> being stricter about the analyzer lifecycle. I suggest you first try it 
>>> locally with plain Lucene code to verify that its implementation follows 
>>> the lifecycle rules.
>>>
>>> Reference: http://lucene.apache.org/core/4_6_0/core/org/apache/lucene/analysis/TokenStream.html
>>>  
>>> --
>>>
>>> Itamar Syn-Hershko
>>> http://code972.com | @synhershko <https://twitter.com/synhershko>
>>> Freelance Developer & Consultant
>>> Author of RavenDB in Action <http://manning.com/synhershko/>
>>>
>>>
>>> On Tue, Mar 18, 2014 at 1:30 PM, Tomasz Romanczuk wrote:
>>>
>>>>  After starting the node I try to refresh the index settings (e.g. change 
>>>> the analyzer), but something goes wrong and I get an error:
>>>> 2014-03-18 12:02:40,810 WARN  [org.elasticsearch.index.indexing] 
>>>> [alerts_node] [_percolator][0] post listener [org.elasticsearch.index.
>>>> percolator.PercolatorService$RealTimePercolat
>>>> orOperationListener@702f2591] failed
>>>> org.elasticsearch.ElasticSearchException: failed to parse query [316]
>>>> at org.elasticsearch.index.percolator.PercolatorExecutor.
>>>> parseQuery(PercolatorExecutor.java:361)
>>>> at org.elasticsearch.index.percolator.PercolatorExecutor.
>>>> addQuery(PercolatorExecutor.java:332)
>>>> at org.elasticsearch.index.percolator.PercolatorService$
>>>> RealTimePercolatorOperationListener.postIndexUnderLock(
>>>> PercolatorService.java:295)
>>>> at org.elasticsearch.index.indexing.ShardIndexingService.
>>>> postIndexUnderLock(ShardIndexingService.java:140)
>>>> at org.elasticsearch.index.engine.robin.RobinEngine.
>>>> innerIndex(RobinEngine.java:594)
>>>> at org.elasticsearch.index.engine.robin.RobinEngine.
>>>> index(RobinEngine.java:492)
>>>> at org.elasticsearch.index.shard.service.InternalIndexShard.
>>>> performRecoveryOperation(InternalIndexShard.java:703)
>>>>   

Re: Analyzer is closed - ERROR

2014-03-18 Thread Tomasz Romanczuk
It's quite simple class:
List filterNames = Lists.newArrayList();
builder.startObject(FILTER);
filterNames.add(FILTER_NAME_1);
builder.startObject(FILTER_NAME_1);
builder.field("type", "word_delimiter");
builder.array("type_table", );
builder.endObject();

filterNames.add(FILTER_NAME_2);
builder.startObject(FILTER_NAME_2);
builder.field("type", "hunspell");
builder.field("ignoreCase", "false");
builder.field("locale", "da_DK");
builder.endObject();

builder.endObject();

builder.startObject("analyzer");
builder.startObject(NAME);
builder.field("type", "custom");
builder.field("tokenizer", "whitespace");
builder.array(FILTER, filterNames.toArray(new 
String[filterNames.size()]));

builder.endObject();
builder.endObject();

What can be faulty? It properly analyses the text. The problem occurs only when 
I restart the module and try to refresh the index settings (e.g. change the 
dictionary language).

On Tuesday, 18 March 2014 at 12:51:28 UTC+1, Itamar Syn-Hershko wrote:
>
> Your analyzer implementation is probably faulty. Lucene 4.6 started being 
> stricter about the analyzer lifecycle. I suggest you first try it locally 
> with plain Lucene code to verify that its implementation follows the 
> lifecycle rules.
>
> Reference: 
> http://lucene.apache.org/core/4_6_0/core/org/apache/lucene/analysis/TokenStream.html
>
> --
>
> Itamar Syn-Hershko
> http://code972.com | @synhershko <https://twitter.com/synhershko>
> Freelance Developer & Consultant
> Author of RavenDB in Action <http://manning.com/synhershko/>
>
>
> On Tue, Mar 18, 2014 at 1:30 PM, Tomasz Romanczuk wrote:
>
>> After starting the node I try to refresh the index settings (e.g. change 
>> the analyzer), but something goes wrong and I get an error:
>> 2014-03-18 12:02:40,810 WARN  [org.elasticsearch.index.indexing] 
>> [alerts_node] [_percolator][0] post listener 
>> [org.elasticsearch.index.percolator.PercolatorService$RealTimePercolat
>> orOperationListener@702f2591] failed
>> org.elasticsearch.ElasticSearchException: failed to parse query [316]
>> at 
>> org.elasticsearch.index.percolator.PercolatorExecutor.parseQuery(PercolatorExecutor.java:361)
>> at 
>> org.elasticsearch.index.percolator.PercolatorExecutor.addQuery(PercolatorExecutor.java:332)
>> at 
>> org.elasticsearch.index.percolator.PercolatorService$RealTimePercolatorOperationListener.postIndexUnderLock(PercolatorService.java:295)
>> at 
>> org.elasticsearch.index.indexing.ShardIndexingService.postIndexUnderLock(ShardIndexingService.java:140)
>> at 
>> org.elasticsearch.index.engine.robin.RobinEngine.innerIndex(RobinEngine.java:594)
>> at 
>> org.elasticsearch.index.engine.robin.RobinEngine.index(RobinEngine.java:492)
>> at 
>> org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:703)
>> at 
>> org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:224)
>> at 
>> org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:174)
>> at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> at java.lang.Thread.run(Thread.java:619)
>> Caused by: org.apache.lucene.store.AlreadyClosedException: this Analyzer 
>> is closed
>> at 
>> org.apache.lucene.analysis.Analyzer$ReuseStrategy.getStoredValue(Analyzer.java:368)
>> at 
>> org.apache.lucene.analysis.Analyzer$GlobalReuseStrategy.getReusableComponents(Analyzer.java:410)
>> at 
>> org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:173)
>> at 
>> org.elasticsearch.index.search.MatchQuery.parse(MatchQuery.java:203)
>> at 
>> org.elasticsearch.index.query.MatchQueryParser.parse(MatchQueryParser.java:163)
>> at 
>> org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:207)
>> at 
>> org.elasticsearch.index.query.BoolQueryParser.parse(BoolQueryParser.java:107)

Analyzer is closed - ERROR

2014-03-18 Thread Tomasz Romanczuk
After starting the node I try to refresh the index settings (e.g. change the 
analyzer), but something goes wrong and I get an error:
2014-03-18 12:02:40,810 WARN  [org.elasticsearch.index.indexing] 
[alerts_node] [_percolator][0] post listener 
[org.elasticsearch.index.percolator.PercolatorService$RealTimePercolat
orOperationListener@702f2591] failed
org.elasticsearch.ElasticSearchException: failed to parse query [316]
at 
org.elasticsearch.index.percolator.PercolatorExecutor.parseQuery(PercolatorExecutor.java:361)
at 
org.elasticsearch.index.percolator.PercolatorExecutor.addQuery(PercolatorExecutor.java:332)
at 
org.elasticsearch.index.percolator.PercolatorService$RealTimePercolatorOperationListener.postIndexUnderLock(PercolatorService.java:295)
at 
org.elasticsearch.index.indexing.ShardIndexingService.postIndexUnderLock(ShardIndexingService.java:140)
at 
org.elasticsearch.index.engine.robin.RobinEngine.innerIndex(RobinEngine.java:594)
at 
org.elasticsearch.index.engine.robin.RobinEngine.index(RobinEngine.java:492)
at 
org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:703)
at 
org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:224)
at 
org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:174)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.lucene.store.AlreadyClosedException: this Analyzer is 
closed
at 
org.apache.lucene.analysis.Analyzer$ReuseStrategy.getStoredValue(Analyzer.java:368)
at 
org.apache.lucene.analysis.Analyzer$GlobalReuseStrategy.getReusableComponents(Analyzer.java:410)
at 
org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:173)
at 
org.elasticsearch.index.search.MatchQuery.parse(MatchQuery.java:203)
at 
org.elasticsearch.index.query.MatchQueryParser.parse(MatchQueryParser.java:163)
at 
org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:207)
at 
org.elasticsearch.index.query.BoolQueryParser.parse(BoolQueryParser.java:107)
at 
org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:207)
at 
org.elasticsearch.index.query.BoolQueryParser.parse(BoolQueryParser.java:107)
at 
org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:207)
at 
org.elasticsearch.index.query.BoolQueryParser.parse(BoolQueryParser.java:107)
at 
org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:207)
at 
org.elasticsearch.index.query.BoolQueryParser.parse(BoolQueryParser.java:93)
at 
org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:207)
at 
org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:284)
at 
org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:255)
at 
org.elasticsearch.index.percolator.PercolatorExecutor.parseQuery(PercolatorExecutor.java:350)

My code:
node = NodeBuilder.nodeBuilder().settings(builder).build();
node.start();
client = node.getClient();
client.admin().indices().prepareClose(INDEX_NAME).execute().actionGet();
UpdateSettingsRequestBuilder builder = 
client.admin().indices().prepareUpdateSettings();
builder.setIndices(INDEX_NAME);
builder.setSettings(createSettings());
builder.execute().actionGet();
client.admin().indices().prepareOpen(INDEX_NAME).execute().actionGet();

private Builder createSettings() throws IOException {
    XContentBuilder builder = XContentFactory.jsonBuilder().startObject();
    builder.startObject("analysis");
    analyzer.appendSettings(builder);
    builder.endObject();
    builder.endObject();
    return ImmutableSettings.settingsBuilder().loadFromSource(builder.string());
}

where *analyzer* is a simple class which only adds the Hunspell dictionary and 
my custom tokenizer.

The problem is that there is a thread performing index recovery, and during 
this process I'm closing the index. How can I avoid this situation? Is there 
any way to check whether recovery is in progress?
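
One option, assuming the goal is simply to let recovery finish before closing 
the index, is to block on cluster health with wait_for_status right after 
node.start(). A sketch over REST, with my_index standing in for INDEX_NAME 
(a placeholder name; on a single-node setup, yellow is typically the best 
reachable status):

```
GET /_cluster/health/my_index?wait_for_status=yellow&timeout=30s
```

The Java client exposes the same thing roughly as 
client.admin().cluster().prepareHealth(INDEX_NAME).setWaitForYellowStatus().execute().actionGet(); 
calling this before prepareClose(...) should keep the close from racing the 
recovery thread, though this is a sketch of the approach rather than a 
guaranteed fix.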




Re: WARN while updating index settings

2014-03-17 Thread Tomasz Romanczuk


On Friday, 14 March 2014 at 17:26:58 UTC+1, Tomasz Romanczuk wrote:
>
> Hi,
> I'm using a percolator index to store some queries. After node start 
> (using the Java API) I try to update the index settings:
>
> 
> client.admin().indices().prepareClose(INDEX_NAME).execute().actionGet();
> UpdateSettingsRequestBuilder builder = 
> client.admin().indices().prepareUpdateSettings();
> builder.setIndices(INDEX_NAME);
> builder.setSettings(createSettings());
> builder.execute().actionGet();
> 
> client.admin().indices().prepareOpen(INDEX_NAME).execute().actionGet();
>
> Everything works fine and the changes are applied, but in my log I can see 
> a warning:
>
> 2014-03-14 16:55:40,896 WARN  [org.elasticsearch.index.indexing] 
> [alerts_node] [_percolator][0] post listener 
> [org.elasticsearch.index.percolator.PercolatorService$RealTimePercolat
> orOperationListener@dd0099] failed
> org.elasticsearch.ElasticSearchException: failed to parse query [299]
> at 
> org.elasticsearch.index.percolator.PercolatorExecutor.parseQuery(PercolatorExecutor.java:361)
> at 
> org.elasticsearch.index.percolator.PercolatorExecutor.addQuery(PercolatorExecutor.java:332)
> at 
> org.elasticsearch.index.percolator.PercolatorService$RealTimePercolatorOperationListener.postIndexUnderLock(PercolatorService.java:295)
> at 
> org.elasticsearch.index.indexing.ShardIndexingService.postIndexUnderLock(ShardIndexingService.java:140)
> at 
> org.elasticsearch.index.engine.robin.RobinEngine.innerIndex(RobinEngine.java:594)
> at 
> org.elasticsearch.index.engine.robin.RobinEngine.index(RobinEngine.java:492)
> at 
> org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:703)
> at 
> org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:224)
> at 
> org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:174)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> Caused by: org.apache.lucene.store.AlreadyClosedException: this Analyzer 
> is closed
> at 
> org.apache.lucene.analysis.Analyzer$ReuseStrategy.getStoredValue(Analyzer.java:368)
> at 
> org.apache.lucene.analysis.Analyzer$GlobalReuseStrategy.getReusableComponents(Analyzer.java:410)
> at 
> org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:173)
> at 
> org.elasticsearch.index.search.MatchQuery.parse(MatchQuery.java:203)
> at 
> org.elasticsearch.index.query.MatchQueryParser.parse(MatchQueryParser.java:163)
> at 
> org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:207)
> at 
> org.elasticsearch.index.query.BoolQueryParser.parse(BoolQueryParser.java:107)
> at 
> org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:207)
> at 
> org.elasticsearch.index.query.BoolQueryParser.parse(BoolQueryParser.java:107)
> at 
> org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:207)
> at 
> org.elasticsearch.index.query.BoolQueryParser.parse(BoolQueryParser.java:93)
> at 
> org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:207)
> at 
> org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:284)
> at 
> org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:255)
> at 
> org.elasticsearch.index.percolator.PercolatorExecutor.parseQuery(PercolatorExecutor.java:350)
>
> What is the reason for this WARN, and how can I avoid it?
>
> Thanks for any help!
>



WARN while updating index settings

2014-03-14 Thread Tomasz Romanczuk
Hi,
I'm using a percolator index to store some queries. After node start (using 
the Java API) I try to update the index settings:


client.admin().indices().prepareClose(INDEX_NAME).execute().actionGet();
UpdateSettingsRequestBuilder builder = 
client.admin().indices().prepareUpdateSettings();
builder.setIndices(INDEX_NAME);
builder.setSettings(createSettings());
builder.execute().actionGet();

client.admin().indices().prepareOpen(INDEX_NAME).execute().actionGet();

Everything works fine and the changes are applied, but in my log I can see a warning:

2014-03-14 16:55:40,896 WARN  [org.elasticsearch.index.indexing] 
[alerts_node] [_percolator][0] post listener 
[org.elasticsearch.index.percolator.PercolatorService$RealTimePercolat
orOperationListener@dd0099] failed
org.elasticsearch.ElasticSearchException: failed to parse query [299]
at 
org.elasticsearch.index.percolator.PercolatorExecutor.parseQuery(PercolatorExecutor.java:361)
at 
org.elasticsearch.index.percolator.PercolatorExecutor.addQuery(PercolatorExecutor.java:332)
at 
org.elasticsearch.index.percolator.PercolatorService$RealTimePercolatorOperationListener.postIndexUnderLock(PercolatorService.java:295)
at 
org.elasticsearch.index.indexing.ShardIndexingService.postIndexUnderLock(ShardIndexingService.java:140)
at 
org.elasticsearch.index.engine.robin.RobinEngine.innerIndex(RobinEngine.java:594)
at 
org.elasticsearch.index.engine.robin.RobinEngine.index(RobinEngine.java:492)
at 
org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:703)
at 
org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:224)
at 
org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:174)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.lucene.store.AlreadyClosedException: this Analyzer is 
closed
at 
org.apache.lucene.analysis.Analyzer$ReuseStrategy.getStoredValue(Analyzer.java:368)
at 
org.apache.lucene.analysis.Analyzer$GlobalReuseStrategy.getReusableComponents(Analyzer.java:410)
at 
org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:173)
at 
org.elasticsearch.index.search.MatchQuery.parse(MatchQuery.java:203)
at 
org.elasticsearch.index.query.MatchQueryParser.parse(MatchQueryParser.java:163)
at 
org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:207)
at 
org.elasticsearch.index.query.BoolQueryParser.parse(BoolQueryParser.java:107)
at 
org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:207)
at 
org.elasticsearch.index.query.BoolQueryParser.parse(BoolQueryParser.java:107)
at 
org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:207)
at 
org.elasticsearch.index.query.BoolQueryParser.parse(BoolQueryParser.java:93)
at 
org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:207)
at 
org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:284)
at 
org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:255)
at 
org.elasticsearch.index.percolator.PercolatorExecutor.parseQuery(PercolatorExecutor.java:350)

What is the reason for this WARN, and how can I avoid it?

Thanks for any help!



Re: Percolator with exact match

2014-01-21 Thread Tomasz Romanczuk
Where should I define source as not_analyzed? During index creation (in the 
mapping), or in the queries?
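
Binh Ly's suggestion points at the mapping, defined when the index is created 
(or added via the put-mapping API for a new field). A minimal sketch of such a 
mapping, with my_type as a placeholder type name:

```json
{
  "mappings": {
    "my_type": {
      "properties": {
        "source": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}
```

With source unanalyzed, the indexed value is the exact string "www.a-b.com", 
so a termQuery("source", "www.a-b.com") in the percolated query should match 
it exactly, special characters included.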

On Tuesday, 21 January 2014 at 17:50:02 UTC+1, Binh Ly wrote:
>
> You can handle the "matching" part by defining the correct analyzer for 
> the source field. For example, if you define source as "not_analyzed", it 
> should perform a case-sensitive exact match without going through text 
> analysis.
>
> On Tuesday, January 21, 2014 10:55:12 AM UTC-5, Tomasz Romanczuk wrote:
>>
>> I'm going to use the percolate feature to index my queries. Some of them 
>> contain special characters (like "-", "+"). For a given document, e.g.:
>>
>> doc {
>> source: "www.a-b.com"
>> }
>>
>> I need to return only those queries that exactly match this text. I used 
>> *termQuery* from the Java API, but it doesn't work. How can I get an "exact 
>> match"?
>>
>



Percolator with exact match

2014-01-21 Thread Tomasz Romanczuk
I'm going to use the percolate feature to index my queries. Some of them 
contain special characters (like "-", "+"). For a given document, e.g.:

doc {
source: "www.a-b.com"
}

I need to return only those queries that exactly match this text. I used 
*termQuery* from the Java API, but it doesn't work. How can I get an "exact 
match"?
