Re: Elasticsearch queries are slow.

2014-10-30 Thread Appasaheb Sawant
Thanks Alberto,

We have 5 nodes, 10 shards, 1 replica, and each shard is about 28GB in size.


Thanks.
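
As a quick cross-check, per-shard sizes and their placement can be listed with
the _cat API (a read-only call that can be sent to any node in the cluster):

curl -XGET 'http://localhost:9200/_cat/shards?v'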

On Thursday, October 30, 2014 6:25:12 PM UTC+5:30, Alberto Paro wrote:
>
> How many shards? If you have too few shards, each shard becomes too big. 
> Shards larger than about 10GB typically give poor performance in both 
> writing and reading because of segment operations. 
>
> hi,
>   Alberto
>
> Sent from my iPhone
>
> On 29 Oct 2014, at 12:02, Appasaheb Sawant wrote:
>
> I have a 7-node cluster. Each node has 16GB RAM, 8 CPU cores, and CentOS 6.
>
> Heap memory is 9000m.
>
>
>- 1 master (non-data)
>- 1 master-capable node (non-data)
>- 5 data nodes
>
> There are 10 indexes; one index is big, with 55 million documents and 
> 254Gi (508Gi) of size on disk.
>
>
> Every second there are 5-10 new documents being indexed.
>
> The problem is that search is a bit slow, taking an average of 2000 to 
> 5000 ms. Some queries take about 1 second.
>
> Why is that so?
>



Re: Elasticsearch queries are slow.

2014-10-30 Thread Appasaheb Sawant
I have 5 nodes and 10 shards, and each shard being searched holds about 45GB of data.

On Thursday, October 30, 2014 6:25:12 PM UTC+5:30, Alberto Paro wrote:
>
> How many shards? If you have too few shards, each shard becomes too big. 
> Shards larger than about 10GB typically give poor performance in both 
> writing and reading because of segment operations. 
>
> hi,
>   Alberto
>
> Sent from my iPhone
>
> On 29 Oct 2014, at 12:02, Appasaheb Sawant wrote:
>
> I have a 7-node cluster. Each node has 16GB RAM, 8 CPU cores, and CentOS 6.
>
> Heap memory is 9000m.
>
>
>- 1 master (non-data)
>- 1 master-capable node (non-data)
>- 5 data nodes
>
> There are 10 indexes; one index is big, with 55 million documents and 
> 254Gi (508Gi) of size on disk.
>
>
> Every second there are 5-10 new documents being indexed.
>
> The problem is that search is a bit slow, taking an average of 2000 to 
> 5000 ms. Some queries take about 1 second.
>
> Why is that so?
>



Percolator inconsistent response times. (ES 1.3.1)

2014-10-30 Thread Ali
Hi,

I am using ES 1.3.1.  I have registered 2.8 million saved searches in a 
dedicated percolator index, spread across 16 shards, 2 replicas and 8 nodes 
(separate vm instances on separate physical hardware).  If I run exactly 
the same percolate request  multiple times one after another or even 
intermittently, it can take anywhere from less than 200ms to 5 seconds to 
execute.  Any ideas as to why such a variance in response times?  If you 
need additional information I'll be happy to provide it.
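
For reference, a percolate request of the kind described above looks roughly
like this on 1.3; the index, type, and field names here are illustrative, not
the actual ones from this setup:

curl -XGET 'http://localhost:9200/my-percolator-index/my-doc-type/_percolate' -d '{
  "doc": {
    "title": "sample document to percolate"
  }
}'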


Thanks in advance,

Ali



Elasticsearch Nested Filters being inclusive vs. exclusive

2014-10-30 Thread Aaron Wallis


I have an object mapping that uses nested objects (props in our example) in 
a tag-like fashion. Each tag can belong to a client/user, and we want to 
allow our users to run query_string-style searches against props.name.

The issue is that when we run our query against an object with multiple 
props, the object is returned if any one of its props matches the filter, 
even when the others don't. We want the opposite: if any prop fails the 
filter, the object should not be returned, rather than being returned as 
soon as one prop matches.

I have posted a comprehensive example here: 
https://gist.github.com/d2kagw/1c9d4ef486b7a2450d95

Thanks in advance.
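
The behaviour being asked for (every prop must match rather than any prop) is
usually expressed as a double negation: exclude any document that has at least
one prop which does not match. A minimal sketch against ES 1.x, assuming a
nested path of "props" and an illustrative search term; note that documents
with no props at all also pass this filter:

{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "bool": {
          "must_not": {
            "nested": {
              "path": "props",
              "filter": {
                "bool": {
                  "must_not": {
                    "query": {
                      "query_string": {
                        "default_field": "props.name",
                        "query": "some-tag*"
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}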



Is there a problem with elasticsearch query_string?

2014-10-30 Thread suleman mubarik
Hi,

I am running a search for the accented term "chín*" using query_string, but 
I don't get any hits. If I use a prefix query instead, I get the correct 
number of results. Is there a problem with query_string that makes it unable 
to handle accented search terms?

Here are both search JSON bodies:
{
  "query": {
    "query_string": {
      "query": "chín*",
      "fields": [],
      "use_dis_max": false,
      "default_operator": "and",
      "allow_leading_wildcard": false
    }
  }
}

{
  "query": {
    "multi_match": {
      "query": "chín*",
      "fields": [],
      "type": "phrase_prefix",
      "max_expansions": 100,
      "tie_breaker": 0,
      "zero_terms_query": "NONE"
    }
  }
}
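
One thing worth checking (a hedged suggestion, not a confirmed diagnosis):
query_string does not analyze wildcard terms by default, so "chín*" is matched
against the raw terms in the index. If the field's analyzer folds accents away
at index time, the accented prefix will never match. The analyze_wildcard
option asks query_string to run a best-effort analysis on wildcard terms:

{
  "query": {
    "query_string": {
      "query": "chín*",
      "default_operator": "and",
      "allow_leading_wildcard": false,
      "analyze_wildcard": true
    }
  }
}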





Help with a larger cluster in EC2 and Node clients failing to join

2014-10-30 Thread Todd Nine
Hey guys,
  We're running some load tests, and we're finding some of our clients are 
having issues joining the cluster.  We have the following setup running in 
EC2.


24 c3.4xlarge ES instances.  These instances are our data storage and 
search instances.
40 c3.xlarge Tomcat instances.  These are our ephemeral webapp instances.



In our tomcat instance, we're using the Node java client.  What I'm finding 
is that as the cluster comes up, only about 36~38 of the tomcat nodes ever 
discover the 24 nodes that are actually storing data.  We're using tcp 
discovery, and have the following code in our client.


Settings settings = ImmutableSettings.settingsBuilder()
        .put( "cluster.name", fig.getClusterName() )

        // this assumes that we're using zen for host discovery.  Putting an
        // explicit set of bootstrap hosts ensures we connect to a valid cluster.
        .put( "discovery.zen.ping.unicast.hosts", allHosts )
        .put( "discovery.zen.ping.multicast.enabled", "false" )
        .put( "http.enabled", false )

        .put( "client.transport.ping_timeout", 2000 ) // milliseconds
        .put( "client.transport.nodes_sampler_interval", 100 )
        .put( "network.tcp.blocking", true )
        .put( "node.client", true )
        .put( "node.name", nodeName )

        .build();

log.debug( "Creating ElasticSearch client with settings: " + settings.getAsMap() );

where allHosts = "ip1:port, ip2:port" etc etc.


Not sure what we're doing wrong.  We have retry code so that if a client 
fails 20 consecutive times with a TransportException, the node instance is 
stopped, and a new instance is created.  This has worked in dealing with 
network hiccups in the past, however we can just never seem to get past the 
38 client mark.  Any ideas what we may be doing wrong?
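
One quick diagnostic (a suggestion, not a known fix): asking any reachable node
which members it currently sees will show whether the missing Tomcat clients
ever joined or silently dropped out. The hostname below is a placeholder:

curl -XGET 'http://one-of-the-data-nodes:9200/_cat/nodes?v'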

Thanks,
Todd



Manually set initial internal version number?

2014-10-30 Thread Nicholas Peterson
Hey all,

When you first index a document, is it possible to give it an "initial" 
internal version number?  I'm hoping to make use of these version numbers, 
but just discovered that they're lost when you re-index (via, for instance, 
the Python client's elasticsearch.helpers.reindex(...) method).
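
One option worth looking at (a sketch, not necessarily what the helpers do for
you): external versioning lets the caller supply the version number at index
time, so a re-index pass could carry the old version across explicitly. Index,
type, and field names below are illustrative:

curl -XPUT 'http://localhost:9200/myindex/mytype/1?version=5&version_type=external' -d '{
  "field": "value"
}'

With version_type=external the write is accepted as long as the supplied
version is higher than the one already stored for that document.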

Thank you!
Nick



Re: Building custom panels in Kibana

2014-10-30 Thread Nick B
I'm having some issues in the app.js file when I try to add my custom panel 
to a row. I keep getting the error "Argument 'custTable' is not a function, 
got undefined". custTable is the name of my custom panel. I've attempted 
going through the actual js but it seems like an endless rabbit hole. Any 
suggestions?

Thanks, 
Nick 

On Friday, February 14, 2014 8:25:59 AM UTC-5, Binh Ly wrote:
>
> It's actually not that difficult. Just need a little patience learning 
> AngularJS. The easiest way to start is to look:
>
> 1) src/app/panels is where all the panels live - copy one out of here (I'd 
> start with the text panel), create a new folder - new name based on your 
> panel name, and edit and strip down the editor, module, and js files (just 
> rename your panel name in the code accordingly)
>
> 2) src/config.js is where you will add your panel to make it visible to 
> kibana. Scroll down to the bottom and add it to the list
>
> Assuming you got no syntax errors, clear your browser cache and refresh 
> and you should be able to add your new panel onto the dashboard. If 
> something is not working, just reverse all the processes above and you can 
> go back to your original Kibana state without much problem. You'll probably 
> want to do this in a DEV environment and test it first anyway.
>



Re: ES and Java 8. Does it worth the effort ?

2014-10-30 Thread joergpra...@gmail.com
Of course Java 8 is worth the effort.

Some highlights:

- no more permgen OOMs
- improved concurrency implementations:
http://docs.oracle.com/javase/8/docs/technotes/guides/concurrency/changes8.html
- faster hash maps (20%)
- lambda expressions are faster than inner classes

Some concurrency classes like LongAdder are already included in ES.

The Java 8 JVM also brings G1 GC to its full potential. G1 GC is not faster
than CMS GC, but it scales much better across many cores and reduces
stop-the-world pauses to milliseconds.

To exploit the full advantage of Java 8, ES would need a large overhaul by
rewriting inner classes to lambda style, streams, fork/join pool etc. As
long as Lucene does not switch to Java 8, the benefit is only partial.

Jörg




On Thu, Oct 30, 2014 at 9:01 PM, Georgi Ivanov wrote:

> Hi,
> I wonder if I should start using Java 8 with my ES cluster.
>
> Are there any benefits to using Java 8?
> For example:
> faster GC, a faster JVM itself... anything ES would benefit from in Java 8,
> etc.
>
>
> Please share your experience.
>
> Georgi
>



Re: search query / analyzer issue dealing with spaces

2014-10-30 Thread Jarrod C
Excellent!  That works perfectly.  Thank you very much Mike.

On Thursday, October 30, 2014 4:21:30 PM UTC-4, Mike Maddox wrote:
>
> Jarod,
>
> The format of your analyzer is wrong. Note that you have to set the filter 
> property. Use this:
>
> {
>   "settings": {
> "analysis": {
>   "analyzer": {
> "my_analyzer": {
>   "type": "custom",
>   "tokenizer": "keyword",
>   "filter": "lowercase"
> }
>   }
> }
>   },
>   "mappings": {
> "car": {
>   "properties": {
> "color": {
>   "type": "string",
>   "analyzer": "my_analyzer"
> }
>   }
> }
>   }
> }
>
> On Thursday, October 30, 2014 12:52:45 PM UTC-7, Jarrod C wrote:
>>
>> Thanks Mike, it appears referencing the 'episode' instead of 'car' from a 
>> previous example was the problem.  That seems to have progressed me further 
>> however my queries are still case sensitive despite lowercase being true. 
>>  Allow me to repost what I have for clarity.  Thanks
>>
>> // mapping
>> curl -XPUT 'http://localhost:9200/myindex/' -d '{
>>   "settings": {
>> "analysis": {
>>   "analyzer": {
>> "my_analyzer": {
>>   "type": "custom",
>>   "tokenizer": "keyword",
>>   "lowercase": true
>> }
>>   }
>> }
>>   },
>>   "mappings": {
>> "car": {
>>   "_source": {
>> "enabled": false
>>   },
>>   "properties": {
>> "color": {
>>   "type": "string",
>>   "analyzer": "my_analyzer"
>>   }
>> }
>>   }
>> }
>> }'
>>
>> //query matches 'Metallic RED' but not 'Metallic Red'
>> GET /myindex/car/_search
>> {
>>"query": {
>>"match": {
>>   "color": "Metallic RED"
>>}   
>>}
>> }
>>  
>>
>> On Thursday, October 30, 2014 2:41:10 PM UTC-4, Mike Maddox wrote:
>>>
>>> Jarrod,
>>>
>>> I understand that you think the analyzer is not the problem. However, 
>>> the original mapping wasn't correctly formatted so the color type was being 
>>> analyzed using the default analyzer which would also cause the query to use 
>>> the default analyzer as well. If you fix the syntax and then change color 
>>> to use your analyzer it will work. One note, your mapping is also incorrect 
>>> in that it references the "episode" type when you're actually adding data 
>>> to the "car" type. Using your analyzer, it would be indexed as one lower 
>>> case string. Now, your query does make a difference but if you have the 
>>> analyzer set correctly, it will analyze the input string using the same 
>>> analyzer that you set in the mapping. You would be better off just doing a 
>>> term query or filter.
>>>
>>> Mike
>>>
>>>
>>> On Thursday, October 30, 2014 8:06:36 AM UTC-7, Jarrod C wrote:

 Thanks for the replies.  Unfortunately the analyzer portion is not the 
 problem (I pasted the original text in the midst of experimentation).  
 When 
 I had "analyzer" : "my_analyzer" in the mapping it didn't make a 
 difference.  I get results from the analysis query below so I assume it 
 was 
 configured properly:
 GET /myindex/_analyze?analyzer=my_analyzer

 However, it does not seem to make a difference between using my custom 
 "my_analyzer" or using "keyword", or even using "index" : "not_analyzed". 
  In each case, if I search for "red" I get back all results when in fact I 
 only want 1.

 Perhaps my query is the problem?

 On Wednesday, October 29, 2014 8:17:40 PM UTC-4, Mike Maddox wrote:
>
> Actually, change it to "index": "not_analyzed" as shown in the JSON.
>
> On Wednesday, October 29, 2014 5:13:46 PM UTC-7, Mike Maddox wrote:
>>
>> Actually, there are two problems here. Change the analyzer to the 
>> name of your custom analyzer and you are missing a curly brace to close 
>> out 
>> the "settings" property. Not sure why it doesn't cause an error but it 
>> definitely doesn't create a mapping. You can check if there is a mapping 
>> by 
>> looking at: http://localhost:9200/myindex/_mapping
>>
>> Here is how it should be:
>>
>>
>> {
>>   "settings": {
>> "analysis": {
>>   "analyzer": {
>> "my_analyzer": {
>>   "type": "custom",
>>   "tokenizer": "keyword",
>>   "lowercase": true
>> }
>>   }
>> }
>>   },
>>   "mappings": {
>> "episode": {
>>   "_source": {
>> "enabled": false
>>   },
>>   "properties": {
>> "color": {
>>   "type": "string",
>>   "index": "not_analyzed"
>> }
>>   }
>> }
>>   }
>> }
>>
>> On Wednesday, October 29, 2014 2:38:36 PM UTC-7, Jarrod C wrote:
>>>
>>> Hello, I am trying to run a query that distinguishes between spaces 
>>> in values.  Let's say I have a field called 'color' 

Re: search query / analyzer issue dealing with spaces

2014-10-30 Thread Mike Maddox
Jarrod,

The format of your analyzer is wrong. Note that you have to set the filter 
property. Use this:

{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": "lowercase"
        }
      }
    }
  },
  "mappings": {
    "car": {
      "properties": {
        "color": {
          "type": "string",
          "analyzer": "my_analyzer"
        }
      }
    }
  }
}
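
Assuming the index is dropped and recreated with the settings above, the
analyzer can be verified with the _analyze API before re-indexing (the sample
text is illustrative); it should come back as a single lowercased token,
"metallic red":

curl -XGET 'http://localhost:9200/myindex/_analyze?analyzer=my_analyzer' -d 'Metallic RED'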

On Thursday, October 30, 2014 12:52:45 PM UTC-7, Jarrod C wrote:
>
> Thanks Mike, it appears referencing the 'episode' instead of 'car' from a 
> previous example was the problem.  That seems to have progressed me further 
> however my queries are still case sensitive despite lowercase being true. 
>  Allow me to repost what I have for clarity.  Thanks
>
> // mapping
> curl -XPUT 'http://localhost:9200/myindex/' -d '{
>   "settings": {
> "analysis": {
>   "analyzer": {
> "my_analyzer": {
>   "type": "custom",
>   "tokenizer": "keyword",
>   "lowercase": true
> }
>   }
> }
>   },
>   "mappings": {
> "car": {
>   "_source": {
> "enabled": false
>   },
>   "properties": {
> "color": {
>   "type": "string",
>   "analyzer": "my_analyzer"
>   }
> }
>   }
> }
> }'
>
> //query matches 'Metallic RED' but not 'Metallic Red'
> GET /myindex/car/_search
> {
>"query": {
>"match": {
>   "color": "Metallic RED"
>}   
>}
> }
>  
>
> On Thursday, October 30, 2014 2:41:10 PM UTC-4, Mike Maddox wrote:
>>
>> Jarrod,
>>
>> I understand that you think the analyzer is not the problem. However, the 
>> original mapping wasn't correctly formatted so the color type was being 
>> analyzed using the default analyzer which would also cause the query to use 
>> the default analyzer as well. If you fix the syntax and then change color 
>> to use your analyzer it will work. One note, your mapping is also incorrect 
>> in that it references the "episode" type when you're actually adding data 
>> to the "car" type. Using your analyzer, it would be indexed as one lower 
>> case string. Now, your query does make a difference but if you have the 
>> analyzer set correctly, it will analyze the input string using the same 
>> analyzer that you set in the mapping. You would be better off just doing a 
>> term query or filter.
>>
>> Mike
>>
>>
>> On Thursday, October 30, 2014 8:06:36 AM UTC-7, Jarrod C wrote:
>>>
>>> Thanks for the replies.  Unfortunately the analyzer portion is not the 
>>> problem (I pasted the original text in the midst of experimentation).  When 
>>> I had "analyzer" : "my_analyzer" in the mapping it didn't make a 
>>> difference.  I get results from the analysis query below so I assume it was 
>>> configured properly:
>>> GET /myindex/_analyze?analyzer=my_analyzer
>>>
>>> However, it does not seem to make a difference between using my custom 
>>> "my_analyzer" or using "keyword", or even using "index" : "not_analyzed". 
>>>  In each case, if I search for "red" I get back all results when in fact I 
>>> only want 1.
>>>
>>> Perhaps my query is the problem?
>>>
>>> On Wednesday, October 29, 2014 8:17:40 PM UTC-4, Mike Maddox wrote:

 Actually, change it to "index": "not_analyzed" as shown in the JSON.

 On Wednesday, October 29, 2014 5:13:46 PM UTC-7, Mike Maddox wrote:
>
> Actually, there are two problems here. Change the analyzer to the name 
> of your custom analyzer and you are missing a curly brace to close out 
> the 
> "settings" property. Not sure why it doesn't cause an error but it 
> definitely doesn't create a mapping. You can check if there is a mapping 
> by 
> looking at: http://localhost:9200/myindex/_mapping
>
> Here is how it should be:
>
>
> {
>   "settings": {
> "analysis": {
>   "analyzer": {
> "my_analyzer": {
>   "type": "custom",
>   "tokenizer": "keyword",
>   "lowercase": true
> }
>   }
> }
>   },
>   "mappings": {
> "episode": {
>   "_source": {
> "enabled": false
>   },
>   "properties": {
> "color": {
>   "type": "string",
>   "index": "not_analyzed"
> }
>   }
> }
>   }
> }
>
> On Wednesday, October 29, 2014 2:38:36 PM UTC-7, Jarrod C wrote:
>>
>> Hello, I am trying to run a query that distinguishes between spaces 
>> in values.  Let's say I have a field called 'color' in my index.  Record 
>> 1 
>> has "color" : "metallic red" whereas Record 2 has "color": "metallic" 
>>
>> I want to search for 'metallic' but NOT retrieve 'metallic red', and 
>> a search for 'metallic red' should not return 'red'.  
>>
>> The query below works for 'metallic red' but entering 'red

ES and Java 8. Does it worth the effort ?

2014-10-30 Thread Georgi Ivanov
Hi,
I wonder if I should start using Java 8 with my ES cluster.

Are there any benefits to using Java 8?
For example:
faster GC, a faster JVM itself... anything ES would benefit from in Java 8,
etc.


Please share your experience.

Georgi



Re: search query / analyzer issue dealing with spaces

2014-10-30 Thread Jarrod C
Thanks Mike, it appears referencing the 'episode' type instead of 'car' from a 
previous example was the problem.  That has gotten me further; however, my 
queries are still case sensitive despite lowercase being true.  Allow me to 
repost what I have for clarity.  Thanks

// mapping
curl -XPUT 'http://localhost:9200/myindex/' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "lowercase": true
        }
      }
    }
  },
  "mappings": {
    "car": {
      "_source": {
        "enabled": false
      },
      "properties": {
        "color": {
          "type": "string",
          "analyzer": "my_analyzer"
        }
      }
    }
  }
}'

// query matches 'Metallic RED' but not 'Metallic Red'
GET /myindex/car/_search
{
  "query": {
    "match": {
      "color": "Metallic RED"
    }
  }
}
 

On Thursday, October 30, 2014 2:41:10 PM UTC-4, Mike Maddox wrote:
>
> Jarrod,
>
> I understand that you think the analyzer is not the problem. However, the 
> original mapping wasn't correctly formatted so the color type was being 
> analyzed using the default analyzer which would also cause the query to use 
> the default analyzer as well. If you fix the syntax and then change color 
> to use your analyzer it will work. One note, your mapping is also incorrect 
> in that it references the "episode" type when you're actually adding data 
> to the "car" type. Using your analyzer, it would be indexed as one lower 
> case string. Now, your query does make a difference but if you have the 
> analyzer set correctly, it will analyze the input string using the same 
> analyzer that you set in the mapping. You would be better off just doing a 
> term query or filter.
>
> Mike
>
>
> On Thursday, October 30, 2014 8:06:36 AM UTC-7, Jarrod C wrote:
>>
>> Thanks for the replies.  Unfortunately the analyzer portion is not the 
>> problem (I pasted the original text in the midst of experimentation).  When 
>> I had "analyzer" : "my_analyzer" in the mapping it didn't make a 
>> difference.  I get results from the analysis query below so I assume it was 
>> configured properly:
>> GET /myindex/_analyze?analyzer=my_analyzer
>>
>> However, it does not seem to make a difference between using my custom 
>> "my_analyzer" or using "keyword", or even using "index" : "not_analyzed". 
>>  In each case, if I search for "red" I get back all results when in fact I 
>> only want 1.
>>
>> Perhaps my query is the problem?
>>
>> On Wednesday, October 29, 2014 8:17:40 PM UTC-4, Mike Maddox wrote:
>>>
>>> Actually, change it to "index": "not_analyzed" as shown in the JSON.
>>>
>>> On Wednesday, October 29, 2014 5:13:46 PM UTC-7, Mike Maddox wrote:

 Actually, there are two problems here. Change the analyzer to the name 
 of your custom analyzer and you are missing a curly brace to close out the 
 "settings" property. Not sure why it doesn't cause an error but it 
 definitely doesn't create a mapping. You can check if there is a mapping 
 by 
 looking at: http://localhost:9200/myindex/_mapping

 Here is how it should be:


 {
   "settings": {
 "analysis": {
   "analyzer": {
 "my_analyzer": {
   "type": "custom",
   "tokenizer": "keyword",
   "lowercase": true
 }
   }
 }
   },
   "mappings": {
 "episode": {
   "_source": {
 "enabled": false
   },
   "properties": {
 "color": {
   "type": "string",
   "index": "not_analyzed"
 }
   }
 }
   }
 }

 On Wednesday, October 29, 2014 2:38:36 PM UTC-7, Jarrod C wrote:
>
> Hello, I am trying to run a query that distinguishes between spaces in 
> values.  Let's say I have a field called 'color' in my index.  Record 1 
> has 
> "color" : "metallic red" whereas Record 2 has "color": "metallic" 
>
> I want to search for 'metallic' but NOT retrieve 'metallic red', and a 
> search for 'metallic red' should not return 'red'.  
>
> The query below works for 'metallic red' but entering 'red' returns 
> both records.  The query also appears to be bypassing Analyzers specified 
> in the mappings (such as keyword) as they have no affect.  What should I 
> change it to instead?
>
> //Query
> GET /myindex/_search
> {
>"query": {
>"match_phrase": {
>   "color": "metallic red"
>}   
>}
> }
>
> //Data
> { "index" : { "_index" : "myindex", "_type" : "car", "_id" : "1" } }
> { "color" : "metallic red" }
> { "index" : { "_index" : "myindex", "_type" : "car", "_id" : "2" } }
> { "color" : "Metallic RED"}
> { "index" : { "_index" : "myindex", "_type" : "car", "_id" : "3" } }
> { "color" : "rEd" }
>

Re: search query / analyzer issue dealing with spaces

2014-10-30 Thread Mike Maddox
Jarrod,

I understand that you think the analyzer is not the problem. However, the 
original mapping wasn't correctly formatted so the color type was being 
analyzed using the default analyzer which would also cause the query to use 
the default analyzer as well. If you fix the syntax and then change color 
to use your analyzer it will work. One note, your mapping is also incorrect 
in that it references the "episode" type when you're actually adding data 
to the "car" type. Using your analyzer, it would be indexed as one lower 
case string. Now, your query does make a difference but if you have the 
analyzer set correctly, it will analyze the input string using the same 
analyzer that you set in the mapping. You would be better off just doing a 
term query or filter.

Mike


On Thursday, October 30, 2014 8:06:36 AM UTC-7, Jarrod C wrote:
>
> Thanks for the replies.  Unfortunately the analyzer portion is not the 
> problem (I pasted the original text in the midst of experimentation).  When 
> I had "analyzer" : "my_analyzer" in the mapping it didn't make a 
> difference.  I get results from the analysis query below so I assume it was 
> configured properly:
> GET /myindex/_analyze?analyzer=my_analyzer
>
> However, it does not seem to make a difference between using my custom 
> "my_analyzer" or using "keyword", or even using "index" : "not_analyzed". 
>  In each case, if I search for "red" I get back all results when in fact I 
> only want 1.
>
> Perhaps my query is the problem?
>
> On Wednesday, October 29, 2014 8:17:40 PM UTC-4, Mike Maddox wrote:
>>
>> Actually, change it to "index": "not_analyzed" as shown in the JSON.
>>
>> On Wednesday, October 29, 2014 5:13:46 PM UTC-7, Mike Maddox wrote:
>>>
>>> Actually, there are two problems here. Change the analyzer to the name 
>>> of your custom analyzer and you are missing a curly brace to close out the 
>>> "settings" property. Not sure why it doesn't cause an error but it 
>>> definitely doesn't create a mapping. You can check if there is a mapping by 
>>> looking at: http://localhost:9200/myindex/_mapping
>>>
>>> Here is how it should be:
>>>
>>>
>>> {
>>>   "settings": {
>>> "analysis": {
>>>   "analyzer": {
>>> "my_analyzer": {
>>>   "type": "custom",
>>>   "tokenizer": "keyword",
>>>   "lowercase": true
>>> }
>>>   }
>>> }
>>>   },
>>>   "mappings": {
>>> "episode": {
>>>   "_source": {
>>> "enabled": false
>>>   },
>>>   "properties": {
>>> "color": {
>>>   "type": "string",
>>>   "index": "not_analyzed"
>>> }
>>>   }
>>> }
>>>   }
>>> }
>>>
>>> On Wednesday, October 29, 2014 2:38:36 PM UTC-7, Jarrod C wrote:

 Hello, I am trying to run a query that distinguishes between spaces in 
 values.  Let's say I have a field called 'color' in my index.  Record 1 
 has 
 "color" : "metallic red" whereas Record 2 has "color": "metallic" 

 I want to search for 'metallic' but NOT retrieve 'metallic red', and a 
 search for 'metallic red' should not return 'red'.  

 The query below works for 'metallic red' but entering 'red' returns 
 both records.  The query also appears to be bypassing Analyzers specified 
 in the mappings (such as keyword) as they have no affect.  What should I 
 change it to instead?

 //Query
 GET /myindex/_search
 {
"query": {
"match_phrase": {
   "color": "metallic red"
}   
}
 }

 //Data
 { "index" : { "_index" : "myindex", "_type" : "car", "_id" : "1" } }
 { "color" : "metallic red" }
 { "index" : { "_index" : "myindex", "_type" : "car", "_id" : "2" } }
 { "color" : "Metallic RED"}
 { "index" : { "_index" : "myindex", "_type" : "car", "_id" : "3" } }
 { "color" : "rEd" }

 //Mapping (no effect for query)
 curl -XPUT 'http://localhost:9200/myindex/' -d '{
 "settings" : {
   "analysis": {
 "analyzer": {
   "my_analyzer":{
 "type": "custom",
 "tokenizer" : "keyword",
 "lowercase" : true
 }
 }
 },
 "mappings" : {
 "episode" : {
 "_source" : { "enabled" : false },
 "properties" : {
 "color" : { "type" : "string", "analyzer" : 
 "not_analyzed" }
 }
 }
 }
 }
 }'


 Thanks!

>>>


Is there any correlation between ttl and version conflicts?

2014-10-30 Thread Sebastian
Hi folks,

I'm using elasticsearch to store and analyse logs. Since I wanted old logs
to be deleted automatically, I added a ttl to my mapping. Now I sometimes get
version conflict exceptions when my (PHP) application tries to update a
timestamp in one of the fields. I'm trying to update the field using cURL,
and because of session locking it is impossible for a single user to
generate more than one curl_exec() call at a time. Furthermore, I do not
provide version information explicitly in the update query. There are two
elasticsearch servers handling the log index; however, all queries are handled
by only one of them. The other one is just a standby. The exceptions were
thrown only after setting a ttl, so I was wondering if there might be a
correlation? Can any of you shed some light on that matter?

Best regards, Sebastian
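
Independently of the ttl question, a common way to soften these errors is to
let the update retry when it hits a version conflict. A minimal sketch, with
illustrative index, type, and field names:

curl -XPOST 'http://localhost:9200/logs/event/1/_update?retry_on_conflict=3' -d '{
  "doc": { "last_seen": "2014-10-30T12:00:00Z" }
}'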



--
View this message in context: 
http://elasticsearch-users.115913.n3.nabble.com/Is-there-any-correlation-between-ttl-and-version-conflicts-tp4065542.html
Sent from the ElasticSearch Users mailing list archive at Nabble.com.



kibana 1 and newer elasticsearch?

2014-10-30 Thread Henrik Lynggaard Hansen
Hi

As you can see in a previous posting, we are working on upgrading from 
kibana 1 to kibana 3, but we are currently blocked on finding the right 
approach with regard to role-based authentication.

However, we have some stability issues with our current setup (kibana 1, 
elasticsearch 0.20.5, logstash 1.1.x), so I was wondering if I could just 
update the elasticsearch component. 

I know I should most likely change the logstash output from elasticsearch to 
elasticsearch-http, but would it otherwise work?
- Is kibana 1 compatible with newer elasticsearch? If so, how new can I go?
- Would I need to upgrade logstash, or is it enough to upgrade the output 
plugin?

Best regards
Henrik Lynggaard



Kibana3 and role based security?

2014-10-30 Thread Henrik Lynggaard Hansen
Hi 

I have a question regarding upgrading an old kibana 1 installation to kibana 
3, but there is one of our business requirements which I am not sure how to 
solve. In our organisation there is a requirement to filter log statements 
based on the user accessing them; this is mainly so people only get log 
statements that pertain to the projects they are working on, and to filter 
out any statements that contain business-sensitive information. 

In kibana 1 I have solved it by having each log source stamp a security tag 
like "_" and then using the role-based auth branch, so each user can only 
see log messages tagged with the values they have been cleared for.

I am trying to figure out how to replicate something like this in kibana 3 
but it doesn't have role based authentication out of the box. 

I have found two possible approaches by googling, but I am not sure which 
one is the way to go, and I am missing some documentation on how to do it 
practically.

The first approach is using filtered aliases: 
http://www.elasticsearch.org/blog/restricting-users-kibana-filtered-aliases/. 
However, it starts out by saying nginx is not a suitable proxy, but doesn't 
specify what to use instead.

If this is the official solution, is there then some better documentation 
like a how to guide or can someone in here provide some guidance ?
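
For reference, the filtered-alias approach mentioned above boils down to
creating one alias per user or project with a filter baked in, and pointing
that user's Kibana at the alias instead of the raw index. A sketch with
illustrative index, alias, and tag values:

curl -XPOST 'http://localhost:9200/_aliases' -d '{
  "actions": [
    {
      "add": {
        "index": "logstash-2014.10.30",
        "alias": "logs-project-a",
        "filter": { "term": { "security_tag": "project_a" } }
      }
    }
  ]
}'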

The second approach I have found is a rack 
application, https://github.com/christian-marie/kibana3_auth but it doesn't 
look like it has been updated in a long time... Does anyone know if it 
still works?


A side question: I see a kibana 4 beta has been released and it is back to 
being server based instead of just HTML files. Does kibana 4 address this 
issue? Will setting up role-based security be easier with it? Does anyone 
have a guide on how to do it with kibana 4?

Best regards
Henrik Lynggaard





aggregation giving inconsistent results

2014-10-30 Thread Jay Hilden
I'm running an aggregation and getting the top 5 results.  When I run the 
exact same aggregation asking for the top 50 results, I get totally different 
numbers.  I expect that when asking for 50, the top 5 should remain the same 
and an additional 45 should be added to the list.  That is not what's 
happening.

Note: I'm aggregating on the not_analyzed part of a multi-field, 
authInput.userName; I'm not sure if that makes a difference or not.

*Here is my query: * 

GET prodstarbucks/authEvent/_search
{
  "size": 0,
  "aggs": {
"users": {
  "terms": {
"field": "authInput.userName.userNameNotAnalyzed",
"size": 5
  }
}
  },
  "query": {
"filtered": {
  "query": {
"match_all": {}
  },
  "filter": {
"bool": {
  "must": [
{
  "range": {
"authResult.authEventDate": {
  "gte": "2014-10-01T00:00:00.000",
  "lte": "2014-10-31T00:00:00.000"
}
  }
}
  ]
}
  }
}
  }
}

*RESULT:*
{
   "took": 2171,
   "timed_out": false,
   "_shards": {
  "total": 5,
  "successful": 5,
  "failed": 0
   },
   "hits": {
  "total": 1090455,
  "max_score": 0,
  "hits": []
   },
   "aggregations": {
  "users": {
 "buckets": [
{
   "key": "3D64E4FD-6D25-4E77-966E-A0E059CFD31E",
   "doc_count": 91
},
{
   "key": "3982EC96-DB4C-4A22-AC64-2CFC09D52579",
   "doc_count": 68
},
{
   "key": "674E6691-8A46-4D34-BB31-B78780969311",
   "doc_count": 24
},
{
   "key": "64449480-77AC-4D64-B79D-DDB545BEE472",
   "doc_count": 23
},
{
   "key": "{7CB63FEE-709A-4AD5-AA16-2CFE3282FEE8}",
   "doc_count": 23
}
 ]
  }
   }
}

If I change the aggregation size to be 50, these are my top 5 results:
{
   "took": 2256,
   "timed_out": false,
   "_shards": {
  "total": 5,
  "successful": 5,
  "failed": 0
   },
   "hits": {
  "total": 1090501,
  "max_score": 0,
  "hits": []
   },
   "aggregations": {
  "users": {
 "buckets": [
{
   "key": "3D64E4FD-6D25-4E77-966E-A0E059CFD31E",
   "doc_count": 109
},
{
   "key": "3982EC96-DB4C-4A22-AC64-2CFC09D52579",
   "doc_count": 84
},
{
   "key": "F77E8291-1640-4C3F-8A1A-D6D955AB940A",
   "doc_count": 59
},
{
   "key": "6AC1ED48-8F91-400B-9353-172BB6E1823B",
   "doc_count": 53
},
{
   "key": "52CDF454-92C2-4C32-91F6-AF4F08C8F908",
   "doc_count": 52
},
...


The doc_counts are all different.  Can someone help explain this to me and 
let me know how I might get the correct doc_count even when only asking for 
the top 5 results.

Thank you!
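
A hedged note on what may be happening: terms aggregations are computed per
shard, and each shard only returns its own top candidates (shard_size of them,
which by default is only slightly larger than size), so the merged doc_counts
are approximate and can shift when size changes. The differing hits.total also
suggests documents were indexed between the two runs. Raising shard_size asks
each shard to return more candidate terms before the merge, which usually
stabilises the top buckets; for example:

GET prodstarbucks/authEvent/_search
{
  "size": 0,
  "aggs": {
    "users": {
      "terms": {
        "field": "authInput.userName.userNameNotAnalyzed",
        "size": 5,
        "shard_size": 200
      }
    }
  },
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "bool": {
          "must": [
            {
              "range": {
                "authResult.authEventDate": {
                  "gte": "2014-10-01T00:00:00.000",
                  "lte": "2014-10-31T00:00:00.000"
                }
              }
            }
          ]
        }
      }
    }
  }
}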



Re: search query / analyzer issue dealing with spaces

2014-10-30 Thread Jarrod C
Thanks for the replies.  Unfortunately the analyzer portion is not the 
problem (I pasted the original text in the midst of experimentation).  When 
I had "analyzer" : "my_analyzer" in the mapping it didn't make a 
difference.  I get results from the analysis query below so I assume it was 
configured properly:
GET /myindex/_analyze?analyzer=my_analyzer

However, it does not seem to make a difference between using my custom 
"my_analyzer" or using "keyword", or even using "index" : "not_analyzed". 
 In each case, if I search for "red" I get back all results when in fact I 
only want 1.

Perhaps my query is the problem?

On Wednesday, October 29, 2014 8:17:40 PM UTC-4, Mike Maddox wrote:
>
> Actually, change it to "index": "not_analyzed" as shown in the JSON.
>
> On Wednesday, October 29, 2014 5:13:46 PM UTC-7, Mike Maddox wrote:
>>
>> Actually, there are two problems here. Change the analyzer to the name of 
>> your custom analyzer and you are missing a curly brace to close out the 
>> "settings" property. Not sure why it doesn't cause an error but it 
>> definitely doesn't create a mapping. You can check if there is a mapping by 
>> looking at: http://localhost:9200/myindex/_mapping
>>
>> Here is how it should be:
>>
>>
>> {
>>   "settings": {
>> "analysis": {
>>   "analyzer": {
>> "my_analyzer": {
>>   "type": "custom",
>>   "tokenizer": "keyword",
>>   "lowercase": true
>> }
>>   }
>> }
>>   },
>>   "mappings": {
>> "episode": {
>>   "_source": {
>> "enabled": false
>>   },
>>   "properties": {
>> "color": {
>>   "type": "string",
>>   "index": "not_analyzed"
>> }
>>   }
>> }
>>   }
>> }
>>
>> On Wednesday, October 29, 2014 2:38:36 PM UTC-7, Jarrod C wrote:
>>>
>>> Hello, I am trying to run a query that distinguishes between spaces in 
>>> values.  Let's say I have a field called 'color' in my index.  Record 1 has 
>>> "color" : "metallic red" whereas Record 2 has "color": "metallic" 
>>>
>>> I want to search for 'metallic' but NOT retrieve 'metallic red', and a 
>>> search for 'metallic red' should not return 'red'.  
>>>
>>> The query below works for 'metallic red' but entering 'red' returns both 
>>> records.  The query also appears to be bypassing Analyzers specified in the 
>>> mappings (such as keyword) as they have no affect.  What should I change it 
>>> to instead?
>>>
>>> //Query
>>> GET /myindex/_search
>>> {
>>>"query": {
>>>"match_phrase": {
>>>   "color": "metallic red"
>>>}   
>>>}
>>> }
>>>
>>> //Data
>>> { "index" : { "_index" : "myindex", "_type" : "car", "_id" : "1" } }
>>> { "color" : "metallic red" }
>>> { "index" : { "_index" : "myindex", "_type" : "car", "_id" : "2" } }
>>> { "color" : "Metallic RED"}
>>> { "index" : { "_index" : "myindex", "_type" : "car", "_id" : "3" } }
>>> { "color" : "rEd" }
>>>
>>> //Mapping (no effect for query)
>>> curl -XPUT 'http://localhost:9200/myindex/' -d '{
>>> "settings" : {
>>>   "analysis": {
>>> "analyzer": {
>>>   "my_analyzer":{
>>> "type": "custom",
>>> "tokenizer" : "keyword",
>>> "lowercase" : true
>>> }
>>> }
>>> },
>>> "mappings" : {
>>> "episode" : {
>>> "_source" : { "enabled" : false },
>>> "properties" : {
>>> "color" : { "type" : "string", "analyzer" : 
>>> "not_analyzed" }
>>> }
>>> }
>>> }
>>> }
>>> }'
>>>
>>>
>>> Thanks!
>>>
>>



Re: Elasticsearch queries are slow.

2014-10-30 Thread Jason Wee
Will adding a node improve the query time, assuming everything else remains the same?

Jason

On Thu, Oct 30, 2014 at 8:54 PM, Alberto Paro wrote:

> How many shards? If you have too few shards, each shard becomes too big.
> Shards larger than about 10GB typically give poor performance in both
> writing and reading because of segment operations.
>
> hi,
>   Alberto
>
> Sent from my iPhone
>
> On 29 Oct 2014, at 12:02, Appasaheb Sawant wrote:
>
> I have a 7-node cluster. Each node has 16GB RAM, 8 CPU cores, and CentOS 6.
>
> Heap memory is 9000m.
>
>
>- 1 master (non-data)
>- 1 master-capable node (non-data)
>- 5 data nodes
>
> There are 10 indexes; one index is big, with 55 million documents and
> 254Gi (508Gi) of size on disk.
>
>
> Every second there are 5-10 new documents being indexed.
>
> The problem is that search is a bit slow, taking an average of 2000 to
> 5000 ms. Some queries take about 1 second.
>
> Why is that so?
>



Client nodes stop responding

2014-10-30 Thread Jeff Keller


We are running into an issue where our client nodes will stop responding to 
requests that require checking with other nodes. The following is our setup:

1 dedicated master node

2 dedicated data nodes

3 client nodes (master: false; data: false)

Everything is fine and happy for a while, and then after 30-45 minutes, one 
of the client nodes will stop sending responses to queries that require 
talking with other nodes. We are using the HTTP REST API. When things go 
badly, the following will hang:

curl -XGET 'http://localhost:9200/_search?size=1'

curl -XGET 'http://localhost:9200/_cat/thread_pool?v'

But the following will succeed (as it can just use metadata on the node 
itself):

curl -XGET 'http://localhost:9200/_cluster/health?pretty=1'

The problem node doesn’t seem to have any CPU or IO load. We don’t seem to 
be running into heap issues. netstat doesn’t report any connections in 
TIME_WAIT on any of the nodes. If we run queries from the problem client 
node at the command prompt directly at the data node, everything works. So, 
if we instead run:

curl -XGET 'http://data.node.ip:9200/_search?size=1'

It works as expected. This tells me there isn’t a socket exhaustion issue 
since we can make new connections from the problem node to other nodes.

We turned logging all the way up ("ALL") on one of the client nodes until it 
started failing, but there was nothing of interest in the logs. The last few 
minutes just had messages about the idle connection reaper running every 
minute.

We tried increasing the various connections_per_node values to:

transport.connections_per_node.bulk => 6

transport.connections_per_node.reg => 12

transport.connections_per_node.state => 2

transport.connections_per_node.ping => 2

This made no noticeable difference.


When one of the client nodes has started having problems, the cluster still 
sees the node as part of the cluster. When we kill the ES process on that 
node, all the other nodes then notice it went away as expected. When we 
restart ES on the problem node, it comes back up and everything works great 
for another 30-45 minutes.
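
A diagnostic idea rather than a fix: the next time one of the clients hangs,
capturing hot threads on that node may show what its transport threads are
stuck on. Run locally on the problem node:

curl -XGET 'http://localhost:9200/_nodes/_local/hot_threads?threads=5'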



cisco binary files

2014-10-30 Thread José De Araujo
Hi guys,

I was wondering if someone can tell me what the two binary files cisco and 
ciscoh, created by the elasticsearch user, are and what they are used for?

Thank you for your input.

Regards

José



use case and explication for newbie

2014-10-30 Thread Bastien Vlerick
Hi all,

I'm French, excuse my poor English. ;-)


In a few words, this is my case and my configuration:

   - nxlog for event logs, sending them to rsyslog in JSON
   - syslog on Linux and AIX servers
   - rsyslog to receive, and Elasticsearch for indexing/storage

Elasticsearch:

   - 1 node
   - index.number_of_shards: 1
   - index.number_of_replicas: 0
   

With nxlog, we sent a whole Windows event log to Elasticsearch around 
6:30pm. Memory and CPU use grew during the log transfer (about 10 minutes), 
then CPU and memory went back to normal. A few minutes later (~30mn), 
CPU/RAM usage grew again and stayed at 100% on one CPU from 7pm until 9am 
this morning, when I restarted the Elasticsearch service.
This morning Kibana wouldn't respond, and the Elasticsearch directory showed 
indexes from the years 2011, 2012 and 2013. I think it's because of old 
messages in the Windows event log.

After restarting Elasticsearch, everything came back to normal, but I lost 
all log messages between 7pm yesterday and 9am today.



This is what I think; can someone tell me if I'm wrong or right?


The server took in nearly 500MB of logs in 10 minutes with only 1 node and 1 
shard. Indexing started and took all night. I stopped it too soon. All the 
logs in the queue were lost when I restarted the service.

Do you have some tweaks or tips to optimize my configuration?

Regards

Bastien



Logstash ignoring characters from beginning of line

2014-10-30 Thread ES USER
I posted this in the logstash group as well but since this is a pretty big 
bug I thought I would link to it here.


https://groups.google.com/forum/#!topic/logstash-users/pk-BEUcTuyM



Re: Elasticsearch queries are slow.

2014-10-30 Thread Alberto Paro
How many shards? If you have too few shards, each shard becomes too big. Shards 
larger than about 10GB typically give poor performance in both writing and 
reading because of segment operations. 

hi,
  Alberto

Sent from my iPhone

> On 29 Oct 2014, at 12:02, Appasaheb Sawant wrote:
> 
> I have a 7-node cluster. Each node has 16GB RAM, 8 CPU cores, and CentOS 6.
> 
> Heap memory is 9000m.
> 
> 1 master (non-data)
> 1 master-capable node (non-data)
> 5 data nodes
>
> There are 10 indexes; one index is big, with 55 million documents and 
> 254Gi (508Gi) of size on disk.
> 
> 
> Every second there are 5-10 new documents being indexed.
> 
> The problem is that search is a bit slow, taking an average of 2000 to 5000 ms. 
> Some queries take about 1 second.
> 
> Why is that so?



Re: Percolator with lookup terms filter not working?

2014-10-30 Thread Alexander Jiteg
It seems that if I index a document with the given type first, it works.  That 
is probably why my second run works but not the first.

On Thursday, October 30, 2014 9:59:49 AM UTC+1, Alexander Jiteg wrote:
>
> Hi!
>
> I'm trying to use a lookup terms filter for percolation but for some 
> reason I'm not getting any matches when percolating documents that should 
> match the registered percolator.
>
> Example: 
>
> https://gist.github.com/alexndr79/760314b8b5f49157a839#file-percolation_with_terms_lookup-txt
>
> I have noted that if I try to index the same percolator a second time 
> after the first percolation (that gives not matches) it seems that 
> following percolations will give the expected result. 
>
> I'm running ES 1.3.4. 
>
> Suggestions?
>
> /Alex
>



Search Template with Count API

2014-10-30 Thread Sofiane Cherchalli


I am using Search Templates for performing queries.

How can I get the count with a search template?

Example:

PUT blogs/post/1
{
  "title": "title1",
  "desc": "desc1"
}

PUT /blogs/post/2
{
  "title": "title2",
  "desc": "desc2"
}

PUT /_search/template/search_posts
{
  "template": {
"query": {
  "match_all": {}
}
  }
}

GET /blogs/post/_search/template
{
  "template": {
"id": "search_posts"
  },
  "params": {}
}

GET blogs/post/_search
{
  "query": {
"match_all": {}
  }
}

GET blogs/post/_count
{
  "query": {
"match_all": {}
  }
}
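
One workaround I can think of (a sketch - "count_posts" is just a name made up 
for this example): store a template that sets "size" to 0 and read "hits.total" 
from the template-search response, since every search response carries the 
total hit count.

PUT /_search/template/count_posts
{
  "template": {
    "query": {
      "match_all": {}
    },
    "size": 0
  }
}

GET /blogs/post/_search/template
{
  "template": {
    "id": "count_posts"
  },
  "params": {}
}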



Kibana: Create Term panels with date fields (Weekday, Month, Year, etc)?

2014-10-30 Thread Gabriel Birke
I have data with a field called "created_on" of "type": "date" in 
Elasticsearch (the format is date_time_no_millis). Is it possible to create 
term panels that show "year", "month" or "weekday" terms?

If that's not possible, I imagine I could create a mapping for the 
"created_on" field that defines sub-fields like "created_on.month", 
"created_on.weekday", etc. Can anyone give me an example of what such a 
mapping would look like?
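
As far as I know, a mapping alone cannot compute month or weekday from a date. 
If that turns out to be the case, a fallback sketch is to add the derived 
values as separate not_analyzed fields when indexing and map them explicitly 
(index, type, and field names below are only illustrative):

PUT /events
{
  "mappings": {
    "event": {
      "properties": {
        "created_on":         { "type": "date",   "format": "date_time_no_millis" },
        "created_on_month":   { "type": "string", "index": "not_analyzed" },
        "created_on_weekday": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}

Term panels could then point at created_on_month and created_on_weekday directly.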



Re: encoding is longer than the max length 32766

2014-10-30 Thread Rotem
+1 on this question. 

If the error is generated because of a not_analyzed field, how is it 
possible to instruct ES to drop these values instead of failing the request?
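
One option that I believe exists for not_analyzed string fields is 
"ignore_above", which skips values longer than the given character count 
instead of indexing them - a sketch with made-up index, type, and field names 
(10922 is roughly 32766 bytes divided by the 3-byte UTF-8 worst case; depending 
on the version this may have to be set before the field is first indexed):

PUT /myindex/_mapping/mytype
{
  "properties": {
    "big_field": {
      "type": "string",
      "index": "not_analyzed",
      "ignore_above": 10922
    }
  }
}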


On Tuesday, July 1, 2014 10:22:54 PM UTC+3, Andrew Mehler wrote:
>
> For not-analyzed fields, is there a way of capturing the old behavior? 
>  From what I can tell, you need to specify a tokenizer to have a token 
> filter.
>
> On Tuesday, June 3, 2014 12:18:37 PM UTC-4, Karel Minařík wrote:
>>
>> This is actually a change in Lucene -- previously, the long term was 
>> silently dropped, now it raises an exception, see Lucene ticket 
>> https://issues.apache.org/jira/browse/LUCENE-5710
>>
>> You might want to add a `length` filter to your analyzer (
>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-length-tokenfilter.html#analysis-length-tokenfilter
>> ).
>>
>> All in all, it hints at some strange data, because such an "immense" term 
>> probably shouldn't be in the index in the first place.
>>
>> Karel
>>
>> On Thursday, May 29, 2014 10:47:37 PM UTC+2, Jeff Dupont wrote:
>>>
>>> We’re running into a peculiar issue when updating indexes with content 
>>> for the document.
>>>
>>>
>>> "document contains at least one immense term in (whose utf8 encoding is 
>>> longer than the max length 32766), all of which were skipped. please 
>>> correct the analyzer to not produce such terms”
>>>
>>>
>>> I’m hoping that there’s a simple fix or setting that can resolve this.
>>>
>>



Re: ELASTIC SEARCH - GROUP BY QUERY ON ARRAY

2014-10-30 Thread Raja Sekhar Bhamidipati
Hi Samanth,

I have started working with Elasticsearch recently, and I'm trying to get a 
result exactly like what you have tried. But the problem is that I'm getting 
the same sum value for all buckets (the total price of each document where 
the prod_type exists).

For your example, the search JSON I wrote is:

{
  "aggs": {
    "prod_type": {
      "terms": {
        "field": "orders.prod_type"
      },
      "aggs": {
        "total_price": {
          "sum": {
            "field": "price"
          }
        }
      }
    }
  }
}

The result I got is

 "aggregations": {
  "prod_type": {
 "buckets": [
{
   "key": "glp",
   "doc_count": 2,
   "total_price": {
  "value": 400
   }
},
{
   "key": "olp",
   "doc_count": 2,
   "total_price": {
  "value": 400
   }
}
 ]
  }
   }
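
One thing I am going to try (not sure whether it is the cause): since "orders" 
is a plain object array it gets flattened, so the sum probably runs over every 
order in each matching document rather than only the orders of that prod_type. 
A nested mapping plus a nested aggregation might give per-prod_type sums - a 
sketch with made-up index and type names:

PUT /sales
{
  "mappings": {
    "company_orders": {
      "properties": {
        "company": { "type": "string", "index": "not_analyzed" },
        "orders": {
          "type": "nested",
          "properties": {
            "order_no":  { "type": "string", "index": "not_analyzed" },
            "prod_type": { "type": "string", "index": "not_analyzed" },
            "price":     { "type": "long" }
          }
        }
      }
    }
  }
}

GET /sales/company_orders/_search?search_type=count
{
  "aggs": {
    "all_orders": {
      "nested": { "path": "orders" },
      "aggs": {
        "prod_type": {
          "terms": { "field": "orders.prod_type" },
          "aggs": {
            "total_price": { "sum": { "field": "orders.price" } }
          }
        }
      }
    }
  }
}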

Could you please help me out with this?

Regards,
Raja


On Wednesday, July 9, 2014 9:21:11 AM UTC+5:30, K.Samanth Kumar Reddy wrote:
>
> Thank you very much. Its working.
>
> Thanks,
> Samanth
>
> On Tuesday, July 8, 2014 4:23:18 PM UTC+5:30, K.Samanth Kumar Reddy wrote:
>>
>> Hi,
>>
>> I have been working with Elasticsearch for the last 2 months. It really 
>> provides awesome search capabilities, nicely structured JSON documents, etc.
>> Currently I am stuck on the problem of how to write a group-by query 
>> and get the data.
>>
>> Ex: In this example, company and prod_type are defined as 'not_analyzed'.
>>
>> Example Documents:
>>
>> {"company":"ABC","orders":[{"order_no":"OL1", "prod_type" : "OLP", 
>> "price":20}, {"order_no":"OL2", "prod_type" : "OLP", "price":50}, 
>> {"order_no":"OL3", "prod_type" : "GLP", "price":100} ]}
>>
>> {"company":"XYZ","orders":[{"order_no":"OL10", "prod_type" : "GLP", 
>> "price":50}, {"order_no":"OL20", "prod_type" : "OLP", "price":80}, 
>> {"order_no":"OL30", "prod_type" : "GLP", "price":100} ]}
>>
>>
>> My requirement: I want the Elasticsearch query to get the count and 
>> sum(price) grouped by prod_type.
>> SQL comparison query: SELECT COUNT(*), SUM(PRICE) FROM TABLE_NAME GROUP BY 
>> PROD_TYPE
>>
>>
>> Can anyone please help me this? 
>>
>> Please let me know if you need more information.
>>
>>
>> Thanks,
>> Samanth
>>
>



Elasticsearch server stops due to java.io.IOException break

2014-10-30 Thread Vallabh Bothre

Dear All,

I am facing problems with Elasticsearch.
I am unable to get results; I checked the log files and found the following 
error:

*ERROR:*
[2014-10-30 08:52:46,971][DEBUG][action.search.type   ] [Lianda] [135] 
Failed to execute fetch phase
[Error: Runtime.getRuntime().exec("cd").getInputStream(): Cannot run 
program "cd": java.io.IOException: error=2, No such file or directory]
[Near : {... w InputStreamReader(Runtime.getRuntime().exec("cd" }]

*Below are the versions I am using:*
elasticsearch version: 0.90.5
java version: 1.6.0_33 64 bit
plugin installed: phonetic

The strange thing is that whenever I get this error, I restart the 
Elasticsearch server and it works again.
So I think something is getting overloaded.

Any help is very much appreciated.

Thanks,
Vallabh



Re: Differences about label your fields with or without @ in Kibana

2014-10-30 Thread Iván Fernández Perea
Hi Brian,

thank you very much. I didn't understand why the @ symbol was used before 
fields in Kibana; it is perfectly clear now. 

I was simply saving some documents from Spark Streaming to Elasticsearch, 
and I was using Kibana to show the streaming data in a histogram panel. My 
documents didn't have the @ symbol before their field names, and that's why 
the default timepicker wasn't using my timestamp field. As you said, I think 
it's better not to add the @ symbol to fields and simply change the setting 
in Kibana. That way it works perfectly.

Thank you again!!
Iván.

On Wednesday, October 29, 2014 at 21:10:09 UTC+1, Brian wrote:
>
> The @timestamp field, created by logstash by default, has always worked 
> perfectly out-of-the-box with Kibana's time picker and also with curator. 
> Perhaps if you posted one document from your Elasticsearch response it 
> might help.
>
> But I don't recommend that you create your own fields with @ as a prefix 
> character. Straying a bit from your question, I created some R scripts to 
> analyze and plot things in a way that neither Kibana nor Splunk can. What 
> I've noticed is that when I export as CSV, either from Elasticsearch or 
> from Splunk, and then import into R's CSV reader, I notice that:
>
> 1. Elasticsearch's @timestamp field becomes the X.timestamp field in R.
>
> 2. Splunk's _time field becomes the X_time field in R.
>
> Which is one very good reason not to add a @ or _ to the front of your own 
> fields. It's a lot of extra hard-coded processing to figure out the source 
> and then choose the field using R when it's not the same name as the field 
> from Elasticsearch.
>
> But I digress.
>
> Brian
>
> On Wednesday, October 29, 2014 1:20:10 PM UTC-4, Iván Fernández Perea 
> wrote:
>>
>> I was using Kibana and wondering which are the differences between using 
>> or not  an @ sign before field names. It seems that the default (as in 
>> timepicker in the dashboard settings) is using the @ before a field but it 
>> doesn't seem to work in my case. I need to set the Time Field in the 
>> Timepicker with a field name and no @ before it to make it work.
>>
>> Thank you,
>> Iván.
>>
>



Percolator with lookup terms filter not working?

2014-10-30 Thread Alexander Jiteg
Hi!

I'm trying to use a lookup terms filter for percolation but for some reason 
I'm not getting any matches when percolating documents that should match 
the registered percolator.

Example: 
https://gist.github.com/alexndr79/760314b8b5f49157a839#file-percolation_with_terms_lookup-txt
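
For readers who don't follow the link, a minimal sketch of the kind of setup 
described there (index, type, and field names here are illustrative, not the 
ones from the gist):

# Document holding the terms to look up
PUT /lookup/userlists/1
{
  "users": [ "alice", "bob" ]
}

# Percolator query using a terms lookup filter
PUT /myindex/.percolator/allowed-users
{
  "query": {
    "filtered": {
      "query":  { "match_all": {} },
      "filter": {
        "terms": {
          "user": {
            "index": "lookup",
            "type":  "userlists",
            "id":    "1",
            "path":  "users"
          }
        }
      }
    }
  }
}

# Percolate a document that should match
GET /myindex/message/_percolate
{
  "doc": { "user": "alice" }
}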

I have noted that if I try to index the same percolator a second time after 
the first percolation (which gives no matches), it seems that subsequent 
percolations will give the expected result. 

I'm running ES 1.3.4. 

Suggestions?

/Alex



Elastic Search Returns inconsistent results when running on a single node.

2014-10-30 Thread Yasaswani Koduri
We have indexed data using Elasticsearch on a single node, and we have a
thread running in the background that updates the index with recent changes.

Now we are using the Elasticsearch APIs to run the search query. However, the
search query returns inconsistent results: on rerunning the same query
continuously, sometimes we get 0 results, sometimes partial results, and
sometimes complete results. We have not set any timeout for the queries.

We are facing this issue in a cluster where only one node is an indexing
node.

This is happening on one of the production environments.


Options tried out:
- Refreshed the index - did not work.
- Deleted the index and rebuilt it completely - still the same issue of
  inconsistent results.

Could you let us know what might be causing this issue?






Elasticsearch MultiMatchQuery field boost not contributing to the score

2014-10-30 Thread Sarath
Hi,

I have indexed a few documents and I'm using a multi_match query to find 
matches based on the field 'last_name'. I found that the multi_match query 
returns the same score with and without a field boost. Can someone explain 
this behavior? I am using Elasticsearch 1.3.2.

PUT /megacorp/employee/1
{
  "first_name": "John",
  "last_name": "Smith",
  "age": 25,
  "about": "I love to go rock climbing",
  "interests": [ "sports", "music" ]
}

PUT /megacorp/employee/2
{
  "first_name": "Jane",
  "last_name": "Smith",
  "age": 32,
  "about": "I like to collect rock albums",
  "interests": [ "music" ]
}

PUT /megacorp/employee/3
{
  "first_name": "Douglas",
  "last_name": "Fir",
  "age": 35,
  "about": "I like to build cabinets",
  "interests": [ "forestry" ]
}

POST /megacorp/employee/_search
{
  "query": {
    "multi_match": {
      "query": "Smith",
      "fields": [ "last_name^5.0" ]
    }
  }
}

POST /megacorp/employee/_search
{
  "query": {
    "multi_match": {
      "query": "Smith",
      "fields": [ "last_name" ]
    }
  }
}
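
For comparison, here is a variant where the boost should actually shift the 
weighting, because more than one field competes. My (possibly wrong) 
understanding is that with a single field in the list the boost is cancelled 
out by Lucene's query normalization, so the absolute score does not change:

POST /megacorp/employee/_search
{
  "query": {
    "multi_match": {
      "query": "Smith",
      "fields": [ "last_name^5.0", "first_name" ]
    }
  }
}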



Thanks,
Sarath



Re: Elastic Search Returns inconsistent results when running on a single node.

2014-10-30 Thread Yasaswani Koduri
The issue has been resolved. We were setting a timeout value of zero somewhere
in our code; this caused the issue.
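
For anyone hitting the same symptom: a zero timeout lets searches give up before
collecting hits, which would explain the partial or empty results. A sketch of
an explicit, non-zero timeout on the REST search API (the index name is
illustrative):

GET /myindex/_search
{
  "timeout": "10s",
  "query": {
    "match_all": {}
  }
}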






curator doesn't find alias on index when unaliasing

2014-10-30 Thread Johan Guldmyr
Hi,

curator 2.0.2
ES 1.3.4
rhel6

If I run this:

curator --port 9199 alias --prefix logstash- --alias logs 
--unalias-older-than 30

I get some nice messages like:

"Index logstash-2013.12.31 does not exist in alias logs; skipping."

http://host:9199/_aliases?pretty shows

"logstash-2013.12.31" : {
  "aliases" : {
"logs" : {}

But http://host:9199/_alias/logs does not have logstash-2013.12.31 in there.

So it seems to me that the alias has the index, but the index doesn't have 
the alias?
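
In the meantime the alias can be removed by hand with the aliases API - a 
sketch using the index and alias names from above:

POST /_aliases
{
  "actions": [
    { "remove": { "index": "logstash-2013.12.31", "alias": "logs" } }
  ]
}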

Help =)

// Johan
