Kibana - Changing text color based on content

2014-05-08 Thread Chris Laplante
We have some data that uses the common Red/Yellow/Green verbiage. When 
presenting this data in a Kibana table panel, I would like to modify the 
color of the text. I notice Marvel does this for the global status.

Anyone doing anything like this, or have ideas where to inject it? CSS?

Thanks,

-Chris



custom stemmer with elasticsearch / tire / rails

2014-05-08 Thread Oto Iashvili
Hi,

I'm looking to add a new stemmer to Elasticsearch for use with Tire / Rails.

I've found a Java file 
(https://github.com/emilis/PolicyFeed/blob/master/src/search/java/org/tartarus/snowball/ext/LithuanianStemmer.java), 
created a jar from it, and put it in Elasticsearch's lib folder.

Here is my Rails file:


tire.settings :analysis => {
  :filter => {
    "lt_stemmer" => {
      "type"       => "stemmer",
      "name"       => "lithuanian",
      "rules_path" => "lt_stemmer.jar"
    }
  },
  :analyzer => {
    "lithuanian" => {
      "type"      => "snowball",
      "tokenizer" => "keyword",
      "filter"    => ["lowercase", "lt_stemmer"]
    }
  }
} do
  mapping do
    indexes :titre_lt, :analyzer => "lithuanian"
  end
end



I managed to create the index and to index data, but when I test, it seems the 
rules in my jar file are not being used.

curl -XGET 'localhost:9200/lituanieindex/_analyze?analyzer=lithuanian' -d 
'smulkių, dalinių, pilnų krovinių pervežimas nuosavais arba partnerių 
vilkikais su standartinėmis 92 m3 puspriekabėmis ir 120 m3 autotraukiniais;'



{"tokens":[{"token":"smulkių","start_offset":0,"end_offset":7,"type":"","position":1},{"token":"dalinių","start_offset":9,"end_offset":16,"type":"","position":2},{"token":"pilnų","start_offset":18,"end_offset":23,"type":"","position":3},{"token":"krovinių","start_offset":24,"end_offset":32,"type":"","position":4},{"token":"pervežima","start_offset":33,"end_offset":43,"type":"","position":5},{"token":"nuosavai","start_offset":44,"end_offset":53,"type":"","position":6},{"token":"arba","start_offset":54,"end_offset":58,"type":"","position":7},{"token":"partnerių","start_offset":59,"end_offset":68,"type":"","position":8},{"token":"vilkikai","start_offset":69,"end_offset":78,"type":"","position":9},{"token":"su","start_offset":79,"end_offset":81,"type":"","position":10},{"token":"standartinėmi","start_offset":82,"end_offset":96,"type":"","position":11},{"token":"92","start_offset":97,"end_offset":99,"type":"","position":12},{"token":"m3","start_offset":100,"end_offset":102,"type":"","position":13},{"token":"puspriekabėmi","start_offset":103,"end_offset":117,"type":"","position":14},{"token":"ir","start_offset":118,"end_offset":120,"type":"","position":15},{"token":"120","start_offset":121,"end_offset":124,"type":"","position":16},{"token":"m3","start_offset":125,"end_offset":127,"type":"","position":17},{"token":"autotraukiniai","start_offset":128,"end_offset":143,"type":"","position":18}]}

What am I doing wrong?

Thanks for your help.
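
Two quick checks that can help narrow down whether the custom filter was picked 
up at all (a sketch only; the index name is taken from the test above, and the 
endpoints are the standard ES 0.90/1.x ones):

curl -XGET 'localhost:9200/lituanieindex/_settings?pretty'

curl -XGET 'localhost:9200/lituanieindex/_analyze?tokenizer=keyword&filters=lowercase,lt_stemmer&pretty' -d 'krovinių pervežimas'

The first shows the analysis settings the index was actually created with; the 
second runs the token filter chain directly, bypassing the analyzer definition, 
so the two outputs can be compared.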



Re: Help with ES 1.x percolator query plz

2014-05-08 Thread JGL
Can anybody help plz?

On Wednesday, May 7, 2014 6:29:35 PM UTC+12, JGL wrote:
>
> Can anybody help plz?
>
> On Tuesday, May 6, 2014 11:53:32 AM UTC+12, JGL wrote:
>>
>>
>> Can anybody help plz?
>>
>> On Monday, May 5, 2014 10:24:09 AM UTC+12, JGL wrote:
>>>
>>>
>>> Hi Martjin,
>>>
>>> The percolator query in the 1st post above is what we registered with the 
>>> percolator, and it kind of works: it consolidates all the IDs into one query 
>>> string for a match query, which does not seem like a very elegant solution to us. 
>>>
>>> {
>>>   "_index" : "my_idx",
>>>   "_type" : ".percolator",
>>>   "_id" : "my_query_id",
>>>   "_score" : 1.0, 
>>>   "_source" : {
>>> "query":{
>>>"match":{
>>>   "id":{
>>>   "query":"id1 id2 id3",
>>>   "type":"boolean"
>>>}
>>>}
>>> }
>>>   }
>>> }
>>>
>>>
>>> Another issue is that the above solution is not quite accurate when the 
>>> IDs are UUIDs. For example, if the query we register is the following:
>>>
>>> {
>>>   "_index" : "my_idx",
>>>   "_type" : ".percolator",
>>>   "_id" : "my_query_id",
>>>   "_score" : 1.0, 
>>>   "_source" : {
>>> "query":{
>>>"match":{
>>>   "id":{
>>>   
>>> "query":"1aa808dc-48f0-4de3-8978-*a0293d54b852* 
>>> 6b256fd1-cd04-4e3c-8f38-aaa87ac2220d 1234fd1a-cd04-4e3c-8f38-aaa87142380d",
>>>   "type":"boolean"
>>>}
>>>}
>>> }
>>>   }
>>> }
>>>
>>>
>>> then the percolator returns the above query as a match if the document we 
>>> try to percolate is "{"doc" : {"id":"1aa808dc-48f0-4de3-8978-*00293d54b852*"}}", 
>>> even though we are expecting a no-match response here, since the id in the 
>>> document does not match any ID in the query string. 
>>>
>>> Such false positives, according to the experiments we ran, happen when the 
>>> doc UUID is almost the same as one of the IDs in the query except for the 
>>> last part of the ID. Is there an explanation for this behavior of 
>>> Elasticsearch?
>>>
>>> Our other question is whether there is any way to put the UUID list, as a 
>>> list, into a query that works with the percolator, like we can do for 
>>> inQuery or inFilter. We tried registering an inQuery and a query wrapping an 
>>> inFilter. Neither of them works with the percolator; it seems the percolator 
>>> only works with the MatchQuery, in which we cannot pass the UUID list as a 
>>> list.
>>>
>>> For example, the following two queries we tried do not work with the 
>>> percolator:
>>>
>>> {
>>>   "_index" : "my_idx",
>>>   "_type" : ".percolator",
>>>   "_id" : "inQuery",
>>>   "_score" : 1.0,
>>>   "_source" : {
>>>     "query" : { "terms" : { "id" : ["1aa808dc-48f0-4de3-8978-a0293d54b852", "6b256fd1-cd04-4e3c-8f38-aaa87ac2220d"] } }
>>>   }
>>> }
>>>
>>> {
>>>   "_index" : "my_idx",
>>>   "_type" : ".percolator",
>>>   "_id" : "inFilterQ",
>>>   "_score" : 1.0,
>>>   "_source" : {
>>>     "query" : {
>>>       "filtered" : {
>>>         "query"  : { "match_all" : {} },
>>>         "filter" : { "terms" : { "id" : ["1aa808dc-48f0-4de3-8978-a0293d50b852", "6b256fd1-cd04-4e3c-8f38-aaa87ac2220d"] } }
>>>       }
>>>     }
>>>   }
>>> }
>>>
>>> Thanks for your help!
>>>
>>> Jason
>>>
>>>
>>> On Friday, May 2, 2014 7:34:47 PM UTC+12, Martijn v Groningen wrote:

 Hi,

 Can you share the stored percolator queries and the percolate request 
 that you were initially trying with, but didn't work?

 Martijn


 On 2 May 2014 11:14, JGL  wrote:

> Can anybody help plz?
>
> -- 
> You received this message because you are subscribed to the Google 
> Groups "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send 
> an email to elasticsearc...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/elasticsearch/4ee60836-1922-43e0-8d9b-64ef9bb0b00a%40googlegroups.com
> .
>
> For more options, visit https://groups.google.com/d/optout.
>



 -- 
 Met vriendelijke groet,

 Martijn van Groningen 

>>>
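
For reference, the raw 1.x mechanics for registering a terms-based query with 
the percolator and percolating a document against it look roughly like this (a 
sketch only; the index and field names are taken from the thread, the document 
type "my_type" is hypothetical, and it assumes the id field is mapped as 
not_analyzed so that whole UUIDs are indexed as single terms):

curl -XPUT 'localhost:9200/my_idx/.percolator/my_query_id' -d '{
  "query" : {
    "terms" : {
      "id" : ["1aa808dc-48f0-4de3-8978-a0293d54b852", "6b256fd1-cd04-4e3c-8f38-aaa87ac2220d"]
    }
  }
}'

curl -XGET 'localhost:9200/my_idx/my_type/_percolate' -d '{
  "doc" : { "id" : "1aa808dc-48f0-4de3-8978-a0293d54b852" }
}'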


Re: Official .NET client

2014-05-08 Thread Paige Cook
Elasticsearch.Net and NEST are the official .NET clients for Elasticsearch. 
You can read about them in the recent blog post introducing Elasticsearch.Net 
and NEST 1.0.0-beta1.

Both aggregations and integrated failover are supported in Elasticsearch.Net 
and NEST.

On Tuesday, May 6, 2014 5:51:25 AM UTC-4, Loïc Wenkin wrote:
>
> Hello everybody,
>
> I watched a video 2 or 3 months ago (Facebook and Elasticsearch), and in 
> this video it was said that an official .NET client was planned. Do you have 
> any news about it? Is there a roadmap (or at least an idea of a release date 
> (2014, 2015, ...)) for this client?
> Currently, I am using PlainElastic.Net, which is a great client (I like the 
> idea of working with strings directly accessible to the user, allowing us to 
> easily debug queries), but some features are missing (I'm thinking of 
> aggregations, for example, or some kind of integrated failover system).
>
> Any news about it would be appreciated :)
>
> Regards,
> Loïc
>



Re: Retrieve 6 products for top 3 users and each one has 2 with highest matching score

2014-05-08 Thread Yao Li
What about a nested or parent/child query? How would I achieve that? 

On Thursday, May 8, 2014 4:45:36 PM UTC-7, Yao Li wrote:
>
> I have a collection of products which belong to a few users, like: 
>
> [ 
>   { id: 1, user_id: 1, description: "blabla...", ... }, 
>   { id: 2, user_id: 2, description: "blabla...", ... }, 
>   { id: 3, user_id: 2, description: "blabla...", ... }, 
>   { id: 4, user_id: 3, description: "blabla...", ... }, 
>   { id: 5, user_id: 4, description: "blabla...", ... }, 
>   { id: 6, user_id: 2, description: "blabla...", ... }, 
>   { id: 7, user_id: 3, description: "blabla...", ... }, 
>   { id: 8, user_id: 4, description: "blabla...", ... }, 
>   { id: 9, user_id: 2, description: "blabla...", ... }, 
>   { id: 10, user_id: 3, description: "blabla...", ... }, 
>   { id: 11, user_id: 4, description: "blabla...", ... }, 
>   ... 
> ] 
>
> (the real data has more fields, but most important ones like 1st for 
> product id, 2nd for user id, 3rd for product description.) 
>
> I'd like to retrieve 2 products for top 3 users whose products have 
> highest matching score (matching condition is description includes 
> "fashion" and some other keywords, in this case just use "fashion" as 
> example) : 
>
> [ 
>   { id: 2, user_id: '2', description: "blabla...", ..., _score: 100}, 
>   { id: 3, user_id: '2', description: "blabla...", ..., _score: 95}, 
>   { id: 4, user_id: '3', description: "blabla...", ..., _score: 90}, 
>   { id: 5, user_id: '4', description: "blabla...", ..., _score: 80}, 
>   { id: 7, user_id: '3', description: "blabla...", ..., _score: 70}, 
>   { id: 8, user_id: '4', description: "blabla...", ..., _score: 65}, 
>   ... 
> ] 
>
> I have 3 possible ways to try: 
>
> 1. use term facet to get unique user_id in nested query, then use them for 
> the user id range of outside query which focus on match description with 
> keywords like "fashion". 
>
> I don't know how to implement it in ES (stuck in facet terms iteration and 
> construct user_id range with subquery with facet), try in sql like: 
>
> select id, user_id, description 
> from product 
> where user_id in ( 
>   select distinct user_id 
>   from product 
>   limit 3) 
> order by _score 
> limit 6 
> /* 6  = 2 * 3 */ 
>
> But it cannot guarantee top 6 products coming from 3 different user. 
>
> Also, according to the following two links, it seems facet terms specific 
> information iteration feature has not been implemented in ES so far. 
>
> http://elasticsearch-users.115913.n3.nabble.com/Terms-stats-facet-Additional-information-td4035199.html
>
> https://github.com/elasticsearch/elasticsearch/issues/256
>
> 2. Query with the description field matched against keywords like 
> "fashion", at the same time do statistics for each user_id with an aggregation 
> and limit the count to 2, then pick the top 6 products with the highest 
> matching score. 
>
> I still don't know how to implement in ES. 
>
> 3. use brute force with multiple queries until find top 3 users, each one 
> has 2 products with highest matching scores. 
>
> I mean use a hash map, key is user_id, value is how many times it appears. 
> Query with matching keywords first, then iterate immediate results and 
> check hash map, if value is less than 2, add to final result product list, 
> otherwise skip it. 
>
> Please let me know if you can figure it out in the above 1st or 2nd way. 
>
> Appreciate in advance. 
> Yao
>
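
For the parent/child idea, the basic ES 1.x wiring looks roughly like this (a 
sketch only, with hypothetical index and type names: it finds users that have 
at least one matching product, but by itself it does not return the top 2 
products per user, so it only covers part of the problem):

curl -XPUT 'localhost:9200/shop' -d '{
  "mappings" : {
    "user"    : {},
    "product" : { "_parent" : { "type" : "user" } }
  }
}'

curl -XGET 'localhost:9200/shop/user/_search' -d '{
  "query" : {
    "has_child" : {
      "type"  : "product",
      "query" : { "match" : { "description" : "fashion" } }
    }
  }
}'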



Retrieve 6 products for top 3 users and each one has 2 with highest matching score

2014-05-08 Thread Yao Li
I have a collection of products which belong to a few users, like: 

[ 
  { id: 1, user_id: 1, description: "blabla...", ... }, 
  { id: 2, user_id: 2, description: "blabla...", ... }, 
  { id: 3, user_id: 2, description: "blabla...", ... }, 
  { id: 4, user_id: 3, description: "blabla...", ... }, 
  { id: 5, user_id: 4, description: "blabla...", ... }, 
  { id: 6, user_id: 2, description: "blabla...", ... }, 
  { id: 7, user_id: 3, description: "blabla...", ... }, 
  { id: 8, user_id: 4, description: "blabla...", ... }, 
  { id: 9, user_id: 2, description: "blabla...", ... }, 
  { id: 10, user_id: 3, description: "blabla...", ... }, 
  { id: 11, user_id: 4, description: "blabla...", ... }, 
  ... 
] 

(The real data has more fields, but the most important ones are the 1st for 
the product id, the 2nd for the user id, and the 3rd for the product description.) 

I'd like to retrieve 2 products for each of the top 3 users whose products have 
the highest matching score (the matching condition is that the description 
includes "fashion" and some other keywords; here, just use "fashion" as an 
example): 

[ 
  { id: 2, user_id: '2', description: "blabla...", ..., _score: 100}, 
  { id: 3, user_id: '2', description: "blabla...", ..., _score: 95}, 
  { id: 4, user_id: '3', description: "blabla...", ..., _score: 90}, 
  { id: 5, user_id: '4', description: "blabla...", ..., _score: 80}, 
  { id: 7, user_id: '3', description: "blabla...", ..., _score: 70}, 
  { id: 8, user_id: '4', description: "blabla...", ..., _score: 65}, 
  ... 
] 

I have 3 possible ways to try: 

1. Use a terms facet to get the unique user_ids in a nested query, then use 
them as the user id range of an outer query which focuses on matching the 
description against keywords like "fashion". 

I don't know how to implement this in ES (I'm stuck on iterating over the facet 
terms and constructing the user_id range from the facet subquery); in SQL it 
would look like: 

select id, user_id, description 
from product 
where user_id in ( 
  select distinct user_id 
  from product 
  limit 3) 
order by _score 
limit 6 
/* 6  = 2 * 3 */ 

But this cannot guarantee that the top 6 products come from 3 different users. 

Also, according to the following two links, it seems that iterating over 
per-term facet information has not been implemented in ES so far. 
http://elasticsearch-users.115913.n3.nabble.com/Terms-stats-facet-Additional-information-td4035199.html

https://github.com/elasticsearch/elasticsearch/issues/256

2. Query with the description field matched against keywords like "fashion"; 
at the same time, do statistics for each user_id with an aggregation and limit 
the count to 2 per user, then pick the top 6 products with the highest matching 
score. 

I still don't know how to implement this in ES (one possible building block is 
sketched after this post). 

3. Use brute force with multiple queries until I find the top 3 users, each 
with 2 products with the highest matching scores. 

I mean: use a hash map whose key is the user_id and whose value is how many 
times it appears. Query with the matching keywords first, then iterate over 
the intermediate results and check the hash map; if the value is less than 2, 
add the product to the final result list, otherwise skip it. 

Please let me know if you can figure out how to do this with the 1st or 2nd 
approach above. 

Thanks in advance. 
Yao
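
For approach 2, one building block that does exist in ES 1.x is a terms 
aggregation on user_id ordered by the best matching score in each bucket; it 
identifies the top 3 users, though picking the 2 best products per user still 
needs a follow-up query (or a newer top_hits-style sub-aggregation). A sketch 
only (field names from the post above, index name hypothetical):

curl -XGET 'localhost:9200/products/_search' -d '{
  "size" : 0,
  "query" : { "match" : { "description" : "fashion" } },
  "aggs" : {
    "top_users" : {
      "terms" : {
        "field" : "user_id",
        "size"  : 3,
        "order" : { "best_score" : "desc" }
      },
      "aggs" : {
        "best_score" : { "max" : { "script" : "_score" } }
      }
    }
  }
}'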



Query string operators seem to not be working correctly

2014-05-08 Thread Erich Lin
My query is in this format:

{
  "query": {
"query_string": {
  "default_field": "_all",
  "query": "QUERY",
  "default_operator": "AND"
}
  }
}


Here are the different outputs for QUERY and their counts:

sofa 2,818
rugs 75,309
red 33,839


red AND rugs 9,441

red AND sofa 149
rugs AND sofa 82


sofa AND rugs AND red 3


(sofa OR rugs) AND red 9,587

(sofa OR rugs) red 9,587

sofa OR (rugs AND red) 12,256

sofa OR (rugs red) 12,256

*sofa *OR rugs AND red 9,441

*sofa OR rugs *red 33,839 


The last two seem to be a bug. It seems as if the bolded terms are ignored. 

expect sofa OR rugs AND red == sofa OR (rugs AND red) == 12,256

actual: sofa OR rugs AND red == rugs AND red == 9,441


expect sofa OR rugs red == sofa OR (rugs red) == sofa OR (rugs AND red) == 
12,256

actual: sofa OR rugs red == red == 33,839


*Is this a bug / known issue? I am using ES 0.90.11*
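
For comparison, the intended "sofa OR (rugs AND red)" reading can be written as 
an explicit bool query against the same _all field, which avoids relying on 
query_string operator precedence altogether (a sketch):

{
  "query": {
    "bool": {
      "should": [
        { "match": { "_all": "sofa" } },
        {
          "bool": {
            "must": [
              { "match": { "_all": "rugs" } },
              { "match": { "_all": "red" } }
            ]
          }
        }
      ]
    }
  }
}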




Re: performance slowdown after upgrading from 0.90 to 1.1.1

2014-05-08 Thread Kireet
Oracle. Yes, I saw that, but I didn't see anything in the release notes 
mentioning a performance difference between the two JDKs; I think there was 
something about some bug fixes. Unless there was something specific, I wouldn't 
expect a minor JDK version to cause a huge performance drop, though I guess 
stranger things have happened.
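
One quick way to confirm the exact JVM (vendor and version) on every node is 
the nodes info API; a sketch, using the 1.x URL form:

curl -XGET 'localhost:9200/_nodes/jvm?pretty'

(On 0.90 the equivalent is 'localhost:9200/_nodes?jvm=true&pretty'.)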

On May 8, 2014, at 6:57 PM, Mark Walkom  wrote:

> OpenJDK or Oracle?
> 
> (The current recommended version is 1.7.0_55.)
> 
> Regards,
> Mark Walkom
> 
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
> 
> 
> On 9 May 2014 08:55, Kireet  wrote:
> 1.7.0_17
> 
> On May 8, 2014, at 6:52 PM, Mark Walkom  wrote:
> 
>> What java version?
>> 
>> Regards,
>> Mark Walkom
>> 
>> Infrastructure Engineer
>> Campaign Monitor
>> email: ma...@campaignmonitor.com
>> web: www.campaignmonitor.com
>> 
>> 
>> On 9 May 2014 08:39, Kireet  wrote:
>> I tried various counts, after a certain point they didn't make much 
>> difference. Also I am not necessarily concerned with improving performance 
>> as much as figuring out why I got a slowdown with the exact same settings in 
>> 1.1.1. I want to be sure we didn't miss some configuration somewhere or some 
>> other issue. Thanks!
>> 
>> 
>> On May 8, 2014, at 6:32 PM, Mark Walkom  wrote:
>> 
>>> Can you try increasing your bulk count to 1000, or more?
>>> 
>>> Regards,
>>> Mark Walkom
>>> 
>>> Infrastructure Engineer
>>> Campaign Monitor
>>> email: ma...@campaignmonitor.com
>>> web: www.campaignmonitor.com
>>> 
>>> 
>>> On 9 May 2014 05:01, slushi  wrote:
>>> We are testing out release 1.1.1 and during our indexing performance 
>>> testing, we seemed to get significantly slower throughput, the 
>>> document/second rate is about 30% slower. We used the exact same yml file 
>>> and startup settings. The code is also identical except for the breaking 
>>> changes in the java client api (in this case minor naming changes) and 
>>> different elasticsearch/lucene jars. 
>>> 
>>> We have a 4 node test cluster. The test basically creates an index with 4 
>>> shards and no replicas/refreshing. We are indexing documents that are about 
>>> 2KB each using the bulk api (500 documents per request). 
>>> 
>>> Below is some environment info and some settings that we changed from the 
>>> default.
>>> 
>>> MemTotal:   99016988 kB
>>> 
>>> ES_HEAP_SIZE=24g
>>> 
>>> MAX_OPEN_FILES=65535
>>> 
>>> /etc/elasticsearch/elasticsearch.yml :
>>> 
>>> cluster.name: estest
>>> 
>>> node.name: "es1"
>>> 
>>> node.rack: rack2
>>> 
>>> bootstrap.mlockall: true
>>> network.host: 1.2.3.4
>>> 
>>> gateway.recover_after_nodes: 3
>>> 
>>> gateway.expected_nodes: 3
>>> 
>>> discovery.zen.minimum_master_nodes: 1
>>> 
>>> discovery.zen.ping.multicast.enabled: false
>>> 
>>> discovery.zen.ping.unicast.hosts: ["es1", "es2", "es3", "es4"]
>>> indices.memory.index_buffer_size: 50%
>>> 
>>> index.translog.flush_threshold_ops: 5
>>> 
>>> threadpool.search.type: fixed
>>> 
>>> threadpool.search.size: 20
>>> 
>>> threadpool.search.queue_size: 100
>>> 
>>> threadpool.index.type: fixed
>>> threadpool.index.size: 60
>>> 
>>> threadpool.index.queue_size: 200
>>> 
>>> threadpool.bulk.type: fixed
>>> 
>>> threadpool.bulk.size: 50
>>> 
>>> threadpool.bulk.queue_size: 1000
>>> 
>>> cluster.routing.allocation.awareness.attributes: rack
>>> 
>>> 
>>> -- 
>>> You received this message because you are subscribed to the Google Groups 
>>> "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an 
>>> email to elasticsearch+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/elasticsearch/edea9e8f-e2f3-424e-b381-3d6dc4b96979%40googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
>>> 
>>> 
>>> -- 
>>> You received this message because you are subscribed to a topic in the 
>>> Google Groups "elasticsearch" group.
>>> To unsubscribe from this topic, visit 
>>> https://groups.google.com/d/topic/elasticsearch/qydm3PG4Jxw/unsubscribe.
>>> To unsubscribe from this group and all its topics, send an email to 
>>> elasticsearch+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/elasticsearch/CAEM624YYMESG9zYtb9shwEHU1Pkt9Dzn9n%3D6TMegRcx4pyZSkw%40mail.gmail.com.
>>> For more options, visit https://groups.google.com/d/optout.
>> 
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/E64BA391-4124-4CAC-BC70-360F91B549BF%40gmail.com.
>> For more options, visit https://groups.google.com/d/optout.
>> 
>> 
>> -- 
>> You received this message because you are subscribed to a topic in

Re: performance slowdown after upgrading from 0.90 to 1.1.1

2014-05-08 Thread Mark Walkom
OpenJDK or Oracle?

(The current recommended version is 1.7.0_55.)

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 9 May 2014 08:55, Kireet  wrote:

> 1.7.0_17
>
> On May 8, 2014, at 6:52 PM, Mark Walkom  wrote:
>
> What java version?
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
>
>
> On 9 May 2014 08:39, Kireet  wrote:
>
>> I tried various counts, after a certain point they didn’t make much
>> difference. Also I am not necessarily concerned with improving performance
>> as much as figuring out why I got a slowdown with the exact same settings
>> in 1.1.1. I want to be sure we didn’t miss some configuration somewhere or
>> some other issue. Thanks!
>>
>>
>> On May 8, 2014, at 6:32 PM, Mark Walkom 
>> wrote:
>>
>> Can you try increasing your bulk count to 1000, or more?
>>
>> Regards,
>> Mark Walkom
>>
>> Infrastructure Engineer
>> Campaign Monitor
>> email: ma...@campaignmonitor.com
>> web: www.campaignmonitor.com
>>
>>
>> On 9 May 2014 05:01, slushi  wrote:
>>
>>> We are testing out release 1.1.1 and during our indexing performance
>>> testing, we seemed to get significantly slower throughput, the
>>> document/second rate is about 30% slower. We used the exact same yml file
>>> and startup settings. The code is also identical except for the breaking
>>> changes in the java client api (in this case minor naming changes) and
>>> different elasticsearch/lucene jars.
>>>
>>> We have a 4 node test cluster. The test basically creates an index with
>>> 4 shards and no replicas/refreshing. We are indexing documents that are
>>> about 2KB each using the bulk api (500 documents per request).
>>>
>>> Below is some environment info and some settings that we changed from
>>> the default.
>>>
>>> MemTotal:   99016988 kB
>>>
>>> ES_HEAP_SIZE=24g
>>>
>>> MAX_OPEN_FILES=65535
>>>
>>>
>>> */etc/elasticsearch/elasticsearch.yml : *
>>>
>>> cluster.name: estest
>>>
>>> node.name: "es1"
>>>
>>> node.rack: rack2
>>>
>>> bootstrap.mlockall: true
>>>
>>> network.host: 1.2.3.4
>>>
>>> gateway.recover_after_nodes: 3
>>>
>>> gateway.expected_nodes: 3
>>>
>>> discovery.zen.minimum_master_nodes: 1
>>>
>>> discovery.zen.ping.multicast.enabled: false
>>>
>>> discovery.zen.ping.unicast.hosts: ["es1", "es2", "es3", "es4"]
>>>
>>> indices.memory.index_buffer_size: 50%
>>>
>>> index.translog.flush_threshold_ops: 5
>>>
>>> threadpool.search.type: fixed
>>>
>>> threadpool.search.size: 20
>>>
>>> threadpool.search.queue_size: 100
>>>
>>> threadpool.index.type: fixed
>>>
>>> threadpool.index.size: 60
>>>
>>> threadpool.index.queue_size: 200
>>>
>>> threadpool.bulk.type: fixed
>>>
>>> threadpool.bulk.size: 50
>>>
>>> threadpool.bulk.queue_size: 1000
>>>
>>> cluster.routing.allocation.awareness.attributes: rack
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to elasticsearch+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/elasticsearch/edea9e8f-e2f3-424e-b381-3d6dc4b96979%40googlegroups.com
>>> .
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>
>> --
>> You received this message because you are subscribed to a topic in the
>> Google Groups "elasticsearch" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/elasticsearch/qydm3PG4Jxw/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/CAEM624YYMESG9zYtb9shwEHU1Pkt9Dzn9n%3D6TMegRcx4pyZSkw%40mail.gmail.com
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>>
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/E64BA391-4124-4CAC-BC70-360F91B549BF%40gmail.com
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
> --
> You received this message because you are subscribed to a topic in the
> Google Groups "elasticsearch" group.
> To 

Re: performance slowdown after upgrading from 0.90 to 1.1.1

2014-05-08 Thread Kireet
1.7.0_17

On May 8, 2014, at 6:52 PM, Mark Walkom  wrote:

> What java version?
> 
> Regards,
> Mark Walkom
> 
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
> 
> 
> On 9 May 2014 08:39, Kireet  wrote:
> I tried various counts, after a certain point they didn't make much 
> difference. Also I am not necessarily concerned with improving performance as 
> much as figuring out why I got a slowdown with the exact same settings in 
> 1.1.1. I want to be sure we didn't miss some configuration somewhere or some 
> other issue. Thanks!
> 
> 
> On May 8, 2014, at 6:32 PM, Mark Walkom  wrote:
> 
>> Can you try increasing your bulk count to 1000, or more?
>> 
>> Regards,
>> Mark Walkom
>> 
>> Infrastructure Engineer
>> Campaign Monitor
>> email: ma...@campaignmonitor.com
>> web: www.campaignmonitor.com
>> 
>> 
>> On 9 May 2014 05:01, slushi  wrote:
>> We are testing out release 1.1.1 and during our indexing performance 
>> testing, we seemed to get significantly slower throughput, the 
>> document/second rate is about 30% slower. We used the exact same yml file 
>> and startup settings. The code is also identical except for the breaking 
>> changes in the java client api (in this case minor naming changes) and 
>> different elasticsearch/lucene jars. 
>> 
>> We have a 4 node test cluster. The test basically creates an index with 4 
>> shards and no replicas/refreshing. We are indexing documents that are about 
>> 2KB each using the bulk api (500 documents per request). 
>> 
>> Below is some environment info and some settings that we changed from the 
>> default.
>> 
>> MemTotal:   99016988 kB
>> 
>> ES_HEAP_SIZE=24g
>> 
>> MAX_OPEN_FILES=65535
>> 
>> /etc/elasticsearch/elasticsearch.yml :
>> 
>> cluster.name: estest
>> 
>> node.name: "es1"
>> 
>> node.rack: rack2
>> 
>> bootstrap.mlockall: true
>> network.host: 1.2.3.4
>> 
>> gateway.recover_after_nodes: 3
>> 
>> gateway.expected_nodes: 3
>> 
>> discovery.zen.minimum_master_nodes: 1
>> 
>> discovery.zen.ping.multicast.enabled: false
>> 
>> discovery.zen.ping.unicast.hosts: ["es1", "es2", "es3", "es4"]
>> indices.memory.index_buffer_size: 50%
>> 
>> index.translog.flush_threshold_ops: 5
>> 
>> threadpool.search.type: fixed
>> 
>> threadpool.search.size: 20
>> 
>> threadpool.search.queue_size: 100
>> 
>> threadpool.index.type: fixed
>> threadpool.index.size: 60
>> 
>> threadpool.index.queue_size: 200
>> 
>> threadpool.bulk.type: fixed
>> 
>> threadpool.bulk.size: 50
>> 
>> threadpool.bulk.queue_size: 1000
>> 
>> cluster.routing.allocation.awareness.attributes: rack
>> 
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/edea9e8f-e2f3-424e-b381-3d6dc4b96979%40googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>> 
>> 
>> -- 
>> You received this message because you are subscribed to a topic in the 
>> Google Groups "elasticsearch" group.
>> To unsubscribe from this topic, visit 
>> https://groups.google.com/d/topic/elasticsearch/qydm3PG4Jxw/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to 
>> elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/CAEM624YYMESG9zYtb9shwEHU1Pkt9Dzn9n%3D6TMegRcx4pyZSkw%40mail.gmail.com.
>> For more options, visit https://groups.google.com/d/optout.
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/elasticsearch/E64BA391-4124-4CAC-BC70-360F91B549BF%40gmail.com.
> For more options, visit https://groups.google.com/d/optout.
> 
> 
> -- 
> You received this message because you are subscribed to a topic in the Google 
> Groups "elasticsearch" group.
> To unsubscribe from this topic, visit 
> https://groups.google.com/d/topic/elasticsearch/qydm3PG4Jxw/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to 
> elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/elasticsearch/CAEM624bJ2JaF9p-P5tkvzCARBp4N7o%2BOJJPs1PO8HXo1q%2Bqy2Q%40mail.gmail.com.
> For more options, visit https://groups.google.com/d/optout.


Re: performance slowdown after upgrading from 0.90 to 1.1.1

2014-05-08 Thread Mark Walkom
What java version?

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 9 May 2014 08:39, Kireet  wrote:

> I tried various counts, after a certain point they didn’t make much
> difference. Also I am not necessarily concerned with improving performance
> as much as figuring out why I got a slowdown with the exact same settings
> in 1.1.1. I want to be sure we didn’t miss some configuration somewhere or
> some other issue. Thanks!
>
>
> On May 8, 2014, at 6:32 PM, Mark Walkom  wrote:
>
> Can you try increasing your bulk count to 1000, or more?
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
>
>
> On 9 May 2014 05:01, slushi  wrote:
>
>> We are testing out release 1.1.1 and during our indexing performance
>> testing, we seemed to get significantly slower throughput, the
>> document/second rate is about 30% slower. We used the exact same yml file
>> and startup settings. The code is also identical except for the breaking
>> changes in the java client api (in this case minor naming changes) and
>> different elasticsearch/lucene jars.
>>
>> We have a 4 node test cluster. The test basically creates an index with 4
>> shards and no replicas/refreshing. We are indexing documents that are about
>> 2KB each using the bulk api (500 documents per request).
>>
>> Below is some environment info and some settings that we changed from the
>> default.
>>
>> MemTotal:   99016988 kB
>>
>> ES_HEAP_SIZE=24g
>>
>> MAX_OPEN_FILES=65535
>>
>>
>> */etc/elasticsearch/elasticsearch.yml : *
>>
>> cluster.name: estest
>>
>> node.name: "es1"
>>
>> node.rack: rack2
>>
>> bootstrap.mlockall: true
>>
>> network.host: 1.2.3.4
>>
>> gateway.recover_after_nodes: 3
>>
>> gateway.expected_nodes: 3
>>
>> discovery.zen.minimum_master_nodes: 1
>>
>> discovery.zen.ping.multicast.enabled: false
>>
>> discovery.zen.ping.unicast.hosts: ["es1", "es2", "es3", "es4"]
>>
>> indices.memory.index_buffer_size: 50%
>>
>> index.translog.flush_threshold_ops: 5
>>
>> threadpool.search.type: fixed
>>
>> threadpool.search.size: 20
>>
>> threadpool.search.queue_size: 100
>>
>> threadpool.index.type: fixed
>>
>> threadpool.index.size: 60
>>
>> threadpool.index.queue_size: 200
>>
>> threadpool.bulk.type: fixed
>>
>> threadpool.bulk.size: 50
>>
>> threadpool.bulk.queue_size: 1000
>>
>> cluster.routing.allocation.awareness.attributes: rack
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/edea9e8f-e2f3-424e-b381-3d6dc4b96979%40googlegroups.com
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
> --
> You received this message because you are subscribed to a topic in the
> Google Groups "elasticsearch" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/elasticsearch/qydm3PG4Jxw/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAEM624YYMESG9zYtb9shwEHU1Pkt9Dzn9n%3D6TMegRcx4pyZSkw%40mail.gmail.com
> .
> For more options, visit https://groups.google.com/d/optout.
>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/E64BA391-4124-4CAC-BC70-360F91B549BF%40gmail.com
> .
> For more options, visit https://groups.google.com/d/optout.
>



Re: performance slowdown after upgrading from 0.90 to 1.1.1

2014-05-08 Thread Kireet
I tried various counts; after a certain point they didn't make much difference. 
Also, I am not necessarily concerned with improving performance as much as 
figuring out why I got a slowdown with the exact same settings in 1.1.1. I want 
to be sure we didn't miss some configuration somewhere or some other issue. 
Thanks!
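
For comparing the two versions under the same load, the hot threads and node 
stats APIs are usually the quickest way to see where the extra time goes (a 
sketch; both exist in 0.90 and 1.x, the stats URL below uses the 1.x form):

curl -XGET 'localhost:9200/_nodes/hot_threads'

curl -XGET 'localhost:9200/_nodes/stats/indices,jvm,thread_pool?pretty'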


On May 8, 2014, at 6:32 PM, Mark Walkom  wrote:

> Can you try increasing your bulk count to 1000, or more?
> 
> Regards,
> Mark Walkom
> 
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
> 
> 
> On 9 May 2014 05:01, slushi  wrote:
> We are testing out release 1.1.1 and during our indexing performance testing, 
> we seemed to get significantly slower throughput, the document/second rate is 
> about 30% slower. We used the exact same yml file and startup settings. The 
> code is also identical except for the breaking changes in the java client api 
> (in this case minor naming changes) and different elasticsearch/lucene jars. 
> 
> We have a 4 node test cluster. The test basically creates an index with 4 
> shards and no replicas/refreshing. We are indexing documents that are about 
> 2KB each using the bulk api (500 documents per request). 
> 
> Below is some environment info and some settings that we changed from the 
> default.
> 
> MemTotal:   99016988 kB
> 
> ES_HEAP_SIZE=24g
> 
> MAX_OPEN_FILES=65535
> 
> /etc/elasticsearch/elasticsearch.yml :
> 
> cluster.name: estest
> 
> node.name: "es1"
> 
> node.rack: rack2
> 
> bootstrap.mlockall: true
> 
> network.host: 1.2.3.4
> 
> gateway.recover_after_nodes: 3
> 
> gateway.expected_nodes: 3
> 
> discovery.zen.minimum_master_nodes: 1
> 
> discovery.zen.ping.multicast.enabled: false
> 
> discovery.zen.ping.unicast.hosts: ["es1", "es2", "es3", "es4"]
> 
> indices.memory.index_buffer_size: 50%
> 
> index.translog.flush_threshold_ops: 5
> 
> threadpool.search.type: fixed
> 
> threadpool.search.size: 20
> 
> threadpool.search.queue_size: 100
> 
> threadpool.index.type: fixed
> 
> threadpool.index.size: 60
> 
> threadpool.index.queue_size: 200
> 
> threadpool.bulk.type: fixed
> 
> threadpool.bulk.size: 50
> 
> threadpool.bulk.queue_size: 1000
> 
> cluster.routing.allocation.awareness.attributes: rack
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/elasticsearch/edea9e8f-e2f3-424e-b381-3d6dc4b96979%40googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
> 
> 
> -- 
> You received this message because you are subscribed to a topic in the Google 
> Groups "elasticsearch" group.
> To unsubscribe from this topic, visit 
> https://groups.google.com/d/topic/elasticsearch/qydm3PG4Jxw/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to 
> elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/elasticsearch/CAEM624YYMESG9zYtb9shwEHU1Pkt9Dzn9n%3D6TMegRcx4pyZSkw%40mail.gmail.com.
> For more options, visit https://groups.google.com/d/optout.



Re: performance slowdown after upgrading from 0.90 to 1.1.1

2014-05-08 Thread Mark Walkom
Can you try increasing your bulk count to 1000, or more?

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 9 May 2014 05:01, slushi  wrote:

> We are testing out release 1.1.1 and during our indexing performance
> testing, we seemed to get significantly slower throughput, the
> document/second rate is about 30% slower. We used the exact same yml file
> and startup settings. The code is also identical except for the breaking
> changes in the java client api (in this case minor naming changes) and
> different elasticsearch/lucene jars.
>
> We have a 4 node test cluster. The test basically creates an index with 4
> shards and no replicas/refreshing. We are indexing documents that are about
> 2KB each using the bulk api (500 documents per request).
>
> Below is some environment info and some settings that we changed from the
> default.
>
> MemTotal:   99016988 kB
>
> ES_HEAP_SIZE=24g
>
> MAX_OPEN_FILES=65535
>
>
> */etc/elasticsearch/elasticsearch.yml :*
>
> cluster.name: estest
>
> node.name: "es1"
>
> node.rack: rack2
>
> bootstrap.mlockall: true
>
> network.host: 1.2.3.4
>
> gateway.recover_after_nodes: 3
>
> gateway.expected_nodes: 3
>
> discovery.zen.minimum_master_nodes: 1
>
> discovery.zen.ping.multicast.enabled: false
>
> discovery.zen.ping.unicast.hosts: ["es1", "es2", "es3", "es4"]
>
> indices.memory.index_buffer_size: 50%
>
> index.translog.flush_threshold_ops: 5
>
> threadpool.search.type: fixed
>
> threadpool.search.size: 20
>
> threadpool.search.queue_size: 100
>
> threadpool.index.type: fixed
>
> threadpool.index.size: 60
>
> threadpool.index.queue_size: 200
>
> threadpool.bulk.type: fixed
>
> threadpool.bulk.size: 50
>
> threadpool.bulk.queue_size: 1000
>
> cluster.routing.allocation.awareness.attributes: rack
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/edea9e8f-e2f3-424e-b381-3d6dc4b96979%40googlegroups.com
> .
> For more options, visit https://groups.google.com/d/optout.
>



Re: Time taken from issue closure on github to ES release?

2014-05-08 Thread Ivan Brusic
Hard to say. There are currently two open issues for version 1.1.2, so it
probably will not be released until those are addressed.

https://github.com/elasticsearch/elasticsearch/issues?labels=v1.1.2&page=1&state=open

They have been releasing minor versions at the rate of about once per
month, so they are about due. Just guessing of course.

Cheers,

Ivan


On Thu, May 8, 2014 at 2:30 PM, T Vinod Gupta  wrote:

> Does someone have visibility into ES release process? I am desperately
> waiting for a release to come out that will fix the below issue. it says
> that the bug is fixed/closed 5 days ago..
>
> https://github.com/elasticsearch/elasticsearch/issues/4887
> MultiSearch hangs forever + EsRejectedExecutionException
>
> thanks
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAHau4ysHzFaiA9UPWh-w0xtYi6YKvr4aketskYgiVLdsvP6FkQ%40mail.gmail.com
> .
> For more options, visit https://groups.google.com/d/optout.
>



Re: How to define mappings using Java API

2014-05-08 Thread Ethan Pailes
This was quite helpful.

I do have one question though. Is elasticSearchMappings of type Settings?

Ethan

On Thursday, January 12, 2012 8:09:01 PM UTC-5, Lorrin Nelson wrote:
>
> Hi David, 
>
> That helped, thanks. After getting the builder stuff working following 
> the example, I switched over to supplying the settings as a JSON 
> string with the following command: 
> CreateIndexResponse createIndexResponse = client.admin().indices() 
>         .prepareCreate(indexName) 
>         .setSettings(elasticSearchSettings) 
>         .addMapping(elasticSearchType, elasticSearchMappings) 
>         .execute().actionGet(); 
>
> where elasticSearchSettings is: 
> { 
> "index" : { 
> "number_of_shards" : 2, 
> "number_of_replicas" : 1 
> } 
> } 
>
> and elasticSearchMappings is in the format 
> { 
>   "<type_name>" : { 
>     "properties" : { 
>       "<field_name>" : { 
>         "type" : ... 
>       }, 
>       ... 
>     } 
>   } 
> } 
>
> I wasn't able to apply settings using a nested Java Map 
> structure, but I'm happy with supplying the JSON. 
>
> Cheers 
> -Lorrin 
>
> On 11 Jan., 12:53, "David Pilato"  wrote: 
> > Hi Lorrin, 
> > 
> > Have a look here :
> https://github.com/dadoonet/rssriver/blob/master/src/test/java/org/el... 
> > arch/river/rss/AbstractRssRiverTest.java 
> > Create the index with an abstract mapping() method 
> > 
> > And here :
> https://github.com/dadoonet/rssriver/blob/master/src/test/java/org/el... 
> > arch/river/rss/RssRiverTest.java 
> > (for an implementation of the mapping() method) 
> > 
> > HTH 
> > David. 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > > -----Original Message----- 
> > > From: elasti...@googlegroups.com  
> > > [mailto:elasti...@googlegroups.com ] On behalf of 
> Lorrin Nelson 
> > > Sent: Wednesday, 11 January 2012 20:00 
> > > To: elasticsearch 
> > > Subject: How to define mappings using Java API 
> > 
> > > Could someone share a working example of using the Java API to define 
> > > type mappings? 
> > 
> > > I'm trying things like: 
> > > Settings.Builder indexSettings = 
> > >     ImmutableSettings.settingsBuilder().loadFromSource( 
> > >         XContentFactory.jsonBuilder() 
> > >             .startObject() 
> > >                 .startObject("settings") 
> > >                     .startObject("index") 
> > >                         .field("number_of_shards", "BOGUS") 
> > >                     .endObject() 
> > >                 .endObject() 
> > >                 .startObject("mappings") 
> > >                     .startObject("my_type_name") 
> > >                         .startObject("sequence") 
> > >                             .field("type", "boolean") 
> > >                         .endObject() 
> > >                         .startObject("message.text") 
> > >                             .field("type", "string") 
> > >                             .field("index", "no") 
> > >                         .endObject() 
> > >                     .endObject() 
> > >                 .endObject() 
> > >             .endObject().string()); 
> > 
> > > CreateIndexResponse createIndexResponse = client.admin().indices() 
> > >         .prepareCreate(newSlice).setSettings(indexSettings) 
> > >         .execute().actionGet(); 
> > 
> > > I haven't managed to twiddle the builder structure such that the settings 
> > > take effect. E.g. message.text is searchable despite not being indexed, 
> > > sequence accepts numbers despite being boolean, and number_of_shards 
> > > being set to BOGUS doesn't produce any errors. 
> > 
> > > Thanks 
> > > -Lorrin
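
For reference, the two JSON strings passed to setSettings() and addMapping() 
above correspond to what the REST create-index API accepts in a single request; 
a sketch with hypothetical index and type names, reusing the field definitions 
from the quoted post:

curl -XPUT 'localhost:9200/my_index' -d '{
  "settings" : {
    "index" : {
      "number_of_shards"   : 2,
      "number_of_replicas" : 1
    }
  },
  "mappings" : {
    "my_type" : {
      "properties" : {
        "sequence"     : { "type" : "boolean" },
        "message.text" : { "type" : "string", "index" : "no" }
      }
    }
  }
}'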



Time taken from issue closure on github to ES release?

2014-05-08 Thread T Vinod Gupta
Does someone have visibility into the ES release process? I am desperately
waiting for a release to come out that will fix the issue below. It says
that the bug was fixed/closed 5 days ago.

https://github.com/elasticsearch/elasticsearch/issues/4887
MultiSearch hangs forever + EsRejectedExecutionException

thanks



Re: Kibana Password Protected

2014-05-08 Thread Mark Walkom
Yep, look up Apache basic authentication or try something like
https://github.com/fangli/kibana-authentication-proxy

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 9 May 2014 07:05, Joshua Bitto  wrote:

> Hello All,
>
> I'm trying to find documentation on how to set up Kibana to be password
> protected. I'm using CentOS 6.5 (Apache), and right now with the basic
> install you can just go to the configured URL and see logs without having
> to enter credentials. Is there a way to add this?
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/39fcc6b7-1759-40ae-83d0-6b379a8a825e%40googlegroups.com
> .
> For more options, visit https://groups.google.com/d/optout.
>
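
A minimal sketch of the Apache basic-auth route mentioned above (the paths and 
user name are hypothetical; the Location block goes into whichever vhost serves 
the Kibana files, and Apache is reloaded afterwards on CentOS 6.5 with the 
service command):

htpasswd -c /etc/httpd/conf.d/kibana.htpasswd kibanauser

#   <Location "/kibana">
#     AuthType Basic
#     AuthName "Restricted"
#     AuthUserFile /etc/httpd/conf.d/kibana.htpasswd
#     Require valid-user
#   </Location>

service httpd reload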



Unknown error in TransportShardSingleOperationAction.java

2014-05-08 Thread ashish jain
Hello,

I have a program where I am sending bursts of bulk index requests within a 
short time to elasticsearch (1.1) using the java API. I send in 1000 
documents (in a bulk request) every 2-5 seconds - initially I was running 
into NoNodeAvailableException/NoShardAvailableException (on the client 
side) and OutOfMemoryException (on the server). To solve this, after every 
few bulk requests I wait for a while before sending any more requests (1s 
wait after 10 bulk requests, 60s wait after every 30 bulk requests - I got 
these numbers after random trial and error in my configuration). With this 
waiting, my program ran much longer, but now I have run into an unknown 
error after I inserted around 3GB (~80,000 documents) of data. The error is 
- 

org.elasticsearch.action.NoShardAvailableActionException: [records][1] null
    at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction$AsyncSingleAction.perform(TransportShardSingleOperationAction.java:145)
    at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction$AsyncSingleAction.onFailure(TransportShardSingleOperationAction.java:132)
    at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction$AsyncSingleAction.access$900(TransportShardSingleOperationAction.java:97)
    at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction$AsyncSingleAction$1.run(TransportShardSingleOperationAction.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:744)

The corresponding error on the server is - 

org.elasticsearch.index.engine.EngineException: [records][1] this ReferenceManager is closed
    at org.elasticsearch.index.engine.internal.InternalEngine.acquireSearcher(InternalEngine.java:662)
    at org.elasticsearch.index.engine.internal.InternalEngine.loadCurrentVersionFromIndex(InternalEngine.java:1317)
    at org.elasticsearch.index.engine.internal.InternalEngine.innerIndex(InternalEngine.java:495)
    at org.elasticsearch.index.engine.internal.InternalEngine.index(InternalEngine.java:470)
    at org.elasticsearch.index.shard.service.InternalIndexShard.index(InternalIndexShard.java:396)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:401)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:157)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:556)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:426)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:744)

Additional information:

1) I also run into occasional ReceiveTimeoutTransportExceptions on GET 
requests (on the client), along with several GC memory warnings and some 
OutOfMemory errors (on the server). For example:

May 08, 2014 1:42:05 PM org.elasticsearch.client.transport INFO: [Yuri 
Topolov] failed to get node info for 
[#transport#-1][.local][inet[localhost/127.0.0.1:9300]], disconnecting... 

org.elasticsearch.transport.ReceiveTimeoutTransportException: 
[][inet[localhost/127.0.0.1:9300]][cluster/nodes/info] request_id [12345] 
timed out after [5001ms] at 
org.elasticsearch.transport.TransportService$TimeoutHandler.run(
TransportService.java:356) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(
ThreadPoolExecutor.java:1142) at 
java.util.co

Kibana Password Protected

2014-05-08 Thread Joshua Bitto
Hello All,

I"m trying to find documentation on how to setup Kibana to be password 
protected. I'm using Centos 6.5(apache) and right now with the basic 
install you can just go to the configured url and see logs without having 
to input credentials. Is there a way to add this?
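
A common way to do this with Apache on CentOS is HTTP Basic auth in front of the
Kibana files. The sketch below is illustrative only; the paths, directory and
user name are assumptions, not anything from this thread:

# create the password file (run once; add more users without -c afterwards)
htpasswd -c /etc/httpd/conf.d/kibana.htpasswd kibanauser

# in the vhost or conf.d file that serves Kibana
<Directory "/var/www/html/kibana">
    AuthType Basic
    AuthName "Kibana"
    AuthUserFile /etc/httpd/conf.d/kibana.htpasswd
    Require valid-user
</Directory>

Keep in mind that Kibana 3 queries Elasticsearch directly from the browser, so
the Elasticsearch endpoint (port 9200, or whatever you proxy it through) needs
the same protection, otherwise the auth can be bypassed.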

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/39fcc6b7-1759-40ae-83d0-6b379a8a825e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Wildcards in field names

2014-05-08 Thread ltuhuru
Is there a way to aggregate across fields with wildcards in the name? I have
documents with a variety of structures, and I want to be able to aggregate
across all fields with the name "special_label". That field may occur in
various structural places within the document.

Something like this would be great, but is not supported.

{
  "aggs" : {
    "aliases" : {
      "terms" : {
        "field" : "*.special_label"
      }
    }
  }
}
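
One workaround, if you control the mapping, is to copy every occurrence of
special_label into a single dedicated field with copy_to and aggregate on that.
The sketch below is illustrative only: the index, type and path names are made
up, each place where special_label appears needs its own copy_to, and existing
data would have to be reindexed to pick it up.

curl -XPUT 'localhost:9200/myindex/mytype/_mapping' -d '{
  "mytype" : {
    "properties" : {
      "somepath" : {
        "properties" : {
          "special_label" : { "type" : "string", "copy_to" : "all_special_labels" }
        }
      },
      "all_special_labels" : { "type" : "string", "index" : "not_analyzed" }
    }
  }
}'

curl -XGET 'localhost:9200/myindex/_search?search_type=count' -d '{
  "aggs" : { "aliases" : { "terms" : { "field" : "all_special_labels" } } }
}'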



--
View this message in context: 
http://elasticsearch-users.115913.n3.nabble.com/Wildcards-in-field-names-tp4055601.html
Sent from the ElasticSearch Users mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1399580522796-4055601.post%40n3.nabble.com.
For more options, visit https://groups.google.com/d/optout.


Re: maxDocs different between primary and replica shards

2014-05-08 Thread Csaba Dezsényi
I have exactly the same issue!
Does someone have a solution for this?

Thanks,
Csaba

On Thursday, November 28, 2013 2:26:51 PM UTC+1, Klaus Brunner wrote:
>
> We're running Elasticsearch (currently 0.90.6) in what I'd call a 
> "replicated" architecture: our indexes are quite small (tens of thousands 
> of documents) and fit easily on a single machine, so we allocate a single 
> shard per index. However, we make sure that they are replicated to each 
> node of our cluster. The whole approach ensures that each application 
> server has its own "local" ES with all data of an index and can keep 
> working autonomously if others fail. This works alright so far.
>
> Now, we're seeing small but visible score discrepancies between ES nodes, 
> specifically between the primary shard and the replicas. Using explain, we 
> found out that the difference is in the maxDocs value. As known and 
> documented, deleted documents may still contribute to the maxDocs value 
> (and thus, affect TF-IDF scores). That's not a problem per se. 
>
> The problem is rather that maxDocs is different between the primary and 
> the replica shards (until we restart ES or force a merge using the optimize 
> call). Depending on whether the primary or a replica is hit with the exact 
> same query, we get different scores because the maxDocs value is different 
> by exactly the number of documents that have been deleted previously.
>
> Is there any way to ensure that maxDocs is the same on primary and replica 
> shards, short of forcing a costly merge?
>
> (Using DFS queries or not makes no difference, as I would expect from my 
> understanding of them - the index isn't really distributed, it's 
> replicated.)
>
> Thanks
>
> Klaus
>
>
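
One way to get stable scores without forcing a merge is to pin each user's
searches to the same copy of the shard with the preference parameter, so the
same query always hits either the primary or the same replica. A sketch only;
the index, field and preference string below are illustrative:

curl -XGET 'localhost:9200/myindex/_search?preference=user42' -d '{
  "query" : { "match" : { "title" : "some query" } }
}'

Using preference=_primary instead always routes to the primary shard. This does
not make maxDocs equal across copies, but it keeps the scores a given user sees
consistent between requests.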

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/aa7bfbea-8e81-474a-bc5c-edda55e707a5%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: default _ttl causes MapperParsingException due to already expired document

2014-05-08 Thread Ravi Gairola
Wow, awesome! That was quick.

Thanks a lot.



On Thursday, May 8, 2014 11:17:36 AM UTC-7, Benjamin Devèze wrote:
>
> Hi Ravi,
>
> After a quick investigation I would say that the problem is here:
>
> https://github.com/mallocator/Elasticsearch-BigQuery-River/blob/master/src/main/java/org/elasticsearch/river/bigquery/BigQueryRiver.java#L391
>
> The timestamp should be set in milliseconds so removing the / 1000 should 
> solve your issue.
>
> Hope this helps
>
>
>
> -- 
> Benjamin DEVEZE
>  

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/24313452-97da-4dbd-af7a-1ed6003d93d8%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: counting filtered queries

2014-05-08 Thread spicylobe
This worked!  I think the docs should be updated though, they are still 
wrong at:
http://www.elasticsearch.org/guide/en/elasticsearch/client/javascript-api/current/api-reference.html#api-count

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/404955b1-c408-474e-b373-edad34ce99aa%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: ANN: new elasticsearch discovery plugin - eskka

2014-05-08 Thread Otis Gospodnetic
Cool, Shikhar,

At Sematext we use both ES and Akka (in SPM ), so 
this is interesting for me to see... Would it make sense to add a bit more 
to the README, things like:
* why? is something wrong with Zen?
* pros and cons of this vs. Zen vs. ZK

Thanks,
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/


On Thursday, May 8, 2014 3:32:24 AM UTC-4, Shikhar Bhushan wrote:
>
> All Elasticsearch nodes will end up being part of the Akka cluster :) I 
> think you're really asking how many seed nodes you should specify. The seed 
> node list is probably going to be similar to what you might use for 
> zen.unicast.hosts.
>
> Worth noting that besides being initial contact points for when the 
> cluster is starting up, with eskka they are also used for resolving 
> partitions. Given this requirement, you would ideally have 3 or more 
> specified. It is perfectly ok to have all of your nodes listed, if you know 
> their addresses before startup.
>
> https://github.com/shikhar/eskka#configuration
>
>
> On Wed, May 7, 2014 at 9:31 PM, Ivan Brusic 
> > wrote:
>
>> Extremely interesting! What is the recommended size of the Akka cluster 
>> compared to the Elasticsearch cluster?
>>
>> -- 
>> Ivan
>>
>>
>> On Tue, May 6, 2014 at 8:42 PM, shikhar 
>> > wrote:
>>
>>>  Just released 0.1.1
>>>
>>> This version is working well in my manual testing. Automated testing is 
>>> on the roadmap...
>>>  
>>>
>>> On Mon, May 5, 2014 at 10:49 AM, shikhar 
>>> > wrote:
>>>
 See README

 I'd love to have feedback on this first release!

>>>
>>>  -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to elasticsearc...@googlegroups.com .
>>>  To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/elasticsearch/CAHWG4DNiwCxbZakzogFfqFxYxabcQ_ysG2_OMd06%3D%2BfDqFEQdA%40mail.gmail.com
>>> .
>>>
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQDjg7Uaw4tTVr2-8cGM7w2sutH6F2XzCVj3JjKK-RCt5g%40mail.gmail.com
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/9b73c46d-66dd-4c29-9e76-feba3f033133%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Using wildcards in query_string fields

2014-05-08 Thread ltuhuru
Found my problem: drop the leading type "message" from the field path, and
wildcards in the fields work.

curl -XGET localhost:9200/nettest/_search?pretty -d '{
"query" : {
"query_string" : {
"fields" : ["*.ne*"],
"query" : "NET*"
}
}
}'





--
View this message in context: 
http://elasticsearch-users.115913.n3.nabble.com/Using-wildcards-in-query-string-fields-tp4055594p4055595.html
Sent from the ElasticSearch Users mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1399575721559-4055595.post%40n3.nabble.com.
For more options, visit https://groups.google.com/d/optout.


performance slowdown after upgrading from 0.90 to 1.1.1

2014-05-08 Thread slushi
We are testing out release 1.1.1, and during our indexing performance 
testing we saw significantly slower throughput: the documents/second rate 
is about 30% lower. We used the exact same yml file and startup settings. 
The code is also identical except for the breaking changes in the Java 
client API (in this case minor naming changes) and the different 
elasticsearch/lucene jars. 

We have a 4 node test cluster. The test basically creates an index with 4 
shards and no replicas/refreshing. We are indexing documents that are about 
2KB each using the bulk api (500 documents per request). 

Below is some environment info and some settings that we changed from the 
default.

MemTotal:   99016988 kB

ES_HEAP_SIZE=24g

MAX_OPEN_FILES=65535


*/etc/elasticsearch/elasticsearch.yml :*

cluster.name: estest

node.name: "es1"

node.rack: rack2

bootstrap.mlockall: true

network.host: 1.2.3.4

gateway.recover_after_nodes: 3

gateway.expected_nodes: 3

discovery.zen.minimum_master_nodes: 1

discovery.zen.ping.multicast.enabled: false

discovery.zen.ping.unicast.hosts: ["es1", "es2", "es3", "es4"]

indices.memory.index_buffer_size: 50%

index.translog.flush_threshold_ops: 5

threadpool.search.type: fixed

threadpool.search.size: 20

threadpool.search.queue_size: 100

threadpool.index.type: fixed

threadpool.index.size: 60

threadpool.index.queue_size: 200

threadpool.bulk.type: fixed

threadpool.bulk.size: 50

threadpool.bulk.queue_size: 1000

cluster.routing.allocation.awareness.attributes: rack
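
One first diagnostic when comparing indexing throughput across versions is to
look at where the nodes are actually spending their time while the bulk load
runs; the hot threads API gives a quick view (host and port here are
assumptions):

curl -XGET 'localhost:9200/_nodes/hot_threads'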

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/edea9e8f-e2f3-424e-b381-3d6dc4b96979%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Using wildcards in query_string fields

2014-05-08 Thread ltuhuru
I'm a new user, so hopefully this is something really simple. I have two
"message" objects in my index, "typeA" and "typeB", each of which has a
"net" field. I'd like to be able to search for all objects whose net fields
match a pattern.

If I specify the fields explicitly, the query returns the appropriate docs.
If I use a wildcard in the fields for the query_string, the query returns no
hits. Clearly I don't understand something - can anyone point out what I'm
missing?

curl -XPUT localhost:9200/nettest/message/typeA-1 -d '{
"typeA" : {
"net" : "NET11_TEST"
}
}'

curl -XPUT localhost:9200/nettest/message/typeB-1 -d '{
"typeB" : {
"net" : "NET01_TEST"
}
}'

This query returns both docs:
curl -XGET localhost:9200/nettest/_search?pretty=true -d '{
"query" : {
"query_string" : {
"fields" : ["message.typeA.net", "message.typeB.net"],
"query" : "NET*"
}
}
}'

This query returns no docs:
curl -XGET localhost:9200/nettest/_search?pretty=true -d '{
"query" : {
"query_string" : {
"fields" : ["message.*.net"],
"query" : "NET*"
}
}
}'



--
View this message in context: 
http://elasticsearch-users.115913.n3.nabble.com/Using-wildcards-in-query-string-fields-tp4055594.html
Sent from the ElasticSearch Users mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1399573144796-4055594.post%40n3.nabble.com.
For more options, visit https://groups.google.com/d/optout.


Re: default _ttl causes MapperParsingException due to already expired document

2014-05-08 Thread Benjamin Devèze
Hi Ravi,

After a quick investigation I would say that the problem is here:
https://github.com/mallocator/Elasticsearch-BigQuery-River/blob/master/src/main/java/org/elasticsearch/river/bigquery/BigQueryRiver.java#L391

The timestamp should be set in milliseconds so removing the / 1000 should
solve your issue.
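
In other words (an illustrative Java sketch, not the river's actual code):

// Elasticsearch does _timestamp/_ttl arithmetic in epoch milliseconds.
long nowMs = System.currentTimeMillis();   // e.g. 1399507494819
long wrong = nowMs / 1000;                 // 1399507494, which ES reads as a date back in 1970
// wrong + the default _ttl is still far in the past, hence the
// AlreadyExpiredException / "failed to parse [_ttl]" seen at index time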

Hope this helps



-- 
Benjamin DEVEZE

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CABecc28Fa7eM_ixGWzkkhw8c6ACOzc0FX-tkWf51uxFJKJBBbQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Kibana browser support

2014-05-08 Thread sd

   Kibana supports the latest browsers.
   What are the minimum browser versions supported by Kibana 3  for IE, 
Chrome, Mozilla, Safari ?

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/0e09eedb-568d-4bcd-a177-4f6d1c0f4420%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


default _ttl causes MapperParsingException due to already expired document

2014-05-08 Thread Ravi Gairola
I have a river importing data from Big Query and I import it into an index 
via bulk that has a default _ttl of 30 days configured. I don't set the ttl 
anywhere on the document when importing, so every document should just get 
the ttl set from the default value.

Unfortunately though I keep getting exceptions such as this one:

2014-05-08 00:04:54,819][DEBUG][action.index ] [prod_log_3] 
[prod_-2014.05.08][4], node[7iHEb2ciTsGSaR3LxKbw8w], [P], s[STARTED]: 
Failed to execute [index 
{[prod_-2014.05.08][logging][g-6CRDhSS2OksWfF3OpCTg], 
source[{"message":"Message returned successfully. Size: 
2","timestamp":"1399507397000","level":"INFO","mdc":"{\"time_received\":\"1399507397320\",\"time_responded\":\"1399507397333\",\"user_device\":\"\"xxx\"\",\"response_length\":\"2\",\"user_anchor\":\"0\",\"response_size\":\"2\",\"returned_models\":\"0\",\"user_tag\":\"\"production\"\",\"user_model\":\"\"5.06\"\"}","thread":"Request
 
717C91C3","logger":my.pkg.Servlet"}]}]
org.elasticsearch.index.mapper.MapperParsingException: failed to parse 
[_ttl]
at 
org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:418)
at 
org.elasticsearch.index.mapper.internal.TTLFieldMapper.postParse(TTLFieldMapper.java:177)
at 
org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:523)
at 
org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:462)
at 
org.elasticsearch.index.shard.service.InternalIndexShard.prepareCreate(InternalIndexShard.java:363)
at 
org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:215)
at 
org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:556)
at 
org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:426)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.elasticsearch.index.AlreadyExpiredException: already expired 
[prod_context_eng-2014.05.08]/[logging]/[g-6CRDhSS2OksWfF3OpCTg] due to 
expire at [3991507494] and was processed at [1399507494819]
at 
org.elasticsearch.index.mapper.internal.TTLFieldMapper.innerParseCreateField(TTLFieldMapper.java:215)
at 
org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:215)
at 
org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:408)
... 10 more


The mapping for the index looks like this:

logging: {
  _timestamp: {
enabled: true
  },
  _ttl: {
enabled: true,
default: 259200
  },
  properties: {
timestamp: {
  type: string
},
message: {
  type: string
},
level: {
  type: string
},
mdc: {
  type: string
},
thread: {
  type: string
},
logger: {
  type: string
}
  }
}

I've checked that the clocks on each of the three nodes are in sync, and there 
was only negligible skew.
The cluster is running ES version 1.1.1 on GCE using standard n1 instances 
with dedicated disks.

The connector used for nodes to find each other 
is https://github.com/mallocator/Elasticsearch-GCE-Discovery

The river used to import data 
is https://github.com/mallocator/Elasticsearch-BigQuery-River


Any suggestions on what I can do to fix/improve this issue would be very 
welcome.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/dc0d6f32-fc06-4598-9f85-f78e6d342cb3%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Node restart?

2014-05-08 Thread Ivan Brusic
To answer my own question, it is in fact disabled:
https://github.com/elasticsearch/elasticsearch/issues/265

-- 
Ivan


On Wed, May 7, 2014 at 5:18 PM, Ivan Brusic  wrote:

> Does the nodes restart action work? It is not documented, and whenever I try
> to use it I get:
>
> {
>   "error" : "ElasticSearchIllegalStateException[restart is disabled (for
> now) ]",
>   "status" : 500
> }
>
> Which indicates a failure, but there is nothing else in the logs which
> indicates any issues.
>
>
> https://github.com/elasticsearch/elasticsearch/blob/1.0/src/main/java/org/elasticsearch/action/admin/cluster/node/restart/TransportNodesRestartAction.java#L68
>
> The disabled property is set to false by default. Trying to come up with a
> more graceful way for non-technical users to restart servers.
>
> --
> Ivan
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQAmJZff_v0JGLoq1xCUWE10RHVqgLYtCw9MB7E3bowFig%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: elasticsearch throws ElasticsearchIllegalStateException: node is not configured to store local location

2014-05-08 Thread Nish
I get the error while running logstash. I tried 127.0.0.1, "localhost", 
and it still throws the same error; the weird thing is that if I use 
elasticsearch in the input{} block, logstash is able to read from it. 


On Thursday, May 8, 2014 12:32:35 PM UTC-4, Nish wrote:
>
> I am turning on a single node instance (master+data) and trying to use 
> logstash and I see this error: 
>
> log4j, [2014-05-08T16:11:05.571] ERROR: 
> org.elasticsearch.gateway.local.state.meta: 
> [logstash-ip-10-169-36-251-5993-4082] failed to read local state, exiting...
> org.elasticsearch.ElasticsearchIllegalStateException: node is not 
> configured to store local location
>
> This is my elasticsearch.yml: 
>
> cluster.name: elasticsearchtest
> node.name: "node1"
> node.master: true
> node.data: true
> index.number_of_replicas: 0
>
>
> Any idea ? 
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/2f4326cb-6fa2-406f-8e4e-9c2d3264917b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


elasticsearch throws ElasticsearchIllegalStateException: node is not configured to store local location

2014-05-08 Thread Nish
I am turning on a single node instance (master+data) and trying to use 
logstash and I see this error: 

log4j, [2014-05-08T16:11:05.571] ERROR: 
org.elasticsearch.gateway.local.state.meta: 
[logstash-ip-10-169-36-251-5993-4082] failed to read local state, exiting...
org.elasticsearch.ElasticsearchIllegalStateException: node is not 
configured to store local location

This is my elasticsearch.yml: 

cluster.name: elasticsearchtest
node.name: "node1"
node.master: true
node.data: true
index.number_of_replicas: 0


Any idea ? 

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/6d2df641-8afb-4d9d-831f-7bf1ab801d8f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Help deleting by field name

2014-05-08 Thread Ivan Brusic
Unfortunately, Elasticsearch does not support update by query, so you
cannot programmatically delete a field. Perhaps there is something that can
be done on the Kibana side.
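
For reference, the delete-by-query-plus-exists-filter approach discussed in the
quoted messages below looks roughly like this. Note that it removes whole
documents containing the field, not just the field, and the index pattern and
field name here are only illustrative:

curl -XDELETE 'localhost:9200/logstash-*/_query' -d '{
  "query" : {
    "constant_score" : {
      "filter" : { "exists" : { "field" : "calculus_data.calculus_foo" } }
    }
  }
}'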

-- 
Ivan


On Wed, May 7, 2014 at 3:45 PM, Chris Laplante  wrote:

> Trying to just delete the field. The main real-world problem I'm facing is
> that if the fields picker is expanded in Kibana and there are 15K of them
> it bogs down the browser. :) Could be an area of improvement there for
> Kibana to impose a limit.
>
> Thanks,
>
> -Chris
>
>
>
> On Wed, May 7, 2014 at 2:37 PM, Ivan Brusic  wrote:
>
>> Are you looking to delete the documents or just the field? If you are
>> trying to delete the entire document that contains the field, you could use
>> a delete by query where the query contains on exists filter:
>>
>>
>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-delete-by-query.html
>>
>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-exists-filter.html
>>
>> However, you will not be able to use wildcards in the exists filter, so
>> you will need to explicitly state each filter. That's the best I can think
>> of. :)
>>
>> Cheers,
>>
>> Ivan
>>
>>
>> On Mon, May 5, 2014 at 3:25 PM, Chris Laplante wrote:
>>
>>> I have a number of fields (15K) that were created inadvertently.
>>>
>>> How would I delete them all from all indexes based on a pattern of the
>>> field name.
>>>
>>> Eg all the fields I want to delete are in indexes named with the
>>> standard logstash naming and are
>>>
>>> "calculus_data.calculus_*"
>>>
>>> Thanks,
>>>
>>> -Chris
>>>
>>>
>>>
>>>
>>>  --
>>> You received this message because you are subscribed to the Google
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to elasticsearch+unsubscr...@googlegroups.com.
>>>
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/elasticsearch/0ff099ca-5db2-4f34-a6fb-e418d8d38dac%40googlegroups.com
>>> .
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  --
>> You received this message because you are subscribed to a topic in the
>> Google Groups "elasticsearch" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/elasticsearch/C6rr8dJf_ZM/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQDRZczMoRahZfOvAhezU66zXT9ijq63xfgWj9zcnDWCeg%40mail.gmail.com
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAPWb6tqpLHk46hVeJfVyD9rZHPJP%2BmqPv6dLg8ytHYoeMVr3cA%40mail.gmail.com
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQC1bnH2%3D7qMKz6NAcAuC1AVwVe5gN9OmaytyLJ42-O8vQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: AmazonClientException[Unable to unmarshall error response...] when creating snapshot in S3

2014-05-08 Thread Pete Michel
I was able to resolve my issue. We have elasticsearch in a private subnet, 
and access to S3 goes through a squid proxy. I was able to alter the squid 
proxy configuration to fix the issue.

I added the following to /etc/squid/squid.conf...

ignore_expect_100 on

For reference, here are the squid proxy docs for this directive 
(http://www.squid-cache.org/Doc/config/ignore_expect_100/).

Not sure if you are in the same situation, but hope this helps.

--Pete


On Thursday, May 8, 2014 9:51:28 AM UTC-4, Paulo Correa wrote:
>
> Hi Pete, 
>
> we ran some more tests to see if we could narrow down the problem. We 
> found out that whenever we had an instance that was not in AWS' default-vpc 
> launch an snapshot, the problem occurred. That was the only factor that 
> made the problem happen, so it is not an issue with permissions, role, 
> buckets, etc, but rather may be a routing problem.
>
> We're thinking of submitting a ticket to AWS Support, as we're still 
> facing the error when we launch an instance in our internal VPC.
>
>
> On Thursday, May 8, 2014 10:21:48 AM UTC-3, Pete Michel wrote:
>>
>> Paulo,
>>
>> Did you ever figure out your error?  I just encountered the exact same 
>> problem and was hoping you had found a solution
>>
>> Thanks,
>> Pete
>>
>> On Friday, April 25, 2014 6:14:38 PM UTC-4, Paulo Correa wrote:
>>>
>>> I`ve set up  ES v.1.1.1 + AWS-cloud-plugin 2.1.1 on an EC2 instance, 
>>> using a Role to give access to my S3 bucket. Everything seems fine, I can 
>>> put, list and get objects from inside the instance (tested with node.js 
>>> code), but when I try to create an ES snapshot I get the following error 
>>> message:
>>>
>>>
>>> *$ curl -XPUT 
>>> "localhost:9200/_snapshot/my_snapshot_repo/snapshot1?wait_for_completion=true"*
>>> {"error":"SnapshotCreationException[[my_snapshot_repo:snapshot1] failed 
>>> to create snapshot]; nested: IOException[Failed to get 
>>> [snapshot-snapshot1]]; nested: AmazonClientException[Unable to unmarshall 
>>> error response (The declaration for the entity \"ContentType\" must end 
>>> with '>'.). Response Code: 417, Response Text: Expectation Failed]; nested: 
>>> SAXParseException[The declaration for the entity \"ContentType\" must end 
>>> with '>'.]; ","status":500}
>>>
>>> *Full error stack here*: https://gist.github.com/anonymous/11304863
>>>
>>>
>>> *I`ve set up my snapshot repository with:*
>>>
>>> $ curl -XPUT 'http://localhost:9200/_snapshot/my_snapshot_repo' -d '{ 
>>> "type": "s3", "settings": { "bucket": "mybucket","region": "sa-east"}}'
>>>
>>>
>>> My *elasticsearch.yml* aws config is as follows:
>>>
>>> ...
>>> cluster.name: montadores
>>> cloud.aws.region: sa-east-1
>>> discovery:
>>> type: ec2
>>>
>>> discovery.ec2.tag.elasticsearch_cluster: mobile_montadores
>>> cloud.node.auto_attributes: true
>>> ...
>>>
>>>
>>> *Discovery seems OK*, as I only have 1 instance running:
>>>
>>> ...
>>>
>>> [2014-04-25 21:16:53,265][INFO ][cluster.service  ] [Cap 'N 
>>> Hawk] new_master [Cap 'N 
>>> Hawk][M013xeCxQSu8QbZzdNucVA][ip-10-152-89-168][inet[/10.152.89.168:9300]]{aws_availability_zone=sa-east-1a},
>>>  
>>> reason: zen-disco-join (elected_as_master)
>>>
>>> ...
>>>
>>> [2014-04-25 21:16:56,005][INFO ][node ] [Cap 'N 
>>> Hawk] started
>>>  
>>>
>>>
>>> The only similar thing I found to this error was an old closed issue: 
>>> https://github.com/elasticsearch/elasticsearch/issues/1137
>>>
>>>
>>> Can anybody shed some light if I`m doing something wrong?
>>>
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/477b6020-911b-406a-b37e-a27543fe1c7e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Changing node and cluster name Version: 1.1.1

2014-05-08 Thread Francesco Audisio
Thank you very much, it works now!!! =)

On Thursday, May 8, 2014 12:42:57 PM UTC+2, David Pilato wrote:
>
> You should write something like:
>
> node.name: 192.168.0.12.raspi1
>
> Does it work?
>
> -- 
> *David Pilato* | *Technical Advocate* | *Elasticsearch.com*
> @dadoonet  | 
> @elasticsearchfr
>
>
> On May 8, 2014, at 09:22:06, Francesco Audisio (cesc...@gmail.com) 
> wrote:
>
> No, I haven't set "Blind Faith"; what is it? 
>
> This is my elasticsearch.yml:
>
>  
> https://gist.github.com/Fraaud/89114d2ad6b70daa3437#file-elasticsearch-yml
>
On Thursday, May 8, 2014 09:10:49 UTC+2, David Pilato wrote: 
>>
>>  Please use GIST instead of attaching files.
>>  
>>  Did you set name to be "Blind Faith"?
>>  
>>  Could you gist your elasticsearch.yml file?
>>
>>  -- 
>> *David Pilato* | *Technical Advocate* | *Elasticsearch.com* 
>>  @dadoonet  | 
>> @elasticsearchfr
>>  
>>
>> On May 8, 2014, at 08:44:49, Francesco Audisio (cesc...@gmail.com) wrote:
>>
>>  
>> I uploaded the log file; the problem starts at line 491. 
>>
>>> On Thursday, May 8, 2014 08:19:35 UTC+2, David Pilato wrote: 
>>>
>>>  Can you see anything in elasticsearch logs or in system logs?
>>>
>>>  -- 
>>> *David Pilato* | *Technical Advocate* | *Elasticsearch.com* 
>>>  @dadoonet  | 
>>> @elasticsearchfr
>>>  
>>>
>>> On May 7, 2014, at 23:13:01, Francesco Audisio (cesc...@gmail.com) wrote:
>>>
>>>  Yes, I have uncommented the line, but now the Elasticsearch server does 
>>> not start properly. I get this error: 
>>>
>>>  sudo /etc/init.d/elasticsearch restart
>>> [ ok ] Stopping Elasticsearch Server: Elasticsearch Server is not 
>>> running but pid file exists, cleaning up.
>>> [ ok ] Starting Elasticsearch Server:.
>>>  
>>> But if I comment the line out, the server starts properly with a random node 
>>> name. I do not understand why; do I have to fill in the entire file?
>>>
>>>
>>> On Wednesday, May 7, 2014 22:45:28 UTC+2, David Pilato 
>>> wrote: 

  Uncomment the line and put any name you want:
  
  node.name: My name
  
  Is that what you are looking for?

  -- 
 *David Pilato* | *Technical Advocate* | *Elasticsearch.com* 
  @dadoonet  | 
 @elasticsearchfr
  

 On May 7, 2014, at 22:44:00, Francesco Audisio (cesc...@gmail.com) 
 wrote:

  Hi all, 

 I am a beginner with Marvel, and I have already run into my first difficulty =) 

 How do I change the name of the node and assign it the IP address of the 
 machine, using this file:

  elasticsearch.yml
  
 because inside this file I have found this line 

  # Node names are generated dynamically on startup, so you're relieved
> # from configuring them manually. You can tie this node to a specific 
> name:
> #
> # node.name:
>

  but I don't understand how to write the name of the node; it 
 generates the node names automatically, and I know this is normal, 
 but how do I change the name? 
  
 I repeat, I am a beginner and I want to learn =D

  --
 You received this message because you are subscribed to the Google 
 Groups "elasticsearch" group.
 To unsubscribe from this group and stop receiving emails from it, send 
 an email to elasticsearc...@googlegroups.com.
 To view this discussion on the web visit 
 https://groups.google.com/d/msgid/elasticsearch/779a286a-96fe-4173-97f8-4a4bca017d50%40googlegroups.com
 .
 For more options, visit https://groups.google.com/d/optout.
  
   --
>>> You received this message because you are subscribed to the Google 
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to elasticsearc...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/elasticsearch/aed42466-db44-409b-b268-d04d2c5e4749%40googlegroups.com
>>> .
>>> For more options, visit https://groups.google.com/d/optout.
>>>  
>>>   --
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/36c4e9e1-c5f9-4c9f-b5b2-7024a2512b3e%4

Scripts reload on demand

2014-05-08 Thread Thomas
Hi,

I was wondering whether there is a way to reload, on demand, the scripts 
provided under config/scripts. I'm facing a weird situation where, although 
the documentation describes the scripts as being reloaded every xx amount of 
time (configurable), I do not see that happening, and there is no way to see 
a new script I put there unless I restart my node(s). Is there a curl request 
to force a reload of the scripts? Additionally, is there any curl command 
that can display which scripts are loaded into an ES node and which 
are not?

I use elasticsearch 1.1.1 and my scripts are in Groovy (with groovy lang 
plugin installed)
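
For context, what does work without a restart is sending the script inline
rather than referencing a file. This is only a sketch: the index, field name
and script are made up, and it assumes dynamic scripting is still enabled on
your 1.1.1 nodes.

curl -XGET 'localhost:9200/myindex/_search' -d '{
  "query" : { "match_all" : {} },
  "script_fields" : {
    "doubled" : { "lang" : "groovy", "script" : "doc[\"myfield\"].value * 2" }
  }
}'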

Thank you

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/51e0da62-8934-4e67-9fb8-792353f532da%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[ANN] Elasticsearch for Apache Hadoop 2.0 RC1 has been released

2014-05-08 Thread Costin Leau

Hi everyone,

I'm happy to announce that Elasticsearch for Apache Hadoop (aka es-hadoop) 2.0 RC1 has been released. You can read more 
about it on our blog at [1].


Cheers!

[1] http://www.elasticsearch.org/blog/es-hadoop-20-rc1

--
Costin


--
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/536B976E.9040501%40gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: jdk fails with out of memory error / es critical index counts

2014-05-08 Thread Nishchay Shah
OK, I got Marvel up and running on my test instances.


On Mon, May 5, 2014 at 11:27 PM, Mark Walkom wrote:

> You need to install a monitoring plugin to gain better insight into what
> is happening, it makes things a lot easier to visually see
> cluster/node/index state and remove your shell commands, which may not be
> 100% representative of what ES is actually doing.
>
> I suggest elastichq and marvel.
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
>
>
> On 6 May 2014 13:06, Nishchay Shah  wrote:
>
>> FYI settings:
>> *Master*:
>> [root@ip-10-169-36-251 logstash-2013.12.05]# grep -vE "^$|^#"
>> /xx/elasticsearch-1.1.1/config/elasticsearch.yml
>> cluster.name: elasticsearchtest
>> node.name: "node1"
>> node.master: true
>> node.data: true
>> index.number_of_replicas: 0
>> discovery.zen.ping.multicast.enabled: false
>> discovery.zen.ping.unicast.hosts: ["10.169.36.251", "10.186.152.19"]
>> *Non Master*
>> [root@ip-10-186-152-19 logstash-2013.12.05]# grep -vE "^$|^#"
>> /elasticsearch/es/elasticsearch-1.1.1/config/elasticsearch.yml
>> cluster.name: elasticsearchtest
>> node.name: "node2"
>> node.master: false
>> node.data: true
>> index.number_of_replicas: 0
>> discovery.zen.ping.multicast.enabled: false
>> discovery.zen.ping.unicast.hosts: ["10.169.36.251","10.186.152.19"]
>>
>>
>> On Mon, May 5, 2014 at 11:01 PM, Nishchay Shah wrote:
>>
>>> Probably not.
>>>
>>> I deleted all data from slave and restarted both servers and I see this:
>>>
>>> *Master: *
>>> [root@ip-10-169-36-251 logstash-2013.12.22]#  du -h --max-depth=1
>>> 16M./0
>>> 16M./1
>>> 8.0K./_state
>>> 15M./4
>>> 15M./3
>>> 15M./2
>>> 75M.
>>>
>>> *Data: *
>>>
>>> [root@ip-10-186-152-19 logstash-2013.12.22]# du -h --max-depth=1
>>> 16M./0
>>> 16M./1
>>> 15M./4
>>> 15M./3
>>> 15M./2
>>> 75M.
>>>
>>>
>>> On Mon, May 5, 2014 at 10:53 PM, Mark Walkom 
>>> wrote:
>>>
 Don't copy indexes on the OS level!

 Is your new cluster balancing the shards?

 Regards,
 Mark Walkom

 Infrastructure Engineer
 Campaign Monitor
 email: ma...@campaignmonitor.com
 web: www.campaignmonitor.com


 On 6 May 2014 12:46, Nishchay Shah  wrote:

> Hey Mark,
> Thanks for the response. I have currently created two new medium test
> instances (1 master 1 data only) because I didn't want to mess with the
> main dataset. In my test setup, I have about 600MB of data ; 7 indexes
>
> After looking around a lot I saw that the directory organization is
> /elasticsearch/es/elasticsearch-1.1.1/data/elasticsearchtest/nodes/*<node number>*/ and the master node has only 1 directory
>
> (master)
> # ls /elasticsearch/es/elasticsearch-1.1.1/data/elasticsearchtest/nodes
> 0
>
> So on node2 I created a "1" directory and moved 1 index from master to
> data ; So master now has six indexes in 0 and data has one in 1.
> When I started elasticsearch after that I got to a point where the
> master is not NOT copying the data back to itself.. but now node2 is
> copying master's data and making a "0" directory ; Also, I am unable to
> query the node2's data !
>
>
>
>
> On Mon, May 5, 2014 at 9:34 PM, Mark Walkom  > wrote:
>
>> Moving data on the OS level without making ES aware can cause
>> difficulties as you are seeing.
>>
>>  A few suggestions on how to resolve this and improve things in
>> general;
>>
>>1. Set your heap size to 31GB.
>>2. Use Oracle's java, not OpenJDK.
>>3. Set bootstrap.mlockall to true, you don't want to swap, ever.
>>
>> Given the large number of indexes you have on node1, and to get to a
>> point where you can move some of these to a new node and stop the root
>> problem, it's going to be worth closing some of the older indexes. So try
>> these steps;
>>
>>1. Stop node2.
>>2. Delete any data from the second node, to prevent things being
>>auto imported again.
>>3. Start node1, or restart it if it's running.
>>4. Close all your indexes older than a month -
>>
>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-open-close.html.
>>You can use wildcards in index names to make the update easier. What 
>> this
>>will do is tell ES to not load the index metadata into memory, which 
>> will
>>help with your OOM issue.
>>5. Start node2 and let it join the cluster.
>>6. Make sure the cluster is in a green state. If you're not
>>already, use something like ElasticHQ, kopf or Marvel to monitor 
>> things.
>>7. Let the cluster rebalance the current open indexes.
>>8. Once that is ok and things are stable, reopen your closed
>>indexes a month at a time, and l
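
For reference, step 4 above with a wildcard looks roughly like this (the index
pattern is illustrative):

curl -XPOST 'localhost:9200/logstash-2013.12.*/_close'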

Re: Snapshot Duration increasing over time

2014-05-08 Thread Dipesh Patel
Hi Igor

We are using elasticsearch 1.1.1.
Currently we are keeping all the snapshots that we make in S3; we haven't yet 
decided on an archive strategy/solution, so at the moment we have 131 
snapshots in the S3 bucket.
We also create about 112 new indices a day.
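
For what it's worth, old snapshots can be removed from the repository with the
delete snapshot API, which keeps the per-snapshot bookkeeping from growing
without bound (the repository and snapshot names below are illustrative):

curl -XDELETE 'localhost:9200/_snapshot/my_s3_repo/snapshot_2014-04-01_01:30:00'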

I'll explain our setup a bit; it may well be that we should be doing 
something different. We are grabbing application logs from lots of 
different apps and putting them into elasticsearch. We are using flume to 
do this, so it is similar to a logstash setup. The one major difference is 
that we are creating an index for each application, so if we have 100 apps 
we will create 100 indices each day, one per app.


Dip

On Thursday, May 8, 2014 2:58:22 PM UTC+1, Igor Motov wrote:
>
> Hi Dipesh,
>
> I have a few questions. Are you still on S3? Which version of 
> elasticsearch are you using? How many snapshots do you currently keep in 
> S3? How fast is your index growing over time?
>
> Igor
>
> On Wednesday, May 7, 2014 6:58:05 AM UTC-4, Dipesh Patel wrote:
>>
>> Hi
>>
>> We've noticed recently that our snapshot durations are increasing over 
>> time. Our rate of flow of data going into elasticsearch has remained fairly 
>> constant. Though we do create new indices everyday ( though this is a fixed 
>> number that doesn't vary from day to day ). We are currently snapshoting, 
>> or trying to snapshot every hour. However with the snapshots taking a 
>> progressively longer time this is proving difficult.
>>
>> Here's some stats showing our time to finish:
>>
>> Name                          Duration (ms)
>> snapshot_2014-05-01_01:30:00  4497010
>> snapshot_2014-05-01_03:30:00  4513037
>> snapshot_2014-05-01_05:30:00  4770288
>> snapshot_2014-05-01_07:30:00  5413361
>> snapshot_2014-05-01_11:30:00  6978384
>> snapshot_2014-05-01_13:30:00  6907554
>> snapshot_2014-05-01_15:30:00  7388500
>> This is just the tail end; originally the snapshots were only taking 7-8 
>> mins to run, and they've just been getting progressively longer.
>>
>> Any help to debug etc appreciated.
>>
>> Dip
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/441319d1-a521-4a6b-9671-de541d129029%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Snapshot Duration increasing over time

2014-05-08 Thread Igor Motov
Hi Dipesh,

I have a few questions. Are you still on S3? Which version of elasticsearch 
are you using? How many snapshots do you currently keep in S3? How fast is 
your index growing over time?

Igor

On Wednesday, May 7, 2014 6:58:05 AM UTC-4, Dipesh Patel wrote:
>
> Hi
>
> We've noticed recently that our snapshot durations are increasing over 
> time. Our rate of flow of data going into elasticsearch has remained fairly 
> constant. Though we do create new indices everyday ( though this is a fixed 
> number that doesn't vary from day to day ). We are currently snapshoting, 
> or trying to snapshot every hour. However with the snapshots taking a 
> progressively longer time this is proving difficult.
>
> Here's some stats showing our time to finish:
>
> Name                          Duration (ms)
> snapshot_2014-05-01_01:30:00  4497010
> snapshot_2014-05-01_03:30:00  4513037
> snapshot_2014-05-01_05:30:00  4770288
> snapshot_2014-05-01_07:30:00  5413361
> snapshot_2014-05-01_11:30:00  6978384
> snapshot_2014-05-01_13:30:00  6907554
> snapshot_2014-05-01_15:30:00  7388500
> This is just the tail end; originally the snapshots were only taking 7-8 
> mins to run, and they've just been getting progressively longer.
>
> Any help to debug etc appreciated.
>
> Dip
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/96ca5fa6-f366-4f9e-b3c1-a57b511a341e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


NEST: How can I get the raw JSON that comprises a doc just before it's indexed?

2014-05-08 Thread rianjs
Basically, I'm trying to get the raw JSON that would be sent to 
elasticsearch for indexing, *without actually indexing it*. Is there any 
way to do this? I didn't see anything in the docs or unit tests that would 
be useful.

Thanks,
-Rian

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/6906320e-9cef-4934-88c1-f6904a9f2a56%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: AmazonClientException[Unable to unmarshall error response...] when creating snapshot in S3

2014-05-08 Thread Paulo Correa
Hi Pete, 

we ran some more tests to see if we could narrow down the problem. We found 
out that whenever an instance that was not in AWS' default-vpc launched a 
snapshot, the problem occurred. That was the only factor that made the 
problem happen, so it is not an issue with permissions, roles, buckets, 
etc., but rather may be a routing problem.

We're thinking of submitting a ticket to AWS Support, as we're still facing 
the error when we launch an instance in our internal VPC.


On Thursday, May 8, 2014 10:21:48 AM UTC-3, Pete Michel wrote:
>
> Paulo,
>
> Did you ever figure out your error?  I just encountered the exact same 
> problem and was hoping you had found a solution
>
> Thanks,
> Pete
>
> On Friday, April 25, 2014 6:14:38 PM UTC-4, Paulo Correa wrote:
>>
>> I`ve set up  ES v.1.1.1 + AWS-cloud-plugin 2.1.1 on an EC2 instance, 
>> using a Role to give access to my S3 bucket. Everything seems fine, I can 
>> put, list and get objects from inside the instance (tested with node.js 
>> code), but when I try to create an ES snapshot I get the following error 
>> message:
>>
>>
>> *$ curl -XPUT 
>> "localhost:9200/_snapshot/my_snapshot_repo/snapshot1?wait_for_completion=true"*
>> {"error":"SnapshotCreationException[[my_snapshot_repo:snapshot1] failed 
>> to create snapshot]; nested: IOException[Failed to get 
>> [snapshot-snapshot1]]; nested: AmazonClientException[Unable to unmarshall 
>> error response (The declaration for the entity \"ContentType\" must end 
>> with '>'.). Response Code: 417, Response Text: Expectation Failed]; nested: 
>> SAXParseException[The declaration for the entity \"ContentType\" must end 
>> with '>'.]; ","status":500}
>>
>> *Full error stack here*: https://gist.github.com/anonymous/11304863
>>
>>
>> *I`ve set up my snapshot repository with:*
>>
>> $ curl -XPUT 'http://localhost:9200/_snapshot/my_snapshot_repo' -d '{ 
>> "type": "s3", "settings": { "bucket": "mybucket","region": "sa-east"}}'
>>
>>
>> My *elasticsearch.yml* aws config is as follows:
>>
>> ...
>> cluster.name: montadores
>> cloud.aws.region: sa-east-1
>> discovery:
>> type: ec2
>>
>> discovery.ec2.tag.elasticsearch_cluster: mobile_montadores
>> cloud.node.auto_attributes: true
>> ...
>>
>>
>> *Discovery seems OK*, as I only have 1 instance running:
>>
>> ...
>>
>> [2014-04-25 21:16:53,265][INFO ][cluster.service  ] [Cap 'N Hawk] 
>> new_master [Cap 'N 
>> Hawk][M013xeCxQSu8QbZzdNucVA][ip-10-152-89-168][inet[/10.152.89.168:9300]]{aws_availability_zone=sa-east-1a},
>>  
>> reason: zen-disco-join (elected_as_master)
>>
>> ...
>>
>> [2014-04-25 21:16:56,005][INFO ][node ] [Cap 'N Hawk] 
>> started
>>  
>>
>>
>> The only similar thing I found to this error was an old closed issue: 
>> https://github.com/elasticsearch/elasticsearch/issues/1137
>>
>>
>> Can anybody shed some light if I`m doing something wrong?
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/261e366b-232e-4474-ba47-4d01c18af569%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: AmazonClientException[Unable to unmarshall error response...] when creating snapshot in S3

2014-05-08 Thread Pete Michel
Paulo,

Did you ever figure out your error?  I just encountered the exact same 
problem and was hoping you had found a solution

Thanks,
Pete

On Friday, April 25, 2014 6:14:38 PM UTC-4, Paulo Correa wrote:
>
> I`ve set up  ES v.1.1.1 + AWS-cloud-plugin 2.1.1 on an EC2 instance, using 
> a Role to give access to my S3 bucket. Everything seems fine, I can put, 
> list and get objects from inside the instance (tested with node.js code), 
> but when I try to create an ES snapshot I get the following error message:
>
>
> *$ curl -XPUT 
> "localhost:9200/_snapshot/my_snapshot_repo/snapshot1?wait_for_completion=true"*
> {"error":"SnapshotCreationException[[my_snapshot_repo:snapshot1] failed to 
> create snapshot]; nested: IOException[Failed to get [snapshot-snapshot1]]; 
> nested: AmazonClientException[Unable to unmarshall error response (The 
> declaration for the entity \"ContentType\" must end with '>'.). Response 
> Code: 417, Response Text: Expectation Failed]; nested: 
> SAXParseException[The declaration for the entity \"ContentType\" must end 
> with '>'.]; ","status":500}
>
> *Full error stack here*: https://gist.github.com/anonymous/11304863
>
>
> *I`ve set up my snapshot repository with:*
>
> $ curl -XPUT 'http://localhost:9200/_snapshot/my_snapshot_repo' -d '{ 
> "type": "s3", "settings": { "bucket": "mybucket","region": "sa-east"}}'
>
>
> My *elasticsearch.yml* aws config is as follows:
>
> ...
> cluster.name: montadores
> cloud.aws.region: sa-east-1
> discovery:
> type: ec2
>
> discovery.ec2.tag.elasticsearch_cluster: mobile_montadores
> cloud.node.auto_attributes: true
> ...
>
>
> *Discovery seems OK*, as I only have 1 instance running:
>
> ...
>
> [2014-04-25 21:16:53,265][INFO ][cluster.service  ] [Cap 'N Hawk] 
> new_master [Cap 'N 
> Hawk][M013xeCxQSu8QbZzdNucVA][ip-10-152-89-168][inet[/10.152.89.168:9300]]{aws_availability_zone=sa-east-1a},
>  
> reason: zen-disco-join (elected_as_master)
>
> ...
>
> [2014-04-25 21:16:56,005][INFO ][node ] [Cap 'N Hawk] 
> started
>  
>
>
> The only similar thing I found to this error was an old closed issue: 
> https://github.com/elasticsearch/elasticsearch/issues/1137
>
>
> Can anybody shed some light if I`m doing something wrong?
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/f86f89fc-6517-4e1b-ae6b-bce9e93d24e7%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: embedded es test server hangs on startup

2014-05-08 Thread joergpra...@gmail.com
Does it work better if you build the node with local(true) ?

Something like

Node node = nodeBuilder().local(true).settings(settings).build();

For junit testing I also use randomized cluster names for each node.
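
A sketch of the randomized-cluster-name idea, using the same classes as the
original post (the names here are illustrative):

String clusterName = "test-cluster-" + java.util.UUID.randomUUID();
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", clusterName)
        .build();
Node node = NodeBuilder.nodeBuilder().local(true).settings(settings).build().start();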

Otherwise I get a bunch of nodes on a single cluster in the JVM from
previous runs that do not respond and take a long time before timing out.
Not sure what causes this; maybe it is also the TribeService startup.

To control whether the tribe node starts up, ES should add a global setting
like "tribe.enabled: false/true".

Jörg


On Thu, May 8, 2014 at 2:59 PM, Jilles van Gurp wrote:

> I'm trying run elasticsearch as part of my jruby tests. Here's some of the
> code I use to do that:
>
> Settings settings = ImmutableSettings.settingsBuilder()
>
> .put("name", nodeName)
>
> .put("cluster.name", "linko-dev-cluster")
>
> .put("index.gateway.type", "none")
>
> .put("gateway.type", "none")
>
> .put("discovery.zen.ping.multicast.enabled", "false")
>
> .put("path.data", indexDir)
>
> .put("path.logs", logDir)
>
> .put("foreground", "true")
>
> .put("http.port", esPort)
>
> .build();
>
>
>
> NodeBuilder nodeBuilder = NodeBuilder.nodeBuilder()
>
> .settings(settings)
>
> .loadConfigSettings(false);
>
> node = nodeBuilder
>
> .build();
>
> // register a shutdown hook
>
> Runtime.getRuntime().addShutdownHook(new Thread() {
>
> @Override
>
> public void run() {
>
> node.close();
>
> }
>
> });
>
> node.start();
>
>
> From my IDE and maven this works fine but when I call this from within
> jruby in my rspec, elasticsearch hangs when I try to start it. A kill -QUIT
> shows that it hangs on some initialization code related to tribe
> functionality:
>
> "main" #1 prio=5 os_prio=31 tid=0x7fefe9801000 nid=0x1903 waiting on
> condition [0x000108eb8000]
>
>java.lang.Thread.State: WAITING (parking)
>
> at sun.misc.Unsafe.park(Native Method)
>
> - parking to wait for  <0x000795b61a58> (a
> java.util.concurrent.CountDownLatch$Sync)
>
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
>
> at org.elasticsearch.tribe.TribeService.doStart(TribeService.java:171)
>
> at
> org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
>
> at
> org.elasticsearch.node.internal.InternalNode.start(InternalNode.java:240)
>
> at io.linko.ng.es.EsTestLauncher.start(EsTestLauncher.java:108)
>
>
> I'd really appreciate any workarounds. I don't really need the transport
> running or tribe. I just need to be able to connect via http. I'm using
> jruby 1.7.12, java 1.8, and elasticsearch 1.1.1 pulled in via a maven
> dependency.
>



Re: red status after unexpected stop

2014-05-08 Thread Arnau Bria
Hello,

Just an update, as I was able to solve the issue.
The health was:

cluster_name: BigLog
status: red
timed_out: false
number_of_nodes: 1
number_of_data_nodes: 1
active_primary_shards: 1175
active_shards: 1175
relocating_shards: 0
initializing_shards: 0
unassigned_shards: 1195

The unassigned shards came from having number_of_replicas set to 1 with only
1 node, so a replica copy of each shard was waiting for a new node to be
allocated to.

To solve that, I removed the replica of each index, i.e.:

curl -XPUT "http://XXX:9200/logstash-2014.05.01/_settings" -d '{
  "index" : {
    "number_of_replicas" : 0
  }
}'

Then, I saw that I still had 280 unassigned shards. I got their indexes
by doing:

curl XX:9200/_cluster/state

and then looking for routing_nodes.
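
The same lookup can be done programmatically; here is a hedged sketch against
the ES 1.x Java API (the Client is assumed to be set up elsewhere):

import java.util.List;

import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.ShardRoutingState;

public class UnassignedShards {
    // Print the index name and shard id of every unassigned shard.
    public static void print(Client client) {
        ClusterStateResponse response = client.admin().cluster()
                .prepareState().execute().actionGet();
        List<ShardRouting> unassigned = response.getState().routingTable()
                .shardsWithState(ShardRoutingState.UNASSIGNED);
        for (ShardRouting shard : unassigned) {
            System.out.println(shard.index() + " / shard " + shard.id());
        }
    }
}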

As the log did not say anything about them, I decided to remove those
indexes:

curl -XDELETE "http://XX:9200/$a/"

and then refresh:

curl -XPOST "http://X:9200/$a/_refresh"

Now my cluster is green (as I've removed the replicas) and logstash
is working.

{
cluster_name: BigLog
status: green
timed_out: false
number_of_nodes: 1
number_of_data_nodes: 1
active_primary_shards: 1200
active_shards: 1200
relocating_shards: 0
initializing_shards: 0
unassigned_shards: 0
}
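
For completeness, the replica change can also be made from the Java API; a
rough sketch (ES 1.x; the wildcard index pattern and class name are
illustrative):

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

public class DropReplicas {
    // Set number_of_replicas to 0 on all logstash indices so a single-node
    // cluster has no replica shards left to assign.
    public static void dropReplicas(Client client) {
        client.admin().indices()
                .prepareUpdateSettings("logstash-*")
                .setSettings(ImmutableSettings.settingsBuilder()
                        .put("index.number_of_replicas", 0)
                        .build())
                .execute().actionGet();
    }
}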


Cheers,
Arnau



embedded es test server hangs on startup

2014-05-08 Thread Jilles van Gurp
I'm trying to run elasticsearch as part of my jruby tests. Here's some of the 
code I use to do that:

Settings settings = ImmutableSettings.settingsBuilder()
    .put("name", nodeName)
    .put("cluster.name", "linko-dev-cluster")
    .put("index.gateway.type", "none")
    .put("gateway.type", "none")
    .put("discovery.zen.ping.multicast.enabled", "false")
    .put("path.data", indexDir)
    .put("path.logs", logDir)
    .put("foreground", "true")
    .put("http.port", esPort)
    .build();

NodeBuilder nodeBuilder = NodeBuilder.nodeBuilder()
    .settings(settings)
    .loadConfigSettings(false);

node = nodeBuilder.build();

// register a shutdown hook
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        node.close();
    }
});

node.start();


From my IDE and maven this works fine but when I call this from within 
jruby in my rspec, elasticsearch hangs when I try to start it. A kill -QUIT 
shows that it hangs on some initialization code related to tribe 
functionality:

"main" #1 prio=5 os_prio=31 tid=0x7fefe9801000 nid=0x1903 waiting on 
condition [0x000108eb8000]

   java.lang.Thread.State: WAITING (parking)

at sun.misc.Unsafe.park(Native Method)

- parking to wait for  <0x000795b61a58> (a 
java.util.concurrent.CountDownLatch$Sync)

at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)

at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)

at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)

at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)

at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)

at org.elasticsearch.tribe.TribeService.doStart(TribeService.java:171)

at 
org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)

at org.elasticsearch.node.internal.InternalNode.start(InternalNode.java:240)

at io.linko.ng.es.EsTestLauncher.start(EsTestLauncher.java:108)


I'd really appreciate any workarounds. I don't really need the transport 
running or tribe. I just need to be able to connect via http. I'm using 
jruby 1.7.12, java 1.8, and elasticsearch 1.1.1 pulled in via a maven 
dependency.



Corrupt translog entry error in index recovery

2014-05-08 Thread Nitzan Dana
Hi,

I'm using elasticsearch in my logstash setup and having some real troubles 
for a couple of days now.
My elasticsearch cluster state is red and I can see that there are some 
unassigned shards, and in the server logs I can see that there is an error 
recovering a specific index.
Do you have any clue about fixing this unfortunate situation?

Thanks!
 

> [2014-05-08 11:00:43,812][INFO ][index.gateway.local  ] [LogstashES3] 
> [logstash-2014.05.05][3] ignoring recovery of a corrupt tran
> slog entry
> org.elasticsearch.index.mapper.MapperParsingException: failed to parse 
> [time]
> at 
> org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:416)
> at 
> org.elasticsearch.index.mapper.multifield.MultiFieldMapper.parse(MultiFieldMapper.java:204)
> at 
> org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:613)
> at 
> org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:466)
> at 
> org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:516)
> at 
> org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:460)
> at 
> org.elasticsearch.index.shard.service.InternalIndexShard.prepareCreate(InternalIndexShard.java:353)
> at 
> org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:697)
> at 
> org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:224)
> at 
> org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:174)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: org.elasticsearch.common.jackson.core.JsonParseException: 
> Illegal unquoted character ((CTRL-CHAR, code 0)): has to be escaped using 
> backslash
> to be included in string value
>  at [Source: [B@4960d84; line: 1, column: 402]
> at 
> org.elasticsearch.common.jackson.core.JsonParser._constructError(JsonParser.java:1524)
> at 
> org.elasticsearch.common.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:557)
> at 
> org.elasticsearch.common.jackson.core.base.ParserMinimalBase._throwUnquotedSpace(ParserMinimalBase.java:518)
> at 
> org.elasticsearch.common.jackson.core.json.UTF8StreamJsonParser._finishString2(UTF8StreamJsonParser.java:2220)
> at 
> org.elasticsearch.common.jackson.core.json.UTF8StreamJsonParser._finishString(UTF8StreamJsonParser.java:2150)
> at 
> org.elasticsearch.common.jackson.core.json.UTF8StreamJsonParser.getText(UTF8StreamJsonParser.java:282)
> at 
> org.elasticsearch.common.xcontent.json.JsonXContentParser.text(JsonXContentParser.java:85)
> at 
> org.elasticsearch.common.xcontent.support.AbstractXContentParser.textOrNull(AbstractXContentParser.java:123)
> at 
> org.elasticsearch.index.mapper.core.StringFieldMapper.parseCreateFieldForString(StringFieldMapper.java:316)
> at 
> org.elasticsearch.index.mapper.core.StringFieldMapper.parseCreateField(StringFieldMapper.java:261)
> at 
> org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:405)
> ... 12 more
> [2014-05-08 11:00:43,813][WARN ][indices.cluster  ] [LogstashES3] 
> [logstash-2014.05.05][3] failed to start shard
> org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: 
> [logstash-2014.05.05][3] failed to recover shard
> at 
> org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:238)
> at 
> org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:174)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.lang.IllegalArgumentException: No type mapped for [92]
> at 
> org.elasticsearch.index.translog.Translog$Operation$Type.fromId(Translog.java:216)
> at 
> org.elasticsearch.index.translog.TranslogStreams.readTranslogOperation(TranslogStreams.java:34)
> at 
> org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:215)
> ... 4 more
> [2014-05-08 11:00:43,814][WARN ][cluster.action.shard ] [LogstashES3] 
> [logstash-2014.05.05][3] sending failed shard for [logstash-
> 2014.05.05][3], node[_v2d00AJTlWnJoNbUGhotA], [P], s[INITIALIZING], 
> indexUUID [Ui9T7dl4RSWjuodOs9tSJA], reason [Failed to start shard, message 
> [IndexShar
> dGatewayRecoveryException[[logstash-2014.05.05][3] failed to recov

Strange appearance of dynamic field

2014-05-08 Thread Michał


Last month, a weird dynamic mapping appeared in my elasticsearch index: 
http://pastebin.com/dykbXEJy. It seems as if this mapping created 
itself...

What might be the reason for this? Can I remove those dynamic mappings?

I use version: 0.20.4



Re: Changing node and cluster name Version: 1.1.1

2014-05-08 Thread David Pilato
You should write something like:

node.name: 192.168.0.12.raspi1

Does it work?

-- 
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr


On 8 May 2014 at 09:22:06, Francesco Audisio (cesco...@gmail.com) wrote:

No, I haven't set "Blind Faith". What is it?

This is my elasticsearch.yml:

https://gist.github.com/Fraaud/89114d2ad6b70daa3437#file-elasticsearch-yml

On Thursday, 8 May 2014 at 09:10:49 UTC+2, David Pilato wrote:
Please use GIST instead of attaching files.

Did you set name to be "Blind Faith"?

Could you gist your elasticsearch.yml file?

-- 
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr


On 8 May 2014 at 08:44:49, Francesco Audisio (cesc...@gmail.com) wrote:


I uploaded the log file; the problem starts at line 491

On Thursday, 8 May 2014 at 08:19:35 UTC+2, David Pilato wrote:
Can you see anything in elasticsearch logs or in system logs?

-- 
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr


On 7 May 2014 at 23:13:01, Francesco Audisio (cesc...@gmail.com) wrote:

Yes, I have uncommented the line, but now the Elasticsearch server does not 
start properly; I get this error:

sudo /etc/init.d/elasticsearch restart
[ ok ] Stopping Elasticsearch Server: Elasticsearch Server is not running but 
pid file exists, cleaning up.
[ ok ] Starting Elasticsearch Server:.

But if I comment the line out, the server starts properly with a random node name. 
I do not understand why; do I have to fill out the entire file?


On Wednesday, 7 May 2014 at 22:45:28 UTC+2, David Pilato wrote:
Uncomment the line and put any name you want:

node.name: My name

Is that what you are looking for?

-- 
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr


On 7 May 2014 at 22:44:00, Francesco Audisio (cesc...@gmail.com) wrote:

Hi all,

I am a beginner with Marvel, and I have already met my first difficulty =) 

How do I change the node name and assign it the IP address of the PC, using 
this file:

elasticsearch.yml

because inside this file I have found this line 

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
# node.name:

But I don't understand how to write the node name. It generates node names 
automatically, and I know this is normal, but how do I change the name?
 
I repeat, I am a beginner and I want to learn =D



Re: MoreLikeThis can't identify that 2 documents with exactly same attachments are duplicates

2014-05-08 Thread Alex Ksikes
On May 8, 2014 8:09 AM, "Zoran Jeremic"  wrote:
>
> Hi Alex,
>
> Thank you for this explanation. This really helped me to understand how
it works, and now I managed to get the results I was expecting just by
setting the max_query_terms value to 0 or some very high value. With these
results, in my tests I was able to identify duplicates. I noticed a couple of
things though.
>
> - I got much better results with web pages when I indexed the attachment as
html source and used text extracted by Jsoup in the query, than when I indexed
text extracted from the web page as the attachment and used that text in the query. I
suppose the difference is related to the fact that Jsoup did not extract
text the same way as the Tika parser used by ES did.
> - There was a significant improvement in the results in the second test,
where I indexed 50 web pages, compared to the first test, where I indexed 10 web
pages. I deleted the index before each test. I suppose this is related to
the tf*idf.
> If so, does it make sense to provide some training set for elasticsearch
that is used to populate the index before the system goes into use?

Perhaps you are asking for a background dataset to bias the selection of
interesting terms. This could make sense depending on your application.

> Could you please define "relevant" in your setting? In a corpus of very
similar documents, is your goal to find the ones which are oddly different?
Have you looked into ES significant terms?
> I have a service that recommends documents to students based on
their current learning context. It creates a tokenized string from the titles,
descriptions and keywords of the course lessons the student is working on at the
moment. I'm using this string as input to the mlt_like_text to find some
interesting resources that could help them.
> I want to avoid having duplicates (or very similar documents) among the top
documents that are recommended.
> My idea was that during document upload (before I index it with
elasticsearch) I check whether a duplicate already exists, and store
this information as an ES document field. Later, in the query I can specify that
duplicates are not recommended.
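
If the duplicate flag is stored as a field as described above, the
recommendation query could combine more_like_this with a filter on that flag.
A hedged sketch with the ES 1.x Java API (the field names are made up):

import org.elasticsearch.index.query.FilterBuilders;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class RecommendQuery {
    // more_like_this on the extracted text, restricted to documents that
    // were not flagged as duplicates at upload time.
    public static QueryBuilder build(String likeText) {
        return QueryBuilders.filteredQuery(
                QueryBuilders.moreLikeThisQuery("content")
                        .likeText(likeText)
                        .minTermFreq(1)
                        .minDocFreq(1),
                FilterBuilders.termFilter("is_duplicate", false));
    }
}
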
>
> Here you should probably strip the html tags, and solely index the text
in its own field.
> As I already mentioned this didn't give me good results for some reason.
>
> Do you think this approach would work fine with large textual documents,
e.g. pdf documents with a couple of hundred pages? My main concern is
the performance of these queries using like_text, which is why I
was trying to avoid this approach and use mlt with a document id as input.

I don't think this approach would work well in this case, but you should
try. I think what you are after is to either extract good features for your
PDF documents and search on that, or finger printing. This could be
achieved by playing with analyzers.
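
As a rough illustration of the finger-printing idea (a sketch only; the
normalization, hash choice, and field name are assumptions, not something
established in this thread):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class Fingerprint {
    // Hash the normalized extracted text; index the result in a
    // not_analyzed "fingerprint" field alongside the document.
    public static String of(String extractedText) throws Exception {
        String normalized = extractedText.replaceAll("\\s+", " ").trim().toLowerCase();
        byte[] digest = MessageDigest.getInstance("SHA-1")
                .digest(normalized.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    // Exact-duplicate lookup then becomes a cheap term query; near-duplicates
    // still need better features, as discussed above.
    public static QueryBuilder duplicateLookup(String fingerprint) {
        return QueryBuilders.termQuery("fingerprint", fingerprint);
    }
}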

> Thanks,
> Zoran
>
>
>
> On Wednesday, 7 May 2014 06:14:56 UTC-7, Alex Ksikes wrote:
>>
>> Hi Zoran,
>>
>> In a nutshell, 'more like this' creates a large boolean disjunctive query
of 'max_query_terms' number of interesting terms from a text specified in
'like_text'. The interesting terms are picked with respect to their
tf-idf scores in the whole corpus. This term selection can be tuned
with the 'min_term_freq' and 'min_doc_freq' parameters. The
number of boolean clauses that must match is controlled by
'percent_terms_to_match'. In the case of specifying only one field in
'fields', the analyzer used to pick up the terms in 'like_text' is the one
associated with the field, unless overridden by 'analyzer'. So as
an example, the default is to create a boolean query of 25 interesting
terms where only 30% of the should clauses must match.
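
Expressed with the ES 1.x Java API, such a query might look like the sketch
below (the field name is illustrative; the values are roughly the defaults
just described):

import org.elasticsearch.index.query.MoreLikeThisQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class MltExample {
    // Pick up to 25 interesting terms from likeText by tf-idf and require
    // 30% of the resulting should-clauses to match.
    public static MoreLikeThisQueryBuilder build(String likeText) {
        return QueryBuilders.moreLikeThisQuery("content")
                .likeText(likeText)
                .maxQueryTerms(25)
                .minTermFreq(2)
                .minDocFreq(5)
                .percentTermsToMatch(0.3f);
    }
}
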
>>
>> On Wednesday, May 7, 2014 5:14:11 AM UTC+2, Zoran Jeremic wrote:
>>>
>>> Hi Alex,
>>>
>>>
>>> If you are looking for exact duplicates then hashing the file content,
and doing a search for that hash would do the job.
>>> This trick won't work for me as these are not exact duplicates. For
example, I have 10 students working on the same 100-page-long Word
document. Each of these students could change only one sentence and upload
a document. The hash will be different, but they are 99.99% the same document.
>>> I have another service that uses mlt_like_text to recommend some
relevant documents, and my problem is that if this document has the best score,
then all duplicates will be among the top hits, and instead of recommending
users several of the most relevant documents I will recommend 10 instances of
the same document.
>>
>>
>> Could you please define "relevant" in your setting? In a corpus of very
similar documents, is your goal to find the ones which are oddly different?
Have you looked into ES significant terms?
>>
>>>
>>> If you are looking for near duplicates, then I would recommend
extracting whatever text you have in your html, pdf, doc, indexing that and
running more like this with like_text set to that content.
>>> I tried that as well, and results are ver

Re: more like this on numbers

2014-05-08 Thread Alex Ksikes
Hi Valentin,

For these types of searches, have you looked into range queries, perhaps
combined in a boolean query?
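
For example, something along these lines (a minimal sketch; the field name
and window size are made up):

import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class NearNumberQuery {
    // Match documents whose "duration" lies within +/- 100 of a reference
    // value; further must/should clauses can be added to the bool query.
    public static QueryBuilder build(long reference) {
        return QueryBuilders.boolQuery()
                .must(QueryBuilders.rangeQuery("duration")
                        .gte(reference - 100)
                        .lte(reference + 100));
    }
}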

Alex
On May 7, 2014 4:14 PM, "Valentin"  wrote:

> Hi Alex,
>
> thanks. Good idea to convert the numbers into strings. But converting the
> number fields to string won't exactly solve my problem. It would only help if there
> were an analyzer which breaks numbers down into multiple tokens, e.g. 300 into
> "100", "200", "300"
>
> Cheers,
> Valentin
>
> On Tuesday, May 6, 2014 12:04:53 PM UTC+2, Alex Ksikes wrote:
>>
>> Hi Valentin,
>>
>> As you know, you can only perform mlt on fields which are analyzed.
>> However, you can convert your other fields (number, ..) to text using a
>> multi field with type string at indexing time.
>>
>> Cheers,
>>
>> Alex
>>
>> On Thursday, March 27, 2014 4:31:58 PM UTC+1, Valentin wrote:
>>>
>>> Hi,
>>>
>>> As far as I understand it, the more like this query allows you to find
>>> documents where the same tokens are used. I wonder if there is a
>>> possibility to find documents where a particular field is compared based on
>>> its numeric value.
>>>
>>> Regards
>>> Valentin
>>>
>>> PS: elasticsearch rocks!
>>>


Re: ANN: new elasticsearch discovery plugin - eskka

2014-05-08 Thread shikhar
On Thu, May 8, 2014 at 1:02 PM, shikhar  wrote:

> Worth noting that besides being initial contact points for when the
> cluster is starting up, with eskka they are also used for resolving
> partitions. Given this requirement, you would ideally have 3 or more
> specified. It is perfectly ok to have all of your nodes listed, if you know
> their addresses before startup.
>

Another idea for what nodes to use as seed: if you are using master-only
nodes, make them seed nodes.



Re: ANN: new elasticsearch discovery plugin - eskka

2014-05-08 Thread shikhar
All Elasticsearch nodes will end up being part of the Akka cluster :) I
think you're really asking how many seed nodes you should specify. The seed
node list is probably going to be similar to what you might use for
zen.unicast.hosts.

Worth noting that besides being initial contact points for when the cluster
is starting up, with eskka they are also used for resolving partitions.
Given this requirement, you would ideally have 3 or more specified. It is
perfectly ok to have all of your nodes listed, if you know their addresses
before startup.

https://github.com/shikhar/eskka#configuration


On Wed, May 7, 2014 at 9:31 PM, Ivan Brusic  wrote:

> Extremely interesting! What is the recommended size of the Akka cluster
> compared to the Elasticsearch cluster?
>
> --
> Ivan
>
>
> On Tue, May 6, 2014 at 8:42 PM, shikhar  wrote:
>
>>  Just released 0.1.1
>>
>> This version is working well in my manual testing. Automated testing is
>> on the roadmap...
>>
>>
>> On Mon, May 5, 2014 at 10:49 AM, shikhar  wrote:
>>
>>> See README 
>>>
>>> I'd love to have feedback on this first release!
>>>
>>


Re: Changing node and cluster name Version: 1.1.1

2014-05-08 Thread Francesco Audisio
No, I haven't set "Blind Faith". What is it?

This is my elasticsearch.yml:

https://gist.github.com/Fraaud/89114d2ad6b70daa3437#file-elasticsearch-yml

On Thursday, 8 May 2014 at 09:10:49 UTC+2, David Pilato wrote:
>
> Please use GIST instead of attaching files.
>
> Did you set name to be "Blind Faith"?
>
> Could you gist your elasticsearch.yml file?
>
> -- 
> *David Pilato* | *Technical Advocate* | *Elasticsearch.com*
> @dadoonet  | 
> @elasticsearchfr
>
>
> On 8 May 2014 at 08:44:49, Francesco Audisio (cesc...@gmail.com) 
> wrote:
>
>
> I uploaded the log file; the problem starts at line 491 
>
> On Thursday, 8 May 2014 at 08:19:35 UTC+2, David Pilato wrote: 
>>
>>  Can you see anything in elasticsearch logs or in system logs?
>>
>>  -- 
>> *David Pilato* | *Technical Advocate* | *Elasticsearch.com* 
>>  @dadoonet  | 
>> @elasticsearchfr
>>  
>>
>> On 7 May 2014 at 23:13:01, Francesco Audisio (cesc...@gmail.com) wrote:
>>
>>  Yes, I have uncommented the line, but now the Elasticsearch server does not 
>> start properly; I get this error: 
>>
>>  sudo /etc/init.d/elasticsearch restart
>> [ ok ] Stopping Elasticsearch Server: Elasticsearch Server is not running 
>> but pid file exists, cleaning up.
>> [ ok ] Starting Elasticsearch Server:.
>>  
>> But if I comment the line out, the server starts properly with a random node 
>> name. I do not understand why; do I have to fill out the entire file?
>>
>>
>> On Wednesday, 7 May 2014 at 22:45:28 UTC+2, David Pilato wrote: 
>>>
>>>  Uncomment the line and put any name you want:
>>>  
>>>  node.name: My name
>>>  
>>>  Is that what you are looking for?
>>>
>>>  -- 
>>> *David Pilato* | *Technical Advocate* | *Elasticsearch.com* 
>>>  @dadoonet  | 
>>> @elasticsearchfr
>>>  
>>>
>>> On 7 May 2014 at 22:44:00, Francesco Audisio (cesc...@gmail.com) wrote:
>>>
>>>  Hi all, 
>>>
>>> I am a beginner with Marvel, and I have already met my first difficulty =) 
>>>
>>> How do I change the node name and assign it the IP address of the 
>>> PC, using this file:
>>>
>>>  elasticsearch.yml
>>>  
>>> because inside this file I have found this line 
>>>
>>>  # Node names are generated dynamically on startup, so you're relieved
>>>  # from configuring them manually. You can tie this node to a specific name:
>>>  #
>>>  # node.name:
>>>
>>>  But I don't understand how to write the node name. It generates 
>>> node names automatically, and I know this is normal, but how do I 
>>> change the name?
>>>  
>>> I repeat, I am a beginner and I want to learn =D
>>>

Re: Changing node and cluster name Version: 1.1.1

2014-05-08 Thread David Pilato
Please use GIST instead of attaching files.

Did you set name to be "Blind Faith"?

Could you gist your elasticsearch.yml file?

-- 
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr


On 8 May 2014 at 08:44:49, Francesco Audisio (cesco...@gmail.com) wrote:


I uploaded the log file; the problem starts at line 491

On Thursday, 8 May 2014 at 08:19:35 UTC+2, David Pilato wrote:
Can you see anything in elasticsearch logs or in system logs?

-- 
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr


On 7 May 2014 at 23:13:01, Francesco Audisio (cesc...@gmail.com) wrote:

Yes, I have uncommented the line, but now the Elasticsearch server does not start 
properly; I get this error:

sudo /etc/init.d/elasticsearch restart
[ ok ] Stopping Elasticsearch Server: Elasticsearch Server is not running but 
pid file exists, cleaning up.
[ ok ] Starting Elasticsearch Server:.

But if I comment the line out, the server starts properly with a random node name. 
I do not understand why; do I have to fill out the entire file?


On Wednesday, 7 May 2014 at 22:45:28 UTC+2, David Pilato wrote:
Uncomment the line and put any name you want:

node.name: My name

Is that what you are looking for?

-- 
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr


On 7 May 2014 at 22:44:00, Francesco Audisio (cesc...@gmail.com) wrote:

Hi all,

I am a beginner with Marvel, and I have already met my first difficulty =) 

How do I change the node name and assign it the IP address of the PC, using 
this file:

elasticsearch.yml

because inside this file I have found this line 

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
# node.name:

But I don't understand how to write the node name. It generates node names 
automatically, and I know this is normal, but how do I change the name?
 
I repeat, I am a beginner and I want to learn =D



Re: Locking a shard to one data path

2014-05-08 Thread Mark Walkom
If you are using single-disk machines, then all your segments will be
created in the one data path (i.e. the default data directory).
On Linux with a package install, that's usually /var/lib/elasticsearch/

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 8 May 2014 16:59, Michael Salmon  wrote:

> As far as I can tell ES distributes segments over all data paths and if
> you have reliable disks i.e. raid 0 etc. then this is a good policy but if
> you are using single disks then failure of a single disk can affect all
> shards on a node. I am pretty sure that ES can recover from such a failure
> but in my case it means that I am going to go from a few TB that needs to
> be copied to tens of TB.
>
> Does anyone have any practical experience of disk failure and recovery?
>
> Are there any settings to force all segments in a shard to be created in
> the same data path?
>
> I guess that I will need to restrict the number of disks per node and have
> more nodes instead.
>
> /Michael
>


Locking a shard to one data path

2014-05-08 Thread Michael Salmon
As far as I can tell, ES distributes segments over all data paths. If you 
have reliable disks (i.e. raid 0 etc.) then this is a good policy, but if you 
are using single disks then the failure of a single disk can affect all shards 
on a node. I am pretty sure that ES can recover from such a failure, but in 
my case it means that I am going to go from a few TB that need to be 
copied to tens of TB.

Does anyone have any practical experience of disk failure and recovery?

Are there any settings to force all segments in a shard to be created in 
the same data path?

I guess that I will need to restrict the number of disks per node and have 
more nodes instead.

/Michael 
