Re: Benefits of using bulk with node client?

2014-11-10 Thread joergpra...@gmail.com
The node does not parse the bulk, only part of it (the metadata lines for
hashing and routing).

The benefit of bulk requests is simple to see on the network layer.

Assume 1000 docs:

- without bulk, you send a request per doc and wait for a response per doc:
the client must submit 1000 packets on the wire, the server must submit 1000
responses back, and for each doc there is also an inner shard-level
send/receive cycle, adding another 1000 send/receives. That makes around
4000 packets on the wire (worst case: the connected server node does not
hold a shard of the index), with all the delays.

- with bulk, the client submits 1 request, the server submits sub-requests
to each node that holds a shard of the index, and submits 1 response back.
That makes around 1 + (2 * n) + 1 packets, where n is the number of nodes.
With 3 nodes, you have 8 packets instead of 4000.
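The packet arithmetic above can be sketched as a back-of-the-envelope calculation (an illustration of the counting, not measured traffic):

```python
def packets_without_bulk(num_docs):
    # one request + one response per doc on the client side, plus one
    # inner shard-level send/receive per doc (worst case: the connected
    # node holds no shard of the index)
    return num_docs * 4

def packets_with_bulk(num_nodes):
    # 1 client request + a sub-request/response pair per data node
    # + 1 client response
    return 1 + 2 * num_nodes + 1

print(packets_without_bulk(1000))  # 4000
print(packets_with_bulk(3))        # 8
```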

The same holds for both the HTTP and the transport protocol; HTTP is only
used for accepting client requests.

Jörg


On Mon, Nov 10, 2014 at 7:34 AM, Rotem rotem.her...@gmail.com wrote:

 I can definitely see the point of using the bulk API when indexing via
 HTTP.

 But is there an advantage of using bulk instead of individual index
 request when using the client node? Since the node parses the bulk and
 routes each request to its proper destination - and it's basically doing
 the same when you submit individual requests - what is the benefit of doing
 a bulk request in this case?

 --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/28d54f90-f6b8-449a-806f-e873600dfdd5%40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/28d54f90-f6b8-449a-806f-e873600dfdd5%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.




Re: Benefits of using bulk with node client?

2014-11-10 Thread Rotem Hermon

  
  
Makes sense. Thanks!

On 11/10/2014 10:15, joergpra...@gmail.com wrote:
[quoted message trimmed]

term filter failed on very long fields

2014-11-10 Thread Wang Yong
Hi folks,

I was trying to do a term filter on a very long string field, maybe more
than 500 bytes, but I got 0 hits. So I am wondering if there is a limit on
the length of a field when using a term filter. Elasticsearch is 1.3.0,
with the mapping like this:

 

curl -XPUT 'http://localhost:9200/test/_mapping/t' -d '
{
  "t" : {
    "properties" : {
      "message" : { "type" : "string", "store" : true }
    }
  }
}
'

For the test, I put a doc into this mapping first by:

POST /test/t/
{
  "message" : "12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890"
}

 

And then, I tried to search by:

GET test/_search
{
  "from" : 0,
  "size" : 20,
  "query" : {
    "filtered" : {
      "query" : {
        "match_all" : { }
      },
      "filter" : {
        "and" : {
          "filters" : [ {
            "term" : {
              "message" : "12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890"
            }
          } ]
        }
      }
    }
  }
}

 

I got the result:

{
  "took": 0,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "hits": {
    "total": 0,
    "max_score": null,
    "hits": []
  }
}

 

Any comment will be appreciated, thanks a lot!
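One likely cause (an assumption, not confirmed in this thread): the standard analyzer caps tokens at max_token_length (default 255 characters), so a 500-character value is indexed as two shorter tokens, and a term filter for the full 500-character string matches nothing. A crude simulation of that splitting:

```python
# Crude stand-in for the standard tokenizer's length cap -- the real
# analyzer does much more; this only models the 255-char token split.
def standard_tokenize(text, max_token_length=255):
    tokens = []
    for word in text.split():
        # split any over-long word into max_token_length-sized chunks
        for i in range(0, len(word), max_token_length):
            tokens.append(word[i:i + max_token_length])
    return tokens

doc_value = "1234567890" * 50          # the 500-char test message
index_tokens = standard_tokenize(doc_value)

print(len(index_tokens))               # 2 -- the value was split
print(doc_value in index_tokens)       # False -- exact term lookup misses
```

Note also that changing a mapping to not_analyzed does not reindex documents that are already in the index, so an already-indexed doc would still be stored with the split tokens.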



Re: term filter failed on very long fields

2014-11-10 Thread vineeth mohan
Hello Wang,

Can you disable the analyzer on that field and try again?

Thanks
   Vineeth

On Mon, Nov 10, 2014 at 2:07 PM, Wang Yong cnwangy...@gmail.com wrote:

 [quoted message trimmed]


RE: term filter failed on very long fields

2014-11-10 Thread Wang Yong
Thank you Vineeth, I changed the mapping to:

PUT test/_mapping/t
{
  "t" : {
    "properties" : {
      "message" : { "type" : "string", "index" : "not_analyzed", "store" : true }
    }
  }
}

 

And the result is the same.

 

Wang

 

From: elasticsearch@googlegroups.com [mailto:elasticsearch@googlegroups.com] On 
Behalf Of vineeth mohan
Sent: Monday, November 10, 2014 4:54 PM
To: elasticsearch@googlegroups.com
Subject: Re: term filter failed on very long fields

 

Hello Wang , 



Can you disable analyzer and try again .

Thanks

   Vineeth

 

On Mon, Nov 10, 2014 at 2:07 PM, Wang Yong cnwangy...@gmail.com wrote:
[quoted message trimmed]


Re: Installation instructions

2014-11-10 Thread Jem indig
Hi,
Ah, gotcha!
Cool - I assume there are no hardcoded passwords anywhere and I can just set
the password as I see fit?

Don't suppose you have any links to documentation for how the setup looks,
best practices etc.?

Cheers,
J

On Fri, Nov 7, 2014 at 8:46 PM, Mark Walkom markwal...@gmail.com wrote:

 It shouldn't run as root as the install process creates an elasticsearch
 user.
 Can you do a getent passwd|grep elasticsearch and see if it has created
 one, or not?

 On 8 November 2014 03:56, jemin...@gmail.com wrote:

 Hi,
 I'm looking to do a sensible install of elasticsearch on Redhat.
 If I use
 sudo yum install elasticsearch as described here
 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-repositories.html
  then
 everything is owned by root.

 So what is the safest best-practice way to set up a user and do an
 install?

 Cheers,
 J



Re: Installation instructions

2014-11-10 Thread Mark Walkom
You shouldn't need to set the password at all as you shouldn't need to
login as the ES user.

The docs are a great place to start -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html

On 10 November 2014 20:37, Jem indig jemin...@gmail.com wrote:

 [quoted message trimmed]


Re: simple_query_string has no analyze_wildcard

2014-11-10 Thread joergpra...@gmail.com
Would it help to work on a pull request for simple_query_string to support
analyze_wildcard?

Jörg

On Wed, Nov 5, 2014 at 6:08 PM, joergpra...@gmail.com wrote:

 Hi,

 the query_string query


 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html

 has been extended by a heuristic to analyze wildcarded terms some time ago.

 https://github.com/elasticsearch/elasticsearch/issues/787

 I would like to use simple_query_string also with analyzed wildcard terms,
 but there is no analyze_wildcard parameter. It gives an error.

 Is it possible to add the parameter analyze_wildcard to
 simple_query_string?

 It is not a good solution to have to switch back to query_string query if
 users accidentally enter terms with wildcard characters.

 Best,

 Jörg





Re: Decay Function question

2014-11-10 Thread Britta Weber
Hi Marlo,

I added an image which hopefully explains a little better how decay
functions work here:
https://github.com/elasticsearch/elasticsearch/pull/8420/files

 - origin is the start (x-value) of the slope, in this case the date 9/17/2013

yes

 - for all points up to offset, the slope is flat, so there's no negative 
 scoring for the 5 days after 9/17/2013

The decay function never returns a negative value. It will always be
between 0 and 1. For each value within +- offset from the defined
origin the decay function will just return 1.

 - the end of the slope is scale

No, the decay function decreases further beyond that point. The scale
parameter just steers how quickly the function approaches 0.

- at the data point 'scale', the slope will have a y-value (on the
graph) of decay (0.5)

yes.

 My test show that they actually get a 'zero' multiplier for their score, so 
 basically their scores all end up being zero. I was under the impression that 
 they would get a score multiplier of 0.5.

No, the score will decrease further until it reaches 0.  If you need
to have documents outside the scale range to have a value of 0.5 you
need to define a separate function which then adjusts the score for
these documents as needed.
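For the Gaussian case this can be written out numerically; sigma is derived so that the curve passes through `decay` exactly at `scale` past the offset (a sketch using the documented formula, with made-up numeric origin/offset/scale values):

```python
import math

def gauss_decay(distance, origin=0.0, offset=5.0, scale=10.0, decay=0.5):
    # distance from origin; the score is flat (1.0) within +/- offset
    adjusted = max(0.0, abs(distance - origin) - offset)
    # sigma^2 chosen so the curve hits `decay` exactly at `scale`
    sigma_sq = -scale ** 2 / (2.0 * math.log(decay))
    return math.exp(-adjusted ** 2 / (2.0 * sigma_sq))

print(gauss_decay(3.0))    # 1.0 -- inside the offset, no decay
print(gauss_decay(15.0))   # 0.5 -- exactly offset + scale away from origin
print(gauss_decay(40.0))   # ~0.0002 -- well past scale: near 0, never 0.5
```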

On Thu, Nov 6, 2014 at 8:07 PM, Marlo Epres mep...@gmail.com wrote:
 I've begun using the decay function in order to promote more recent results
 in our index. In particular I'm using what's documented here:

 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-function-score-query.html

 Here's the date example they use (let's assume gaussian slope):

 "DECAY_FUNCTION": {
   "FIELD_NAME": {
     "origin": "2013-09-17",
     "scale": "10d",
     "offset": "5d",
     "decay": 0.5
   }
 }

 So in trying to visualize this decay, is it safe to make the following
 assumptions in terms of associating these input values to a graph:

 - origin is the start (x-value) of the slope, in this case the date
 9/17/2013
 - for all points up to offset, the slope is flat, so there's no negative
 scoring for the 5 days after 9/17/2013
 - the end of the slope is scale
 - at the data point 'scale', the slope will have a y-value (on the graph) of
 decay (0.5)

 The last point I am not so sure about. Furthermore, I'm unclear as to what
 happens for articles outside of scale, so past 10 days. My test show that
 they actually get a 'zero' multiplier for their score, so basically their
 scores all end up being zero. I was under the impression that they would get
 a score multiplier of 0.5. Any help would be appreciated.



Re: Elasticsearch server is down or unreachable

2014-11-10 Thread Prakash Dutta
Hi

I am also facing the same issue. I read somewhere that Elasticsearch is not
running, but when I try the URL http://localhost:9200, it shows the
message below:

{
  "status" : 200,
  "name" : "Tzabaoth",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.4.0",
    "build_hash" : "bc94bd81298f81c656893ab1d30a99356066",
    "build_timestamp" : "2014-11-05T14:26:12Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.2"
  },
  "tagline" : "You Know, for Search"
}

So Elasticsearch is running. I added the two lines below to elasticsearch.yml:

http.cors.allow-origin: /.*/
http.cors.allow-credentials: true

After doing all these steps, I still face the same issue.

Please suggest a way to solve this.

Thanks in advance to all.

Regards
Prakash






On Thursday, October 30, 2014 2:51:12 AM UTC+5:30, versne...@gmail.com 
wrote:

 Hello there,

 I keep getting this message when i start Kibana. But if i go look on my 
 server i see that elasticsearch is running perfectly 

 root@server:~# service elasticsearch status
  * elasticsearch is running





 Connection Failed

 Possibility #1: Your elasticsearch server is down or unreachable. This can
 be caused by a network outage, or a failure of the Elasticsearch process.
 If you have recently run a query that required a terms facet to be executed
 it is possible the process has run out of memory and stopped. Be sure to
 check your Elasticsearch logs for any sign of memory pressure.

 Possibility #2: You are running Elasticsearch 1.4 or higher. Elasticsearch
 1.4 ships with a security setting that prevents Kibana from connecting.
 You will need to set http.cors.allow-origin in your elasticsearch.yml to
 the correct protocol, hostname, and port (if not 80) that you access
 Kibana from. Note that if you are running Kibana in a sub-url, you should
 exclude the sub-url path and only include the protocol, hostname and port.
 For example, http://mycompany.com:8080, not http://mycompany.com:8080/kibana.

 Click back, or the home button, when you have resolved the connection issue.

 Plz some help




Fast exclusion of earlier queries

2014-11-10 Thread Kristoffer Johansson
Hi

I'm currently evaluating Elasticsearch to be used by a selection engine at
my company. The selection engine will be used to answer questions like "how
many people between 20 and 30 years old are there in the city of
Stockholm?". The critical thing for this system is to give fast feedback on
the counts (within a second); the extraction of the identities is not as
time critical.

One requirement of the system is to make multiple queries but still keep
unique identities. E.g. you should be able to make two queries, "all people
in Stockholm" and "all males between 20 and 25", and then the second query
should not include anyone living in Stockholm. We have solved this by
negating the filter of the first query and using it in the second, and
because of Elasticsearch's filter caching this gives us really nice
performance.
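In the 1.x query DSL, that negation can be sketched roughly like this (field names and values are made up for illustration):

```json
{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "bool": {
          "must": [
            { "term": { "gender": "male" } },
            { "range": { "age": { "gte": 20, "lte": 25 } } }
          ],
          "must_not": [
            { "term": { "city": "stockholm" } }
          ]
        }
      }
    }
  }
}
```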

Now to the real challenge: any of these queries can contain a limit, so the
above example can be "all people in Stockholm" limited to 1 and "all
males between 20 and 25". In this case the result of the second query
should contain documents that are selected by the first query but are not
among the 1 chosen by that limit. Now we cannot rely on negated filters
any more, because we have to investigate the result set to find out which
documents actually hit the first query. And because one query can hit
millions of documents, this is, of course, really slow.

Have any of you considered this kind of requirement before, and do you
have a suggestion for how we can solve it with reasonable performance?

My team will now examine the possibility of creating this functionality in
Elasticsearch. We would like to be able to start a transaction in ES that
keeps track of all document identities that have been selected by any query
within the transaction. Then we can always exclude these identities from a
query to create the described uniqueness. Do any of you know if this is
feasible, and do you have some suggestions for our implementation?

Regards
Kristoffer



Re: Elasticsearch Aggregation time

2014-11-10 Thread Ankur Goel
 

{
  "query" : {
    "filtered" : {
      "query" : {
        "match_all" : { }
      },
      "filter" : {
        "bool" : {
          "must" : {
            "bool" : {
              "must" : {
                "terms" : {
                  "isActive" : [ true ]
                }
              }
            }
          }
        }
      }
    }
  },
  "aggregations" : {
    "revenue" : {
      "filter" : {
        "match_all" : { }
      },
      "aggregations" : {
        "revenueUSD" : {
          "range" : {
            "field" : "revenueUSD",
            "ranges" : [
              { "to" : 1.0 },
              { "from" : 1.0, "to" : 5.0 },
              { "from" : 5.0, "to" : 50.0 },
              { "from" : 50.0, "to" : 100.0 },
              { "from" : 100.0, "to" : 1000.0 },
              { "from" : 1000.0 }
            ]
          }
        }
      }
    }
  }
}
This is a sample; the match_all is usually replaced by some query.
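If the doubly nested bool wrappers and the match_all filter aggregation are artifacts of query-builder nesting rather than intentional, the same request can be expressed more simply, which removes some per-document overhead; a sketch:

```json
{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": { "terms": { "isActive": [ true ] } }
    }
  },
  "aggregations": {
    "revenueUSD": {
      "range": {
        "field": "revenueUSD",
        "ranges": [
          { "to": 1.0 },
          { "from": 1.0, "to": 5.0 },
          { "from": 5.0, "to": 50.0 },
          { "from": 50.0, "to": 100.0 },
          { "from": 100.0, "to": 1000.0 },
          { "from": 1000.0 }
        ]
      }
    }
  }
}
```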



On Wednesday, 5 November 2014 19:38:42 UTC+5:30, Adrien Grand wrote:

 Can you please show the json of the request that you send to elasticsearch?

 On Wed, Nov 5, 2014 at 10:52 AM, Ankur Goel ankr...@gmail.com wrote:

 hi ,

 we are trying to run some aggregation over around 5 million documents 
 with  cardinality of the fields of the order of 1000 , the aggregation is a 
 filter aggregation which wraps underlying term aggregation .  Right now 
 it's taking around 1.2 secs on an average to compute it , the time 
 increases when no. of documents are increased or I try to do multiple 
 aggregations. we have aws extra large machines, shards 3 and replication 2 
 . 

 1.) can we improve this time (will like it to get it within 1 sec) , I 
 can see very little if any of field cache being used
 2.) how does this scale , it increases with number of documents , how can 
 I offset that (increasing nodes , replication , sharding  ??)
 3.) are there any better options (plugins or a different platform for 
 aggregating data )


 regards

 Ankur Goel


  -- 
 You received this message because you are subscribed to the Google Groups 
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to elasticsearc...@googlegroups.com.
 To view this discussion on the web visit 
 https://groups.google.com/d/msgid/elasticsearch/fb73f5bd-24a4-4065-9253-39aa8dd9dfe0%40googlegroups.com
  
 https://groups.google.com/d/msgid/elasticsearch/fb73f5bd-24a4-4065-9253-39aa8dd9dfe0%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.




 -- 
 Adrien Grand
  

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/deb3e7e4-751a-4d7e-92d5-28be42b11e76%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Elasticsearch Aggregation time

2014-11-10 Thread Ankur Goel
 

{
  "query" : {
    "filtered" : {
      "query" : {
        "match_all" : { }
      },
      "filter" : {
        "bool" : {
          "must" : {
            "bool" : {
              "must" : {
                "terms" : {
                  "isActive" : [ true ]
                }
              }
            }
          }
        }
      }
    }
  },
  "aggregations" : {
    "revenueFilter" : {
      "filter" : {
        "match_all" : { }
      },
      "aggregations" : {
        "revenue" : {
          "range" : {
            "field" : "revenue",
            "ranges" : [
              { "to" : 1.0 },
              { "from" : 1.0, "to" : 5.0 },
              { "from" : 5.0, "to" : 50.0 },
              { "from" : 50.0, "to" : 100.0 },
              { "from" : 100.0, "to" : 1000.0 },
              { "from" : 1000.0 }
            ]
          }
        }
      }
    }
  }
}

On Wednesday, 5 November 2014 19:38:42 UTC+5:30, Adrien Grand wrote:

 Can you please show the json of the request that you send to elasticsearch?

 On Wed, Nov 5, 2014 at 10:52 AM, Ankur Goel ankr...@gmail.com wrote:

 hi ,

 we are trying to run some aggregation over around 5 million documents 
 with  cardinality of the fields of the order of 1000 , the aggregation is a 
 filter aggregation which wraps underlying term aggregation .  Right now 
 it's taking around 1.2 secs on an average to compute it , the time 
 increases when no. of documents are increased or I try to do multiple 
 aggregations. we have aws extra large machines, shards 3 and replication 2 
 . 

 1.) can we improve this time (will like it to get it within 1 sec) , I 
 can see very little if any of field cache being used
 2.) how does this scale , it increases with number of documents , how can 
 I offset that (increasing nodes , replication , sharding  ??)
 3.) are there any better options (plugins or a different platform for 
 aggregating data )


 regards

 Ankur Goel


  -- 
 You received this message because you are subscribed to the Google Groups 
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to elasticsearc...@googlegroups.com.
 To view this discussion on the web visit 
 https://groups.google.com/d/msgid/elasticsearch/fb73f5bd-24a4-4065-9253-39aa8dd9dfe0%40googlegroups.com
  
 https://groups.google.com/d/msgid/elasticsearch/fb73f5bd-24a4-4065-9253-39aa8dd9dfe0%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.




 -- 
 Adrien Grand
  

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/c83b1ddc-6a4b-4f24-ba3d-f48a8cb108c2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Installation instructions

2014-11-10 Thread Jem indig
OK - and when it runs as a daemon/service it runs as the elasticsearch user.

When I installed it with yum, I used "sudo yum install elasticsearch", which
means that root owns the files. I guess this is standard?

2. If I want to run it interactively, should I run it with sudo?

Sorry for the dumb questions - my Linux skills are very rusty

Cheers,
J

On Mon, Nov 10, 2014 at 9:38 AM, Mark Walkom markwal...@gmail.com wrote:

 You shouldn't need to set the password at all as you shouldn't need to
 login as the ES user.

 The docs are a great place to start -
 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html

 On 10 November 2014 20:37, Jem indig jemin...@gmail.com wrote:

 Hi,
 ah gotcha!.
 cool - assume there are no hardcoded passwords anywhere and I can just
 set the password as I see fit?

 Don't suppose you have any links to documentation for how the setup
 looks, best practices etc?

 Cheers,
 J

 On Fri, Nov 7, 2014 at 8:46 PM, Mark Walkom markwal...@gmail.com wrote:

 It shouldn't run as root as the install process creates an elasticsearch
 user.
 Can you do a getent passwd|grep elasticsearch and see if it has
 created one, or not?

 On 8 November 2014 03:56, jemin...@gmail.com wrote:

 Hi,
 I'm looking to do a sensible install of elasticsearch on Redhat.
 If I use
 sudo yum install elasticsearch as described here
 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-repositories.html
  then
 everything is owned by root.

 So what is the safest best-practice way to set up a user and do an
 install?

 Cheers,
 J

 --
 You received this message because you are subscribed to the Google
 Groups elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/bd29756b-1be9-4295-a991-3066d187deb8%40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/bd29756b-1be9-4295-a991-3066d187deb8%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to a topic in the
 Google Groups elasticsearch group.
 To unsubscribe from this topic, visit
 https://groups.google.com/d/topic/elasticsearch/InShlT1oD6U/unsubscribe.
 To unsubscribe from this group and all its topics, send an email to
 elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/CAF3ZnZnW2iRTT4SkECN0QTfL8REENhwKb18%3DDcsRawTWR%3DMtrg%40mail.gmail.com
 https://groups.google.com/d/msgid/elasticsearch/CAF3ZnZnW2iRTT4SkECN0QTfL8REENhwKb18%3DDcsRawTWR%3DMtrg%40mail.gmail.com?utm_medium=emailutm_source=footer
 .

 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/CAMa9dfSia_U%3DUBC6ZUg0yR5qLdzpNesnN8MQpr5LhX-PC2SXMg%40mail.gmail.com
 https://groups.google.com/d/msgid/elasticsearch/CAMa9dfSia_U%3DUBC6ZUg0yR5qLdzpNesnN8MQpr5LhX-PC2SXMg%40mail.gmail.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to a topic in the
 Google Groups elasticsearch group.
 To unsubscribe from this topic, visit
 https://groups.google.com/d/topic/elasticsearch/InShlT1oD6U/unsubscribe.
 To unsubscribe from this group and all its topics, send an email to
 elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/CAF3ZnZk6MVVYVBrv057d8EOSuaGrA%2B47k6sHwOy%2BKmSN5uVkHA%40mail.gmail.com
 https://groups.google.com/d/msgid/elasticsearch/CAF3ZnZk6MVVYVBrv057d8EOSuaGrA%2B47k6sHwOy%2BKmSN5uVkHA%40mail.gmail.com?utm_medium=emailutm_source=footer
 .

 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAMa9dfRKNcWhB2-k0-Re78LkWr04_qv8PCbRgzqEi4KVOjN5Vg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Installation instructions

2014-11-10 Thread Mark Walkom
Yep, it's owned by root, but that's OK - the places ES needs to write to have
the right permissions.

Why do you want to run ES interactively though? It will log everything to
/var/log/elasticsearch if you need to look at the output. You can use sudo
to run it as the ES user (eg
http://www.cyberciti.biz/open-source/command-line-hacks/linux-run-command-as-different-user/
).

On 10 November 2014 21:56, Jem indig jemin...@gmail.com wrote:

 ok - and when it runs as a daemon/service it runs as the elasticsearch
 user.

 When I installed it, using yum, I used sudo yum install elasticsearch
 which means that root owns the files. I guess this is standard?

 2. If I want it to run interactively should I sudo and run it?

 Sorry for the dumb questions - my Linux skills are very rusty

 Cheers,
 J

 On Mon, Nov 10, 2014 at 9:38 AM, Mark Walkom markwal...@gmail.com wrote:

 You shouldn't need to set the password at all as you shouldn't need to
 login as the ES user.

 The docs are a great place to start -
 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html

 On 10 November 2014 20:37, Jem indig jemin...@gmail.com wrote:

 Hi,
 ah gotcha!.
 cool - assume there are no hardcoded passwords anywhere and I can just
 set the password as I see fit?

 Don't suppose you have any links to documentation for how the setup
 looks, best practices etc?

 Cheers,
 J

 On Fri, Nov 7, 2014 at 8:46 PM, Mark Walkom markwal...@gmail.com
 wrote:

 It shouldn't run as root as the install process creates an
 elasticsearch user.
 Can you do a getent passwd|grep elasticsearch and see if it has
 created one, or not?

 On 8 November 2014 03:56, jemin...@gmail.com wrote:

 Hi,
 I'm looking to do a sensible install of elasticsearch on Redhat.
 If I use
 sudo yum install elasticsearch as described here
 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-repositories.html
  then
 everything is owned by root.

 So what is the safest best-practice way to set up a user and do an
 install?

 Cheers,
 J

 --
 You received this message because you are subscribed to the Google
 Groups elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/bd29756b-1be9-4295-a991-3066d187deb8%40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/bd29756b-1be9-4295-a991-3066d187deb8%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to a topic in the
 Google Groups elasticsearch group.
 To unsubscribe from this topic, visit
 https://groups.google.com/d/topic/elasticsearch/InShlT1oD6U/unsubscribe
 .
 To unsubscribe from this group and all its topics, send an email to
 elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/CAF3ZnZnW2iRTT4SkECN0QTfL8REENhwKb18%3DDcsRawTWR%3DMtrg%40mail.gmail.com
 https://groups.google.com/d/msgid/elasticsearch/CAF3ZnZnW2iRTT4SkECN0QTfL8REENhwKb18%3DDcsRawTWR%3DMtrg%40mail.gmail.com?utm_medium=emailutm_source=footer
 .

 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to the Google
 Groups elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/CAMa9dfSia_U%3DUBC6ZUg0yR5qLdzpNesnN8MQpr5LhX-PC2SXMg%40mail.gmail.com
 https://groups.google.com/d/msgid/elasticsearch/CAMa9dfSia_U%3DUBC6ZUg0yR5qLdzpNesnN8MQpr5LhX-PC2SXMg%40mail.gmail.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to a topic in the
 Google Groups elasticsearch group.
 To unsubscribe from this topic, visit
 https://groups.google.com/d/topic/elasticsearch/InShlT1oD6U/unsubscribe.
 To unsubscribe from this group and all its topics, send an email to
 elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/CAF3ZnZk6MVVYVBrv057d8EOSuaGrA%2B47k6sHwOy%2BKmSN5uVkHA%40mail.gmail.com
 https://groups.google.com/d/msgid/elasticsearch/CAF3ZnZk6MVVYVBrv057d8EOSuaGrA%2B47k6sHwOy%2BKmSN5uVkHA%40mail.gmail.com?utm_medium=emailutm_source=footer
 .

 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 

Re: ES 1.3.4 scrolling never ends

2014-11-10 Thread Yarden Bar
One issue I identified is that the heap size was too small for the query; I've 
increased the heap memory and the CircuitBreakerException stopped happening.

But the scrolling is still returning the SAME result.
An updated code example is below:
import org.elasticsearch.action.search.SearchType
import org.elasticsearch.client.transport.TransportClient
import org.elasticsearch.common.settings.ImmutableSettings
import org.elasticsearch.common.transport.InetSocketTransportAddress
import org.elasticsearch.common.unit.TimeValue
import org.elasticsearch.index.query.{FilterBuilders, QueryBuilders}
import org.elasticsearch.search.Scroll
import org.elasticsearch.search.sort.SortOrder

val es_settings = ImmutableSettings.settingsBuilder()
  .put("transport.sniff", true)
  .put("cluster.name", "test_acm_es")
  .build()
var client = new TransportClient(es_settings)
  .addTransportAddress(new InetSocketTransportAddress("myServer", 9300))

val query = QueryBuilders.filteredQuery(
  QueryBuilders.matchAllQuery(),
  FilterBuilders.queryFilter(
    QueryBuilders.queryString("(market:2 AND feed:55) OR (market:2 AND feed:32)")))

var result = client.prepareSearch("orderbook-2014.11.03")
  .setTypes(List("level"): _*)
  .setQuery(query)
  .setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
  .setSize(1)
  .addSort("updateNo", SortOrder.ASC)
  .setScroll(new Scroll(TimeValue.timeValueMinutes(5)))
  .get()

var scrollId = ""
var itr = 0
do {
  scrollId = result.getScrollId
  result = client.prepareSearchScroll(scrollId)
    .setScroll(TimeValue.timeValueMinutes(3))
    .get()
  println(s"Iteration=$itr, scrollResult=${result.getHits.getHits.length}")
  // result.getHits.getHits.foreach(h => println(h.getId))
  itr += 1
} while (result.getHits.getHits.length != 0)

Enabling the print block reveals that the SearchHit array is the same on 
each iteration...

Thanks,
Yarden

On Wednesday, November 5, 2014 2:48:46 PM UTC+2, Yarden Bar wrote:

 Hi all,

 I'm encountering a strange behavior when executing a search-scroll on a 
 single node of ES-1.3.4 with Java client.

 The scenario is as follows:

1. Start a single node of version 1.3.4
2. Add snapshot repository pointing to version 1.1.1 snapshots
3. Restore snapshots version 1.1.1 snapshot to 1.3.4 node
    4. Execute a search on an index with
    5. client.prepareSearch("my_index*")
         .setQuery(QueryBuilders.filteredQuery(
           QueryBuilders.matchAllQuery(),
           FilterBuilders.queryFilter(
             QueryBuilders.queryString(s"$terms AND snapshotNo:[${mdp.fromSnapshot} TO ${mdp.toSnapshot}]"))))
         .addFields(OBFields.values.map(_.toString).toList: _*)
         .setSize(pageSize)
         .addSort(OBFields.updateNo.toString, SortOrder.ASC)
         .setScroll(TimeValue.timeValueMinutes(3)).execute().actionGet()

    6. Execute the following search scroll
       client.prepareSearchScroll(scrollId)
         .setScroll(TimeValue.timeValueMinutes(3)).execute().actionGet()

 I have a loop iterating over #6, providing the same scrollId and checking 
 for (result.getHits().getHits().length == 0) to terminate.
 I keep getting the same result 'page' with the same number of results.


 Any Idea??


 Thanks,
 Yarden


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/66e02775-17dd-4ea0-a8b3-39eb7e2a7aca%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Nodes not joining after 1.4.0 upgrade

2014-11-10 Thread Boaz Leskes
Hi,

The logs you mentioned indicate that the nodes try to join the cluster, but a 
complete verification cycle (connecting back to the node and publishing the 
cluster state to it) takes too long. It seems there is something going on 
with your masters.

Can you check the logs over there? Also, are you using multicast or unicast 
discovery?

On Sunday, November 9, 2014 8:36:06 AM UTC+1, Janet Sullivan wrote:

  More hours of working – even when I get a 1.4.0 cluster up, masters 
 wouldn’t fail over – when I took master1 down, neither master2 or master3 
 would promote themselves.   In 1.4.0-beta it fails over quickly.

  
  
 *From:* elasticsearch@googlegroups.com [mailto:
 elasticsearch@googlegroups.com] *On Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 11:11 PM
 *To:* elasticsearch@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
  
  

 OK, it also happens to some degree with 1.4.0-beta, although overall it’s 
 much better on beta.  I wasn’t able to get my 12 node cluster up on 1.4.0 
 after several hours of fiddling, but 1.4.0-beta did come up.

  
  
 *From:* elasticsearch@googlegroups.com [
 mailto:elasticsearch@googlegroups.com elasticsearch@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 9:26 PM
 *To:* elasticsearch@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
  
  

 But it DOES happen with 1.3.5.   Hmmm….

  
  
 *From:* elasticsearch@googlegroups.com [
 mailto:elasticsearch@googlegroups.com elasticsearch@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 9:24 PM
 *To:* elasticsearch@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
  
  

 Note:  This issue doesn’t happen with 1.4.0-beta1

  
  
 *From:* elasticsearch@googlegroups.com [
 mailto:elasticsearch@googlegroups.com elasticsearch@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 8:46 PM
 *To:* elasticsearch@googlegroups.com
 *Subject:* Nodes not joining after 1.4.0 upgrade
  
  

 I’ve upgraded a couple of clusters to 1.4.0 from 1.3.4.  On both of them, 
 I had nodes that spewed the following, and were slow to join, if they 
 joined at all:

  

 [2014-11-09 04:33:45,995][INFO ][discovery.zen] [gnslogstash3] 
 failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:34:49,776][INFO ][discovery.zen] [gnslogstash3] 
 failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:35:53,571][INFO ][discovery.zen] [gnslogstash3] 
 failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:36:57,353][INFO ][discovery.zen] [gnslogstash3] 
 failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:38:01,120][INFO ][discovery.zen] [gnslogstash3] 
 failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:39:04,885][INFO ][discovery.zen] [gnslogstash3] 
 failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:40:08,657][INFO ][discovery.zen] [gnslogstash3] 
 failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

  

 I’m able to telnet to port 9300 on gnslogstash10 in this example from 
 gnslogstash3 with no issue, but this cluster doesn’t want to bring all its 
 nodes up.  The more nodes added, the more likely a join will fail.  In this 
 example, 9 nodes are up, but 3 nodes don’t want to join.  L  Thoughts?

 -- 
 You received this message because you are subscribed to the Google Groups 
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit 
 

Re: Elasticsearch server is down or unreachable

2014-11-10 Thread Drew Town
I got it to work by adding

http.cors.enabled: true

and leaving out allow-origin and allow-credentials. allow-origin seems to 
default to allowing anything when it is not present; CORS itself is disabled 
by default.
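Summarizing what worked here as an elasticsearch.yml sketch (the commented-out origin is an example value, not something from this thread):

```yaml
# CORS is disabled by default in Elasticsearch 1.4; this enables it.
http.cors.enabled: true

# Optionally restrict allowed origins instead of relying on the default.
# http.cors.allow-origin: "http://mycompany.com:8080"
```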

Drew

On Monday, November 10, 2014 4:16:32 AM UTC-6, Prakash Dutta wrote:

 Hi

 I am also facing the same issue. I read somewhere that Elasticsearch is not 
 running, but when I try the URL http://localhost:9200 it shows the message 
 below:

 {
   "status" : 200,
   "name" : "Tzabaoth",
   "cluster_name" : "elasticsearch",
   "version" : {
     "number" : "1.4.0",
     "build_hash" : "bc94bd81298f81c656893ab1d30a99356066",
     "build_timestamp" : "2014-11-05T14:26:12Z",
     "build_snapshot" : false,
     "lucene_version" : "4.10.2"
   },
   "tagline" : "You Know, for Search"
 }

 So Elasticsearch is running. I added the two lines below to elasticsearch.yml:

 http.cors.allow-origin: /.*/
 http.cors.allow-credentials: true

 After doing all these steps, I am still facing the same issue.

 Please suggest me the way to solve this.

 Thanks in advance to all.

 Regards
 Prakash






 On Thursday, October 30, 2014 2:51:12 AM UTC+5:30, versne...@gmail.com 
 wrote:

 Hello there,

 I keep getting this message when i start Kibana. But if i go look on my 
 server i see that elasticsearch is running perfectly 

 root@server:~# service elasticsearch status
  * elasticsearch is running





 Connection Failed

 Possibility #1: Your elasticsearch server is down or unreachable
 This can be caused by a network outage, or a failure of the Elasticsearch 
 process. If you have recently run a query that required a terms facet to be 
 executed it is possible the process has run out of memory and stopped. Be 
 sure to check your Elasticsearch logs for any sign of memory pressure.

 Possibility #2: You are running Elasticsearch 1.4 or higher
 Elasticsearch 1.4 ships with a security setting that prevents Kibana from 
 connecting. You will need to set http.cors.allow-origin in your 
 elasticsearch.yml to the correct protocol, hostname, and port (if not 80) 
 that you access Kibana from. Note that if you are running Kibana in a 
 sub-url, you should exclude the sub-url path and only include the protocol, 
 hostname and port. For example, http://mycompany.com:8080, not 
 http://mycompany.com:8080/kibana.

 Click back, or the home button, when you have resolved the connection issue.

 Please, can someone help?



-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/304b326c-c16b-46d5-843e-eab083d7e2a7%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: simple_query_string has no analyze_wildcard

2014-11-10 Thread joergpra...@gmail.com
The pull request is at
https://github.com/elasticsearch/elasticsearch/pull/8422
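For reference, this is the parameter as it already exists on query_string, which the pull request proposes to mirror in simple_query_string (a sketch; the query text is an example):

```json
{
  "query": {
    "query_string": {
      "query": "qu?ck bro*",
      "analyze_wildcard": true
    }
  }
}
```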

Best,

Jörg

On Mon, Nov 10, 2014 at 10:54 AM, joergpra...@gmail.com 
joergpra...@gmail.com wrote:

 Would it help to work on a pull request for simple_query_string to support
 analyze_wildcard?

 Jörg

 On Wed, Nov 5, 2014 at 6:08 PM, joergpra...@gmail.com 
 joergpra...@gmail.com wrote:

 Hi,

 the query_string query


 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html

 has been extended by a heuristic to analyze wildcarded terms some time
 ago.

 https://github.com/elasticsearch/elasticsearch/issues/787

 I would like to use simple_query_string also with analyzed wildcard
 terms, but there is no analyze_wildcard parameter. It gives an error.

 Is it possible to add the parameter analyze_wildcard to
 simple_query_string?

 It is not a good solution to have to switch back to query_string query if
 users accidentally enter terms with wildcard characters.

 Best,

 Jörg




-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGszQ3aYOj_wHBu%2BYaMkhng0_5qXJsEC51TuQ%2BJJxCMGQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: how to search non indexed field in elasticsearch

2014-11-10 Thread Nikolas Everett
Use _source['service'] instead. Much slower, but it doesn't need the field
to be indexed.
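A sketch of this suggestion applied to the poster's filter (syntax assumes the default script language of ES 1.x; reading _source per document is what makes it slow):

```json
{
  "filter": {
    "script": {
      "script": "_source['service'] == 'http'"
    }
  }
}
```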
On Nov 10, 2014 1:01 AM, ramky panguluri.ramakris...@gmail.com wrote:

 Thanks Nikolas.
 I tried the query but it failed to search on non-indexed field.

 Query i used is
 {
   filter: {
 script: {
script: doc['service'].value == http
 }
   }
 }
 service is non-indexed field.

 Exception after execution is
 {[x3a9BIGLRwOdhwpsaUZbrw][siem0511][0]:
 QueryPhaseExecutionException[[siem0511][0]:
 query[ConstantScore(cache(_type:siem))],from[0],size[10]: Query Failed
 [Failed to execute main query]]; nested: CompileException[[Error: No field
 found for [service] in mapping with types [siem]]\n[Near : {...
 doc['service'].value == http }]\n ^\n[Line: 1, Column: 1]];
 nested: ElasticsearchIllegalArgumentException[No field found for [service]
 in mapping with types [siem]]; }

 Please help.

 Thanks in advance.

 Regards
 Ramky




 On Friday, November 7, 2014 5:49:04 PM UTC+5:30, Nikolas Everett wrote:

 The first example on http://www.elasticsearch.org/guide/en/elasticsearch/
 reference/current/query-dsl-script-filter.html#query-dsl-script-filter
 should just work if you replace the comparison with .equals

 On Fri, Nov 7, 2014 at 2:11 AM, ramky panguluri@gmail.com wrote:

 Thanks Nikolas Everett for your quick reply.

 Can you please provide me example to execute the same. I tried multiple
 times but unable to execute.

 Thanks in advance

 On Thursday, November 6, 2014 9:44:55 PM UTC+5:30, Nikolas Everett wrote:

 You can totally use a script filter checking the field against
 _source.  Its super duper duper slow but you can do it if you need it
 rarely.

 On Thu, Nov 6, 2014 at 11:13 AM, Ivan Brusic iv...@brusic.com wrote:

 You cannot search/filter on a non-indexed field.

 --
 Ivan

 On Wed, Nov 5, 2014 at 11:45 PM, ramakrishna panguluri 
 panguluri@gmail.com wrote:

 I have 10 fields inserted into elasticsearch out of which 5 fields
 are indexed.
 Is it possible to search on non indexed field?

 Thanks in advance.


 Regards
 Rama Krishna P

 --
 You received this message because you are subscribed to the Google
 Groups elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it,
 send an email to elasticsearc...@googlegroups.com.
 To view this discussion on the web visit https://groups.google.com/d/
 msgid/elasticsearch/c63ac6bb-8717-470e-a5e4-01a8bd75b769%40goo
 glegroups.com
 https://groups.google.com/d/msgid/elasticsearch/c63ac6bb-8717-470e-a5e4-01a8bd75b769%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to the Google
 Groups elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to elasticsearc...@googlegroups.com.
 To view this discussion on the web visit https://groups.google.com/d/
 msgid/elasticsearch/CALY%3DcQDD0JYJeX%2BCmV%3DGACekwofjUYFQvoS
 WQ86Th3r-MBWZtw%40mail.gmail.com
 https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQDD0JYJeX%2BCmV%3DGACekwofjUYFQvoSWQ86Th3r-MBWZtw%40mail.gmail.com?utm_medium=emailutm_source=footer
 .

 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to the Google
 Groups elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to elasticsearc...@googlegroups.com.
 To view this discussion on the web visit https://groups.google.com/d/
 msgid/elasticsearch/4d3e5636-1124-4dcb-b6be-b62f893cae39%
 40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/4d3e5636-1124-4dcb-b6be-b62f893cae39%40googlegroups.com?utm_medium=emailutm_source=footer
 .

 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/725d437c-988d-4462-98db-dc281c7e438c%40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/725d437c-988d-4462-98db-dc281c7e438c%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAPmjWd1Yp%2BY%3Dm%2BRhrJMMCBFdJ_Vg57PH8fP9_WE3neWB6cjH%3DA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Fast exclusion of earlier queries

2014-11-10 Thread joergpra...@gmail.com
Can you move the limit to the end of the filter chain? Then you could apply
the filters as before, plus a limit at the end.

If not, you could experiment
with org.elasticsearch.common.lucene.search.LimitFilter. This filter takes
a number of matches out of a filter.
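In the query DSL this corresponds to the `limit` filter. A minimal sketch, assuming hypothetical index and field names; note that `limit` caps matches per shard, so the overall cap is roughly the value times the number of shards:

```shell
# Sketch only: apply the ordinary filters first, then a "limit" filter last
# inside an "and" filter, so at most `value` matches per shard pass through.
curl -s -XPOST 'http://localhost:9200/people/_search?search_type=count' -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "and": [
          { "term": { "city": "stockholm" } },
          { "limit": { "value": 10000 } }
        ]
      }
    }
  }
}'
```

As far as I can tell, the `limit` filter in the DSL is what exposes that LimitFilter class over REST.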

Jörg

On Mon, Nov 10, 2014 at 11:42 AM, Kristoffer Johansson 
kristoffer.s.johans...@gmail.com wrote:

 Hi

 I'm currently evaluating Elasticsearch to be used by a selection engine
 at my company. The selection engine will be used to answer questions like
 how many people between 20 and 30 years old are there in the city of
 Stockholm. The critical thing for this system is to give fast feedback on
 the counts (within a second); the extraction of the identities is not as
 time critical.

 One requirement of the system is to make multiple queries but still keep
 unique identities. E.g. you should be able to make two queries, for example
 all people in Stockholm and all males between 20 and 25, and then the
 second query should not include anyone living in Stockholm. We have solved
 this by negating the filter of the first query and using it in the second,
 and because of Elasticsearch filter caching this gives us really nice
 performance.
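A sketch of that negation pattern, with hypothetical field names; the `must_not` clause carries the first query's filter, which Elasticsearch can serve from its filter cache:

```shell
# Second query ("all males between 20 and 25"), excluding everything
# matched by the first query ("all people in Stockholm").
curl -s -XPOST 'http://localhost:9200/people/_search?search_type=count' -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "bool": {
          "must": [
            { "term": { "gender": "male" } },
            { "range": { "age": { "gte": 20, "lte": 25 } } }
          ],
          "must_not": [
            { "term": { "city": "stockholm" } }
          ]
        }
      }
    }
  }
}'
```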

 Now to the real challenge: any of these queries can contain a limit, so the
 above example can be all people in Stockholm limited to 1 and all
 males between 20 and 25. In this case the result of the second query
 should contain documents that are selected by the first query but are not
 among the 1 chosen by that limit. Now we cannot rely on negated filters
 any more, because now we have to investigate the result set to find out which
 documents actually hit the first query. And because one query can hit
 millions of documents, this is, of course, really slow.

 Has any of you considered this kind of requirement before, and do you
 have a suggestion for how we can solve it with reasonable performance?

 My team will now examine the possibility of creating this functionality in
 Elasticsearch. We would like to be able to start a transaction in ES that
 keeps track of all document identities that have been selected by any query
 within the transaction. Then we can always exclude these identities from a
 query to create the described uniqueness. Do any of you know if this is
 feasible, and do you have some suggestions for our implementation?

 Regards
 Kristoffer

 --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/2950add6-bb3b-4b42-a6c3-0148a783abd9%40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/2950add6-bb3b-4b42-a6c3-0148a783abd9%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGPVaqsRXNsd7ipq5695F_5vhxVvXVNYW1o38WcYsx5jg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Shards UNASSIGNED even tho they exist on disk

2014-11-10 Thread Johan Öhr
Hi,

I have a problem with a few indices: some of the shards (both replica and 
primary) are UNASSIGNED, and my cluster stays yellow.

This is what the master says about that:
[2014-11-10 06:53:01,223][WARN ][cluster.action.shard ] [node-master] 
[index][9] received shard failed for [index][9], 
node[9g2_kOrDSt-57UVI1bLfFg], [P], s[STARTED], indexUUID 
[20P5SMNFTZyrUEVyUPCsbQ], reason [master 
[node-master][07ZcjsurR3iIVsH6iSX0jw][data-node][inet[/xx.xx.xx.xx:9300]]{data=false,
 
master=true} marked shard as started, but shard has not been created, mark 
shard as failed]

http://host:9200/index/_stats shows: "_shards":{"failed":0,"successful":13,"total":20}

This happened when I dropped a node and let the cluster replicate itself back 
together; the replication factor is 1 (two identical copies of each shard).
I did it on two nodes and it worked perfectly, but on the third node I now have 
92 SHARDS unassigned.

The only difference between the first two nodes and the third is that the third 
ran with these settings:


cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 0.85
cluster.routing.allocation.disk.watermark.high: 0.90
cluster.info.update.interval: 60s
indices.recovery.concurrent_streams: 10
cluster.routing.allocation.node_concurrent_recoveries: 40


Any idea how this can be fixed?

I've tried to clean up the masters and restart them: nothing.
I've tried to delete _state on the data nodes for these indices: nothing.

Thanks for any help :)

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1706f0bc-5aef-42dc-bcbe-8f62efd25faf%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Shards UNASSIGNED even tho they exist on disk

2014-11-10 Thread Johan Öhr
Another thing I just noticed:

The first index with this problem is from 2014-10-08.
We upgraded to 1.3.2 from 1.0.1 on 2014-10-07; indexes are created nightly.

On Monday, November 10, 2014 at 2:45:09 PM UTC+1, Johan Öhr wrote:

 Hi,

 I have a problem with a few index, some of the shards (both replica and 
 primary) are UNASSIGNED, my cluster stays yellow.

 This is what the master says about that:
 [2014-11-10 06:53:01,223][WARN ][cluster.action.shard ] [node-master] 
 [index][9] received shard failed for [index][9], 
 node[9g2_kOrDSt-57UVI1bLfFg], [P], s[STARTED], indexUUID 
 [20P5SMNFTZyrUEVyUPCsbQ], reason [master 
 [node-master][07ZcjsurR3iIVsH6iSX0jw][data-node][inet[/xx.xx.xx.xx:9300]]{data=false,
  
 master=true} marked shard as started, but shard has not been created, mark 
 shard as failed]

 http://host:9200/index/_stats_shards:{failed:0,successful:13,
 total:20

 This happend when i dropped a node, and let it replicate itself together, 
 replication factor is 1 (two shards identical)
 I did it on two nodes, worked perfectly, then on the third node, i have 92 
 SHARDS Unassigned

 The only different between the first two nodes and the third is that it 
 ran with these settings:


   cluster.routing.allocation.disk.threshold_enabled: true,

   cluster.routing.allocation.disk.watermark.low: 0.85,

 cluster.routing.allocation.disk.watermark.high: 0.90,

 cluster.info.update.interval: 60s,

 indices.recovery.concurrent_streams: 10,

 cluster.routing.allocation.node_concurrent_recoveries: 40,


 Any idea if this can be fixed? 

 Ive tried to clean up the masters and restarted them, nothing
 Ive tried to delete _state on data-node on these index, nothing

 Thanks for help :)



-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/03686f7e-df0b-4fcd-89fd-ebc66a0ebc05%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: ES 1.3.4 scrolling never ends

2014-11-10 Thread joergpra...@gmail.com
You must initiate scan/scroll with search type SCAN. The scan/scroll
pattern looks like this:

SearchRequest searchRequest = new SearchRequestBuilder(client)
        .setQuery(QueryBuilders.matchAllQuery()).request();
searchRequest.searchType(SearchType.SCAN).scroll(request.getTimeout());
SearchResponse searchResponse = client.search(searchRequest).actionGet();
// get total hits here before entering the loop
while (true) {
    searchResponse = client.prepareSearchScroll(searchResponse.getScrollId())
            .setScroll(request.getTimeout()).execute().actionGet();
    long hits = searchResponse.getHits().getHits().length;
    if (hits == 0) {
        break; // an empty batch marks the end of the scroll
    }
    // process the hits of this scroll batch here
}

Jörg

On Mon, Nov 10, 2014 at 1:27 PM, Yarden Bar ayash.jor...@gmail.com wrote:

 One issue I identified is that the heap size was too small for the query; I've
 increased the heap memory and the CircuitBreakerException stopped happening.

 But the scrolling is still returning the SAME result.

 An updated code example is below:
 import org.elasticsearch.action.search.SearchType
 import org.elasticsearch.client.transport.TransportClient
 import org.elasticsearch.common.settings.ImmutableSettings
 import org.elasticsearch.common.transport.InetSocketTransportAddress
 import org.elasticsearch.common.unit.TimeValue
 import org.elasticsearch.index.query.{FilterBuilders, QueryBuilders}
 import org.elasticsearch.search.Scroll
 import org.elasticsearch.search.sort.SortOrder

 val es_settings = ImmutableSettings.settingsBuilder()
   .put("transport.sniff", true).put("cluster.name", "test_acm_es").build()
 var client = new TransportClient(es_settings).addTransportAddress(
   new InetSocketTransportAddress("myServer", 9300))
 val query = QueryBuilders.filteredQuery(QueryBuilders.matchAllQuery(),
   FilterBuilders.queryFilter(QueryBuilders.queryString(
     "((market:2 AND feed:55) OR (market:2 AND feed:32))")))
 var result = client.prepareSearch("orderbook-2014.11.03")
   .setTypes(List("level"): _*)
   .setQuery(query)
   .setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
   .setSize(1)
   .addSort("updateNo", SortOrder.ASC)
   .setScroll(new Scroll(TimeValue.timeValueMinutes(5)))
   .get()
 var scrollId = ""
 var itr = 0
 do {
   scrollId = result.getScrollId
   result = client.prepareSearchScroll(scrollId)
     .setScroll(TimeValue.timeValueMinutes(3)).get()
   println(s"Iteration=$itr, scrollResult=${result.getHits.getHits.length}")
   // println()
   // result.getHits.getHits.foreach(h => println(h.getId))
   // println()
   itr += 1
 } while (result.getHits.getHits.length != 0)

 enabling the print block reveals that the searchHit array is the same for
 each iteration...

 Thanks,
 Yarden

 On Wednesday, November 5, 2014 2:48:46 PM UTC+2, Yarden Bar wrote:

 Hi all,

 I'm encountering a strange behavior when executing a search-scroll on a
 single node of ES-1.3.4 with Java client.

 The scenario is as follows:

1. Start a single node of version 1.3.4
2. Add snapshot repository pointing to version 1.1.1 snapshots
3. Restore snapshots version 1.1.1 snapshot to 1.3.4 node
    4. Execute search on an index with:
    5. client.prepareSearch("my_index*")
         .setQuery(QueryBuilders.filteredQuery(QueryBuilders.matchAllQuery(),
           FilterBuilders.queryFilter(QueryBuilders.queryString(
             s"$terms AND snapshotNo:[${mdp.fromSnapshot} TO ${mdp.toSnapshot}]"))))
         .addFields(OBFields.values.map(_.toString).toList: _*)
         .setSize(pageSize)
         .addSort(OBFields.updateNo.toString, SortOrder.ASC)
         .setScroll(TimeValue.timeValueMinutes(3)).execute().actionGet()

    6. Execute the following search scroll:
       client.prepareSearchScroll(scrollId)
         .setScroll(TimeValue.timeValueMinutes(3)).execute().actionGet()

 I have a loop iterating over #6, providing the same scrollId and checking
 for (result.getHits().getHits().length == 0) to terminate.
 I keep getting the same result 'page' with the same amount of results.


 Any Idea??


 Thanks,
 Yarden

  --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/66e02775-17dd-4ea0-a8b3-39eb7e2a7aca%40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/66e02775-17dd-4ea0-a8b3-39eb7e2a7aca%40googlegroups.com?utm_medium=emailutm_source=footer
 .

 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGgEhDf210fHVx%2BNqj-qFc5xu32zTp9FkK3W1Dtpi%3DJgg%40mail.gmail.com.
For more 

Re: ES 1.3.4 scrolling never ends

2014-11-10 Thread Yarden Bar
Hi Jorg,

I can't use the scan type because I need the documents sorted ASC on a field; scan 
returns the documents in the order they were indexed.

Thanks

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1eb0d4dd-1659-48a2-929b-194ebd531465%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: ES 1.3.4 scrolling never ends

2014-11-10 Thread joergpra...@gmail.com
Scan does not really return the docs in the order they were indexed (it depends
on how the index segments in the shards return the docs).

But anyway, you cannot scroll over a sorted result set.

Jörg

On Mon, Nov 10, 2014 at 3:12 PM, Yarden Bar ayash.jor...@gmail.com wrote:

 Hi Jorg,

 I cant use scan type because I need the documents sorted ASC on a field,
 scan returns the documents in the order they indexed.

 Thanks

 --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/1eb0d4dd-1659-48a2-929b-194ebd531465%40googlegroups.com
 .
 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKdsXoEfbojPZCOesok%2B5jdkRu6Z5CExTDzAVnwTCZXveL7dHw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Elasticsearch rolling restart problem

2014-11-10 Thread lagarutte via elasticsearch
Hi,
I have one ELS 1.1.2 cluster with 7 nodes and
800 GB of data.

When I shut down a node for various reasons, ELS automatically rebalances the 
missing shards onto the other nodes.

To prevent this, I tried the following (specified in the official doc):
"transient" : {
"cluster.routing.allocation.enable" : "none" }

and then I issue a node shutdown.

Effectively, the relevant shards are now unassigned and ELS doesn't try to 
reallocate them.

But when I restart the node, they still remain unassigned.
And then when I set back:
"transient" : {
"cluster.routing.allocation.enable" : "all" }

=> ELS reallocates the unassigned shards to ALL nodes instead of the restarted 
node.

What's wrong ? 
What's the correct procedure ?

regards
jean
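For reference, the commonly documented rolling-restart sequence on 1.x looks roughly like this (a sketch; host names are placeholders, and the key point is to let the node rejoin before re-enabling allocation so its local shard copies can be reused):

```shell
# 1. Disable shard allocation so the cluster does not rebalance.
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "none" }
}'

# 2. Shut down the node (1.x still has the shutdown API) and restart it.
curl -XPOST 'http://node-to-restart:9200/_cluster/nodes/_local/_shutdown'
# ... restart the elasticsearch service on that machine ...

# 3. Wait until the node shows up again in the cluster.
curl 'http://localhost:9200/_cat/nodes'

# 4. Only then re-enable allocation.
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'
```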

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/cc317335-9412-42dc-b549-74eb91ba9d6b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Nodes not joining after 1.4.0 upgrade

2014-11-10 Thread Valentin
I had similar issues when upgrading from 1.3.4 to 1.4.
From my elasticsearch.yml:

discovery.zen.ping.multicast.enabled: false

discovery.zen.ping.unicast.hosts:.

I could get it up and running after restarting the whole cluster (which was 
bad since I'm using it for realtime logging). 

On Monday, November 10, 2014 1:34:12 PM UTC+1, Boaz Leskes wrote:

 Hi,

 The logs you mentioned indicate that the nodes try to join the cluster but 
 it takes too long for a complete verification cycle (connect back to node 
 and publish cluster state to it) takes too long. It seems there is 
 something going on your masters.

 Can you check the logs over there? Also are you using multicast or unicast 
 discovery?

 On Sunday, November 9, 2014 8:36:06 AM UTC+1, Janet Sullivan wrote:

  More hours of working – even when I get a 1.4.0 cluster up, masters 
 wouldn’t fail over – when I took master1 down, neither master2 or master3 
 would promote themselves.   In 1.4.0-beta it fails over quickly.

  
  
 *From:* elasti...@googlegroups.com [mailto:elasti...@googlegroups.com] *On
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 11:11 PM
 *To:* elasti...@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
  
  

 OK, it also happens to some degree with 1.4.0-beta, although overall it’s 
 much better on beta.  I wasn’t able to get my 12 node cluster up on 1.4.0 
 after several hours of fiddling, but 1.4.0-beta did come up.

  
  
 *From:* elasti...@googlegroups.com [mailto:ela...@googlegroups.com] *On
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 9:26 PM
 *To:* elasti...@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
  
  

 But it DOES happen with 1.3.5.   Hmmm….

  
  
 *From:* elasti...@googlegroups.com [mailto:ela...@googlegroups.com] *On
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 9:24 PM
 *To:* elasti...@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
  
  

 Note:  This issue doesn’t happen with 1.4.0-beta1

  
  
 *From:* elasti...@googlegroups.com [mailto:ela...@googlegroups.com] *On
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 8:46 PM
 *To:* elasti...@googlegroups.com
 *Subject:* Nodes not joining after 1.4.0 upgrade
  
  

 I’ve upgraded a couple of clusters to 1.4.0 from 1.3.4.  On both of them, 
 I had nodes that spewed the following, and were slow to join, if they 
 joined at all:

  

 [2014-11-09 04:33:45,995][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:34:49,776][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:35:53,571][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:36:57,353][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:38:01,120][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:39:04,885][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:40:08,657][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

  

 I’m able to telnet to port 9300 on gnslogstash10 in this example from 
 gnslogstash3 with no issue, but this cluster doesn’t want to bring all its 
 nodes up.  The more nodes added, the more likely a join will fail.  In this 
 example, 9 nodes are up, but 3 nodes don’t want to join.  Thoughts?

 -- 

Re: accidently ran another instance of elasticsearch on a few nodes

2014-11-10 Thread Ivan Brusic
To avoid this situation in the future, besides using a service to start
elasticsearch, you can enforce the max nodes setting:

node.max_local_storage_nodes: 1

-- 
Ivan

On Sun, Nov 9, 2014 at 5:24 PM, Mark Walkom markwal...@gmail.com wrote:

 Yellow means unassigned replicas, try removing them and then adding them
 back.

 Once your cluster is green you can stop one of the nodes with the extra
 data and then delete the extra directory, just make sure you let the other
 nodes rebalance and your cluster is green again before deleting, otherwise
 you may risk losing data.

 On 8 November 2014 19:12, Johan Öhr johan@gmail.com wrote:

 Hi, while trying to set up another process as master, I believe I ran
 multiple instances of elasticsearch on three nodes for some time.

 On these nodes, it looks like this:
 /var/lib/elasticsearch/elasticsearch/indices/0
 /var/lib/elasticsearch/elasticsearch/indices/1

 On my other nodes, which are fine, it looks like:
 /var/lib/elasticsearch/elasticsearch/indices/0

 So, there is a lot of data in the "1" directory on three nodes, and these
 shards will not be ASSIGNED; my cluster stays yellow.

 This mistake happened 1 week ago; since then I have restarted ES a couple
 of times, but it was only just now that I got the problem.

 How can I fix this?

 At the moment I'm running 5 nodes, where 3 run another instance of
 elasticsearch as dedicated masters.

 --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/3c852fff-611a-45fb-b1c2-e5962d733977%40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/3c852fff-611a-45fb-b1c2-e5962d733977%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/CAF3ZnZnUn1hHAy60TTiOxo8b5rdt0N35dDYjq7Abb3M8pNNEjg%40mail.gmail.com
 https://groups.google.com/d/msgid/elasticsearch/CAF3ZnZnUn1hHAy60TTiOxo8b5rdt0N35dDYjq7Abb3M8pNNEjg%40mail.gmail.com?utm_medium=emailutm_source=footer
 .

 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQBUueXaZ64g5%2B%2B3R69iFxJWo3oHQXq3cuhKt4CfYB33Dg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Looking for a sexy solution for Aggregations

2014-11-10 Thread Ivan Brusic
The only solution that I can think of is to execute your query with a post
filter, not a filtered query. In this way, your aggregations by default
will not be filtered. You can then have two histograms, one with the post
filter used as an aggregation filter, and the other one left alone.
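A sketch of that shape on 1.x, using the terms field from the question and a hypothetical post filter:

```shell
# post_filter narrows the hits but is ignored by aggregations, so
# "all_counts" yields the denominators (0/93 etc.), while "filtered_counts"
# re-applies the same filter inside a filter aggregation for the numerators.
curl -s -XPOST 'http://localhost:9200/my_index/_search' -d '{
  "size": 0,
  "post_filter": { "term": { "some_field": "some_value" } },
  "aggs": {
    "all_counts": {
      "terms": {
        "field": "Sociodemographic_economic_characteristics",
        "size": 0, "min_doc_count": 0, "order": { "_term": "asc" }
      }
    },
    "filtered_counts": {
      "filter": { "term": { "some_field": "some_value" } },
      "aggs": {
        "counts": {
          "terms": {
            "field": "Sociodemographic_economic_characteristics",
            "size": 0, "min_doc_count": 0, "order": { "_term": "asc" }
          }
        }
      }
    }
  }
}'
```

The client can then zip the two bucket lists together to render counts like "Age: 0/93".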

-- 
Ivan

On Fri, Nov 7, 2014 at 11:04 AM, kazoompa rha...@p3g.org wrote:

 Hi,

 Consider the aggregation below:

 "Sociodemographic_economic_characteristics": {
   "terms": {
     "field": "Sociodemographic_economic_characteristics",
     "size": 0,
     "min_doc_count": 0,
     "order": {
       "_term": "asc"
     }
   }
 }

 This is the result without any filters:


- Sociodemographic_economic_characteristics: {
   - buckets: [
  - {
 - key: Age
 - doc_count: 93
  }
  - {
 - key: Education
 - doc_count: 42
  }
  - {
 - key: Ethnic_race_religion
 - doc_count: 17
  }
  - {
 - key: Family_hh_struct
 - doc_count: 55
  }
  - {
 - key: Income
 - doc_count: 10
  }
  - {
 - key: Labour_retirement
 - doc_count: 150
  }
  - {
 - key: Marital_status
 - doc_count: 20
  }
  - {
 - key: Residence
 - doc_count: 20
  }
  - {
 - key: Sex
 - doc_count: 7
  }
   ]
}


 This is the result with the filter:


- Sociodemographic_economic_characteristics: {
   - buckets: [
  - {
 - key: Age
 - doc_count: 0
  }
  - {
 - key: Education
 - doc_count: 0
  }
  - {
 - key: Ethnic_race_religion
 - doc_count: 0
  }
  - {
 - key: Family_hh_struct
 - doc_count: 0
  }
  - {
 - key: Income
 - doc_count: 0
  }
  - {
 - key: Labour_retirement
 - doc_count: 150
  }
  - {
 - key: Marital_status
 - doc_count: 0
  }
  - {
 - key: Residence
 - doc_count: 0
  }
  - {
 - key: Sex
 - doc_count: 0
  }
   ]
}


 I would like to find a way to have the two combined in one search query
 such that the client can show the info in this manner:

 Age: 0/93
 Education: 0/42
 Ethnic_race_religion: 0/17
 Family_hh_struct: 0/55
 Income: 0/10
 Labour_retirement:150/150
 ...


 As alternatives, I considered doing a multi-search or two independent
 search queries, but is there any way to do this in one go using the
 Elasticsearch goodies (nested aggs, etc.)?

 Thanks,
 Ramin




  --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/350c0c57-4f4d-41a0-ab37-e5075b2ddccb%40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/350c0c57-4f4d-41a0-ab37-e5075b2ddccb%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQBKAF_COLnA8wR4m-Hnc4AMH%2BojkGy6sVan-CPU8PwnVQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Disabling dynamic mapping

2014-11-10 Thread pulkitsinghal
What does the JSON in the curl request for this look like?

The dynamic creation of mappings for unmapped types can be completely 
disabled by setting *index.mapper.dynamic* to false.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-dynamic-mapping.html#mapping-dynamic-mapping
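If it helps, a hedged sketch of the two common forms over HTTP (index and type names are placeholders):

```shell
# As an index setting at creation time:
curl -XPUT 'http://localhost:9200/my_index' -d '{
  "settings": { "index.mapper.dynamic": false }
}'

# Or per type in the mapping itself: "dynamic": false silently ignores new
# fields, while "dynamic": "strict" rejects documents that introduce them.
curl -XPUT 'http://localhost:9200/my_index/_mapping/my_type' -d '{
  "my_type": { "dynamic": "strict" }
}'
```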

Thanks!
- Pulkit

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/a9131f0a-3a3d-4617-96ac-d77c12d9d48c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[ANN] Experimental Highlighter 0.0.13 released

2014-11-10 Thread Nikolas Everett
I released version 0.0.13 of the experimental highlighter.  This version fixes
a problem with Lucene-flavored regular expressions that could cause compiling
them to consume huge amounts of memory.  It still targets Elasticsearch
1.3.X.

Cheers,

Nik

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAPmjWd2JjU67ygDrRMLtrOJaTi3VFRrMX_Uz%3DQnpsDiMoYwdvg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


min_score doesn't seem to be working with _count api

2014-11-10 Thread Roly Vicaria
Hello,

I'm trying to pass a min_score parameter using the _count API, but it
doesn't seem to work. When I add it via the query string, it just gets ignored
and returns the full count. When I add it to the request body as "min_score":
1, I get the following exception response:

{"count":0,"_shards":{"total":5,"successful":0,"failed":5,"failures":[
{"index":"recruitment","shard":1,"reason":"BroadcastShardOperationFailedException[[recruitment][1] ];
 nested: QueryParsingException[[recruitment] request does not support [min_score]]; "},
{"index":"recruitment","shard":0,"reason":"BroadcastShardOperationFailedException[[recruitment][0] ];
 nested: QueryParsingException[[recruitment] request does not support [min_score]]; "},
{"index":"recruitment","shard":3,"reason":"BroadcastShardOperationFailedException[[recruitment][3] ];
 nested: QueryParsingException[[recruitment] request does not support [min_score]]; "},
{"index":"recruitment","shard":2,"reason":"BroadcastShardOperationFailedException[[recruitment][2] ];
 nested: QueryParsingException[[recruitment] request does not support [min_score]]; "},
{"index":"recruitment","shard":4,"reason":"BroadcastShardOperationFailedException[[recruitment][4] ];
 nested: QueryParsingException[[recruitment] request does not support [min_score]]; "}]}}

Has anyone else run into this?

Thanks,
Roly
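A workaround often suggested for 1.x, sketched here without testing: run the same query through `_search` with `size` 0, which does accept `min_score`, and read the count from `hits.total`:

```shell
curl -s -XPOST 'http://localhost:9200/recruitment/_search' -d '{
  "size": 0,
  "min_score": 1,
  "query": { "match_all": {} }
}'
# hits.total should then reflect only documents scoring at or above min_score.
```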

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/061adad5-3ba9-457f-9b59-dc822b5f123d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: min_score doesn't seem to be working with _count api

2014-11-10 Thread Roly Vicaria
Also, I'm trying this on v1.3.2

On Monday, November 10, 2014 10:54:33 AM UTC-5, Roly Vicaria wrote:

 Hello,

 I'm trying to pass a min_score parameter using the _count api, but it 
 doesn't seem to work. When I add it via query string, it just gets ignored 
 and returns the full count. When I add it to the query using min_score : 
 1, then I get the following exception response: 

 {
   "count": 0,
   "_shards": {
     "total": 5,
     "successful": 0,
     "failed": 5,
     "failures": [
       { "index": "recruitment", "shard": 1, "reason": "BroadcastShardOperationFailedException[[recruitment][1] ]; nested: QueryParsingException[[recruitment] request does not support [min_score]]" },
       { "index": "recruitment", "shard": 0, "reason": "BroadcastShardOperationFailedException[[recruitment][0] ]; nested: QueryParsingException[[recruitment] request does not support [min_score]]" },
       { "index": "recruitment", "shard": 3, "reason": "BroadcastShardOperationFailedException[[recruitment][3] ]; nested: QueryParsingException[[recruitment] request does not support [min_score]]" },
       { "index": "recruitment", "shard": 2, "reason": "BroadcastShardOperationFailedException[[recruitment][2] ]; nested: QueryParsingException[[recruitment] request does not support [min_score]]" },
       { "index": "recruitment", "shard": 4, "reason": "BroadcastShardOperationFailedException[[recruitment][4] ]; nested: QueryParsingException[[recruitment] request does not support [min_score]]" }
     ]
   }
 }

 Has anyone else run into this?

 Thanks,
 Roly


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/b8d2dbfe-6b07-404e-887a-11ef3b46426a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [ANN] Experimental Highlighter 0.0.13 released

2014-11-10 Thread Nikolas Everett
On Mon, Nov 10, 2014 at 10:54 AM, Nikolas Everett nik9...@gmail.com wrote:

 I released version 0.0.13 of the experimental highlighter.  This version
 fixes a problem with Lucene-flavored regular expressions that can cause
 compiling them to consume amazing amounts of memory.  It still targets
 Elasticsearch 1.3.X.


And here is the link containing docs and installation instructions:
https://github.com/wikimedia/search-highlighter

 Nik

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAPmjWd3LsqP-ygm_rXBTpTdNUwgJWg0ZwiKjw2iOekbcaXXEeQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


[ANN] Trigram accelerated regex queries for Elasticsearch version 0.0.2 released

2014-11-10 Thread Nikolas Everett
On Friday I released version 0.0.2 of an Elasticsearch plugin to perform
accelerated regular expression search against source documents.  This
version has stability and speed improvements for complex queries.  It:
1.  Prevents the compilation step from consuming tons and tons of memory.
Now it'll throw an exception if it tries to compile a regex that is too big.
2.  Prevents complex regular expressions from performing hundreds of term
queries against the trigrams.  There is now a parameter to limit the number
of term queries that are attempted to prefilter the documents.  This
prevents memory exhaustion.
3.  Speeds up some of the internals of the compilation step several orders
of magnitude for complex queries.

If you are brave enough to have used version 0.0.1 of this plugin you
should certainly upgrade to version 0.0.2.

As always you can try it on our beta site:
* Find links to files or templates
http://simple.wikipedia.beta.wmflabs.org/w/index.php?title=Special%3ASearch&profile=default&search=insource%3A%2F\[\[%28file%3A|template%3A%29[^\]]*\]\]%2F&fulltext=Search
* Find links within 10 characters of each other
http://simple.wikipedia.beta.wmflabs.org/w/index.php?title=Special%3ASearch&profile=default&search=insource%3A%2F\[\[[^\]]*\]\].{0%2C20}\[\[[^\]]*\]\]%2F&fulltext=Search
* Prove that complex queries don't eat all of memory
http://simple.wikipedia.beta.wmflabs.org/wiki/Special:Search?search=insource%3A%2F\[\[%28Datei|File|Bild|Image%29%3A[^]]*alt%3D[^]|}]{50%2C200}%2F&go=Search

Nik

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAPmjWd3aCYYZ1dFwj33un1HFzY4N%2BhK%2BsHc39VDuk9SBZoD%3Dgw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Trading index performance for search performance

2014-11-10 Thread Ben George
Really helpful answer!  When you say 'invoke warmers' are you saying to 
simply set index.warmer.enabled = true ?  Also, in terms of ordering should 
warmers be enabled before or after an explicit optimize + refresh in a 
scenario where we need the index 100% ready for search before continuing ?

eg:
1) 
adminClient.indices().prepareOptimize(index).setMaxNumSegments(1).setForce(true).execute().actionGet();
2) adminClient.indices().prepareRefresh(index).execute().actionGet(); // 
Need to do this explicitly so we can wait for it to finish before 
proceeding.
3) set refresh_interval = 1, index.warmer.enabled = true


On Thursday, 17 July 2014 17:35:54 UTC+1, Jörg Prante wrote:

 The 30m docs may have characteristics (volume, term frequencies, mappings) 
 such that ES limits are reached within your specific configuration. This is 
 hard to guess without knowing more facts.

 Beside improving merge configuration, you might be able to sacrifice 
 indexing time by assigning limited daily indexing time windows to your 
 clients. 

 The indexing process can then be divided into steps:

 - connect to cluster
 - create index with n shards and replica level 0
 - create mappings
 - disable refresh rate
 - start bulk index
 - stop bulk index
 - optimize to segment num 1
 - enable refresh rate
 - add replica levels in order to handle maximum search workload
 - invoke warmers
 - disconnect from cluster

 After the clients have completed indexing, you have a fully optimized 
 cluster, on which you can put full search load with aggregations etc. with 
 the highest performance, but while searching you should keep the indexing 
 silent (or set it even to read only).

 You do not need to scale vertically by adding hardware to the existing 
 servers. Scaling horizontally by adding nodes on more servers for the 
 replicas is the method ES was designed for. Adding nodes will drastically 
 improve the search capabilities with regard to facets/aggregations.

 Jörg


 On Thu, Jul 17, 2014 at 5:56 PM, jnortey jeremy...@gmail.com 
 javascript: wrote:

 At the moment, we're able to bulk index data at a rate faster than we 
 actually need. Indexing is not as important to us as being able to quickly 
 search for data. Once we start reaching ~30 million documents indexed, we 
 start to see performance decreasing in our search queries. What are the 
 best techniques for sacrificing indexing time in order to improve search 
 performance?


 A bit more info:

 - We have the resources to improve our hardware (memory, CPU, etc) but 
 we'd like to maximize the improvements that can be made programmatically or 
 using properties before going for hardware increases.

 - Our searches make very heavy uses of faceting and aggregations.

 - When we run the optimize query, we see *significant* improvements in 
 our search times (between 50% and 80% improvements), but as documented, 
 this is usually a pretty expensive operation. Is there a way to sacrifice 
 indexing time in order to have Elasticsearch index the data more 
 efficiently? (I guess sort of mimicking the optimization behavior at index 
 time)
  
 -- 
 You received this message because you are subscribed to the Google Groups 
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to elasticsearc...@googlegroups.com javascript:.
 To view this discussion on the web visit 
 https://groups.google.com/d/msgid/elasticsearch/0e134001-9a55-40c5-a8fc-4c1485a3e6fc%40googlegroups.com
  
 https://groups.google.com/d/msgid/elasticsearch/0e134001-9a55-40c5-a8fc-4c1485a3e6fc%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.




-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/a6d345be-c408-4d7c-a794-5ade13826048%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: hardware recommendation for dedicated client node

2014-11-10 Thread Terence Tung
can anyone please help me? 

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/a5585f15-297b-4c16-8881-0a57f8902617%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: hardware recommendation for dedicated client node

2014-11-10 Thread Nikolas Everett
I don't use client nodes so I can't speak from experience here.  Most of
the gathering steps I can think of amount to merging sorted lists which
isn't particularly intense.  I think aggregations (another thing I don't
use) can be more intense at the client node but I'm not sure.

My recommendation is to start by sending requests directly to the data
nodes and only start to investigate client nodes if you have trouble with
that and diagnose that trouble as being something that'd move to a client
node if you had them.  It's a nice thing to have in your back pocket, but it
just hasn't come up for me.

Nik

On Mon, Nov 10, 2014 at 11:17 AM, Terence Tung tere...@teambanjo.com
wrote:

 can anyone please help me?

 --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/a5585f15-297b-4c16-8881-0a57f8902617%40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/a5585f15-297b-4c16-8881-0a57f8902617%40googlegroups.com?utm_medium=emailutm_source=footer
 .

 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAPmjWd3AT3L9Mm1EeYkpp-X5%2Bttp6MnmzZVG6VOnq2HAmDsFmw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: min_score doesn't seem to be working with _count api

2014-11-10 Thread Ivan Brusic
Just a guess, but I would assume that the count API does not score
documents, which is why it is faster, and which would make a setting such
as min_score a no-op.
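If that is the case, a possible workaround on 1.3.x is to go through the
search API with search_type=count, which does score documents and accepts
min_score as a top-level element. A sketch (index name taken from the error
above, query body hypothetical) to POST to
/recruitment/_search?search_type=count:

```json
{
  "min_score": 1,
  "query": {
    "match_all": {}
  }
}
```

hits.total in the response should then only count documents scoring at
least 1.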

-- 
Ivan

On Mon, Nov 10, 2014 at 10:55 AM, Roly Vicaria roly...@gmail.com wrote:

 Also, I'm trying this on v1.3.2


 On Monday, November 10, 2014 10:54:33 AM UTC-5, Roly Vicaria wrote:

 Hello,

 I'm trying to pass a min_score parameter using the _count api, but it
 doesn't seem to work. When I add it via query string, it just gets ignored
 and returns the full count. When I add it to the query using min_score :
 1, then I get the following exception response:

 {
   "count": 0,
   "_shards": {
     "total": 5,
     "successful": 0,
     "failed": 5,
     "failures": [
       { "index": "recruitment", "shard": 1, "reason": "BroadcastShardOperationFailedException[[recruitment][1] ]; nested: QueryParsingException[[recruitment] request does not support [min_score]]" },
       { "index": "recruitment", "shard": 0, "reason": "BroadcastShardOperationFailedException[[recruitment][0] ]; nested: QueryParsingException[[recruitment] request does not support [min_score]]" },
       { "index": "recruitment", "shard": 3, "reason": "BroadcastShardOperationFailedException[[recruitment][3] ]; nested: QueryParsingException[[recruitment] request does not support [min_score]]" },
       { "index": "recruitment", "shard": 2, "reason": "BroadcastShardOperationFailedException[[recruitment][2] ]; nested: QueryParsingException[[recruitment] request does not support [min_score]]" },
       { "index": "recruitment", "shard": 4, "reason": "BroadcastShardOperationFailedException[[recruitment][4] ]; nested: QueryParsingException[[recruitment] request does not support [min_score]]" }
     ]
   }
 }

 Has anyone else run into this?

 Thanks,
 Roly

  --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/b8d2dbfe-6b07-404e-887a-11ef3b46426a%40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/b8d2dbfe-6b07-404e-887a-11ef3b46426a%40googlegroups.com?utm_medium=emailutm_source=footer
 .

 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CALY%3DcQBd3TuBk0CwkBT%2Bq5KzSqEpHTJwe5GU1F8iaMGU5iFJLw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Trading index performance for search performance

2014-11-10 Thread joergpra...@gmail.com
Yes, I mean index.warmer.enabled = true. This is a switch for globally
enabling/disabling warmers.

If you have configured warmers at index creation time - see the description
at

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-warmers.html

- and warmers are enabled by the global switch, then you should disable
warmers before bulk indexing and re-enable them after bulk indexing.
This is described in the documentation as well:

"This can be handy when doing initial bulk indexing: disable pre registered
warmers to make indexing faster and less expensive and then enable it."

You should re-enable warmers after index optimization and after index
refresh.
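As a sketch, both switches can be flipped dynamically through the index
settings API (index name hypothetical). Before bulk indexing, PUT a body
like this to /myindex/_settings:

```json
{
  "index": {
    "warmer.enabled": false,
    "refresh_interval": "-1"
  }
}
```

and after the optimize and refresh, PUT the reverse ("1s" is the default
refresh interval):

```json
{
  "index": {
    "warmer.enabled": true,
    "refresh_interval": "1s"
  }
}
```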

Jörg



On Mon, Nov 10, 2014 at 5:09 PM, Ben George bengeo...@gmail.com wrote:

 Really helpful answer!  When you say 'invoke warmers' are you saying to
 simply set index.warmer.enabled = true ?  Also, in terms of ordering should
 warmers be enabled before or after an explicit optimize + refresh in a
 scenario where we need the index 100% ready for search before continuing ?

 eg:
 1)
 adminClient.indices().prepareOptimize(index).setMaxNumSegments(1).setForce(true).execute().actionGet();
 2) adminClient.indices().prepareRefresh(index).execute().actionGet(); //
 Need to do this explicitly so we can wait for it to finish before
 proceeding.
 3) set refresh_interval = 1, index.warmer.enabled = true


 On Thursday, 17 July 2014 17:35:54 UTC+1, Jörg Prante wrote:

 The 30m docs may have characteristics (volume, term frequencies, mappings)
 such that ES limits are reached within your specific configuration. This is
 hard to guess without knowing more facts.

 Beside improving merge configuration, you might be able to sacrifice
 indexing time by assigning limited daily indexing time windows to your
 clients.

 The indexing process can then be divided into steps:

 - connect to cluster
 - create index with n shards and replica level 0
 - create mappings
 - disable refresh rate
 - start bulk index
 - stop bulk index
 - optimize to segment num 1
 - enable refresh rate
 - add replica levels in order to handle maximum search workload
 - invoke warmers
 - disconnect from cluster

 After the clients have completed indexing, you have a fully optimized
 cluster, on which you can put full search load with aggregations etc. with
 the highest performance, but while searching you should keep the indexing
 silent (or set it even to read only).

 You do not need to scale vertically by adding hardware to the existing
 servers. Scaling horizontally by adding nodes on more servers for the
 replicas is the method ES was designed for. Adding nodes will drastically
 improve the search capabilities with regard to facets/aggregations.

 Jörg


 On Thu, Jul 17, 2014 at 5:56 PM, jnortey jeremy...@gmail.com wrote:

 At the moment, we're able to bulk index data at a rate faster than we
 actually need. Indexing is not as important to us as being able to quickly
 search for data. Once we start reaching ~30 million documents indexed, we
 start to see performance decreasing in our search queries. What are the
 best techniques for sacrificing indexing time in order to improve search
 performance?


 A bit more info:

 - We have the resources to improve our hardware (memory, CPU, etc) but
 we'd like to maximize the improvements that can be made programmatically or
 using properties before going for hardware increases.

 - Our searches make very heavy uses of faceting and aggregations.

 - When we run the optimize query, we see *significant* improvements in
 our search times (between 50% and 80% improvements), but as documented,
 this is usually a pretty expensive operation. Is there a way to sacrifice
 indexing time in order to have Elasticsearch index the data more
 efficiently? (I guess sort of mimicking the optimization behavior at index
 time)

 --
 You received this message because you are subscribed to the Google
 Groups elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to elasticsearc...@googlegroups.com.
 To view this discussion on the web visit https://groups.google.com/d/
 msgid/elasticsearch/0e134001-9a55-40c5-a8fc-4c1485a3e6fc%
 40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/0e134001-9a55-40c5-a8fc-4c1485a3e6fc%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/a6d345be-c408-4d7c-a794-5ade13826048%40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/a6d345be-c408-4d7c-a794-5ade13826048%40googlegroups.com?utm_medium=emailutm_source=footer
 .

 For more options, visit https://groups.google.com/d/optout.



Re: Disabling dynamic mapping

2014-11-10 Thread Brian
This is what I put into the elasticsearch.yml file when I start 
Elasticsearch for use in a non-ELK environment:

# Do not automatically create an index when a document is loaded, and do
# not automatically index unknown (unmapped) fields:

action.auto_create_index: false
index.mapper.dynamic: false

And here's a complete example of a curl input document that I use to create 
an index with the desired types in which I don't want new indices, new 
types, or new fields to be automatically created:

{
  "settings" : {
    "index" : {
      "number_of_shards" : 1,
      "analysis" : {
        "char_filter" : { },
        "filter" : {
          "english_snowball_filter" : {
            "type" : "snowball",
            "language" : "English"
          }
        },
        "analyzer" : {
          "english_standard_analyzer" : {
            "type" : "custom",
            "tokenizer" : "standard",
            "filter" : [ "standard", "lowercase", "asciifolding" ]
          },
          "english_stemming_analyzer" : {
            "type" : "custom",
            "tokenizer" : "standard",
            "filter" : [ "standard", "lowercase", "asciifolding",
              "english_snowball_filter" ]
          }
        }
      }
    }
  },
  "mappings" : {
    "_default_" : {
      "dynamic" : "strict"
    },
    "person" : {
      "_all" : {
        "enabled" : false
      },
      "properties" : {
        "telno" : {
          "type" : "string",
          "analyzer" : "english_standard_analyzer"
        },
        "gn" : {
          "type" : "string",
          "analyzer" : "english_standard_analyzer"
        },
        "sn" : {
          "type" : "string",
          "analyzer" : "english_stemming_analyzer"
        },
        "o" : {
          "type" : "string",
          "analyzer" : "english_stemming_analyzer"
        }
      }
    }
  }
}

By the way, I never mix indices that are used for more standard database 
queries with the indices used by the ELK stack. Those are two separate 
Elasticsearch clusters entirely; the former is locked down as shown above, 
while the latter is left in its default free form method of automatically 
creating indices and new fields on the fly, just as Splunk and ELK and 
other log analysis tools do.

I hope this helps.

Brian

On Monday, November 10, 2014 10:45:38 AM UTC-5, pulkitsinghal wrote:

 What does the json in the CURL request for this look like?

 The dynamic creation of mappings for unmapped types can be completely 
 disabled by setting *index.mapper.dynamic* to false.

 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-dynamic-mapping.html#mapping-dynamic-mapping

 Thanks!
 - Pulkit


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/9b257ef5-3b87-43fe-a64b-1114da64d671%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Elasticsearch rolling restart problem

2014-11-10 Thread Nikolas Everett
You've followed the right procedure.  The problem is that Elasticsearch
doesn't always restore the shards back onto the nodes they came from.  If
the restarted shard and the current master shard have diverged at all, it'll
have to sync files _somewhere_ to make sure that the restarted shard gets
all the changes.  Since shards diverge all the time, even if there aren't
updates while the node is down, you can expect this.

Speeding this process up has been an open issue for many many months.

Nik

On Mon, Nov 10, 2014 at 11:49 AM, joergpra...@gmail.com 
joergpra...@gmail.com wrote:

 Reallocation to all nodes is the expected behavior.

 Jörg

 On Mon, Nov 10, 2014 at 3:55 PM, lagarutte via elasticsearch 
 elasticsearch@googlegroups.com wrote:

 Hi,
 I have one ELS 1.1.2 cluster with 7 nodes and 800GB of data.

 When I shut down a node for various reasons, ELS automatically rebalances
 the missing shards onto the other nodes.

 To prevent this, I tried this (specified in the official doc):

 "transient" : {
   "cluster.routing.allocation.enable" : "none"
 }

 and then I issue a node shutdown.

 Effectively, the relevant shards are now unassigned and ELS doesn't try to
 reallocate them.

 But when I restart the node, they still remain unassigned.
 And then when I set back:

 "transient" : {
   "cluster.routing.allocation.enable" : "all"
 }

 ELS reallocates the unassigned shards to ALL nodes instead of the restarted
 node.

 What's wrong?
 What's the correct procedure?

 regards
 jean

  --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/cc317335-9412-42dc-b549-74eb91ba9d6b%40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/cc317335-9412-42dc-b549-74eb91ba9d6b%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/CAKdsXoFY_%3D1JZzE%3Dmxov0dF1QeBF2NNDtXYcDj9%3D88Bu5gjvRg%40mail.gmail.com
 https://groups.google.com/d/msgid/elasticsearch/CAKdsXoFY_%3D1JZzE%3Dmxov0dF1QeBF2NNDtXYcDj9%3D88Bu5gjvRg%40mail.gmail.com?utm_medium=emailutm_source=footer
 .

 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAPmjWd3XafuJS4o7r_KkTndz1T5fdtgVnQWGMYFJM7Sab28kYg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Entity Matching in ES

2014-11-10 Thread Ap
How can we do Entity matching in ES ?

E.g. a company name can have different variations:

1. USA Tech Ltd
2. USA Tech LLC
3. USA Tech Asia Ltd

If the above data is present in ES in the Name field, and a 4th value 
"USA Euro Tech Ltd" comes in, then it should identify that all the names 
are the same.

How can we do that in ES ?

Right now, I am trying to use a fuzzy query on the complete data set (~100K 
to 1Mn docs), getting the top 20 matches, loading them in memory, and 
running an external Jaro-Winkler library (Java/Lucene) on the 20 matches.
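Roughly, the first-pass fuzzy query I run looks like this (a sketch with
the field and value from the example above; parameters are what I'm
experimenting with):

```json
{
  "size": 20,
  "query": {
    "match": {
      "Name": {
        "query": "USA Euro Tech Ltd",
        "fuzziness": "AUTO"
      }
    }
  }
}
```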

Is there a way to directly do Entity matching on the fields in ES ?

Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/39d8d73e-bfe0-491f-8c33-90bb0d5426b3%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: ES 1.3.4 scrolling never ends

2014-11-10 Thread Brian
A while back, I wrote my own post-query response sorting so that I could 
handle cases that Elasticsearch didn't. One case was sorting a scan query. 
I used a Java TreeSet class and could also limit it to the top 'N' 
(configurable) items. It is very, very quick, pretty much adding no 
overhead to the existing scan logic. And it supports an arbitrarily complex 
compound sort key, much like an SQL ORDER BY clause; it's very easy to 
construct.

Probably not useful for a normal user query, but it is very useful for an 
ad-hoc query in which I wish to scan across an indeterminately large result 
set but still sort the results. 

One of these days, it might make a good plug-in candidate. But I am not 
sure how to integrate it with the scan API, so for now it's just part of 
the Java client layer.

Brian

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/74e311f5-ae54-4da1-9369-567e7bf03272%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Case sensitive/insensitive search combination in phrase/proximity query

2014-11-10 Thread Zdeněk
Hi,
is there any way to search part of a phrase as case-sensitive and part as 
case-insensitive?

The only solution I found for case sensitive/insensitive querying is to 
have multiple analyzers applied to one field (one analyzer with lowercase 
token filter and one without)
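For reference, that multi-analyzer setup is a multi-field mapping, roughly
like this (type name and analyzer names are hypothetical):

```json
{
  "mappings": {
    "doc": {
      "properties": {
        "Field": {
          "type": "string",
          "fields": {
            "lowercase": {
              "type": "string",
              "analyzer": "lowercase_analyzer"
            },
            "sensitive": {
              "type": "string",
              "analyzer": "casesensitive_analyzer"
            }
          }
        }
      }
    }
  }
}
```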

With this solution I can search in the following way:

Field.lowercase: "My Phrase"

or

Field.sensitive: "My Phrase"

*But what to do if I would like to search "My" as case-sensitive and 
"Phrase" as case-insensitive?*

I found the *span_near* query, but the error message says that *Clauses 
must have same field*.

Thanks,
Zdenek

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/de3a6085-5ecb-4486-a3aa-1a7162aaeed4%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Query_string search containing a dash has unexpected results

2014-11-10 Thread Dave Reed
I'm not using the standard analyzer, I'm using a pattern that will break 
the text on all non-word characters, like this:

"analyzer": {
  "letterordigit": {
    "type": "pattern",
    "pattern": "[^\\p{L}\\p{N}]+"
  }
}


I have verified that the message field is being broken up into the tokens I 
expect (example in my first post).

So when I run a search for message:welcome-doesnotmatch, I'm expecting that 
string to be broken into tokens like so:

welcome
doesnotmatch

And for the search to therefore find 0 documents. But it doesn't -- it 
finds 1 document, the document that contains my sample message, which does 
not include the token doesnotmatch.

So why on Earth would this search match that document? It is behaving as if 
everything after the "-" is completely ignored. It does not matter what I 
put there; it will still match the document.

This is coming up because an end user is searching for a hyphenated word, 
like "battle-axe", and it's matching a document that does not contain the 
word "axe" at all.
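One possibility I've considered: query_string combines the tokens produced
by analysis with the default operator, which is OR, so "welcome" alone
would be enough to match. If that's it, forcing AND might be worth testing
(a sketch, not verified):

```json
{
  "query": {
    "query_string": {
      "query": "id:3955974 AND message:welcome-doesnotmatch",
      "default_operator": "AND"
    }
  }
}
```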



On Friday, November 7, 2014 12:24:30 AM UTC-8, Jun Ohtani wrote:

 Hi Dave,

 I think the reason is that your message field uses the standard analyzer.
 The standard analyzer divides text at "-".
 If you change the analyzer to the whitespace analyzer, it matches 0 documents.

 The _validate API is useful for checking the exact query.
 Example request: 

 curl -XGET "/YOUR_INDEX/_validate/query?explain" -d '
 {
   "query": {
     "query_string": {
       "query": "id:3955974 AND message:welcome-doesnotmatchanything"
     }
   }
 }'

 You can get the following response. In this example, the message field is
 "index": "not_analyzed".

 {
   "valid": true,
   "_shards": {
     "total": 1,
     "successful": 1,
     "failed": 0
   },
   "explanations": [
     {
       "index": "YOUR_INDEX",
       "valid": true,
       "explanation": "+id:3955974 +message:welcome-doesnotmatchanything"
     }
   ]
 }


 See: 
 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-validate.html#search-validate

 I hope that those help you out.

 Regards,
 Jun


 2014-11-07 9:47 GMT+09:00 Dave Reed infin...@gmail.com javascript::

 I have a document with a field message, that contains the following 
 text (truncated):

 Welcome to test.com!

 The message field is mapped to have an analyzer that breaks that string 
 into the following tokens:

 welcome
 to
 test
 com

 But, when I search with a query like this:

 {
   "query": {
     "query_string": {
       "query": "id:3955974 AND message:welcome-doesnotmatchanything"
     }
   }
 }



 To my surprise, it finds the document (3955974 is the document id). The 
 dash and everything after it seems to be ignored, because it does not 
 matter what I put there, it will still match the document.

 I've tried escaping it:

 {
   "query": {
     "query_string": {
       "query": "id:3955974 AND message:welcome\\-doesnotmatchanything"
     }
   }
 }
 (note the double escape since it has to be escaped for the JSON too)

 But that makes no difference. I still get 1 matching document. If I put 
 it in quotes it works:

 {
   "query": {
     "query_string": {
       "query": "id:3955974 AND message:\"welcome-doesnotmatchanything\""
     }
   }
 }

 It works, meaning it matches 0 documents, since that document does not 
 contain the doesnotmatchanything token. That's great, but I don't 
 understand why the unquoted version does not work. This query is being 
 generated so I can't easily just decide to start quoting it, and I can't 
 always do that anyway since the user is sometimes going to use wildcards, 
 which can't be quoted if I want them to function. I was under the 
 assumption that an EscapedUnquotedString is the same as a quoted unespaced 
 string (in other words, foo:a\b\c === foo:abc, assuming all special 
 characters are escaped in the unquoted version).

 I'm only on ES 1.0.1, but I don't see anything new or changed that would 
 have impacted this behavior in later versions.

 Any insights would be helpful! :)




  -- 
 You received this message because you are subscribed to the Google Groups 
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to elasticsearc...@googlegroups.com.
 To view this discussion on the web visit 
 https://groups.google.com/d/msgid/elasticsearch/1dbfa1d5-7301-460b-ae9c-3665cfa79c96%40googlegroups.com
  
 https://groups.google.com/d/msgid/elasticsearch/1dbfa1d5-7301-460b-ae9c-3665cfa79c96%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.




 -- 
 ---
 Jun Ohtani
 blog : http://blog.johtani.info
  

Re: Case sensitive/insensitive search combination in phrase/proximity query

2014-11-10 Thread Nikolas Everett
I don't believe there is a way to do that now.
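For reference, the dual-analyzer setup described below can be sketched as a multi-field mapping like this (the field and analyzer names are illustrative, not from the original post):

"Field": {
  "type": "string",
  "fields": {
    "lowercase": { "type": "string", "analyzer": "my_lowercased_analyzer" },
    "sensitive": { "type": "string", "analyzer": "whitespace" }
  }
}

Mixing sensitivity *within* one phrase would require combining both token streams in a single clause, and span_near can't combine clauses across different fields, which matches the error the poster saw.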

On Mon, Nov 10, 2014 at 12:22 PM, Zdeněk zdenek.s...@gmail.com wrote:

 Hi,
 is there any way to search part of a phrase as case-sensitive and part
 as case-insensitive?

 The only solution I found for case sensitive/insensitive querying is to
 have multiple analyzers applied to one field (one analyzer with lowercase
 token filter and one without)

 With this solution I can search in the following way:

 Field.lowercase: "My Phrase"

 or

 Field.sensitive: "My Phrase"

 *But what to do if I would like to search "My" as case-sensitive and
 "Phrase" as case-insensitive?*

 I found the *span_near* query, but the error message says that *Clauses must
 have the same field*.

 Thanks,
 Zdenek

 --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/de3a6085-5ecb-4486-a3aa-1a7162aaeed4%40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/de3a6085-5ecb-4486-a3aa-1a7162aaeed4%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.


-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAPmjWd1eHnEPq1oV%2BCBPS0i-aKXyJVmLLawtqDz2V%3DO13t6LKA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: accidently ran another instance of elasticsearch on a few nodes

2014-11-10 Thread Johan Öhr
Thank you

I fixed the problem by taking down a node, deleting the wrong directory, and 
starting the node again. It worked fine for the first two nodes, but not the third…

See 
https://groups.google.com/forum/m/?utm_medium=emailutm_source=footer#!topic/elasticsearch/7LZVzQkcAtA



does there exists an exists query

2014-11-10 Thread Volker
I would like to know whether there is an exists query in ES.

I know that there is an exists filter, but I would like to have an exists 
query: documents where a field exists should be rated higher than the ones 
where the field does not exist, but if no matching document has the field, 
the query should still return the other documents. This would not work with 
the exists filter, as far as I know. 

I know that I could index an additional field with the value (e.g.) true 
when the field exists, but I would rather not have this additional data in 
the index.

So, what is the best solution for this use case?

thanks in advance!



Re: does there exists an exists query

2014-11-10 Thread Ivan Brusic
Off the top of my head, the easiest option would be to use a constant-score
query: wrap the original query and provide a boost to documents that
satisfy your exists filter.
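A sketch of that idea in the 1.x query DSL (the field name is a placeholder; untested):

{
  "query": {
    "bool": {
      "must": [ { "match_all": {} } ],
      "should": [
        {
          "constant_score": {
            "filter": { "exists": { "field": "myfield" } },
            "boost": 2
          }
        }
      ]
    }
  }
}

Documents that have myfield pick up the extra boost; documents without it still match through the must clause, so nothing is filtered out.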

Cheers,

Ivan

On Mon, Nov 10, 2014 at 12:54 PM, Volker s...@klest.de wrote:

 I would like to know whether there is an exists query in ES.

 I know that there is an exists filter, but I would like to have an exists
 query. So documents, where a field exists, should be rated higher than the
 ones, where the field does not exists. But if there is no document in that
 query, where the field exists, it should still return the other documents.
 This would not work with the exists filter, as far as I know.

 I know, that I could index an additional field, with the value (e.g.)
 true, when the field exists. But I would rather not have this additional
 data in the index.

 So, what is the best solution for this use case?

 thanks in advance!





Yet another invalid internal transport message format error

2014-11-10 Thread Jack Park
The console trace snippet is below.

The bug started for reasons I do not understand.
Facts:
There is an instance of ES on 9300 on another box.
This instance was set to 9250 and was running fine until I attempted to
start another localhost on this box at 9350 and noticed that they (9250 and
9350) wanted to play nice with each other, the upshot being that I couldn't
boot into 9350 with any client.

So, 9400 (was 9250) is running alone now, and still cannot get it to boot
properly. I am booting from a NodeJS client es.js, and it has been running
fine. In fact, the other ES instance on 9300 is driving an es.js online
client.

A socket closes somewhere early in testing for index existence, and all the
options being sent are the same as before.

What am I missing?
Many thanks in advance



[2014-11-10 10:38:00,085][WARN ][transport.netty  ] [Boomslang]
exception caught on transport layer [[id: 0xe12df19f, /127.0.0.1:54752 =
/127.0.0.1:9400]], closing connection
java.io.StreamCorruptedException: invalid internal transport message format
at
org.elasticsearch.transport.netty.SizeHeaderFrameDecoder.decode(SizeH
eaderFrameDecoder.java:46)
at
org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callD
ecode(FrameDecoder.java:425)



Re: Elasticsearch rolling restart problem

2014-11-10 Thread lagarutte via elasticsearch
OK, thanks for your explanation.
It's a major concern: ELS is scalable, and when a node goes down we have a 
rebalancing process which can take a lot of time.
I find it strange that this point has not been addressed long ago.

I think with a big cluster (100 nodes) the cluster would be permanently 
rebalancing (consuming network and performance), as nodes crash frequently.

Is it the same if I put the index in read-only mode?



Le lundi 10 novembre 2014 17:58:19 UTC+1, Nikolas Everett a écrit :

 You've followed the right procedure.  The problem is that Elasticsearch 
 doesn't always restore the shards back on the node that they came from.  If 
 the restarted shard and the current master shard have diverged at all it'll 
 have to sync files _somewhere_ to make sure that the restarted shard gets 
 all the changes.  Since shards diverge all the time even if there aren't 
 updates while the node is down you can expect this.

 Speeding this process up has been an open issue for many many months.

 Nik

 On Mon, Nov 10, 2014 at 11:49 AM, joerg...@gmail.com wrote:

 Reallocation to all nodes is the expected behavior.

 Jörg

 On Mon, Nov 10, 2014 at 3:55 PM, lagarutte via elasticsearch 
 elasti...@googlegroups.com wrote:

 Hi,
 I have one ELS 1.1.2 cluster with 7 nodes and
 800GB of data.

 When I shut down a node for various reasons, ELS automatically rebalances 
 the missing shards onto the other nodes.

 To prevent this, I tried this (specified in the official doc):
 "transient": {
 "cluster.routing.allocation.enable": "none" }

 and then I issue a node shutdown.

 Effectively, the relevant shards are now unassigned and ELS doesn't try to 
 reallocate them.

 But when I restart the node, they still remain unassigned.
 And then when I set back:
 "transient": {
 "cluster.routing.allocation.enable": "all" }

 => ELS reallocates the unassigned shards to ALL nodes instead of the 
 restarted node.

 What's wrong? 
 What's the correct procedure?

 regards
 jean







Re: Elasticsearch rolling restart problem

2014-11-10 Thread Nikolas Everett
I'm not sure if putting the cluster in readonly mode will help.  I can't do
that with my system so I can't test it.

I'd be _much_ happier if it only took a minute or two to perform a restart
on each node rather than the hours it can take.
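For reference, the disable/restart/re-enable sequence discussed in this thread looks roughly like this (localhost is assumed; a sketch, not a full runbook):

curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "none" }
}'
# ... restart the node and wait for it to rejoin ...
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'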

Nik

On Mon, Nov 10, 2014 at 2:02 PM, lagarutte via elasticsearch 
elasticsearch@googlegroups.com wrote:





Re: Nodes not joining after 1.4.0 upgrade

2014-11-10 Thread Boaz Leskes
Hi Vincent,

You should be able to do a rolling upgrade to version 1.4 from 1.3.4. 
Can you say more about the issues you were seeing? Any errors in the logs?




Cheers,

Boaz


—
Sent from Mailbox

On Mon, Nov 10, 2014 at 4:05 PM, Valentin plet...@gmail.com wrote:

 I had similar issues when upgrading from 1.3.4 to 1.4. 
 From my elasticsearch.yml:
 discovery.zen.ping.multicast.enabled: false
 discovery.zen.ping.unicast.hosts:.
 I could get it up and running after restarting the whole cluster (which was 
 bad, since I'm using it for realtime logging). 
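 For reference, a typical unicast setup in elasticsearch.yml looks roughly 
 like this (the host list is illustrative, not from the original post):

 discovery.zen.ping.multicast.enabled: false
 discovery.zen.ping.unicast.hosts: ["master1:9300", "master2:9300", "master3:9300"]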
 On Monday, November 10, 2014 1:34:12 PM UTC+1, Boaz Leskes wrote:

 Hi,

 The logs you mentioned indicate that the nodes try to join the cluster but 
 it takes too long for a complete verification cycle (connect back to node 
 and publish cluster state to it) takes too long. It seems there is 
 something going on your masters.

 Can you check the logs over there? Also are you using multicast or unicast 
 discovery?

 On Sunday, November 9, 2014 8:36:06 AM UTC+1, Janet Sullivan wrote:

  More hours of working – even when I get a 1.4.0 cluster up, masters 
 wouldn’t fail over – when I took master1 down, neither master2 nor master3 
 would promote themselves. In 1.4.0-beta it fails over quickly.

  
  
 *From:* elasti...@googlegroups.com [mailto:elasti...@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 11:11 PM
 *To:* elasti...@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
  
  

 OK, it also happens to some degree with 1.4.0-beta, although overall it’s 
 much better on beta.  I wasn’t able to get my 12 node cluster up on 1.4.0 
 after several hours of fiddling, but 1.4.0-beta did come up.

  
  
 *From:* elasti...@googlegroups.com [mailto:elasti...@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 9:26 PM
 *To:* elasti...@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
  
  

 But it DOES happen with 1.3.5.   Hmmm….

  
  
 *From:* elasti...@googlegroups.com [mailto:elasti...@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 9:24 PM
 *To:* elasti...@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
  
  

 Note:  This issue doesn’t happen with 1.4.0-beta1

  
  
 *From:* elasti...@googlegroups.com [mailto:elasti...@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 8:46 PM
 *To:* elasti...@googlegroups.com
 *Subject:* Nodes not joining after 1.4.0 upgrade
  
  

 I’ve upgraded a couple of clusters to 1.4.0 from 1.3.4.  On both of them, 
 I had nodes that spewed the following, and were slow to join, if they 
 joined at all:

  

 [2014-11-09 04:33:45,995][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:34:49,776][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:35:53,571][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:36:57,353][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:38:01,120][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:39:04,885][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:40:08,657][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

  

 I’m able to telnet to port 9300 

How do you link Indices to Kibana from Elasticsearch?

2014-11-10 Thread Petey D
Hey All,

This might be a simple question, but it is really stumping me. How do 
you link indices to your Kibana dashboard, and if you have many indices, how 
do you search all of them or each one individually? I have all services running 
at the latest versions, but Kibana keeps saying there are no indices.


Regards,

Pete




Delete all of type across all indices

2014-11-10 Thread Michael Irwin
I'm using logstash to store logs. I'd like to delete all logstash entries 
of type 'error'. I checked out the Delete by Query API, but I can't seem to 
figure out how to do what I want in this situation. Help!
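A sketch of what this could look like with the 1.x delete-by-query API (the index pattern, type name, and host are assumptions; untested):

curl -XDELETE 'localhost:9200/logstash-*/error/_query' -d '{
  "query": { "match_all": {} }
}'

Putting the type in the URL path restricts the delete to documents of that type across every index matching logstash-*.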



Re: ES cluster become red

2014-11-10 Thread Brian
Moshe,

Exactly!

What you might wish to do is add a Wait for Yellow query before doing any 
queries, or a Wait for Green request before doing any updates. That way, 
you can deterministically wait for the appropriate status before continuing.

For example: Loop on the following until it succeeds, some timeout expires 
after repeatedly catching NoNodeAvailableException, or else some other 
serious exception is thrown:

client.admin().cluster().prepareHealth()
.setTimeout(timeout)
.setWaitForYellowStatus()
.execute().actionGet();
Hope this helps!

Brian

On Sunday, November 9, 2014 8:22:58 AM UTC-5, Moshe Recanati wrote:

 Update:
 After a couple of seconds or minutes, the cluster became green.
 I assume this is after ES stabilized with the data.






Re: Aggregating on nested fields

2014-11-10 Thread Ivan Brusic
Reproducible gist: https://gist.github.com/brusic/81e1552ffd49a1f6a7aa

Surely I cannot be the only one to have encountered this issue.
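One variant worth trying (a sketch; I have not verified it against the gist above) is to repeat the term filter *inside* the nested aggregation, so that the terms aggregation only sees nested documents that match:

"aggs": {
  "nstd_agg": {
    "nested": { "path": "nstd" },
    "aggs": {
      "matching": {
        "filter": { "term": { "nstd.ID": 1 } },
        "aggs": {
          "ids": { "terms": { "field": "nstd.ID" } }
        }
      }
    }
  }
}

The key difference from a top-level filter aggregation is that here the filter runs in the nested document context, after the nested step.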

-- 
Ivan

On Mon, Nov 10, 2014 at 12:53 PM, Ivan Brusic i...@brusic.com wrote:

 Is it possible to aggregate only on the nested documents that are returned
 by a (filtered) query? From what I can tell, when using a nested aggregation,
 it will function on all nested documents of the parent documents whose
 nested document satisfy a nested query/filter. Did that make sense? :) Is
 this the same limitation as issue #3022? I know that number by heart by now.

 For example, I have 3 simple documents, where the nstd object is defined
 as nested:

 {
   "name": "foo",
   "nstd": [ { "ID": 1 } ]
 }

 {
   "name": "bar",
   "nstd": [ { "ID": 2 } ]
 }

 {
   "name": "baz",
   "nstd": [ { "ID": 1 }, { "ID": 2 } ]
 }

 I then execute a simple nested query:

 "query": {
   "filtered": {
     "query": { "match_all": {} },
     "filter": {
       "nested": {
         "path": "nstd",
         "filter": { "term": { "nstd.ID": 1 } }
       }
     }
   }
 }

 If I aggregate on the nstd.ID field, I will always get back results for
 nested documents that were excluded by the filter:

 "buckets": [
   { "key": 1, "doc_count": 2 },
   { "key": 2, "doc_count": 1 }
 ]

 Since the ID:2 field does not match the filter, it should not be returned
 with the aggregation. I have tried using a filter aggregation with the same
 filter used in the filtered query, but I receive the same results.

 Cheers,

 Ivan




Re: How do you link Indices to Kibana from Elasticsearch?

2014-11-10 Thread RaviKumar Potnuru
Hi Pete,
 
Please take a look at this article on using Kibana for the first time:
http://www.elasticsearch.org/guide/en/kibana/current/using-kibana-for-the-first-time.html
 
and this section on setting up indices:
http://www.elasticsearch.org/guide/en/kibana/current/using-kibana-for-the-first-time.html#configuring-another-index
 
If you have multiple indices, mention them all with a comma separator in 
the Index Settings.
 
 
Thanks and Regards,
Ravi.
On Monday, November 10, 2014 12:19:38 PM UTC-7, Petey D wrote:

 Hey All,

 This might be a simple question but it is just really stumping me. How do 
 you link Indices to your Kibana dashboard and if you have many indices how 
 do you search all or each individually? I have all services running at the 
 latest versions, but Kibana keeps saying there are no indices.


 Regards,

 Pete






Decoupling Data and indexing

2014-11-10 Thread Amish Asthana
Hi
Is there a way we can decouple data and the associated mapping/indexing in 
Elasticsearch itself?
Basically, store the raw data as the source (JSON or some other format), and 
apply various mappings/indexes on top of it.
I understand that one can use an outside database or file system, but can 
it be natively achieved in ES itself?

Basically, we are trying to see how our ES instance will behave when we have 
to change the mapping of existing and continuously incoming data without any 
downtime for the end user.
We have an added wrinkle that our indexing has to be edit-aware for 
versioning purposes, unlike ES, where each edit is a new record.
regards and thanks
amish



Re: Query_string search containing a dash has unexpected results

2014-11-10 Thread Amish Asthana
Can you run the validate query and post the output? That would be helpful.
amish

On Thursday, November 6, 2014 4:47:12 PM UTC-8, Dave Reed wrote:

 I have a document with a field message, that contains the following text 
 (truncated):

 Welcome to test.com!

 The assertion field is mapped to have an analyzer that breaks that string 
 into the following tokens:

 welcome
 to
 test
 com

 But, when I search with a query like this:

 {
   query: {

 query_string: {
   query: id:3955974 AND message:welcome-doesnotmatchanything
 }
   }
 }



 To my surprise, it finds the document (3955974 is the document id). The 
 dash and everything after it seems to be ignored, because it does not 
 matter what I put there, it will still match the document.

 I've tried escaping it:

 {
   query: {
 query_string: {
   query: id:3955974 AND message:welcome\\-doesnotmatchanything
 }
   }
 }
 (note the double escape since it has to be escaped for the JSON too)

 But that makes no difference. I still get 1 matching document. If I put it 
 in quotes it works:

 {
   query: {
 query_string: {
   query: id:3955974 AND message:\welcome-doesnotmatchanything\
 }
   }
 }

 It works, meaning it matches 0 documents, since that document does not 
 contain the doesnotmatchanything token. That's great, but I don't 
 understand why the unquoted version does not work. This query is being 
 generated so I can't easily just decide to start quoting it, and I can't 
 always do that anyway since the user is sometimes going to use wildcards, 
 which can't be quoted if I want them to function. I was under the 
 assumption that an EscapedUnquotedString is the same as a quoted unespaced 
 string (in other words, foo:a\b\c === foo:abc, assuming all special 
 characters are escaped in the unquoted version).

 I'm only on ES 1.01, but I don't see anything new or changes that would 
 have impacted this behavior in later versions.

 Any insights would be helpful! :)






-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/7790c6fc-5578-4434-9bd2-fd846e59a997%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Query_string search containing a dash has unexpected results

2014-11-10 Thread Dave Reed
Yes of course :) Here we go:

{
  "valid": true,
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "explanations": [
    {
      "index": "index_v1",
      "valid": true,
      "explanation": "message:welcome message:doesnotmatch"
    }
  ]
}

It pasted a little weird but that's it.



On Monday, November 10, 2014 2:25:33 PM UTC-8, Amish Asthana wrote:

 Can you run the validate query output. That will be helpful.
 amish






Re: Nodes not joining after 1.4.0 upgrade

2014-11-10 Thread Janet Sullivan
I’m also using unicast discovery, as multicast doesn’t work on Azure.  I ended 
up in a bad position - 1.4 wouldn’t come up all the way, but 1.3.4 wouldn’t 
accept shards with the new lucene version.  I ended up rebuilding the cluster, 
and I’m going to have to backfill from text logs.  A fresh 1.4 cluster works 
fine, but after two days I couldn’t get the upgraded cluster to work.  I’m glad 
to hear someone else had a similar issue.

On Nov 10, 2014, at 7:05 AM, Valentin plet...@gmail.com wrote:

I had similar issues when upgrading from 1.3.4 to 1.4
from my elasticsearch.yml

discovery.zen.ping.multicast.enabled: false


discovery.zen.ping.unicast.hosts:.

I could get it up and running after restarting the whole cluster (which was bad 
since I'm using it for realtime logging).

On Monday, November 10, 2014 1:34:12 PM UTC+1, Boaz Leskes wrote:
Hi,

The logs you mentioned indicate that the nodes try to join the cluster, but a 
complete verification cycle (connecting back to the node and publishing the 
cluster state to it) takes too long. It seems there is something going on with 
your masters.

Can you check the logs over there? Also are you using multicast or unicast 
discovery?

On Sunday, November 9, 2014 8:36:06 AM UTC+1, Janet Sullivan wrote:
More hours of work – even when I got a 1.4.0 cluster up, masters wouldn't 
fail over – when I took master1 down, neither master2 nor master3 would 
promote itself.   In 1.4.0-beta it fails over quickly.

From: elasti...@googlegroups.com 
[mailto:elasti...@googlegroups.com] On Behalf Of Janet Sullivan
Sent: Saturday, November 08, 2014 11:11 PM
To: elasti...@googlegroups.com
Subject: RE: Nodes not joining after 1.4.0 upgrade

OK, it also happens to some degree with 1.4.0-beta, although overall it’s much 
better on beta.  I wasn’t able to get my 12 node cluster up on 1.4.0 after 
several hours of fiddling, but 1.4.0-beta did come up.

From: elasti...@googlegroups.com 
[mailto:ela...@googlegroups.com] On Behalf Of Janet Sullivan
Sent: Saturday, November 08, 2014 9:26 PM
To: elasti...@googlegroups.com
Subject: RE: Nodes not joining after 1.4.0 upgrade

But it DOES happen with 1.3.5.   Hmmm….

From: elasti...@googlegroups.com 
[mailto:ela...@googlegroups.com] On Behalf Of Janet Sullivan
Sent: Saturday, November 08, 2014 9:24 PM
To: elasti...@googlegroups.com
Subject: RE: Nodes not joining after 1.4.0 upgrade

Note:  This issue doesn’t happen with 1.4.0-beta1

From: elasti...@googlegroups.com 
[mailto:ela...@googlegroups.com] On Behalf Of Janet Sullivan
Sent: Saturday, November 08, 2014 8:46 PM
To: elasti...@googlegroups.com
Subject: Nodes not joining after 1.4.0 upgrade

I’ve upgraded a couple of clusters to 1.4.0 from 1.3.4.  On both of them, I had 
nodes that spewed the following, and were slow to join, if they joined at all:

[2014-11-09 04:33:45,995][INFO ][discovery.zen] [gnslogstash3] 
failed to send join request to master 
[[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
task.]]
[2014-11-09 04:34:49,776][INFO ][discovery.zen] [gnslogstash3] 
failed to send join request to master 
[[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
task.]]
[2014-11-09 04:35:53,571][INFO ][discovery.zen] [gnslogstash3] 
failed to send join request to master 
[[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
task.]]
[2014-11-09 04:36:57,353][INFO ][discovery.zen] [gnslogstash3] 
failed to send join request to master 
[[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
task.]]
[2014-11-09 04:38:01,120][INFO ][discovery.zen] [gnslogstash3] 
failed to send join request to master 
[[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
task.]]
[2014-11-09 04:39:04,885][INFO ][discovery.zen] [gnslogstash3] 
failed to send join request to master 
[[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
task.]]
[2014-11-09 04:40:08,657][INFO ][discovery.zen] [gnslogstash3] 
failed to send join request to master 
[[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting 

Re: Query_string search containing a dash has unexpected results

2014-11-10 Thread Dave Reed
Also interesting... if I run the query with explain=true, I see information 
in the details about the "welcome" token, but there's no mention at all 
of the "doesnotmatch" token. I guess it wouldn't mention it, though, 
since if it did, the document shouldn't match in the first place.

On Monday, November 10, 2014 2:45:05 PM UTC-8, Dave Reed wrote:

 Yes of course :) Here we go:

 {
   "valid": true,
   "_shards": {
     "total": 1,
     "successful": 1,
     "failed": 0
   },
   "explanations": [
     {
       "index": "index_v1",
       "valid": true,
       "explanation": "message:welcome message:doesnotmatch"
     }
   ]
 }

 It pasted a little weird but that's it.



 On Monday, November 10, 2014 2:25:33 PM UTC-8, Amish Asthana wrote:

 Can you run the validate query output. That will be helpful.
 amish






Re: Query_string search containing a dash has unexpected results

2014-11-10 Thread Amish Asthana
I created a test index using your pattern and I am seeing the appropriate 
behaviour.
I am assuming you are using the same analyzer for search/query as well, and 
ensuring that your DEFAULT OPERATOR is AND.
Note that the analyzer will break "welcome-doesnotmatchanything" into two 
tokens joined with OR, so your document will match unless you use AND.
amish

On Monday, November 10, 2014 2:48:06 PM UTC-8, Dave Reed wrote:

 Also interesting... if I run the query with explain=true, I see 
 information in the details about the welcome token, but there's no 
 mention at all about the doesnotmatch token. I guess it wouldn't mention 
 it though, since if it did, the document shouldn't match in the first place.

 On Monday, November 10, 2014 2:45:05 PM UTC-8, Dave Reed wrote:

 Yes of course :) Here we go:

 {
   "valid": true,
   "_shards": {
     "total": 1,
     "successful": 1,
     "failed": 0
   },
   "explanations": [
     {
       "index": "index_v1",
       "valid": true,
       "explanation": "message:welcome message:doesnotmatch"
     }
   ]
 }

 It pasted a little weird but that's it.



 On Monday, November 10, 2014 2:25:33 PM UTC-8, Amish Asthana wrote:

 Can you run the validate query output. That will be helpful.
 amish






Re: Query_string search containing a dash has unexpected results

2014-11-10 Thread Dave Reed
My default operator shouldn't matter if I understand it correctly, because 
I'm specifying the operator explicitly. Also, I can reproduce this behavior 
using a single search term, so there's no operator to speak of. Unless 
you're saying that the default operator applies to a single-term query if 
it is broken into tokens?
 

 Note that the analyzer will break "welcome-doesnotmatchanything" into 
 two tokens joined with OR, and your document will match unless you use AND


This concerns me... my search looks like:

message:welcome-doesnotmatchanything

I cannot break that into an AND. The entire thing is a value provided by 
the end user. You're saying I should, on the app side, break the string they 
entered into tokens and join them with ANDs? That doesn't seem viable...

Let me back up and say what I'm expecting the user to be able to do. 
There's a single text box where they can enter a search query, with the 
following rules:
1. The user may use a trailing wildcard, e.g. "foo*"
2. The user may enter multiple terms separated by a space. Only documents 
containing all of the terms will match.
3. The user might enter special characters, such as in "battle-axe", simply 
because that is what they think they should search for, which should match 
documents containing "battle" and "axe" (the same as a search for "battle 
axe").

To that end, I am taking their search string and forming a search like this:

message:searchterm AND...

Where the string is split on spaces and joined with AND clauses. For 
each individual part of the search phrase, I take care of escaping special 
characters (except * since I am allowing them to use wildcards). For 
example, if they entered "foo bar!", I would generate this query:

message:foo AND message:bar\!

The problem is they are entering "battle-axe", causing me to generate this:

message:battle\-axe

But that ends up being the same as:

(message:battle OR message:axe)

I guess that is what I was not expecting. Because of this behavior, I have 
to know from my app point of view what tokens I should be splitting the 
original string on, so that I can join them back together with ANDs. But 
that means basically reimplementing the tokenizer on my end, does it not? 
There must be a better way? Like specifying I want those terms to be joined 
with ANDs instead?
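The app-side construction described above can be sketched roughly like this (helper names are hypothetical; note it still leaves intra-term splits such as the dash to the analyzer, which is exactly the problem under discussion):

```python
def escape_term(term):
    # Escape Lucene specials except '*', so trailing wildcards keep working.
    specials = '+-&|!(){}[]^"~?:\\/'
    return ''.join('\\' + c if c in specials else c for c in term)

def build_query(field, user_input):
    # Split user input on whitespace and require every term via explicit AND.
    parts = ['%s:%s' % (field, escape_term(t)) for t in user_input.split()]
    return ' AND '.join(parts)

print(build_query('message', 'foo bar!'))    # message:foo AND message:bar\!
print(build_query('message', 'battle-axe'))  # message:battle\-axe
```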



Re: Query_string search containing a dash has unexpected results

2014-11-10 Thread Dave Reed
Ok... specifying default_operator: AND worked.

In that case, I'd like to say that the docs on that option are incomplete 
or confusing. They say:

"The default operator used if no explicit operator is specified. For example, 
with a default operator of OR, the query capital of Hungary is translated 
to capital OR of OR Hungary, and with a default operator of AND, the same 
query is translated to capital AND of AND Hungary. The default value is OR."

That's all well and good, but my query does not have multiple terms like 
that. I have a single term for a single field. The default operator is 
applied to the tokens of that term after they are generated by the 
analyzer. I assumed that the default operator applied at the level of the 
query being parsed and had nothing at all to do with the analyzer. 
Making that clearer could have saved me a lot of time :)
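An illustrative toy model of that behavior — this is not Elasticsearch internals, just a simulation of how the parser hands the single term to the analyzer and then joins the resulting tokens with the default operator:

```python
def matches(doc_tokens, query_term, default_operator="OR"):
    # Toy analyzer: lowercase and split on '-' and whitespace.
    query_tokens = query_term.lower().replace("-", " ").split()
    hits = [t in doc_tokens for t in query_tokens]
    return all(hits) if default_operator == "AND" else any(hits)

doc = {"welcome", "to", "test", "com"}
print(matches(doc, "welcome-doesnotmatchanything"))         # True  (OR)
print(matches(doc, "welcome-doesnotmatchanything", "AND"))  # False
```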



Re: Query_string search containing a dash has unexpected results

2014-11-10 Thread Amish Asthana
No, I am not saying that. I am saying this:

GET my_index_v1/mytype/_search
{
  "query": {
    "query_string": {
      "default_field": "name",
      "query": "welcome-doesnotmatchanything",
      "default_operator": "AND"
    }
  }
}

Here I will not get a match, as expected. If I do not specify it, then OR is 
the default operator and it will match.
amish


On Monday, November 10, 2014 4:01:14 PM UTC-8, Dave Reed wrote:

 My default operator doesn't matter if I understand it correctly, because 
 I'm specifying the operate explicitly. Also, I can reproduce this behavior 
 using a single search term, so there's no operator to speak of. Unless 
 you're  saying that the default operator applies to a single term query if 
 it is broken into tokens?
  

 Note that using the welcome-doesnotmatchanything analzyzer will break 
 into two tokens with OR and your document will match unless you use AND


 This concerns me... my search looks like:

 message:welcome-doesnotmatchanything

 I cannot break that into an AND. The entire thing is a value provided by 
 the end user. You're saying I should on the app side break the string they 
 entered into tokens and join them with ANDs? That doesn't seem viable...

 Let me back up and say what I'm expecting the user to be able to do. 
 There's a single text box where they can enter a search query, with the 
 following rules:
 1. The user may use a trailing wildcard, e.g. foo*
 2. The user may enter multiple terms separated by a space. Only documents 
 containing all of the terms will match.
 3. The user might enter special characters, such as in battle-axe, 
 simply because that is what they think they should search for, which should 
 match documents containing battle and axe (the same as a search for 
 battle axe).

 To that end, I am taking their search string and forming a search like 
 this:

 message:searchterm AND...

 Where the string is split on spaces and joined with the AND clauses. For 
 each individual part of the search phrase, I take care of escaping special 
 characters (except * since I am allowing them to use wildcards). For 
 example, if they entered foo bar!, I would generate this query:

 message:foo AND message:bar\!

 The problem is they are entering battle-axe, causing me to generate this:

 message:battle\-axe

 But that ends up being the same as:

 (message:battle OR message:axe)

 I guess that is what I was not expecting. Because of this behavior, I have 
 to know from my app point of view what tokens I should be splitting the 
 original string on, so that I can join them back together with ANDs. But 
 that means basically reimplementing the tokenizer on my end, does it not? 
 There must be a better way? Like specifying I want those terms to be joined 
 with ANDs instead?




Re: Query_string search containing a dash has unexpected results

2014-11-10 Thread Dave Reed
Yes, and this was the key, thank you so much. But see my reply above about 
the docs on that param being confusing. That was really the source of the 
problem for me.

On Monday, November 10, 2014 4:15:05 PM UTC-8, Amish Asthana wrote:

 No, I am not saying that. I am saying this:

 GET my_index_v1/mytype/_search
 {
   "query": {
     "query_string": {
       "default_field": "name",
       "query": "welcome-doesnotmatchanything",
       "default_operator": "AND"
     }
   }
 }

 Here I will not get a match, as expected. If I do not specify it, then OR 
 is the default operator and it will match.
 amish


 On Monday, November 10, 2014 4:01:14 PM UTC-8, Dave Reed wrote:

 My default operator doesn't matter if I understand it correctly, because 
 I'm specifying the operate explicitly. Also, I can reproduce this behavior 
 using a single search term, so there's no operator to speak of. Unless 
 you're  saying that the default operator applies to a single term query if 
 it is broken into tokens?
  

 Note that using the welcome-doesnotmatchanything analzyzer will break 
 into two tokens with OR and your document will match unless you use AND


 This concerns me... my search looks like:

 message:welcome-doesnotmatchanything

 I cannot break that into an AND. The entire thing is a value provided by 
 the end user. You're saying I should on the app side break the string they 
 entered into tokens and join them with ANDs? That doesn't seem viable...

 Let me back up and say what I'm expecting the user to be able to do. 
 There's a single text box where they can enter a search query, with the 
 following rules:
 1. The user may use a trailing wildcard, e.g. foo*
 2. The user may enter multiple terms separated by a space. Only documents 
 containing all of the terms will match.
 3. The user might enter special characters, such as in battle-axe, 
 simply because that is what they think they should search for, which should 
 match documents containing battle and axe (the same as a search for 
 battle axe).

 To that end, I am taking their search string and forming a search like 
 this:

 message:searchterm AND...

 Where the string is split on spaces and joined with the AND clauses. For 
 each individual part of the search phrase, I take care of escaping special 
 characters (except * since I am allowing them to use wildcards). For 
 example, if they entered foo bar!, I would generate this query:

 message:foo AND message:bar\!

 The problem is they are entering battle-axe, causing me to generate 
 this:

 message:battle\-axe

 But that ends up being the same as:

 (message:battle OR message:axe)

 I guess that is what I was not expecting. Because of this behavior, I 
 have to know from my app point of view what tokens I should be splitting 
 the original string on, so that I can join them back together with ANDs. 
 But that means basically reimplementing the tokenizer on my end, does it 
 not? There must be a better way? Like specifying I want those terms to be 
 joined with ANDs instead?





Inconsistent backup status between _all and _status view

2014-11-10 Thread caio80
Hey all,

in ES 1.3.2, on a 3-node cluster, it seems that when I get the _all list 
of snapshots (/_snapshot/my_backup/_all), they are all successful:

[...]
      "1786_v3_2014-10-23"
    ],
    "shards": {
      "failed": 0,
      "successful": 2035,
      "total": 2035
    },
    "snapshot": "backup-dev-20141110-152900",
    "start_time": "2014-11-10T23:29:43.436Z",
    "start_time_in_millis": 1415662183436,
    "state": "SUCCESS"
  }
 ]
}

However, when asking for the _status of the specific label:

r...@svc2.dev.domain.local # curl -s -XGET 
http://localhost:9200/_snapshot/my_backup/backup-dev-20141110-152900/_status
{"error":"RemoteTransportException[[Hannibal 
King][inet[/192.168.221.33:9300]][cluster/snapshot/status]]; nested: 
IndexShardRestoreFailedException[[3_v3_2014-11-03][0] failed to read shard 
snapshot file]; nested: 
FileNotFoundException[/var/vmware/backup/indices/3_v3_2014-11-03/0/snapshot-backup-dev-20141110-152900 
(No such file or directory)]; ","status":500}

Is this expected (maybe _status digs deeper and checks for file presence 
before returning success), or should I open a bug? Thanks in advance...



Re: how to search non indexed field in elasticsearch

2014-11-10 Thread ramky
Thanks Nikolas. Filtering on non-indexed data is working now using _source.

Regards
Rama Krishna P
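For anyone landing here later, the working filter presumably ends up looking something like this (a sketch only, assuming ES 1.x default scripting; the "service" field name is taken from the thread):

```json
{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "script": {
          "script": "_source['service'] == 'http'"
        }
      }
    }
  }
}
```

As Nikolas warned, this reads _source for every candidate document, so it is far slower than filtering on an indexed field.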

On Monday, November 10, 2014 7:08:02 PM UTC+5:30, Nikolas Everett wrote:

 Use _source['service'] instead. Much slower, but it doesn't need the field 
 indexed. 
 On Nov 10, 2014 1:01 AM, ramky panguluri@gmail.com wrote:

 Thanks Nikolas.
 I tried the query but it failed to search on non-indexed field.

 Query i used is
 {
   "filter": {
     "script": {
       "script": "doc['service'].value == 'http'"
     }
   }
 }
 service is a non-indexed field.

 Exception after execution is
 {[x3a9BIGLRwOdhwpsaUZbrw][siem0511][0]: 
 QueryPhaseExecutionException[[siem0511][0]: 
 query[ConstantScore(cache(_type:siem))],from[0],size[10]: Query Failed 
 [Failed to execute main query]]; nested: CompileException[[Error: No field 
 found for [service] in mapping with types [siem]]\n[Near : {... 
 doc['service'].value == http }]\n ^\n[Line: 1, Column: 1]]; 
 nested: ElasticsearchIllegalArgumentException[No field found for [service] 
 in mapping with types [siem]]; }

 Please help.

 Thanks in advance.

 Regards
 Ramky




 On Friday, November 7, 2014 5:49:04 PM UTC+5:30, Nikolas Everett wrote:

 The first example on 
 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-script-filter.html#query-dsl-script-filter 
 should just work if you replace == with .equals

 On Fri, Nov 7, 2014 at 2:11 AM, ramky panguluri@gmail.com wrote:

 Thanks Nikolas Everett for your quick reply.

 Can you please provide me example to execute the same. I tried multiple 
 times but unable to execute.

 Thanks in advance

 On Thursday, November 6, 2014 9:44:55 PM UTC+5:30, Nikolas Everett 
 wrote:

 You can totally use a script filter checking the field against 
 _source.  Its super duper duper slow but you can do it if you need it 
 rarely.

 On Thu, Nov 6, 2014 at 11:13 AM, Ivan Brusic iv...@brusic.com wrote:

 You cannot search/filter on a non-indexed field.

 -- 
 Ivan

 On Wed, Nov 5, 2014 at 11:45 PM, ramakrishna panguluri 
 panguluri@gmail.com wrote:

 I have 10 fields inserted into elasticsearch out of which 5 fields 
 are indexed.
 Is it possible to search on non indexed field?

 Thanks in advance.


 Regards
 Rama Krishna P





NRT Get API - VS - IDS Filters

2014-11-10 Thread michael
Hey guys,

I'm creating a custom Elasticsearch plugin that would take in a Bitset 
containing document IDs, and returns a filtered search result based on the 
IDs given. Ideally, this would utilize the same near-realtime feature as 
the GET by ID request does.

I've come to conclude (via some local testing) that the IDs query/filter 
does not support realtime... Is there a way to make this custom filter work 
with NRT? If not, what different approach should I be considering?

Here's the filter plugin so 
far: https://gist.github.com/schonfeld/5a487b34e786f0c52244


Thanks,
- Michael



Integrating elasticsearch and kibana with PostgreSQL

2014-11-10 Thread Rahul Khengare
Hi All,

I have a source input which stores data in a PostgreSQL database in JSON 
format. I want to perform analytical operations on the data present in 
PostgreSQL and visualize the results of those operations.

I have found one open source analytical tool called Kibana (integrated with 
elasticsearch) which will visualize the data.
http://www.elasticsearch.org/overview/kibana/

We can feed data from PostgreSQL into Elasticsearch through the use of the 
JDBC river:
https://github.com/jprante/elasticsearch-river-jdbc/wiki/Step-by-step-recipe-for-setting-up-the-river-with-PostgreSQL
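For example, a river registration along these lines (a configuration sketch only — database name, credentials, SQL, and index names are placeholders, and the exact parameter names vary across plugin versions; the wiki page above is the authoritative recipe):

```
PUT /_river/my_pg_river/_meta
{
  "type": "jdbc",
  "jdbc": {
    "url": "jdbc:postgresql://localhost:5432/mydb",
    "user": "postgres",
    "password": "...",
    "sql": "select * from mytable",
    "index": "myindex",
    "type": "mytype"
  }
}
```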

I am trying to integrate PostgreSQL, elasticsearch and Kibana to perform 
analytical operations and visualize data.

Has anybody tried something like this? Your views and suggestions would be 
useful. Thanks in advance!


Regards,
Rahul Khengare



Is it possible to send ES cluster logs over kafka pipeline ?

2014-11-10 Thread Darsh
Is it possible to send ES cluster logs over a Kafka pipeline? I want to send 
all ES cluster logs (errors, exceptions, connection issues, etc. in the ES 
cluster itself) to our HDFS cluster for analysis. We already have a Kafka 
pipeline set up for our apps to send their logs. I was wondering if ES 
allows sending its logs over Kafka?



Re: Is it possible to send ES cluster logs over kafka pipeline ?

2014-11-10 Thread Mark Walkom
You can use Logstash to do this; there is a community Kafka plugin for it as
well.
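A minimal Logstash sketch of that idea (paths and option names are assumptions that vary by Logstash and plugin version — the file input tails the ES server logs and the community Kafka output forwards them):

```
input {
  file {
    # Assumed default ES log location; adjust to your path.logs setting.
    path => "/var/log/elasticsearch/*.log"
  }
}
output {
  kafka {
    # Option names depend on the kafka output plugin version.
    broker_list => "kafka1:9092"
    topic_id => "es-logs"
  }
}
```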

On 11 November 2014 16:59, Darsh darsh.pa...@gmail.com wrote:

 Is it possible to send ES cluster logs over kafka pipeline ? I want to
 send all ES cluster logs(Errors,Exceptions,connection issue etc in ES
 cluster itself) to our HDFS cluster for analysing. We already have a kafka
 pipeline setup for our apps to send their logs. I was wondering if ES
 allows to send its logs over Kafka?

 --
 You received this message because you are subscribed to the Google Groups
 elasticsearch group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/960c671c-b3ef-4f90-86e6-cd3823153e55%40googlegroups.com
 https://groups.google.com/d/msgid/elasticsearch/960c671c-b3ef-4f90-86e6-cd3823153e55%40googlegroups.com?utm_medium=emailutm_source=footer
 .
 For more options, visit https://groups.google.com/d/optout.




Disabling _source and using stored fields

2014-11-10 Thread Peter van der Weerd
Hi,

I have a use case of very large documents (size > 10MB) where the metadata 
(title, author, etc.) is small.
I thought it could be beneficial to separate the body from the metadata and 
use different fields for them, because in a result list you typically only 
need the metadata.
So, I disabled the _source field and stored both the body and the metadata 
as fields.

However, while storing and indexing works as expected, I'm not able to get 
my data back.
Complex fields (i.e. objects) cannot be retrieved and return an exception: 
ElasticsearchIllegalArgumentException[field [s3] isn't a leaf field] 
Is this approach unsupported? Am I doing something wrong?
A small example:

curl -XPOST 'localhost:9200/.test?pretty=true' -d '{
  "mappings": {
    "test": {
      "_source": { "enabled": false },
      "properties": {
        "s1": { "type": "integer", "index": "no", "store": "yes" },
        "s2": { "type": "integer", "index": "no", "store": "yes" },
        "s3": { "type": "object",  "index": "no", "store": "yes" }
      }
    }
  }
}'

curl -XPOST 'localhost:9200/.test/test/1' -d '{
  "s1": 123,
  "s2": [1,2,3,4,5],
  "s3": [{"x":1, "y":2, "z":3}]
}'

sleep 1

# will succeed
curl -XPOST 'localhost:9200/.test/_search?pretty=true' -d '{
  "fields": ["s1", "s2"]
}'

# will fail with ElasticsearchIllegalArgumentException[field [s3] isn't a
# leaf field]
curl -XPOST 'localhost:9200/.test/_search?pretty=true' -d '{
  "fields": ["s3"]
}'


I am aware that I could use the _source field and just exclude the body 
from it. But I expect that fetching the complete _source is costly.
I would like to measure the impact of the 2 solutions.
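
Roughly, the two variants I plan to compare would look like this (untested
sketches; "body" is a stand-in name for the large field, and the
stored-fields variant assumes the object's leaf sub-fields are themselves
mapped with store: yes):

```shell
# Variant A: keep _source enabled, but exclude the large body field on fetch
# (source filtering; "body" is a hypothetical field name here).
curl -XPOST 'localhost:9200/.test/_search?pretty=true' -d '{
  "_source": { "exclude": ["body"] },
  "query": { "match_all": {} }
}'

# Variant B: _source disabled, request only stored leaf fields —
# i.e. leaf sub-fields such as s3.x rather than the object field s3 itself.
curl -XPOST 'localhost:9200/.test/_search?pretty=true' -d '{
  "fields": ["s1", "s2", "s3.x"]
}'
```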


Thanks,
Peter

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/525eb286-8a20-43cb-ba79-aab07e58b4a4%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Nodes not joining after 1.4.0 upgrade

2014-11-10 Thread Boaz Leskes
Hi Janet,

Was there anything in the master logs,  i.e., gnslogstash10 in your 
example? 

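Also, for reference, a dedicated-master unicast setup usually looks roughly 
like this in elasticsearch.yml (host names are placeholders; 
minimum_master_nodes should be a quorum of the master-eligible nodes, i.e. 2 
when you have 3 masters):

```yaml
# Hedged sketch — adjust host names and ports to your cluster.
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["master1:9300", "master2:9300", "master3:9300"]
discovery.zen.minimum_master_nodes: 2
```

Without a correct quorum the remaining masters may refuse to elect a new one, 
which could also explain the fail-over behavior you saw.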
Cheers,
Boaz

On Monday, November 10, 2014 11:47:22 PM UTC+1, Janet Sullivan wrote:

  I’m also using unicast discovery, as multicast doesn’t work on Azure.  I 
 ended up in a bad position - 1.4 wouldn’t come up all the way, but 1.3.4 
 wouldn’t accept shards with the new lucene version.  I ended up rebuilding 
 the cluster, and I’m going to have to backfill from text logs.  A fresh 1.4 
 cluster works fine, but after two days I couldn’t get the upgraded cluster 
 to work.  I’m glad to hear someone else had a similar issue.

  On Nov 10, 2014, at 7:05 AM, Valentin plet...@gmail.com wrote:

  I had similar issues when upgrading from 1.3.4 to 1.4  
 from my elasticsearch.yml
  
 discovery.zen.ping.multicast.enabled: false

  discovery.zen.ping.unicast.hosts:.

 I could get it up and running after restarting the whole cluster (which 
 was bad since I'm using it for realtime logging). 

 On Monday, November 10, 2014 1:34:12 PM UTC+1, Boaz Leskes wrote: 

 Hi, 

  The logs you mentioned indicate that the nodes try to join the cluster, 
 but a complete verification cycle (connecting back to the node and 
 publishing the cluster state to it) takes too long. It seems there is 
 something going on with your masters.

  Can you check the logs over there? Also are you using multicast or 
 unicast discovery?

 On Sunday, November 9, 2014 8:36:06 AM UTC+1, Janet Sullivan wrote: 

  More hours of working – even when I get a 1.4.0 cluster up, masters 
 wouldn’t fail over – when I took master1 down, neither master2 or master3 
 would promote themselves.   In 1.4.0-beta it fails over quickly.
  
   
 *From:* elasti...@googlegroups.com [mailto:elasti...@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 11:11 PM
 *To:* elasti...@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
   
  
 OK, it also happens to some degree with 1.4.0-beta, although overall 
 it’s much better on beta.  I wasn’t able to get my 12 node cluster up on 
 1.4.0 after several hours of fiddling, but 1.4.0-beta did come up.
  
   
 *From:* elasti...@googlegroups.com [mailto:ela...@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 9:26 PM
 *To:* elasti...@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
   
  
 But it DOES happen with 1.3.5.   Hmmm….
  
   
 *From:* elasti...@googlegroups.com [mailto:ela...@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 9:24 PM
 *To:* elasti...@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
   
  
 Note:  This issue doesn’t happen with 1.4.0-beta1
  
   
 *From:* elasti...@googlegroups.com [mailto:ela...@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 8:46 PM
 *To:* elasti...@googlegroups.com
 *Subject:* Nodes not joining after 1.4.0 upgrade
   
  
 I’ve upgraded a couple of clusters to 1.4.0 from 1.3.4.  On both of 
 them, I had nodes that spewed the following, and were slow to join, if they 
 joined at all:
  
  
 [2014-11-09 04:33:45,995][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:34:49,776][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:35:53,571][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:36:57,353][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:38:01,120][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:39:04,885][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:40:08,657][INFO ][discovery.zen  

Re: Nodes not joining after 1.4.0 upgrade

2014-11-10 Thread Boaz Leskes
One more thing - do you use the Azure plugin for ES?

On Tuesday, November 11, 2014 7:50:47 AM UTC+1, Boaz Leskes wrote:

 Hi Janet,

 Was there anything in the master logs,  i.e., gnslogstash10 in your 
 example? 

 Cheers,
 Boaz

 On Monday, November 10, 2014 11:47:22 PM UTC+1, Janet Sullivan wrote:

  I’m also using unicast discovery, as multicast doesn’t work on Azure. 
  I ended up in a bad position - 1.4 wouldn’t come up all the way, but 1.3.4 
 wouldn’t accept shards with the new lucene version.  I ended up rebuilding 
 the cluster, and I’m going to have to backfill from text logs.  A fresh 1.4 
 cluster works fine, but after two days I couldn’t get the upgraded cluster 
 to work.  I’m glad to hear someone else had a similar issue.

  On Nov 10, 2014, at 7:05 AM, Valentin plet...@gmail.com wrote:

  I had similar issues when upgrading from 1.3.4 to 1.4  
 from my elasticsearch.yml
  
 discovery.zen.ping.multicast.enabled: false

  discovery.zen.ping.unicast.hosts:.

 I could get it up and running after restarting the whole cluster (which 
 was bad since I'm using it for realtime logging). 

 On Monday, November 10, 2014 1:34:12 PM UTC+1, Boaz Leskes wrote: 

 Hi, 

  The logs you mentioned indicate that the nodes try to join the cluster, 
 but a complete verification cycle (connecting back to the node and 
 publishing the cluster state to it) takes too long. It seems there is 
 something going on with your masters.

  Can you check the logs over there? Also are you using multicast or 
 unicast discovery?

 On Sunday, November 9, 2014 8:36:06 AM UTC+1, Janet Sullivan wrote: 

  More hours of working – even when I get a 1.4.0 cluster up, masters 
 wouldn’t fail over – when I took master1 down, neither master2 or master3 
 would promote themselves.   In 1.4.0-beta it fails over quickly.
  
   
 *From:* elasti...@googlegroups.com [mailto:elasti...@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 11:11 PM
 *To:* elasti...@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
   
  
 OK, it also happens to some degree with 1.4.0-beta, although overall 
 it’s much better on beta.  I wasn’t able to get my 12 node cluster up on 
 1.4.0 after several hours of fiddling, but 1.4.0-beta did come up.
  
   
 *From:* elasti...@googlegroups.com [mailto:ela...@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 9:26 PM
 *To:* elasti...@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
   
  
 But it DOES happen with 1.3.5.   Hmmm….
  
   
 *From:* elasti...@googlegroups.com [mailto:ela...@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 9:24 PM
 *To:* elasti...@googlegroups.com
 *Subject:* RE: Nodes not joining after 1.4.0 upgrade
   
  
 Note:  This issue doesn’t happen with 1.4.0-beta1
  
   
 *From:* elasti...@googlegroups.com [mailto:ela...@googlegroups.com] *On 
 Behalf Of *Janet Sullivan
 *Sent:* Saturday, November 08, 2014 8:46 PM
 *To:* elasti...@googlegroups.com
 *Subject:* Nodes not joining after 1.4.0 upgrade
   
  
 I’ve upgraded a couple of clusters to 1.4.0 from 1.3.4.  On both of 
 them, I had nodes that spewed the following, and were slow to join, if 
 they 
 joined at all:
  
  
 [2014-11-09 04:33:45,995][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:34:49,776][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:35:53,571][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:36:57,353][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:38:01,120][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 master=true}], reason [ElasticsearchTimeoutException[Timeout waiting for 
 task.]]

 [2014-11-09 04:39:04,885][INFO ][discovery.zen] 
 [gnslogstash3] failed to send join request to master 
 [[gnslogstash10][9nx_f_NiQtCntgnD2q7k0g][gnslogstash10][inet[/10.0.0.29:9300]]{data=false,
  
 

Kibana 4 - filters in a dashboard

2014-11-10 Thread Alex
Dear Elasticsearch team,

could you please clarify a question regarding the dashboard feature in 
Kibana 4 vs. Kibana 3?

In a Kibana 3 dashboard I was able to interact with widgets, i.e. drill down 
into the data by clicking on basically any widget to add more filters to the 
current dashboard.
Is it supposed to work the same way in Kibana 4 as well? 

It does not work for me at all right now, and I'm wondering whether it's 
something I can't figure out how to do, or something that's not implemented 
yet.

Thanks in advance,
Alex.

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/aabaa0a3-20c0-4f3d-8241-9a3f3953624e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.