Re: Filtering term with _ doesn't work

2015-03-09 Thread João Lima
Thanks David, I solved the problem.
The field was being analyzed, so I did this:
"mappings": {
  "produto_cadastro": {
    "properties": {
      "id_instancia": {
        "type": "string",
        "index": "not_analyzed",
        "include_in_all": false
      }
    }
  }
}

And it worked perfectly.

*João Lima*
joao.l...@betalabs.com.br
C. 11996609309 T. (11) 3522 6826
Rua Bandeira Paulista, 702, 12º andar - Itaim Bibi
04532-002 - São Paulo - SP - Brasil
www.betalabs.com.br | facebook.com/betalabsbr | @betalabsbr

This message transmission is intended only for the use of the addressee and
may contain confidential information. If you are not the intended
recipient, you are hereby notified that any use or dissemination of this
communication is strictly prohibited. If received in error, please notify
us immediately, by replying this message.

On Fri, Mar 6, 2015 at 10:53 AM, David Pilato da...@pilato.fr wrote:

 Use a not_analyzed field. Your field is analyzed here with the standard
 analyzer.
 The term filter compares the string you pass against the inverted index.

 Have a look at the _analyze API as well. It should help to understand what
 happens.
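For reference, a minimal sketch of that check (the analyzer and the value are the ones from the thread; the call itself is just an illustration):

GET /_analyze?analyzer=standard&text=Master_cleaner

The standard analyzer lowercases the value, so the exact string Master_cleaner passed to the term filter never exists in the index; with index: not_analyzed the value is indexed verbatim and the term filter can match it.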

 David

 Le 6 mars 2015 à 13:33, João Lima joao.l...@betalabs.com.br a écrit :

 "id_instancia": {
   "type": "string",
   "include_in_all": false
 }

 On Friday, March 6, 2015 at 9:29:21 AM UTC-3, David Pilato wrote:

 What is your mapping for this field?

 --
 *David Pilato* - Developer | Evangelist
 *Elasticsearch.com http://Elasticsearch.com*
 @dadoonet https://twitter.com/dadoonet | @elasticsearchfr
 https://twitter.com/elasticsearchfr | @scrutmydocs
 https://twitter.com/scrutmydocs




 Le 6 mars 2015 à 13:27, João Lima joao...@betalabs.com.br a écrit :

 I'm trying to make a query like this:

 {
   "query": {
     "filtered": {
       "query": {
         "multi_match": {
           "query": "Vonder Carregador de Baterias CBV 0900 - 110V",
           "fields": [ "nome^10", "descricao" ],
           "operator": "or"
         }
       },
       "filter": {
         "bool": {
           "must": [
             { "term": { "ativo": 1 } },
             { "term": { "id_instancia": "Master_cleaner" } }
           ]
         }
       }
     }
   }
 }
 But when I filter on a term containing _ it doesn't work.



zen discovery feature of es to avoid split brain problem and heap memory usage of ES

2015-03-09 Thread phani . nadiminti
Hi All,

   I have a few doubts about how ES manages heap memory.

   My cluster has the following configuration; all three nodes are master and data eligible:

  node 1: 12 GB heap   total RAM: 24 GB
  node 2: 12 GB heap   total RAM: 24 GB
  node 3: 12 GB heap   total RAM: 24 GB

  I have an idea of how Elasticsearch serves requests by sending them to all nodes in a cluster, but my question is how Elasticsearch manages heap space while serving requests. Does it spread the load by using heap space from all three servers? How does it manage heap space efficiently?

  And

   To avoid split brain I need to add the following properties to the elasticsearch.yml file:

  discovery.zen.minimum_master_nodes: 2
  discovery.zen.ping.timeout: 3

  All three nodes in my cluster are master and data eligible, so is it required to add the above properties on all three nodes (in the elasticsearch.yml file) in my cluster?

  Please help me with these concepts.


Thanks
phani



How ELK stores data

2015-03-09 Thread vikas gopal
Hi Experts,

I am totally new to this tool, so I have a couple of basic questions:

1) How does ELK store indexed data? Traditional analytics tools store data in flat files or in their own database.
2) How can we perform a historical search?
3) How is the license provided? I mean, is it based on data indexed per day?
4) If I want to start, do I need to download 3 tools (Elasticsearch, Logstash, Kibana)?

Please assist

Thanks
VG



Related to cluster java version

2015-03-09 Thread phani . nadiminti
Hi All,

I have a cluster with three nodes. Two machines have Java version 1.7.0_55 and one machine has Java version 1.7.0_51. A new machine needs to be added to the existing cluster soon.

I have the following question, please clarify:

   Is there any restriction in Elasticsearch that the cluster should contain the same version of Java on each node?


   


Thanks,
phani



Re: How ELK stores data

2015-03-09 Thread Magnus Bäck
On Monday, March 09, 2015 at 16:34 CET,
 vikas gopal vikas.ha...@gmail.com wrote:

 I am totally new to this tool, so I have couple of basic queries
 1) How ELK stores indexed data. Like traditional analytic tools
 stores data in flat files or in their own database .

Elasticsearch is based on Lucene and the data is stored in
whatever format Lucene uses. This isn't something you have
to care about.

 2) How we can perform historical search

Using the regular query APIs. Sorry for such a general answer
but your question is very general.
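For example, if the documents carry a timestamp field, a historical search is just a normal query with a date range filter; a minimal sketch (index name, field names and dates are placeholders):

GET /logstash-*/_search
{
  "query": {
    "filtered": {
      "query": { "match": { "message": "error" } },
      "filter": {
        "range": { "@timestamp": { "gte": "2014-01-01", "lt": "2015-01-01" } }
      }
    }
  }
}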

 3) How license is provided , I mean is it based on data
 indexed per day ?

It's free Apache-licensed software so you don't have to pay
anything. If you feel you need a support contract that's
being offered at a couple of different levels. I'm sure there
are third parties offering similar services.

http://www.elasticsearch.com/support/

 4) If I want to start do I need to download 3 tools
 (ElasticSearch,Logstash, Kibana)

If you want the whole stack from log collection to storage
to visualization then yes, you need all three. But apart
from a dependency from Kibana to Elasticsearch the tools
are independent.

I suggest you download them and try them out. That's the
quickest way to figure out whether the tool stack (or a subset
thereof) fits your needs. There are also a number of videos
available.

-- 
Magnus Bäck | Software Engineer, Development Tools
magnus.b...@sonymobile.com | Sony Mobile Communications



Re: ES - settings/mappings - globally for an index - index: not_analyzed and analyzer:whitespace - new feature or not supported.

2015-03-09 Thread Magnus Bäck
On Thursday, March 05, 2015 at 22:56 CET,
 Mark Walkom markwal...@gmail.com wrote:

  On 6 March 2015 at 01:39, KaranM [1]karan.mu...@gmail.com wrote:
  
  I want to globally set the following for all current string fields
  for an Index and also for the future(new) string fields on that
  Index.
  
  Can some some one send example or link that has example, I was
  researching and could not find one.
  
   index: not_analyzed,
   analyzer: whitespace
 
 You cannot set it globally, you have to do it for each field.

Wait, isn't this what dynamic templates are for?

http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/custom-dynamic-mapping.html#dynamic-templates
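For example, a dynamic template along these lines maps every string field that is dynamically added to the type as not_analyzed (a minimal sketch; index and type names are placeholders, and fields that are already mapped keep their existing mapping and would need a reindex):

PUT /myindex
{
  "mappings": {
    "mytype": {
      "dynamic_templates": [
        {
          "strings_not_analyzed": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      ]
    }
  }
}

To apply the whitespace analyzer instead, the inner mapping would use "index": "analyzed" together with "analyzer": "whitespace"; an analyzer only takes effect on analyzed fields.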

-- 
Magnus Bäck | Software Engineer, Development Tools
magnus.b...@sonymobile.com | Sony Mobile Communications



Snapshot Scaling Problems

2015-03-09 Thread Andy Nemzek
Hello,

My company is using the ELK stack.  Right now we have a very small amount 
of data actually being sent to elastic search (probably a couple hundred 
logstash entries a day if that), however, the data that is getting logged 
is very important.  I recently set up snapshots to help protect this data.  

I take 1 snapshot a day, I delete snapshots that are older than 20 days, 
and each snapshot is comprised of all the logstash indexes in 
elasticsearch.  It's also a business requirement that we are able to search 
at least a year's worth of data, so I can't close logstash indexes unless 
they're older than at least a year.

Now, we've been using logstash for several months and each day it creates a 
new index.  We've found that even though there is very little data in these 
indexes, it's taking upwards of 30 minutes to take a snapshot of all of 
them, and each day it appears to take 20 - 100 seconds longer than the last. 
It is also taking about 30 minutes to delete a single snapshot, which is 
done each day as part of cleaning up old snapshots.  So, the whole process 
is taking about an hour each day and appears to be growing longer very quickly.

Am I doing something wrong here, or is this kind of thing expected?  It 
seems pretty strange that it's taking so long with the little amount of 
data we have.  I've looked through the snapshot docs several times and 
there doesn't appear to be much talk about how the process scales.

Thanks!



How can I change _score based on string length?

2015-03-09 Thread Arnaud Coutant
Dear Members,

When I get the result of my multi match request based on two words, I get this:

Iphone 6C OR
Iphone 6C ARGENT

I would like these results to have the same score and then to order them by 
cheapest price first (a float value). Is that possible?

Currently, if Iphone 6C OR = 700 and Iphone 6C ARGENT = 600, Iphone 6C 
OR comes first. That's not what I want.
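One way to get that (a sketch only, not a drop-in solution: the index, type and the field names nom and prix are assumptions) is to disable length norms on the text field, so that the shorter and the longer title score identically for the same matched words, and then sort by _score with the price as tie-breaker:

PUT /products
{
  "mappings": {
    "product": {
      "properties": {
        "nom":  { "type": "string", "norms": { "enabled": false } },
        "prix": { "type": "float" }
      }
    }
  }
}

GET /products/_search
{
  "query": { "match": { "nom": "Iphone 6C" } },
  "sort": [
    { "_score": { "order": "desc" } },
    { "prix": { "order": "asc" } }
  ]
}

With equal scores, the 600 item then sorts before the 700 item.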

Thanks in advance for your help.



Re: Snapshot Scaling Problems

2015-03-09 Thread Andy Nemzek
I forgot to mention that we're also using the S3 snapshot plugin to store 
the snapshots in an S3 bucket.  Perhaps this might be part of the 
performance problems?


On Monday, March 9, 2015 at 1:58:33 PM UTC-5, Mark Walkom wrote:

 How many indices are there, are you using the default shard count (5)? Are 
 you optimising older indices?

 The snapshot takes segments, so it may be that there is a lot of them to 
 copy. You could try optimising your old indices, eg older than 7 days, down 
 to a single segment and then see if that helps.

 Be aware though, the optimise is a resource heavy operation, so unless you 
 have a lot of resources you should only run one at a time.

 On 10 March 2015 at 05:18, Andy Nemzek bitk...@gmail.com javascript: 
 wrote:

 Hello,

 My company is using the ELK stack.  Right now we have a very small amount 
 of data actually being sent to elastic search (probably a couple hundred 
 logstash entries a day if that), however, the data that is getting logged 
 is very important.  I recently set up snapshots to help protect this data.  

 I take 1 snapshot a day, I delete snapshots that are older than 20 days, 
 and each snapshot is comprised of all the logstash indexes in 
 elasticsearch.  It's also a business requirement that we are able to search 
 at least a year's worth of data, so I can't close logstash indexes unless 
 they're older than at least a year.

 Now, we've been using logstash for several months and each day it creates 
 a new index.  We've found that even though there is very little data in 
 these indexes, it's taking upwards of 30 minutes to take a snapshot of all 
 of them and each day it appears to take 20 - 100 seconds longer than the 
 last.  It is also taking about 30 minutes to delete a single snapshot, 
 which is done each day as a part of cleaning up old snapshots.  So, the 
 whole process is is taking about an hour each day and appears to be growing 
 longer very quickly.

 Am I doing something wrong here or is this kind of thing expected?  It's 
 seems pretty strange that it's taking so long with the little amount of 
 data we have.  I've looked through the snapshot docs several times and 
 there doesn't appear to be much talk about how the process scales.

 Thanks!



Re: new node id assigned after restart?

2015-03-09 Thread Mark Walkom
The node ID is auto generated, you can set the node.name though.

What is the problem with things being added back in though?

On 9 March 2015 at 13:23, Daniel Li daniell...@gmail.com wrote:

 Hi,

 I noticed a nodeid is always reassigned after the node is shutdown and
 started again (restart). Is this by design? This caused master has to
 remove the old node and add the new node, whenever the node is crashed and
 restarted.

 Is there a way to have the node to keep the same id after restart?

 thanks
 Daniel



Re: new node id assigned after restart?

2015-03-09 Thread Daniel Li
Found this - 

@Override
protected void doStart() throws ElasticsearchException {
    add(localNodeMasterListeners);
    this.clusterState = ClusterState.builder(clusterState).blocks(initialBlocks).build();
    this.updateTasksExecutor = EsExecutors.newSinglePrioritizing(
            daemonThreadFactory(settings, UPDATE_THREAD_NAME));
    this.reconnectToNodes = threadPool.schedule(reconnectInterval,
            ThreadPool.Names.GENERIC, new ReconnectToNodes());
    Map<String, String> nodeAttributes = discoveryNodeService.buildAttributes();
    // note, we rely on the fact that its a new id each time we start, see FD and kill -9 handling
    final String nodeId = DiscoveryService.generateNodeId(settings);
    DiscoveryNode localNode = new DiscoveryNode(settings.get("name"), nodeId,
            transportService.boundAddress().publishAddress(), nodeAttributes, version);
    DiscoveryNodes.Builder nodeBuilder = DiscoveryNodes.builder().put(localNode)
            .localNodeId(localNode.id());
    this.clusterState = ClusterState.builder(clusterState).nodes(nodeBuilder).blocks(initialBlocks).build();
}
On Monday, March 9, 2015 at 1:23:03 PM UTC-7, Daniel Li wrote:

 Hi,

 I noticed a nodeid is always reassigned after the node is shutdown and 
 started again (restart). Is this by design? This caused master has to 
 remove the old node and add the new node, whenever the node is crashed and 
 restarted.

 Is there a way to have the node to keep the same id after restart?

 thanks
 Daniel




query multiple words with AND and OR operators

2015-03-09 Thread Leandro Camargo
Hi.

Let's say I query my website with the term interesting question.
I want to match occurrences *preferably* containing both words, and add to the 
results, as less important entries, those containing 1 to N-1 of the passed words.
Is there a way to perform one single search to achieve that?
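A single match query with the or operator already behaves this way: documents containing both words score higher than documents containing only one of them, and minimum_should_match controls how many of the N words must be present at all. A minimal sketch (index and field names are placeholders):

GET /mysite/_search
{
  "query": {
    "match": {
      "content": {
        "query": "interesting question",
        "operator": "or",
        "minimum_should_match": 1
      }
    }
  }
}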

Thanks in advance.



Elasticsearch import configuration files

2015-03-09 Thread Michael Power
Hello,

Is there any way to reconfigure elasticsearch without changing the main 
/etc/elasticsearch/elasticsearch.yml file?

We want to setup an elasticsearch.yml file that is common for all our test 
environments.  Then we want an additional file that is specific to the 
environment.  That environment specific file might change the cluster name, 
change the discovery mode, etc.

Is this possible with elasticsearch?  Can we give it a list of locations 
where it can find the elasticsearch.yml?

For example this is supported:

path.data: [/mnt/first, /mnt/second]

Is this supported?
path.conf: [/etc/elasticsearch, /etc/elasticsearch.d/*]

or this?

path.conf: [/etc/elasticsearch, /etc/elasticsearch-local/]



Re: Snapshot Scaling Problems

2015-03-09 Thread Mark Walkom
How many indices are there, are you using the default shard count (5)? Are
you optimising older indices?

The snapshot takes segments, so it may be that there is a lot of them to
copy. You could try optimising your old indices, eg older than 7 days, down
to a single segment and then see if that helps.

Be aware though, the optimise is a resource heavy operation, so unless you
have a lot of resources you should only run one at a time.
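For reference, a minimal sketch of such an optimize call against a single old daily index (the index name is only an example):

POST /logstash-2015.02.01/_optimize?max_num_segments=1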

On 10 March 2015 at 05:18, Andy Nemzek bitkno...@gmail.com wrote:

 Hello,

 My company is using the ELK stack.  Right now we have a very small amount
 of data actually being sent to elastic search (probably a couple hundred
 logstash entries a day if that), however, the data that is getting logged
 is very important.  I recently set up snapshots to help protect this data.

 I take 1 snapshot a day, I delete snapshots that are older than 20 days,
 and each snapshot is comprised of all the logstash indexes in
 elasticsearch.  It's also a business requirement that we are able to search
 at least a year's worth of data, so I can't close logstash indexes unless
 they're older than at least a year.

 Now, we've been using logstash for several months and each day it creates
 a new index.  We've found that even though there is very little data in
 these indexes, it's taking upwards of 30 minutes to take a snapshot of all
 of them and each day it appears to take 20 - 100 seconds longer than the
 last.  It is also taking about 30 minutes to delete a single snapshot,
 which is done each day as a part of cleaning up old snapshots.  So, the
 whole process is is taking about an hour each day and appears to be growing
 longer very quickly.

 Am I doing something wrong here or is this kind of thing expected?  It's
 seems pretty strange that it's taking so long with the little amount of
 data we have.  I've looked through the snapshot docs several times and
 there doesn't appear to be much talk about how the process scales.

 Thanks!



Re: ES - settings/mappings - globally for an index - index: not_analyzed and analyzer:whitespace - new feature or not supported.

2015-03-09 Thread Mark Walkom
True :)

On 10 March 2015 at 03:14, Magnus Bäck magnus.b...@sonymobile.com wrote:

 On Thursday, March 05, 2015 at 22:56 CET,
  Mark Walkom markwal...@gmail.com wrote:

   On 6 March 2015 at 01:39, KaranM [1]karan.mu...@gmail.com wrote:
  
   I want to globally set the following for all current string fields
   for an Index and also for the future(new) string fields on that
   Index.
  
   Can some some one send example or link that has example, I was
   researching and could not find one.
  
    index: not_analyzed,
    analyzer: whitespace
 
  You cannot set it globally, you have to do it for each field.

 Wait, isn't this what dynamic templates are for?


 http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/custom-dynamic-mapping.html#dynamic-templates

 --
 Magnus Bäck | Software Engineer, Development Tools
 magnus.b...@sonymobile.com | Sony Mobile Communications



Re: Single NFS Storage for Entire Cluster - Separate processing and data replication

2015-03-09 Thread Mark Walkom
We don't recommend using NFS for storing ES data, but if that is all you
have then so be it.

They won't be overwriting, if you go to the data directory you will see it
creates a directory for each node and that node then stores data there.

You shouldn't need to do anything else.

On 9 March 2015 at 15:31, Bartleby O'Connor bartlebyocon...@hotmail.com
wrote:

 I have a set of queries that take up a lot of RAM (mostly reads, few,
 infrequent writes), so I'm testing a cluster that has multiple nodes on
 different machines as query engines to field requests. However, my machines
 are already set with an NFS (without any local storage, only shared, and
 beyond my control).  I realize that's kind of a weird topology abusing ES
 a bit.  I'm trying to essentially separate the query processing engine
 (that I need multiple of) from the data distribution (that I can only have
 one of)---and am looking for settings that will help with this.

 If my conf settings are all pointing to the same place:
 path.data = ~/data/elasticsearch

 ---my nodes are probably all overwriting on top of each other on start-up
 and all replication, right? One node undoing what the last node did?  If
 this is the topology I'm stuck with, is there still a way to use ES?  The
 settings I think I should have are:

 cluster.routing.allocation.enable = none
 index.number_of_shards: 5
 index.number_of_replicas: 0

 Is that correct?  Are there other recommendations from you wizards?
 Thanks for your help!
 Bart



new node id assigned after restart?

2015-03-09 Thread Daniel Li
Hi,

I noticed the node ID is always reassigned after the node is shut down and 
started again (restarted). Is this by design? This causes the master to have to 
remove the old node and add the new node whenever the node crashes and is 
restarted.

Is there a way to have the node to keep the same id after restart?

thanks
Daniel



Re: Related to cluster java version

2015-03-09 Thread Mark Walkom
It's better if you can have the same version on all, but it will work as
long as the major version is the same.

We also recommend 1.7u55+ due to bugs in previous releases.

On 10 March 2015 at 01:51, phani.nadimi...@goktree.com wrote:

 Hi All,

 I have a cluster with three nodes two machines have java version java
 version 1.7.0_55 and one machine java version is java version
 1.7.0_51.The new machine need to add to existing cluster soon.

 I have following question please clarify me.

Is there any restriction in elasticsearch that cluster should
 contains same version of java on each node?





 Thanks,
 phani



Re: How to confirm that a node has successfully joined the cluster

2015-03-09 Thread Mark Walkom
I'd use the _cat API and then just grep for the node name.
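For example (a minimal sketch), something like

GET /_cat/nodes?v&h=host,name

returns one line per node that is currently part of the cluster, so a script can simply check that the restarted node's node.name appears in the output.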

On 9 March 2015 at 12:35, Lindsey Poole lpo...@gmail.com wrote:

 Hey guys,

 Regarding the rolling restart instructions:
 http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_rolling_restarts.html

 Step 5 states Restart the node, and confirm that it joins the cluster.

 Can you advise the best way to programmatically confirm that the local
 node has joined the cluster?

 Thanks,

 Lindsey



Re: Snapshot Scaling Problems

2015-03-09 Thread Andy Nemzek
Hi Mark,

Thanks for the reply.  

We've been using logstash for several months now and it creates a new index 
each day, so I imagine there are over 100 indexes at this point.  

Elasticsearch is running on a single machine...I haven't done anything with 
shards, so the defaults must be in use.  Haven't optimized old indexes. 
 We're pretty much just running ELK out of the box.

When you mention 'optimizing indexes', does this process combine indexes? 
 Do you know if these performance problems are typical when using ELK out 
of the box?



On Monday, March 9, 2015 at 1:58:33 PM UTC-5, Mark Walkom wrote:

 How many indices are there, are you using the default shard count (5)? Are 
 you optimising older indices?

 The snapshot takes segments, so it may be that there is a lot of them to 
 copy. You could try optimising your old indices, eg older than 7 days, down 
 to a single segment and then see if that helps.

 Be aware though, the optimise is a resource heavy operation, so unless you 
 have a lot of resources you should only run one at a time.

 On 10 March 2015 at 05:18, Andy Nemzek bitk...@gmail.com javascript: 
 wrote:

 Hello,

 My company is using the ELK stack.  Right now we have a very small amount 
 of data actually being sent to elastic search (probably a couple hundred 
 logstash entries a day if that), however, the data that is getting logged 
 is very important.  I recently set up snapshots to help protect this data.  

 I take 1 snapshot a day, I delete snapshots that are older than 20 days, 
 and each snapshot is comprised of all the logstash indexes in 
 elasticsearch.  It's also a business requirement that we are able to search 
 at least a year's worth of data, so I can't close logstash indexes unless 
 they're older than at least a year.

 Now, we've been using logstash for several months and each day it creates 
 a new index.  We've found that even though there is very little data in 
 these indexes, it's taking upwards of 30 minutes to take a snapshot of all 
 of them and each day it appears to take 20 - 100 seconds longer than the 
 last.  It is also taking about 30 minutes to delete a single snapshot, 
 which is done each day as a part of cleaning up old snapshots.  So, the 
 whole process is is taking about an hour each day and appears to be growing 
 longer very quickly.

 Am I doing something wrong here or is this kind of thing expected?  It's 
 seems pretty strange that it's taking so long with the little amount of 
 data we have.  I've looked through the snapshot docs several times and 
 there doesn't appear to be much talk about how the process scales.

 Thanks!



Re: Elasticsearch import configuration files

2015-03-09 Thread Mark Walkom
You cannot use an array in path.conf.

On 9 March 2015 at 15:02, Michael Power michael.power.eloto...@gmail.com
wrote:

 Hello,

 Is there anyway to reconfigure elasticsearch without changing the main
 /etc/elasticsearch/elasticsearch.yml file?

 We want to setup an elasticsearch.yml file that is common for all our test
 environments.  Then we want an additional file that is specific to the
 environment.  That environment specific file might change the cluster name,
 change the discovery mode, etc.

 Is this possible with elasticsearch?  Can we give it a list of locations
 where it can find the elasticsearch.yml

 For example this is supported:

 path.data: [/mnt/first, /mnt/second]

 Is this supported?
 path.conf: [/etc/elasticsearch, /etc/elasticsearch.d/*]

 or this?

 path.conf: [/etc/elasticsearch, /etc/elasticsearch-local/]



Snapshot Restore Performance

2015-03-09 Thread Mark Greene
I recently have been testing the S3 snapshot and restore performance.

I'm able to pull down ~25GB across all 4 data nodes in 10 mins on a 
restore. On a given node, I seem to be getting only about 70-80 MBit/s, and CPU 
utilization is near zero. On the one node that has two shards being 
restored to it, the throughput is nearly double.

Is there anything I can do to increase the parallelism of the restore 
process? I assume something is being limited at the shard level?

My Repo Settings

{
  "type": "s3",
  "settings": {
    "bucket": "mybucket",
    "region": "us-east",
    "protocol": "https",
    "base_path": "/elasticsearch",
    "secret_key": "SECRET",
    "access_key": "KEY",
    "max_snapshot_bytes_per_sec": "150mb",
    "max_restore_bytes_per_sec": "500mb"
  }
}


Cluster Info

ES 1.4.1
4 Data Nodes r3.2xlarge (8 core, 30GB JVM heap, SSD's)
5 Shards, 1 replica
80GB primary store size (160GB w/ replica)






Re: Need full text search engine

2015-03-09 Thread David Pilato
You could use an edge n-gram analyzer for this field.

HTH
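A minimal sketch of what that could look like (index, type and analyzer names are placeholders; min_gram/max_gram are arbitrary):

PUT /myindex
{
  "settings": {
    "analysis": {
      "filter": {
        "edge_ngram_filter": {
          "type": "edge_ngram",
          "min_gram": 3,
          "max_gram": 15
        }
      },
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase", "edge_ngram_filter" ]
        }
      }
    }
  },
  "mappings": {
    "mytype": {
      "properties": {
        "keywords": {
          "type": "string",
          "index_analyzer": "autocomplete",
          "search_analyzer": "standard"
        }
      }
    }
  }
}

With this, Colors and Colorfull are indexed as col, colo, color, ..., so a multi_match for color matches both.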

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

 Le 9 mars 2015 à 02:49, Vijayakumari B N vijayakumari...@gmail.com a écrit :
 
 Hi,
 
 I am trying to search an text using elastic search server. I am able to get 
 the results only if my input text matches one complete word, not the full 
 text search. Ex : I have 2 matching text Colors and Colorfull in elastic 
 search server, If i search for Color, it will not pick any results. Can 
 some one please help me to resolve the issue.
 
 Here is my query
 {
   multi_match : {
 query : color,
 fields : [ keywords, symptom ]
   }
 }
 
 Thanks,
 Vijaya


For username who has logged into greater than 1 geoip.country_name in the last 12 hours

2015-03-09 Thread Tim Jones
Hey Guys is this possible,

I want to create a summary view where I see a list of users that have 
logged into our website but have come from more than one source in a 
period of time.

Perhaps even a count of distinct values in fields, so I can drill down 
into it per user; if it shows something like 4 different countries in 12 hours, I reckon 
we have a bit of a problem...
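One way to get at this (a sketch only; the username field name is an assumption, and geoip.country_name is taken from the subject) is a terms aggregation per user with a cardinality sub-aggregation over the country, restricted to the last 12 hours:

GET /logstash-*/_search
{
  "size": 0,
  "query": {
    "filtered": {
      "filter": { "range": { "@timestamp": { "gte": "now-12h" } } }
    }
  },
  "aggs": {
    "by_user": {
      "terms": { "field": "username", "size": 100 },
      "aggs": {
        "countries": { "cardinality": { "field": "geoip.country_name" } }
      }
    }
  }
}

On 1.x, picking out only the users whose distinct-country count is greater than 1 has to happen on the client side (pipeline aggregations that can filter buckets arrived in later releases).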

Any help is appreciated.

Cheers

Tim




Trouble to load data from my hadoop cluster to elasticsearch via pig and hive

2015-03-09 Thread BEN SALEM Omar


I have some issues with my project, and I am seeking some guidance and help.

Here is the situation:

Pig : 

I have a hadoop cluster managed by Cloudera CDH 5.3.

I have ElasticSearch 1.4.4 installed in my master machine(10.44.162.169)

I have downloaded the marvel plugin and so access to my ES via :
http://10.44.162.169:9200/_plugin/marvel/kibana/index.html#/dashboard/file/marvel.overview.json

I have created an index via sense named myindex with a type named mytype to 
push my data in it later.

I did also install kibana 4 and changed the kibana.yml like this :

# The host to bind the server to
host: 10.44.162.169

# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://10.44.162.169:9200"

I access to it via port 5601 (10.44.162.169:5601)

Now I want to load a data I have in my hdfs into my ElasticSearch.

After downloading the es-hadoop jar and adding it to the path.

This is how I proceeded :

REGISTER /usr/elasticsearch-hadoop-2.0.2/dist/elasticsearch-hadoop-pig-2.0.2.jar

--load the CDR.csv file
cdr = LOAD '/user/omar/CDR.csv' using PigStorage(';')
AS (TRAFFIC_TYPE_ID:int, APPELANT:int, CALLED_NUMBER:int, CALL_DURATION:int, LOCATION_NUMBER:chararray, DATE_HEURE_APPEL:chararray);



STORE cdr INTO 'myindex/mytype' USING 
org.elasticsearch.hadoop.pig.PigRunner.run('es.nodes'='10.44.162.169');

When I execute this; the job is a success !!!

BUT, nothing seems to appear in my ES !

1) When I go and access to marvel, I don't find any documents in myindex !

2 )Neither in my Kibana plugin !

3) Furthermore, when I want to consult the logs in the HUE, I can't find a 
thing!

   - Why data isn't pushed in my ES?
   - What should I do to visualize it?
   - Why is my created job a success but none log is there to see what's 
happening!


I then tried this way : 

REGISTER /usr/elasticsearch-hadoop-2.0.2/dist/elasticsearch-hadoop-pig-2.0.2.jar

--load the CDR.csv file
cdr= LOAD '/user/admin/CDR_OMAR.csv' using PigStorage(';')
AS 
(traffic_type_id:int,caller:int,call_time:datetime,tranche_horaire:int,called:int,called:int,call_duration:int,code_type:chararray,code_destination:chararray,location:chararray,id_offre:int,id_service:int,date_heure_appel:chararray);

--STORE cdr INTO 'indexOmar/typeOmar' USING 
EsStorage('es.nodes'='0.44.162.169:9200')
STORE cdr INTO 'telecom/cdr' USING 
org.elasticsearch.hadoop.pig.EsStorage('es.nodes'='10.44.162.169',
'es.mapping.names=call_time:@timestamp',
'es.index.auto.create = false');

But, I got this error :

Run pig script using PigRunner.run() for Pig version 0.8+
2015-03-06 14:22:21,768 [main] INFO  org.apache.pig.Main  - Apache Pig version 
0.12.0-cdh5.3.1 (rexported) compiled Jan 27 2015, 14:45:17
2015-03-06 14:22:21,770 [main] INFO  org.apache.pig.Main  - Logging error 
messages to: 
/yarn/nm/usercache/admin/appcache/application_1425457357655_0009/container_1425457357655_0009_01_02/pig-job_1425457357655_0009.log
2015-03-06 14:22:21,863 [main] INFO  org.apache.pig.impl.util.Utils  - Default 
bootup file /var/lib/hadoop-yarn/.pigbootup not found




Any idea why this is happening and how to fix it?


Now the Hive issue : 


I have downloaded the ES-Hadoop jar and added it to the path.

With that being said; I now want to load data from hive to ES.

1) First of all, I created a table via a CSV file under table metastore(with 
HUE)

2) I defined an external table on top of ES in hive to write and load data in 
it later:

ADD JAR

/usr/elasticsearch-hadoop-2.0.2/dist/elasticsearch-hadoop-hive-2.0.2.jar;

CREATE EXTERNAL TABLE es_cdr(

id bigint,

calling int,

called int,

duration int,

location string,

date string)

ROW FORMAT SERDE 'org.elasticsearch.hadoop.hive.EsSerDe'

STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'

TBLPROPERTIES(

'es.nodes'='10.44.162.169',

'es.resource' = 'indexOmar/typeOmar');

I've also added manually the serde snapshot jar via paramaters= add file =jar

now, I want to load data from my table in the new ES table :

INSERT OVERWRITE TABLE es_cdr

select NULL, h.appelant, h.called_number, h.call_duration, h.location_number, 
h.date_heure_appel from hive_cdr h;

but an error is appearing saying that :

Error while processing statement: FAILED: Execution Error, return code 2 from 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask

And this is what's written in the log :

15/03/05 14:36:34 INFO log.PerfLogger: /PERFLOG method=semanticAnalyze 
start=1425562594381 end=1425562594463 duration=82 
from=org.apache.hadoop.hive.ql.Driver
15/03/05 14:36:34 INFO ql.Driver: Returning Hive schema: 
Schema(fieldSchemas:[FieldSchema(name:_col0, type:bigint, comment:null), 
FieldSchema(name:_col1, type:int, comment:null), FieldSchema(name:_col2, 
type:int, comment:null), FieldSchema(name:_col3, type:int, comment:null), 
FieldSchema(name:_col4, type:string, comment:null), FieldSchema(name:_col5, 
type:string, comment:null)], properties:null)
15/03/05 14:36:34 INFO ql.Driver: EXPLAIN output for queryid 

Re: getting SearchPhaseExecutionException from elastic search 1.4

2015-03-09 Thread Vijayakumari B N
Thanks for the reply. 

On Friday, March 6, 2015 at 2:04:40 PM UTC+5:30, Adrien Grand wrote:

 The relevant error message is the following: No mapping found for [id] in 
 order to sort on.

 It means that you are sorting on a field which does not exist on at least 
 one of the queried indices.

 See 
 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-sort.html#_ignoring_unmapped_fields
  
 for more information.
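In other words, the sort clause can be told how to treat indices where the field is not mapped; a minimal sketch based on that page (the unmapped_type value is just an example):

"sort": [
  { "id": { "order": "asc", "unmapped_type": "long" } }
]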

 On Fri, Mar 6, 2015 at 6:49 AM, Vijayakumari B N vijayak...@gmail.com 
 javascript: wrote:

 Hi,

 I am trying to search some text using elastic search server, with some 
 indexing. I am using default shards and i did not modify anything in my 
 elasticsearch.yml. My application was working fine. I did another indexing 
 for another application where i want to load different data, I have not 
 tested this yet. I came back and trying to run my first application, i am 
 getting the below error. my elastic search is not working.

 Can someone please help me where i am doing wrong.?


 Query 1 :: {
   multi_match : {
 query : new year,
 fields : [ keywords, symptom ]
   }
 }
 org.elasticsearch.action.search.SearchPhaseExecutionException: Failed to 
 execute phase [query], all shards failed; shardFailures 
 {[aslVcGTvTz68YZOza7Wqsg][problem][0]: SearchParseException[[problem][0]: 
 query[((keywords:new keywords:year) | (symptom:new 
 symptom:year))],from[0],size[10]: Parse Failure [Failed to parse source 
 [{from:0,size:10,query:{multi_match:{query:new 
 year,fields:[keywords,symptom]}},sort:[{id:{order:asc}}],highlight:{phrase_limit:1}}]]];
  
 nested: SearchParseException[[problem][0]: query[((keywords:new 
 keywords:year) | (symptom:new symptom:year))],from[0],size[10]: Parse 
 Failure [No mapping found for [id] in order to sort on]]; 
 }{[aslVcGTvTz68YZOza7Wqsg][problem][1]: SearchParseException[[problem][1]: 
 query[((keywords:new keywords:year) | (symptom:new 
 symptom:year))],from[0],size[10]: Parse Failure [Failed to parse source 
 [{from:0,size:10,query:{multi_match:{query:new 
 year,fields:[keywords,symptom]}},sort:[{id:{order:asc}}],highlight:{phrase_limit:1}}]]];
  
 nested: SearchParseException[[problem][1]: query[((keywords:new 
 keywords:year) | (symptom:new symptom:year))],from[0],size[10]: Parse 
 Failure [No mapping found for [id] in order to sort on]]; 
 }{[aslVcGTvTz68YZOza7Wqsg][problem][2]: SearchParseException[[problem][2]: 
 query[((keywords:new keywords:year) | (symptom:new 
 symptom:year))],from[0],size[10]: Parse Failure [Failed to parse source 
 [{from:0,size:10,query:{multi_match:{query:new 
 year,fields:[keywords,symptom]}},sort:[{id:{order:asc}}],highlight:{phrase_limit:1}}]]];
  
 nested: SearchParseException[[problem][2]: query[((keywords:new 
 keywords:year) | (symptom:new symptom:year))],from[0],size[10]: Parse 
 Failure [No mapping found for [id] in order to sort on]]; 
 }{[aslVcGTvTz68YZOza7Wqsg][problem][3]: SearchParseException[[problem][3]: 
 query[((keywords:new keywords:year) | (symptom:new 
 symptom:year))],from[0],size[10]: Parse Failure [Failed to parse source 
 [{from:0,size:10,query:{multi_match:{query:new 
 year,fields:[keywords,symptom]}},sort:[{id:{order:asc}}],highlight:{phrase_limit:1}}]]];
  
 nested: SearchParseException[[problem][3]: query[((keywords:new 
 keywords:year) | (symptom:new symptom:year))],from[0],size[10]: Parse 
 Failure [No mapping found for [id] in order to sort on]]; 
 }{[aslVcGTvTz68YZOza7Wqsg][problem][4]: SearchParseException[[problem][4]: 
 query[((keywords:new keywords:year) | (symptom:new 
 symptom:year))],from[0],size[10]: Parse Failure [Failed to parse source 
 [{from:0,size:10,query:{multi_match:{query:new 
 year,fields:[keywords,symptom]}},sort:[{id:{order:asc}}],highlight:{phrase_limit:1}}]]];
  
 nested: SearchParseException[[problem][4]: query[((keywords:new 
 keywords:year) | (symptom:new symptom:year))],from[0],size[10]: Parse 
 Failure [No mapping found for [id] in order to sort on]]; }
 at 
 org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:233)
 at 
 org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.onFailure(TransportSearchTypeAction.java:179)
 at 
 org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:565)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)



 below is the error from elasticsearch server log file.

 [2015-03-06 11:17:24,997][DEBUG][action.search.type   ] [Magma] 
 [problem][1], node[aslVcGTvTz68YZOza7Wqsg], [P], s[STARTED]: Failed to 
 execute [org.elasticsearch.action.search.SearchRequest@78fc50a1] lastShard 
 [true]
 org.elasticsearch.search.SearchParseException: [problem][1]: 
 query[(keywords:jvm | 

failed to start shard

2015-03-09 Thread Sephen Xu
[2015-03-09 13:46:33,720][WARN ][indices.cluster  ] [es_node_4_2] 
[ossdatabse-2015-02-10][6] failed to start shard
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: 
[ossdatabse-2015-02-10][6] failed to fetch index version after copying it 
over
at 
org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:136)
at 
org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:197)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
Caused by: 
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: 
[ossdatabse-2015-02-10][6] shard allocated for local recovery (post api), 
should exist, but doesn't, current files: [_checksums-1424994155456, 
segments.gen]
at 
org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:115)
... 4 more
Caused by: java.io.FileNotFoundException: segments_r8
at 
org.elasticsearch.index.store.Store$StoreDirectory.openInput(Store.java:471)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:324)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:404)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:400)
at org.elasticsearch.common.lucene.Lucene.readSegmentInfos(Lucene.java:117)
at 
org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:106)
... 4 more
[2015-03-09 13:46:33,720][WARN ][cluster.action.shard ] [es_node_4_2] 
[ossdatabse-2015-02-11][6] sending failed shard for 
[ossdatabse-2015-02-11][6], node[cq_HPV1ZSlKArq6HgciNSg], [P], 
s[INITIALIZING], indexUUID [7yEPLOpYQciVdw_W1IF5Hg], reason [Failed to 
start shard, message 
[IndexShardGatewayRecoveryException[[ossdatabse-2015-02-11][6] failed to 
fetch index version after copying it over]; nested: 
IndexShardGatewayRecoveryException[[ossdatabse-2015-02-11][6] shard 
allocated for local recovery (post api), should exist, but doesn't, current 
files: [segments.gen, _checksums-1425594266760]]; nested: 
FileNotFoundException[segments_r7]; ]]

How to solve this problem?
thx



Need help on QueryBuilder

2015-03-09 Thread Vijayakumari B N
Hi,

I want to build a query for a requirement where I have 3 checkboxes in the 
input (for 3 different attributes), and I have to search for the input text in 
the fields whose checkboxes are selected.

I want to build the query dynamically: if attribute1 is selected, search in 
attribute1; if attribute2 is selected, search in attribute2; etc.

I am trying to build a query that matches the input text with must clauses on both 
attributes, but it is not returning any results. If I use multi_match, I am 
able to fetch results.

BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();

if (searchIn.contains("keywords")) {
    boolQueryBuilder.must(QueryBuilders.matchQuery("keywords", inputText));
}
if (searchIn.contains("symptom")) {
    boolQueryBuilder.must(QueryBuilders.matchQuery("symptom", inputText));
}


Match Query:
{
  "bool": {
    "must": [ {
      "match": {
        "keywords": {
          "query": "Holi",
          "type": "boolean"
        }
      }
    }, {
      "match": {
        "symptom": {
          "query": "Holi",
          "type": "boolean"
        }
      }
    } ]
  }
}

Multimatch query:
{
  "multi_match": {
    "query": "Holi",
    "fields": [ "keywordsField", "symptomField" ]
  }
}
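For comparison, a bool query with should clauses (a minimal sketch reusing the same field names) behaves like the multi_match above: the document only has to match the text in at least one of the selected attributes, whereas must requires a match in every one of them:

{
  "bool": {
    "should": [
      { "match": { "keywords": "Holi" } },
      { "match": { "symptom": "Holi" } }
    ],
    "minimum_should_match": 1
  }
}

In the Java builder that corresponds to calling boolQueryBuilder.should(...) instead of must(...).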


Can someone please help me.

Thanks,
Vijaya



A hook in Kibana's Discover page

2015-03-09 Thread Oranit Dror


Hello,

 I would like to hook into Kibana's Discover page and convert the query 
string supplied by the user into a new query string that will be used on 
ElasticSearch.

Specifically, the query string supplied by the user may contain terms like 
concept: pneumonia, and I am interested in replacing these with other terms, 
like UMLSTags:c0032285.

 thank you,

 Oranit

 



Need full text search engine

2015-03-09 Thread Vijayakumari B N
Hi,

I am trying to search for text using the Elasticsearch server. I am able to get 
the results only if my input text matches one complete word, not with a partial 
word. Ex: I have 2 matching texts, Colors and Colorfull, in the 
Elasticsearch server; if I search for Color, it will not return any 
results. Can someone please help me resolve the issue?

Here is my query:
{
  "multi_match": {
    "query": "color",
    "fields": [ "keywords", "symptom" ]
  }
}

Thanks,
Vijaya
