Re: How to delete index permanently Elastic Search

2015-03-28 Thread Rafał Kuć


Hello!

Which index do you want to update, and what do you mean by update?

Do you want to change the river configuration, for example change the SQL statement? Just delete the river and create it once again; that will do the job.
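For example (a sketch reusing the river name and JDBC settings from the commands quoted below; adjust the SQL and connection details to your setup):

curl -XDELETE 'localhost:9200/_river/my_update_river'

curl -XPOST 'localhost:9200/_river/my_update_river/_meta' -d '
{
 "type" : "jdbc",
 "jdbc" : {
   "url" : "jdbc:mysql://localhost:3306/admin",
   "user" : "root",
   "password" : "",
   "index" : "updateauto",
   "type" : "users",
   "sql" : "select * from users"
 }
}'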

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






Hello! 
 
Can we directly update index without deleting it?

On Saturday, March 28, 2015 at 2:12:08 PM UTC+5, Rafał Kuć wrote:
Hello!

The command you show is creating the river (https://github.com/jprante/elasticsearch-river-jdbc ), not the 'updateauto' index itself. When the river starts running it will create the 'updateauto' index. 

I'm not sure what you mean by 'refresh my elasticsearch', but I assume you would like to delete the already created index called 'updateauto' and change the river configuration. If so try the following set of commands:

curl -XDELETE 'localhost:9200/_river/my_update_river' <- this deletes the river, which also stops it

curl -XDELETE 'localhost:9200/updateauto' <- this will delete the updateauto index

curl -XPOST 'localhost:9200/_river/my_update_river/_meta' -d '
{
 "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/admin",
     "user" : "root",
     "password" : "",
     "poll" : "6s",
     "index" : "updateauto",
     "type" : "users",
     "schedule":"0/10 * * ? * *",
     "strategy" : "simple",
     "sql" : "select * from users"
   }
}'

Of course, the last command should be updated to match your needs before sending it. 


-- 
Regards,
Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






I just want to refresh my Elasticsearch and update my SQL command. But when I post this code/query again it gives me an error that the index named "updateauto" has already been created, and it returns "created" : "false".

On Saturday, March 28, 2015 at 1:54:27 PM UTC+5, Abdul Rafay wrote:
localhost:9200/_river/my_update_river/_meta
{
 "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/admin",
     "user" : "root",
     "password" : "",
     "poll" : "6s",
     "index" : "updateauto",
     "type" : "users",
     "schedule":"0/10 * * ? * *",
     "strategy" : "simple",
     "sql" : "select * from users"
   }
}

Here's my code. When I delete the index it works for only 10 seconds, but when I refresh again the index is automatically recreated.


Re: How to delete index permanently Elastic Search

2015-03-28 Thread Rafał Kuć


Hello!

The command you show is creating the river (https://github.com/jprante/elasticsearch-river-jdbc ), not the 'updateauto' index itself. When the river starts running it will create the 'updateauto' index. 

I'm not sure what you mean by 'refresh my elasticsearch', but I assume you would like to delete the already created index called 'updateauto' and change the river configuration. If so try the following set of commands:

curl -XDELETE 'localhost:9200/_river/my_update_river' <- this deletes the river, which also stops it

curl -XDELETE 'localhost:9200/updateauto' <- this will delete the updateauto index

curl -XPOST 'localhost:9200/_river/my_update_river/_meta' -d '
{
  "type" : "jdbc",
   "jdbc" : {
     "url" : "jdbc:mysql://localhost:3306/admin",
      "user" : "root",
      "password" : "",
      "poll" : "6s",
      "index" : "updateauto",
      "type" : "users",
      "schedule":"0/10 * * ? * *",
      "strategy" : "simple",
      "sql" : "select * from users"
    }
 }'

Of course, the last command should be updated to match your needs before sending it. 


-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






I just want to refresh my Elasticsearch and update my SQL command. But when I post this code/query again it gives me an error that the index named "updateauto" has already been created, and it returns "created" : "false".

On Saturday, March 28, 2015 at 1:54:27 PM UTC+5, Abdul Rafay wrote:
localhost:9200/_river/my_update_river/_meta
{
  "type" : "jdbc",
   "jdbc" : {
     "url" : "jdbc:mysql://localhost:3306/admin",
      "user" : "root",
      "password" : "",
      "poll" : "6s",
      "index" : "updateauto",
      "type" : "users",
      "schedule":"0/10 * * ? * *",
      "strategy" : "simple",
      "sql" : "select * from users"
    }
 }

Here's my code. When I delete the index it works for only 10 seconds, but when I refresh again the index is automatically recreated.


Re: How to delete index permanently Elastic Search

2015-03-28 Thread Rafał Kuć


Hello!

What are you trying to achieve? Do you want to stop the JDBC river from running? If so, try running:

curl -XDELETE 'localhost:9200/_river/my_update_river'

It should stop the river. If you then delete the updateauto index, it will no longer be recreated.
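If you also want to get rid of the already created index, that is just another DELETE call:

curl -XDELETE 'localhost:9200/updateauto'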

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






localhost:9200/_river/my_update_river/_meta
{
  "type" : "jdbc",
   "jdbc" : {
     "url" : "jdbc:mysql://localhost:3306/admin",
      "user" : "root",
      "password" : "",
      "poll" : "6s",
      "index" : "updateauto",
      "type" : "users",
      "schedule":"0/10 * * ? * *",
      "strategy" : "simple",
      "sql" : "select * from users"
    }
 }

Here's my code. When I delete the index it works for only 10 seconds, but when I refresh again the index is automatically recreated.


Re: Elasticsearch logs - sending to remote log server

2015-02-16 Thread Rafał Kuć


Hello!

A colleague of mine wrote a blog post regarding how to do that using Logstash, maybe this will come in handy:  http://blog.sematext.com/2015/01/19/grok-elasticsearch-logs-with-logstash/ 
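If you go the Logstash route described there, a minimal pipeline could look roughly like the sketch below. The log path, grok pattern and syslog output settings are assumptions you will need to adapt (and the syslog output plugin may have to be installed separately):

input {
  file {
    path => "/var/log/elasticsearch/*.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => [ "message", "\[%{TIMESTAMP_ISO8601:timestamp}\]\[%{LOGLEVEL:level}%{SPACE}\]\[%{DATA:source}\]%{SPACE}%{GREEDYDATA:msg}" ]
  }
}
output {
  syslog {
    host => "logserver.example.com"
    port => 514
    facility => "local0"
    severity => "informational"
  }
}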

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






Hi,
I've been trying to find examples of a logging.yml configuration that would let me send my Elasticsearch logs to a remote server, but I haven't found a good explanation of how to achieve this, especially using YAML.
How can I keep the previous configuration (slowlog, cluster log) but get rid of local storage and instead send the logs to a remote server with syslog?
All the operations needed to maintain the logs are going to be performed on the log server side, so there is no need to compress or rotate logs.
Thanks in advance!


Re: Dynamically appending a query (for data entitlements)

2014-12-20 Thread Rafał Kuć


Hello!

Are you allowing your users to talk to Elasticsearch directly? If so, apart from modifying Elasticsearch (either the base code itself or through a dedicated plugin), you can't achieve what you want. You could use aliases (http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-aliases.html) and define an alias per vendor that restricts the data returned. However, if users are allowed to talk to Elasticsearch directly, there is a high risk that one would just omit the alias and go directly to the indices.
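For illustration, a per-vendor filtered alias can be defined roughly like this (the index, alias and field names here are made up for the example):

curl -XPOST 'localhost:9200/_aliases' -d '
{
 "actions" : [
  { "add" : {
    "index" : "orders",
    "alias" : "orders_vendor_42",
    "filter" : { "term" : { "vendorid" : 42 } }
  } }
 ]
}'

Queries sent to the orders_vendor_42 alias then only see that vendor's documents.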

On the other hand, you probably have some application in front of Elasticsearch, and that is the perfect place to take the query from the user and modify it to include an additional filter.

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






I have a use case where for every query that is coming from the user to elasticsearch (ES), I want to add another query on ES server side before ES executes the query.

The reason I need to dynamically add this other query is for enforcing data-level entitlements.

e.g. Let's say that I am storing Orders in one of my ES indexes. Each Order has a vendorid associated with it.

When a user of my app submits a query for Orders, I want to make sure that ES returns only those Orders that belong to the vendorid of this user.

e.g. the user may have submitted a query to show all orders where order value >= $100. I want to append another query to this saying that only the Orders that are associated with the vendor id of this user should be returned.

How can I achieve this? In the servlet world we have the mechanism of FILTERS. Is something similar available in ES?

Thanks

Lokesh



Re: Document corruption in index, id field is garbled text

2014-08-15 Thread Rafał Kuć


Hello!

Your document is not corrupted - during indexing the _id field inside the document was set to null, and this is what _source shows. The _id you are seeing, which contains random characters, was simply generated by Elasticsearch; that is the default behavior if you don't specify an id. Let me give an example - try to index the following document:

$ curl -XPOST 'localhost:9200/test/doc/' -d '{
 "name" : "test",
 "_id" : null
}'

Now if you search for all the documents in that test index, you will see the following:
$ curl -XGET 'localhost:9200/test/_search?pretty'

{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "test",
      "_type" : "doc",
      "_id" : "VVERnyl_TU6iBLT3ndnniA",
      "_score" : 1.0,
      "_source":{
 "name" : "test",
 "_id" : null
}
    } ]
  }
}

As you can see, the top-level _id field is generated, while the _id field passed in _source is null.

As for finding documents with _id set to null in the _source, you can try a script filter - something like this:

curl -XGET 'localhost:9200/test/_search?pretty' -d '{
 "query" : {
  "filtered" : {
   "filter": { 
    "script" : {
     "script": "_source.containsKey(\"_id\") && _source._id == null"
    }
   }
  }
 }
}'

You are using 1.2.2, so you have to enable dynamic scripting for the above query to work (and disable it again once it is not needed), or just put the script on the file system where Elasticsearch can see it and reference it by name.
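As a sketch of the file-based variant: put the script body in the config/scripts directory, e.g. in a file called find_null_id.mvel (the file name is just an example, and the extension has to match the script language you use) containing:

_source.containsKey("_id") && _source._id == null

and then reference it by name instead of passing the source inline:

curl -XGET 'localhost:9200/test/_search?pretty' -d '{
 "query" : {
  "filtered" : {
   "filter": {
    "script" : {
     "script": "find_null_id"
    }
   }
  }
 }
}'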

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






We are using an ES 1.2.2 server with a Rails application as the client (ActiveRecord document model), and it seems as though some of the documents in the index might have been corrupted, because the id field of the document is some garbled text like "JorMcjefSe2_VQkP_ntd8Q" when it's supposed to be an Integer value based on the mappings.

As an example, here is a document in the index with a corrupted id. Notice the corrupted document id, and that the source id of the document is null:

curl -XGET http://localhost:9200/production_restaurants/restaurant/Gu-NGnHtR3ef4V2z4NfNsQ?pretty
{
  "_index" : "production_restaurants_20140714222814907",
  "_type" : "restaurant",
  "_id" : "Gu-NGnHtR3ef4V2z4NfNsQ",
  "_version" : 1,
  "found" : true,
  "_source":{"_id":null,"_type":"restaurant","title":"Wreck Bar and Grill","address":"Rum Point","phone":null,"location_hint":null,"popularity":0,"votes_percent":null,"price":null,"city":null,"state":"KY","zip":null,"city_id":375,"neighborhood_id":54892,"activity":null,"location":{"lat":19.371508,"lon":-81.271523},"closed":false,"neighborhood":{"title":"Grand Cayman","id":54892},"cuisines":[],"tags":[],"dishes":[],"restaurant_path":"http://www.urbanspoon.com"}
}

It seems like the corruption might be related to document deletion from the index, because such indexed documents are no longer in our MySQL data, which is the source for indexing documents in ES. Aside from finding what the issue with the corruption might be, I am right now looking to find such bad documents in the index. I am having no luck with either a regex query or the missing filter applied to the id field. It's a strange situation: because id is of type integer in my index mapping, I cannot apply a regex query to it and I get a NumberFormatException from Lucene.

Any suggestions for a query that I could use to find such corrupted documents and remove them ahead of time? Right now I've had to be very reactive and remove these as I discover them in my Rails logs / error reports. Before I consider a full reindex (which is heavy in and of itself) I would like to explore what other options I have, including what might be the cause of this corruption.

thanks
- anurag  


Re: Top Hits aggregator from example not working for me

2014-07-24 Thread Rafał Kuć


Hello Daniel!

MVEL scripts can't be sandboxed and they provide a lot of features; however, when Elasticsearch is run with too many user rights, that can be exploited, see for example http://bouk.co/blog/elasticsearch-rce/. This resulted in dynamic scripting being disabled by default in Elasticsearch 1.2.0 (http://www.elasticsearch.org/blog/elasticsearch-1-2-0-released/) and enabled again in 1.3.0 (http://www.elasticsearch.org/blog/elasticsearch-1-3-0-released/) for languages like Groovy, which can be sandboxed.

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






Thanks, Rafal. The top_hits aggregator is working now, but could you explain why this is a security risk? We will need to use this feature in production and now I am feeling uneasy about it.

On Thursday, July 24, 2014 10:28:32 AM UTC-5, Rafał Kuć wrote:
Hello!

The error is about scripting - dynamic scripting is disabled for MVEL. The simplest way to make it work is to turn on dynamic scripting by adding script.disable_dynamic: false to your elasticsearch.yml file. However, for production this is not recommended for security reasons. You can also try using a scripting language that is sandboxed and allows dynamic scripting, like Groovy.

-- 
Regards,
Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






Hello, I am trying to follow the field collapse example on this page, which uses the new 1.3.0 top_hits aggregator to return the top scoring document for a given criteria. To my knowledge, I set up the settings and mappings correctly, but I am getting a strange error when I try to run the query that the example provided.

curl -XDELETE "http://localhost:9200/personsearch"
curl -XPUT "http://localhost:9200/personsearch" -d'
{
 "settings": {
   "index": {
     "analysis": {
       "analyzer": {
         "idx_analyzer": {
           "tokenizer": "whitespace",
           "filter": [
             "lowercase",
             "snowball",
             "XYZSynFilter"
           ]
         },
         "sch_analyzer": {
           "tokenizer": "standard",
           "filter": [
             "standard",
             "lowercase",
             "stop"
           ]
         },
         "sch_comma_analyzer": {
           "tokenizer": "CommaTokenizer",
           "filter": [
             "standard",
             "lowercase",
             "stop"
           ]
         }
       },
       "filter": {
         "XYZSynFilter": {
           "type": "synonym",
           "synonyms": [
             "aids virus, aids, retrovirology, hiv"
           ],
           "expand": true,
           "ignore_case": true
         }
       },
       "tokenizer": {
         "CommaTokenizer": {
           "type": "pattern",
           "pattern": ","
         }
       }
     }
   }
 },
 "mappings": {
   "employees": {
     "properties": {
       "fullName": {
         "type": "string",
         "search_analyzer": "sch_analyzer"
       },
       "specialty": {
         "type": "string",
         "search_analyzer": "sch_comma_analyzer"
       }
     }
   }
 }
}'
curl -XPUT "http://localhost:9200/personsearch/employees/1" -d'
{
 "fullName": "Don White",
 "specialty": "Adult Retrovirology, aids, hiv"
}'
curl -XPUT "http://localhost:9200/personsearch/employees/2" -d'
{
 "fullName": "Don White",
 "specialty": "general practitioner, physician, general, primary care"
}'
curl -XPUT "http://localhost:9200/personsearch/employees/3" -d'
{
 "fullName": "Don White",
 "specialty": "icu, er"
}'
curl -XPUT "http://localhost:9200/personsearch/employees/4" -d'
{
 "fullName": "Terrance Gartner",
 "specialty": "oncology, cancer, research, tumor, polyp"
}'
curl -XPUT "http://localhost:9200/personsearch/employees/5" -d'
{
 "fullName": "Terrance Gartner",
 "specialty": "physician, general, GP, primary care, aids"
}'
curl -XPUT "http://localhost:9200/personsearch/employees/6" -d'
{
 "fullName": "

Re: Top Hits aggregator from example not working for me

2014-07-24 Thread Rafał Kuć


Hello!

The error is about scripting - dynamic scripting is disabled for MVEL. The simplest way to make it work is to turn on dynamic scripting by adding script.disable_dynamic: false to your elasticsearch.yml file. However, for production this is not recommended for security reasons. You can also try using a scripting language that is sandboxed and allows dynamic scripting, like Groovy.
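For reference, that is a single line in elasticsearch.yml (restart the node after changing it):

script.disable_dynamic: false

Treat it as a development-only switch - as mentioned above, leaving dynamic MVEL scripting enabled on a cluster that untrusted users can reach is a security risk.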

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






Hello, I am trying to follow the field collapse example on this page, which uses the new 1.3.0 top_hits aggregator to return the top scoring document for a given criteria. To my knowledge, I set up the settings and mappings correctly, but I am getting a strange error when I try to run the query that the example provided.

curl -XDELETE "http://localhost:9200/personsearch"
curl -XPUT "http://localhost:9200/personsearch" -d'
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "idx_analyzer": {
            "tokenizer": "whitespace",
            "filter": [
              "lowercase",
              "snowball",
              "XYZSynFilter"
            ]
          },
          "sch_analyzer": {
            "tokenizer": "standard",
            "filter": [
              "standard",
              "lowercase",
              "stop"
            ]
          },
          "sch_comma_analyzer": {
            "tokenizer": "CommaTokenizer",
            "filter": [
              "standard",
              "lowercase",
              "stop"
            ]
          }
        },
        "filter": {
          "XYZSynFilter": {
            "type": "synonym",
            "synonyms": [
              "aids virus, aids, retrovirology, hiv"
            ],
            "expand": true,
            "ignore_case": true
          }
        },
        "tokenizer": {
          "CommaTokenizer": {
            "type": "pattern",
            "pattern": ","
          }
        }
      }
    }
  },
  "mappings": {
    "employees": {
      "properties": {
        "fullName": {
          "type": "string",
          "search_analyzer": "sch_analyzer"
        },
        "specialty": {
          "type": "string",
          "search_analyzer": "sch_comma_analyzer"
        }
      }
    }
  }
}'
curl -XPUT "http://localhost:9200/personsearch/employees/1" -d'
{
  "fullName": "Don White",
  "specialty": "Adult Retrovirology, aids, hiv"
}'
curl -XPUT "http://localhost:9200/personsearch/employees/2" -d'
{
  "fullName": "Don White",
  "specialty": "general practitioner, physician, general, primary care"
}'
curl -XPUT "http://localhost:9200/personsearch/employees/3" -d'
{
  "fullName": "Don White",
  "specialty": "icu, er"
}'
curl -XPUT "http://localhost:9200/personsearch/employees/4" -d'
{
  "fullName": "Terrance Gartner",
  "specialty": "oncology, cancer, research, tumor, polyp"
}'
curl -XPUT "http://localhost:9200/personsearch/employees/5" -d'
{
  "fullName": "Terrance Gartner",
  "specialty": "physician, general, GP, primary care, aids"
}'
curl -XPUT "http://localhost:9200/personsearch/employees/6" -d'
{
  "fullName": "Terrance Gartner",
  "specialty": "emergency care, icu, ambulance, er, urgent"
}'
curl -XPUT "http://localhost:9200/personsearch/employees/7" -d'
{
  "fullName": "Carter Taylor",
  "specialty": "neurosurgery, brain surgery, brain tumor"
}'
curl -XPUT "http://localhost:9200/personsearch/employees/8" -d'
{
  "fullName": "Carter Taylor",
  "specialty": "trauma, icu, emergency care, ER, urgent care"
}'


Executing this search (per the example) gives me an error
curl -XGET "http://localhost:9200/personsearch/employees/_search?pretty=true" -d'
{
  "query": {
    "query_string": {
      "query": "icu"
    }
  },
  "aggs": {
    "most-rel-pro

Re: index doc, pdf, odt .... => cluster : yellow, why ?

2014-07-10 Thread Rafał Kuć


Hello!

Multiple nodes are for high availability - everything crashes eventually, and you don't want to lose Elasticsearch and the ability to search and analyze your data. They are also for better performance - you can have your index built of many shards that are spread across multiple nodes and have replicas of those shards, so you have physical copies that can handle the traffic if one node is not able to handle it on its own.
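For example, when creating an index you can state the layout explicitly (the index name and numbers below are only illustrative):

curl -XPUT 'localhost:9200/client_a' -d '
{
 "settings" : {
  "number_of_shards" : 5,
  "number_of_replicas" : 1
 }
}'

With a second node in the cluster the replica shards get allocated there, so losing either node still leaves a complete copy of the data that can serve searches.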

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






Thank you.

In my case I have many clients; I use one index per client and a single node.
I have only been working with Elasticsearch recently, and for the moment I don't understand why I would use a second node.


Re: index doc, pdf, odt .... => cluster : yellow, why ?

2014-07-10 Thread Rafał Kuć


Hello!

You can see this is exactly the case - you have a single node, and Elasticsearch won't assign a primary shard and its replicas to the same node. If you ran a second node, Elasticsearch would allocate those 15 unassigned shards and rebalance the shards across the two-node cluster - try that out, even locally.

Of course, if you are using that single node for development you can just remove the replicas by running:
 
curl -XPUT 'localhost:9200/_settings' -d '{"index" : { "number_of_replicas" : 0 }}'

However, remember that in production you usually don't want to have primaries and their replicas on the same node, and you want more than a single Elasticsearch node running.
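To see where the shards actually live you can look at the shard table:

curl -XGET 'localhost:9200/_cat/shards?v'

Unassigned replicas show up there with the state UNASSIGNED; after adding a second node (or setting replicas to 0) they should disappear.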

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






Health of cluster: yellow (35 50)

curl -XGET 'localhost:9200/_cluster/health?pretty'
{
   "cluster_name": "elasticsearch",
   "status": "yellow",
   "timed_out": false,
   "number_of_nodes": 1,
   "number_of_data_nodes": 1,
   "active_primary_shards": 35,
   "active_shards": 35,
   "relocating_shards": 0,
   "initializing_shards": 0,
   "unassigned_shards": 15
}


Re: index doc, pdf, odt .... => cluster : yellow, why ?

2014-07-10 Thread Rafał Kuć


Hello!

Can you check and paste the cluster health (curl -XGET 'localhost:9200/_cluster/health?pretty')?

You probably have a single Elasticsearch node and are using the default configuration? If so then, by default, Elasticsearch created 5 primary shards and 1 replica of each. This means that while the 5 primary shards were properly initialized, the replicas are left unassigned (by default ES doesn't put a primary shard and its replicas on the same node).


-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






Hello, this is my code when I index (I use it because I want to search inside a document, i.e. on its content).

PUT test

PUT test/my_type/_mapping
{
    "my_type" : {
        "properties" : {
            "my_file" : {
                "type" : "attachment",
                "fields" : {
                    "my_file" : { "term_vector":"with_positions_offsets"}
                }
            }
        }
    }
}


I open my doc, convert it to base64, and index it.

PUT test/my_type/1
{
   "my_file" : "my file in base64",
   "name": "the name of file"
    
}

And I search :

GET test/my_etape/_search?pretty=true
{"size": 50,
"query": {
    "query_string": {
       
       "query": "my keywords"
       
    }
    },"highlight": {"fields": {"my_file":{"term_vector" : "with_positions_offsets"}
}
}
}

I don't understand why my cluster turns yellow.
Do you have an explanation for me, or another way to do this, please?

Thank you in advance.




Re: beginner question

2014-04-26 Thread Rafał Kuć


Hello!

You should have your results returned unless you disabled the _source field in the mappings. Can you show us the result of:

curl -XGET 'localhost:9200/megacorp/employee/_mapping?pretty'
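For reference, the symptom you describe is what you get when the type was created with _source disabled, for example with a mapping along these lines (just an illustration, not necessarily what you have):

curl -XPUT 'localhost:9200/megacorp' -d '
{
 "mappings" : {
  "employee" : {
   "_source" : { "enabled" : false }
  }
 }
}'

With such a mapping a GET returns only the document metadata (_index, _type, _id, _version, found), because the original JSON is simply not stored.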

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






Hi ,

 Can you try the insert using POST and paste the result?

Thanks
           Vineeth



On Sat, Apr 26, 2014 at 9:23 PM, Jinyuan Zhou <zhou.jiny...@gmail.com> wrote:
I am following the Definitive Guide. I added this document by issuing this in Sense:
PUT /megacorp/employee/1
{
    "first_name" : "John",
    "last_name" :  "Smith",
    "age" :        25,
    "about" :      "I love to go rock climbing",
    "interests": [ "sports", "music" ]
}

when I try to retrieve the document
by doing this:

GET /megacorp/employee/1

But I didn't get the original JSON back. I get some description of the stored document instead, like this:
{
   "_index": "megacorp",
   "_type": "employee",
   "_id": "1",
   "_version": 2,
   "found": true
}.

I did this on the command line using curl. The same result comes back.

What am I missing? Thanks,


Re: Can we perform the text search present in the images or pdf files through elasticsearch

2014-04-18 Thread Rafał Kuć
Hello!

The attachment plugin will use Tika to extract the text from the binary
file content that you send as base64. Tika does a good job with
text extraction; however, you have to test yourself whether your files
are parsed well enough for your use case.
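A quick way to test it end to end (a rough sketch - it assumes an index called docs whose file field is already mapped with the attachment type, and base64 -w 0 is the GNU coreutils flag for unwrapped output; check your base64 man page if that flag is not available):

curl -XPUT 'localhost:9200/docs/doc/1' -d "{
  \"file\" : \"$(base64 -w 0 /path/to/sample.pdf)\"
}"

curl -XGET 'localhost:9200/docs/_search?q=file:yourkeyword&pretty'

If Tika can parse the file, the extracted text is searchable; if it can't, the document still indexes but the search simply won't match.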

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/


> So can I say that the mapper-attachment plugin is made to work like this:
> whether I am sending a text file, a pdf file or an image file to ES, the plugin
> will extract the *text content* in all three scenarios and will store it
> in ES, and then it will be available for search as well?



> --
> View this message in context:
> http://elasticsearch-users.115913.n3.nabble.com/Can-we-perform-the-text-search-present-in-the-images-or-pdf-files-through-elasticsearch-tp4054367p4054374.html
> Sent from the ElasticSearch Users mailing list archive at Nabble.com.



Re: Can we perform the text search present in the images or pdf files through elasticsearch

2014-04-17 Thread Rafał Kuć
Hello!

You'll need to send the file contents to Elasticsearch in base64 form
and Elasticsearch will use Tika to extract data from the file.

However, in a typical case you would rather store not the whole binary
file content (as it can be quite big), but a path to the file, so that
the application that queries Elasticsearch knows where to look for the
original file itself.
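A mapping following that pattern could look roughly like this (index, type and field names are made up, and it requires the mapper-attachments plugin to be installed):

curl -XPUT 'localhost:9200/docs' -d '
{
 "mappings" : {
  "doc" : {
   "properties" : {
    "content" : { "type" : "attachment" },
    "path" : { "type" : "string", "index" : "not_analyzed" }
   }
  }
 }
}'

You index the base64 content in the content field and the original file location in the path field, and your application uses path from the search hit to fetch the file itself.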

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/


> Hi ,

> If I am not wrong you are talking about 
> https://github.com/elasticsearch/elasticsearch-mapper-attachments
> <https://github.com/elasticsearch/elasticsearch-mapper-attachments>  

> So with this I can index attachments (say a pdf file) and they will be stored
> as base64. So is this plugin made for searching the text
> present in the pdf file as well?

> If yes, what will the result be if I search for some keyword in the attachment -
> will it return the proper text data or the base64 encoded data?

> ~Prashant



> --
> View this message in context:
> http://elasticsearch-users.115913.n3.nabble.com/Can-we-perform-the-text-search-present-in-the-images-or-pdf-files-through-elasticsearch-tp4054367p4054371.html
> Sent from the ElasticSearch Users mailing list archive at Nabble.com.



Re: Can we perform the text search present in the images or pdf files through elasticsearch

2014-04-17 Thread Rafał Kuć
Hello!

Please look at the attachment plugin for Elasticsearch: 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-attachment-type.html

It uses Apache Tika under the hood. The list of supported formats is
available here: http://tika.apache.org/0.10/formats.html

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/


> Hi ES users,

> Is there any way we can perform text search on the images or pdf
> files through elasticsearch?

> I mean to say: suppose I have a pdf/image file (stored in ES as base64)
> indexed in ES. And if that image file contains "prashant" as
> text in it, is there a way I can search for "prashant" and get the
> record for that image as well?



> --
> View this message in context:
> http://elasticsearch-users.115913.n3.nabble.com/Can-we-perform-the-text-search-presnet-in-the-images-or-pdf-files-through-elasticsearch-tp4054367.html
> Sent from the ElasticSearch Users mailing list archive at Nabble.com.



Re: ElasticSearch Benchmark

2014-04-10 Thread Rafał Kuć


Hello!

It depends on what you want to test. If you want to test indexing - start from scratch, see how many documents you can index into a clean Elasticsearch cluster and when you start to see slowdowns. Try to see what the bottleneck is, if any. In general, such tests depend on how much data you have and how much indexing you are planning to do.

If you want to test querying performance - start with a setup that you assume should be OK. Index your data and start testing, for example by sending production-like queries with JMeter. That will give you information on how the cluster behaves in such a setup. Then you can change the deployment and tune it - for example adding replicas, or re-sharding to have more or fewer shards, depending on the data and its volume.

During all tests you should be monitoring your cluster. You can use anything available; for performance testing even command-line tools will do, as they help you understand what is happening at the operating system level and how the garbage collector is performing.
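As a very rough starting point for the indexing side, you can simply time bulk requests against a clean index and watch where the numbers start to degrade (the index name and file are placeholders; the file has to be in the bulk API format):

curl -XPUT 'localhost:9200/bench'
time curl -s -XPOST 'localhost:9200/bench/_bulk' --data-binary @bulk_batch.json > /dev/null
curl -XGET 'localhost:9200/bench/_count?pretty'

For the query side, replay a log of production-like queries with a tool such as JMeter while watching CPU, I/O and garbage collection on the nodes - that gives you the numbers that matter for your own data and hardware.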

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






Could you please give me some more detailed suggestions on how to test Elasticsearch? Thanks.

On Thursday, April 10, 2014 10:13:49 AM UTC-4, Mark Walkom wrote:
The best way to know is to test it yourself.
It's very dependent on your hardware, your settings and the data that you are indexing.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 10 April 2014 23:49, Leslie Hawthorn <leslie@elasticsearch.com> wrote:
Hi Jianjun,

Thank you for the additional details. I think we will need some more information but I will let someone who is better skilled at benchmarking Elasticsearch ask you about it. 

Welcome to the mailing list! We are happy to have you here and to help. 

Cheers,
LH

On Thu, Apr 10, 2014 at 3:42 PM, Jianjun Hu <4nex...@gmail.com> wrote:
Hi Leslie,

Thanks for your reply.

We have some data in a MySQL database and some PDF documents. We want to index them and make them available to our users, so we decided to use Elasticsearch. Before using Elasticsearch we want to know how it performs, so we want to benchmark it. However, we don't know how to do that. I'm not sure if I have expressed my question clearly.

Best,
Jianjun


On Thursday, April 10, 2014 9:11:37 AM UTC-4, Leslie Hawthorn wrote:
Hi Jianjun,



On Thu, Apr 10, 2014 at 3:02 PM, Jianjun Hu <4nex...@gmail.com> wrote:
Hi all,

How do I benchmark Elasticsearch? Thanks!


The answer to this question is "it depends." It would be very helpful to describe a bit more about what you are working on so people can give a better answer to you. :)

Best,
LH

-- 
Leslie Hawthorn
Community Manager
http://elasticsearch.com

Other Places to Find Me:
Freenode: lh
Twitter: @lhawthorn
Skype: mebelh
Voice: +31 20 794 7300



-- 
Leslie Hawthorn
Community Manager
http://elasticsearch.com

Other Places to Find Me:
Freenode: lh
Twitter: @lhawthorn
Skype: mebelh
Voice: +31 20 794 7300


Re: synonym token filter

2013-12-30 Thread Rafał Kuć


Hello!

You can't update the synonyms on an already opened index; first you need to close it. You should also update the analyzer on the field you want to use synonyms on, and re-index your data if the synonym filter is part of the analysis during indexing - already indexed data won't take the synonyms into consideration. So, if you can't delete the index, you should first close it, then update the settings and reopen it. For example like this (I assume the test index is already created):

1. Close the index:
curl -XPOST 'http://localhost:9200/test/_close'

2. Update the settings:
curl -XPUT 'localhost:9200/test/_settings' -d '{
 "settings" : {
  "analysis" : {
   "analyzer" : {
    "synonym" : {
     "tokenizer" : "whitespace",
     "filter" : ["synonym"]
    }
   },
   "filter" : {
    "synonym" : {
     "type" : "synonym",
     "synonyms_path" : "synonym.txt"
    }
   }
  }
 }
}'

3. Update the mappings (the name field is included just in case you want to update the analyzer on more than the _all field):
curl -XPUT 'localhost:9200/test/doc/_mapping' -d '{
 "doc" : {
  "_all" : {
   "enabled" : true,
   "analyzer" : "synonym"
  },
  "properties" : {                
   "name" : { "type" : "string", "index" : "analyzed", "analyzer" : "synonym" }
  }
 }
}'

4. Open the index:
curl -XPOST 'http://localhost:9200/test/_open'

After that, you should have your filter in the settings. You can check it by running:

curl -XGET 'http://localhost:9200/test/_settings?pretty'
curl -XGET 'http://localhost:9200/test/_mapping?pretty'

Now to test it, just index a new document:

curl -XPOST 'localhost:9200/test/doc/1' -d '{"name":"aaa test"}'

And now test the search:

curl -XGET 'localhost:9200/test/_search?pretty' -d '{
 "query" : {
  "match" : {
   "_all" : "bbb"
  }
 }
}'

And it should be working:

{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 0.625,
    "hits" : [ {
      "_index" : "test",
      "_type" : "doc",
      "_id" : "1",
      "_score" : 0.625, "_source" : {"name":"aaa test"}
    } ]
  }
}

However, remember about data re-indexing :)


-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






Rafal, thanks for a quick reply! I think I already understood how to do this for a new index. The issue is how do you do this for an existing index? Am I supposed to do something like this?:

curl -XPOST 'http://localhost:9200/my_twitter_river/settings/' -d '
{
      "analysis" : {
         "filter" : {
            "synonym" : {
               "type" : "synonym",
               "synonyms_path" : "synonym.txt"
            }
         }
      }
}
'

Also, some posts seem to indicate that if I run a query on _all fields, this won't be taken into account anyway. Is this true?

Thanks!

On Monday, December 30, 2013 1:51:59 PM UTC+1, Rafał Kuć wrote:
Hello!

This is part of the index settings and mappings you send to Elasticsearch, for example during index creation. The synonyms_path property is relative to the config directory. So if your file is synonym.txt, it should go to $ES_HOME/config and you could send the following command to create an index:

curl -XPOST 'localhost:9200/test' -d '
{
"settings": {
 "index" : {
  "analysis" : {
   "analyzer" : {
    "synonym" : {
     "tokenizer" : "whitespace",
     "filter" : ["synonym"]
    }
   },
   "filter" : {
    "synonym" : {
     "type" : "synonym",
     "synonyms_path" : "synonym.txt"
    }
   }
  }
 }
},
"mappings" : {
 "test" : {
  "properties" : {                
   "name" : { "type" : "string", "index" : "analyzed", "analyzer" : "synonym" }
  }
 }
}
}'

My synonym.txt file had the following contents:
aaa=>bbb

Now to test it, just run the following command:
curl -XGET 'localhost:9200/test/_analyze?analyzer=synonym&text=aaa+test&pretty=true'

Re: synonym token filter

2013-12-30 Thread Rafał Kuć


Hello!

This is part of the index settings and mappings you send to Elasticsearch, for example during index creation. The synonyms_path property is relative to the config directory. So if your file is synonym.txt, it should go to $ES_HOME/config and you could send the following command to create an index:

curl -XPOST 'localhost:9200/test' -d '
{
 "settings": {
  "index" : {
   "analysis" : {
    "analyzer" : {
     "synonym" : {
      "tokenizer" : "whitespace",
      "filter" : ["synonym"]
     }
    },
    "filter" : {
     "synonym" : {
      "type" : "synonym",
      "synonyms_path" : "synonym.txt"
     }
    }
   }
  }
 },
 "mappings" : {
  "test" : {
   "properties" : {                
    "name" : { "type" : "string", "index" : "analyzed", "analyzer" : "synonym" }
   }
  }
 }
}'

My synonym.txt file had the following contents:
aaa=>bbb

Now to test it, just run the following command:
curl -XGET 'localhost:9200/test/_analyze?analyzer=synonym&text=aaa+test&pretty=true'

And you should get something like this:
{
  "tokens" : [ {
    "token" : "bbb",
    "start_offset" : 0,
    "end_offset" : 3,
    "type" : "SYNONYM",
    "position" : 1
  }, {
    "token" : "test",
    "start_offset" : 4,
    "end_offset" : 8,
    "type" : "word",
    "position" : 2
  } ]
}

So, as you can see it works. Can you check if it works for you?

-- 
Regards,
 Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/






Hi,

I'm trying to install a synonym token filter for an existing index and having a hard time understanding how this should be done. I've created a synonym.txt file, but I can't understand how to implement the config described in the doc: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-synonym-tokenfilter.html. Is this a file? If so, should it go into the config directory? Or is this supposed to be PUT via curl? None of the things I've tried so far worked. Please help! 

Thanks a lot,
Alex