Re: I need to call my server xxx.xx.xx.xxx:xxxxx using elasticsearch api in python

2014-09-11 Thread Magnus Bäck
On Friday, September 12, 2014 at 04:45 CEST,
 Nimit Jain  wrote:

> Thanks Honza for your reply. While trying the below code with
> print(es.info()) I am getting the below error.
> pydev debugger: starting (pid: 9652)
> GET / [status:401 request:0.563s]
> Traceback (most recent call last):

[...]

> elasticsearch.exceptions.TransportError: TransportError(401, '')
> Here the status is 401. Please help.

HTTP status 401 is "Unauthorized". Do you have a proxy that
expects authentication between your host and Elasticsearch?
I suspect Elasticsearch itself isn't capable of returning 401.
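If something on that path does want credentials, e.g. a proxy or a security
plugin in front of Elasticsearch, the Python client can send HTTP Basic auth.
A minimal sketch, with placeholder credentials:

    from elasticsearch import Elasticsearch

    # http_auth is sent as HTTP Basic auth with every request
    es = Elasticsearch("10.120.xx.xxx:6xxx8", http_auth=("user", "password"))
    print(es.info())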

-- 
Magnus Bäck| Software Engineer, Development Tools
magnus.b...@sonymobile.com | Sony Mobile Communications



Re: Some indices failing with "SearchPhaseExecutionException[Failed to execute phase [query], all shards failed]"

2014-09-11 Thread Magnus Bäck
On Friday, September 12, 2014 at 08:53 CEST,
 Kevin DeLand  wrote:

> Everything was working fine when all of a sudden some indices started
> failing.
> GET localhost:9200/logstash-2014.09.11/_search
> yields response:
> {"error":"SearchPhaseExecutionException[Failed to execute phase
> [query], all shards failed]","status":503}

How's the cluster's health? Anything interesting in the Elasticsearch
logs?
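A quick way to check both, assuming the default HTTP port and a 1.x cluster:

    curl 'localhost:9200/_cluster/health?pretty'
    curl 'localhost:9200/_cat/shards/logstash-2014.09.11?v'

The second command lists the state of every shard of that index, which usually
narrows down an "all shards failed" error.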

-- 
Magnus Bäck| Software Engineer, Development Tools
magnus.b...@sonymobile.com | Sony Mobile Communications



Some indices failing with "SearchPhaseExecutionException[Failed to execute phase [query], all shards failed]"

2014-09-11 Thread Kevin DeLand
Everything was working fine when all of a sudden some indices started 
failing.

GET localhost:9200/logstash-2014.09.11/_search
yields response:
{"error":"SearchPhaseExecutionException[Failed to execute phase [query], 
all shards failed]","status":503}




Re: disk usage not balanced

2014-09-11 Thread AALISHE
ref..



Re: Simple query string does not work

2014-09-11 Thread Dan
Thanks!! That's it.
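For the archive, the wildcard form suggested below would look roughly like this
(a sketch; the and-filter from the original query is left out):

{
  "from": 0,
  "size": 10,
  "query": {
    "wildcard": { "tags": "*blaat*" }
  }
}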

On Thursday 11 September 2014 18:23:31 UTC+2, vineeth mohan wrote:
>
> Hello Dan , 
>
> The format of the entire query is wrong.
> You need to at-least specify the type of query that you are using.
> In this case , wild card query would be the best fit.
>
> Thanks 
>   Vineeth
>
> On Thu, Sep 11, 2014 at 9:42 PM, Dan > wrote:
>
>> Hi Vineeth,
>>
>>
>> Thanks for your reply.
>>
>>
>> {"from":0,"size":10,"query":{"field":{"tags":"*blaat*"}},"filter":{"and":[{"term":{"representative":1}},{"term":{"is_gift":0}},{"term":{"active":1}},{"terms":{"website_ids":[1],"execution":"and"}}]}}
>>
>>
>> Is this enough?
>>
>>
>> Thanks!
>>
>>
>> On Thursday 11 September 2014 15:08:05 UTC+2, vineeth mohan wrote:
>>>
>>> Hello Dan , 
>>>
>>> Can you paste the above as JSON.
>>> I am not exactly able to make out what is the query.
>>>
>>> Thanks
>>>   Vineeth
>>>
>>> On Thu, Sep 11, 2014 at 5:50 PM, Dan  wrote:
>>>
 Nobody? :(

 On Wednesday 10 September 2014 21:24:19 UTC+2, Dan wrote:

> Hi Guys,
>
> I have a simple query which is not working. I am using the same query 
> on another server with the same mapping; where it does work.
> Everything else is working like a charm.
>
> I am talking about the following query.
> The problem is to be found when I using the query > field > tags 
> query. 
> When I do not use this part, everything works fine.
>
> Array
> (
> [from] => 0
> [size] => 10
> [query] => Array
> (
>     [field] => Array
> (
> [tags] => *blaat*
> )
>
> )
>
> [filter] => Array
> (
> [and] => Array
> (
> [0] => Array
> (
> [term] => Array
> (
> [representative] => 1
> )
>
> )
>
> [1] => Array
> (
> [term] => Array
> (
> [is_gift] => 0
> )
>
> )
>
> [2] => Array
> (
> [term] => Array
> (
> [active] => 1
> )
>
> )
>
> [3] => Array
> (
> [terms] => Array
> (
> [website_ids] => Array
> (
> [0] => 1
> )
>
> [execution] => and
> )
>
> )
>
> )
>
> )
>
> )
>
>
> The mapping is as follows:
>
>
>   "product" : {
> "properties" : {
>   "action" : {
> "type" : "string"
>   },
>   "active" : {
> "type" : "string"
>   },
>   "brand_ids" : {
> "type" : "string"
>   },
>   "tags" : {
> "type" : "string"
>   },
> .
>
>
> When I index an item I am using the following part:
>
> Array
> (
> [2359] => Array
> (
> 
> [tags] => blaat, another blaat, etc
>   
>
> Maybe an installation configuration issue?
>
> Does anyone have a clue?
>
> Thanks!
>

Re: Elasticsearch restart task hangs up in Ansible Playbook

2014-09-11 Thread Roopendra Vishwakarma
Fixed the problem. The cause of the issue: I was using an old version of the service 
wrapper script for Elasticsearch. After updating /etc/init.d/elasticsearch, 
everything is working fine.
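For anyone else stuck with an init script that does not detach cleanly, one
workaround sketch (host and port are assumptions) is to fire the restart
asynchronously and wait for the HTTP port instead of blocking on the service
module:

- name: Restart Elasticsearch
  service: name=elasticsearch state=restarted
  async: 120
  poll: 0

- name: Wait for Elasticsearch to come back up
  wait_for: host=127.0.0.1 port=9200 delay=10 timeout=300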


On Wednesday, 10 September 2014 15:52:50 UTC+5:30, Roopendra Vishwakarma 
wrote:
>
> I am using an ansible playbook to install elasticsearch and an elasticsearch 
> plugin. After successful installation of Elasticsearch I wrote one 
> ansible task to restart Elasticsearch. It restarts elasticsearch, but the 
> ansible playbook hangs on this task. My ansible task is:
>
> - name: "Ensure Elasticsearch is Running"
>   service: name=elasticsearch state=restarted
>
>   
> I also tried with `shell: sudo service elasticsearch restart` but no luck. 
>
> **Elasticsearch Version** : 1.3.0  
> **Ansible Version**   : 1.5.5
>
> Verbose Output for the task is :
>
>  ESTABLISH CONNECTION FOR USER: prod on PORT 22 TO 
> app101.host.com
>  REMOTE_MODULE service name=elasticsearch 
> state=restarted
>  EXEC /bin/sh -c 'mkdir -p 
> $HOME/.ansible/tmp/ansible-tmp-1410327554.04-167734794521310 
>&& chmod a+rx 
> $HOME/.ansible/tmp/ansible-tmp-1410327554.04-167734794521310 && echo 
> $HOME/.ansible/tmp/ansible-tmp-1410327554.04-167734794521310'
>  PUT /tmp/tmpjIMUkF TO 
> /home/prod/.ansible/tmp/ansible-tmp-1410327554.04-167734794521310/service
>  EXEC /bin/sh -c 'sudo -k && sudo -H -S -p "[sudo 
> via ansible, key=yeztwzmmsgyvjjqmmunnvtbopcplrbso] 
>   password: " -u root /bin/sh -c '"'"'echo 
> SUDO-SUCCESS-yeztwzmmsgyvjjqmmunnvtbopcplrbso; /usr/bin/python 
> /home/prod/.ansible/tmp/ansible-tmp-1410327554.04-167734794521310/service;
>   rm -rf 
> /home/prod/.ansible/tmp/ansible-tmp-1410327554.04-167734794521310/ 
> >/dev/null 2>&1'"'"''
>
> Any Suggestion?
>
> While starting, Elasticsearch now shows its log on the shell; in the earlier version it 
> just showed [OK]. Does this cause any problem in the ansible playbook?
>



Re: Aggregations across values returned by term then date histogram

2014-09-11 Thread ppearcy
Hi,
  I am doing a terms aggregation on user with a sub date histogram 
aggregation to get a time series per user. I then want to perform a stats 
aggregation over all the values of each date bucket across users. 

Thanks,
Paul
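In case it helps anyone reading along: flipping the nesting so the date
histogram is the outer aggregation gives one bucket per time interval with a
per-user breakdown inside it, which at least limits the client-side stats work
to one bucket at a time. A sketch (the user and time field names are
assumptions):

{
  "size": 0,
  "aggs": {
    "per_week": {
      "date_histogram": { "field": "time", "interval": "week", "min_doc_count": 0 },
      "aggs": {
        "per_user": { "terms": { "field": "user" } }
      }
    }
  }
}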

On Thursday, September 11, 2014 8:32:13 PM UTC-4, vineeth mohan wrote:
>
> Hello , 
>
> I didn't get your question completely , but then i feel a simple date 
> histogram query should do the trick.
>
>   "aggs" : {
>
> "{{time_interval}}": {
>   "date_histogram": {
> "field": "time",
> "interval": "{{time_interval}}",
> "min_doc_count": 0
>   }
> }
>   }
>
> Let me know if this doesn't fit your need and if so , what other data you are 
> looking for .
>
> Thanks
>  Vineeth
>
>
> On Thu, Sep 11, 2014 at 11:38 PM, ppearcy > 
> wrote:
>
>> I haven't been able to figure out how to do this and it may not be 
>> possible, but figured I'd ask. 
>>
>> I have a query with multiple aggregations that looks like this:
>> https://gist.github.com/ppearcy/0c6a86ebf32a0bbcb1fc
>>
>> This returns a time series of data per user: 
>> https://gist.github.com/ppearcy/7ceac858da2e647ff341
>>
>> I want to do a stats aggregation across all the values for each week to 
>> provided per weekly statistical view of things. 
>>
>> Currently, I am doing these computations client side and it works pretty 
>> well, but have performance concerns around merging lots of time series 
>> streams. 
>>
>> Any help or ideas would be much appreciated. 
>>
>> Thanks!
>> Paul
>>
>
>



Re: complex nested query

2014-09-11 Thread vineeth mohan
Hello Yancey ,


   1. Date aggregation -
   
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-datehistogram-aggregation.html#search-aggregations-bucket-datehistogram-aggregation
   2. Sum aggregation -
   
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-metrics-sum-aggregation.html
   3. Nested aggregation -
   
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-nested-aggregation.html
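Put together, a rough sketch of those three (assuming the detail field is
mapped as nested; the sum-greater-than-300 check per date still has to be done
client side, as described earlier):

{
  "size": 0,
  "aggs": {
    "details": {
      "nested": { "path": "detail" },
      "aggs": {
        "per_day": {
          "date_histogram": { "field": "detail.date", "interval": "day" },
          "aggs": {
            "total_price": { "sum": { "field": "detail.price" } }
          }
        }
      }
    }
  }
}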

Thanks
  Vineeth

On Fri, Sep 12, 2014 at 8:17 AM, yancey  wrote:

> Hi!
> I cannot find how to filter dates based on the aggregation result in the
> documents. btw, can I use a function to do this?
>
> Thanks && Best Regard!
>
>
>
> On 12 September 2014, at 1:18, yancey wrote:
>
> Vineeth!
>
> Thanks for your reply! I’ll try your solution,hope this can solve my
> problem.
>
>
> Thanks && Best Regard!
>
>
>
> On 12 September 2014, at 0:12, vineeth mohan wrote:
>
> Hello ,
>
> I don't feel you can do this in a single call.
> What i have in mind would be
>
>
>1. Run a two-level aggregation query: a date histogram aggregation
>on the first level and a term aggregation on the second, with a sum
>aggregation on the price field at the second level. You might need to use a nested
>aggregation here as well.
>2. Once you get the results, choose the dates based on the criteria,
>i.e. sum of price more than 300. With the dates you are interested in,
>fire the next query with all the interesting date ranges as a range query.
>
> Thanks
>  Vineeth
>
> On Thu, Sep 11, 2014 at 5:04 PM, 闫旭  wrote:
>
>> Anyone can help this?
>>
>> Thanks && Best Regard!
>>
>> On 11 September 2014, at 13:24, 闫旭 wrote:
>>
>> Thank you! But a nested bool query cannot sum all the prices within the date
>> range. How can I do this?
>>
>> Thx again.
>>
>> Thanks && Best Regard!
>>
>> On 11 September 2014, at 12:04, vineeth mohan wrote:
>>
>> Hello ,
>>
>>
>> First you need to declare field details as nested. -
>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-nested-type.html#mapping-nested-type
>>
>> Then do a bool query with the date range constraint and the price range constraint
>>
>> Thanks
>> Vineeth
>>
>> On Thu, Sep 11, 2014 at 8:53 AM, 闫旭  wrote:
>>
>>> Dear All!
>>>
>>> I have a problem with a complex nested query
>>> the docs like this:
>>> _id:1
>>> {
>>> "detail":[
>>> {
>>> "date":"2014-09-01",
>>> "price”:50
>>> },
>>> {
>>> "date":"2014-09-02",
>>> "price”:100
>>> },
>>> {
>>> "date":"2014-09-03",
>>> "price":100
>>> },
>>> {
>>> "date":"2014-09-04",
>>> "price":200
>>> }
>>> ]
>>>
>>> }
>>> _id:2
>>> {
>>> "detail":[
>>> {
>>> "date":"2014-09-01",
>>> "price":100
>>> },
>>> {
>>> "date":"2014-09-02",
>>> "price":200
>>> },
>>> {
>>> "date":"2014-09-03",
>>> "price":300
>>> },
>>> {
>>> "date":"2014-09-04",
>>> "price":200
>>> }
>>> ]
>>>
>>> }
>>> I will filter the docs with “date in [2014-09-01, 2014-09-03] and
>>> sum(price) > 300”.
>>> I only find some way with “aggregation”, but it can only stat the sum of
>>> all docs.
>>>
>>> How Can I solve the problem??
>>>
>>>
>>> Thanks && Best Regard!
>>>
>>>
>>>
>

Re: complex nested query

2014-09-11 Thread yancey
Hi!
I cannot find how to filter dates based on the aggregation result in the documents. 
btw, can I use a function to do this? 

Thanks && Best Regard!



On 12 September 2014, at 1:18, yancey wrote:

> Vineeth!
> 
> Thanks for your reply! I’ll try your solution,hope this can solve my problem.
> 
> 
> Thanks && Best Regard!
> 
> 
> 
> On 12 September 2014, at 0:12, vineeth mohan wrote:
> 
>> Hello , 
>> 
>> I don't feel you can do this in a single call.
>> What i have in mind would be 
>> 
>> Run a two-level aggregation query: a date histogram aggregation on the first 
>> level and a term aggregation on the second, with a sum aggregation on the price 
>> field at the second level. You might need to use a nested aggregation here as well.
>> Once you get the results, choose the dates based on the criteria, i.e. sum 
>> of price more than 300. With the dates you are interested in, fire the next 
>> query with all the interesting date ranges as a range query. 
>> Thanks
>>  Vineeth
>> 
>> On Thu, Sep 11, 2014 at 5:04 PM, 闫旭  wrote:
>> Anyone can help this?
>> 
>> Thanks && Best Regard!
>> 
>> On 11 September 2014, at 13:24, 闫旭 wrote:
>> 
>>> Thank you! But a nested bool query cannot sum all the prices within the date 
>>> range. How can I do this?
>>> 
>>> Thx again.
>>> 
>>> Thanks && Best Regard!
>>> 
>>> On 11 September 2014, at 12:04, vineeth mohan wrote:
>>> 
 Hello , 
 
 
 First you need to declare field details as nested. - 
 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-nested-type.html#mapping-nested-type
 
 Then do a bool query with the date range constraint and the price range constraint
 
 Thanks
 Vineeth
 
 On Thu, Sep 11, 2014 at 8:53 AM, 闫旭  wrote:
 Dear All!
 
 I have a problem with a complex nested query
 the docs like this:
 _id:1
 {
"detail":[
{
"date":"2014-09-01",
"price”:50
},
{
"date":"2014-09-02",
"price”:100
},
{
"date":"2014-09-03",
"price":100
},
{
"date":"2014-09-04",
"price":200
}
]
 
 }
 _id:2
 {
"detail":[
{
"date":"2014-09-01",
"price":100
},
{
"date":"2014-09-02",
"price":200
},
{
"date":"2014-09-03",
"price":300
},
{
"date":"2014-09-04",
"price":200
}
]
 
 }
 I will filter the docs with “date in [2014-09-01, 2014-09-03] and 
 sum(price) > 300”.
 I only find some way with “aggregation”, but it can only stat the sum of 
 all docs.
 
 How Can I solve the problem?? 
 
 
 Thanks && Best Regard!
 
 
 

Re: I need to call my server xxx.xx.xx.xxx:xxxxx using elasticsearch api in python

2014-09-11 Thread Nimit Jain
Thanks Honza for your reply. While trying the below code with 
print(es.info()) I am getting the below error.

pydev debugger: starting (pid: 9652)
GET / [status:401 request:0.563s]
Traceback (most recent call last):
  File 
"C:\Users\nimitja\Downloads\adt-bundle-windows-x86_64-20140702\adt-bundle-windows-x86_64-20140702\eclipse\plugins\org.python.pydev_3.6.0.201406232321\pysrc\pydevd.py",
 
line 1845, in 
debugger.run(setup['file'], None, None)
  File 
"C:\Users\nimitja\Downloads\adt-bundle-windows-x86_64-20140702\adt-bundle-windows-x86_64-20140702\eclipse\plugins\org.python.pydev_3.6.0.201406232321\pysrc\pydevd.py",
 
line 1373, in run
pydev_imports.execfile(file, globals, locals)  # execute the script
  File 
"C:\Users\nimitja\Downloads\adt-bundle-windows-x86_64-20140702\adt-bundle-windows-x86_64-20140702\eclipse\plugins\org.python.pydev_3.6.0.201406232321\pysrc\_pydev_execfile.py",
 
line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc) 
  File 
"C:\Users\nimitja\workspace\ParsingClientAndServerLogs\serverlog\KibanaRestAPIConnection.py",
 
line 12, in 
print(es.info())
  File "C:\Python34\lib\site-packages\elasticsearch\client\utils.py", line 
68, in _wrapped
return func(*args, params=params, **kwargs)
  File "C:\Python34\lib\site-packages\elasticsearch\client\__init__.py", 
line 159, in info
_, data = self.transport.perform_request('GET', '/', params=params)
  File "C:\Python34\lib\site-packages\elasticsearch\transport.py", line 
284, in perform_request
status, headers, data = connection.perform_request(method, url, params, 
body, ignore=ignore, timeout=timeout)
  File 
"C:\Python34\lib\site-packages\elasticsearch\connection\http_urllib3.py", 
line 55, in perform_request
self._raise_error(response.status, raw_data)
  File "C:\Python34\lib\site-packages\elasticsearch\connection\base.py", 
line 97, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, 
error_message, additional_info)
elasticsearch.exceptions.TransportError: TransportError(401, '')



Here the status is 401. Please help.

Regards,
Nimit Jain

On Thursday, 11 September 2014 14:56:10 UTC+5:30, Honza Král wrote:
>
> Hi, 
>
> the code you have here should work, what do you get when you try: 
>
> from elasticsearch import Elasticsearch 
>
> es = Elasticsearch("10.120.xx.xxx:6xxx8") 
> print(es.info()) 
>
> Thanks 
>
> On Thu, Sep 11, 2014 at 11:14 AM, Nimit Jain  > wrote: 
> > Hi All, 
> > I need to call my server xxx.xx.xx.xxx:x using elasticsearch 
> api 
> > in python but I am not able to get the proper code to run that. Below is 
> > that I have done yet. 
> > 
> > from datetime import datetime 
> > from elasticsearch import Elasticsearch 
> > 
> > es = Elasticsearch("10.120.xx.xxx:6xxx8") 
> > print(es.cluster) 
> > print(es.cat) 
> > print(es.indices) 
> > print(es.nodes) 
> > print(es.snapshot) 
> > 
> > 
> > # but not deserialized 
> > es.get(index="logstash-2014.09.11", doc_type="syslog", 
> > id='iFP2D8nHSKeqevBWrm1Hgg')['_source'] 
> > {u'any': u'data', u'timestamp': u'2013-05-12T19:45:31.804229'} 
> > 
> > print(es) 
> > 
> > 
> > Please tell me what to do from here so I am doing wrong. 
> > 
>



Re: When indexing a few hours duration, a timeout error occurs.

2014-09-11 Thread Sephen Xu
And I noticed RES and SHR growing, is this normal?




BTW, my ES_MIN_MEM = ES_MAX_MEM = 8g , a four-node cluster on two machines.
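If the client is the Java TransportClient (as the log below suggests), the
5000ms limit is its node-info ping timeout and can be raised. A sketch for
1.1.x; the cluster name here is an assumption:

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "my-cluster")            // assumption
        .put("client.transport.ping_timeout", "30s")  // default is 5s
        .build();
TransportClient client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("192.60.9.4", 9300));

Raising the timeout only hides the symptom, though; a node stalling long enough
to miss the ping (GC pauses, merge pressure) is the real problem.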

On Friday, 12 September 2014 at 10:00:37 UTC+8, Sephen Xu wrote:
>
> Hello, 
> I have a program, continuing to submit bulk to Elasticsearch, each bulk 
> has 5000 documents, the number of concurrent requests is 8, such lasted 
> several hours later, the program appeared timeout error:
>
> 2014-09-12 09:23:29,092 INFO [org.elasticsearch.client.transport] - 
> <[James Jaspers] failed to get node info for 
> [#transport#-1][node1][inet[/192.60.9.4:9300]], disconnecting...>
> org.elasticsearch.transport.ReceiveTimeoutTransportException: 
> [][inet[/192.60.9.4:9300]][cluster/nodes/info] request_id [202756] timed 
> out after [5000ms]
> at 
> org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356)
>  
> ~[elasticsearch-1.1.2.jar:na]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>  
> [na:1.6.0_45]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>  
> [na:1.6.0_45]
> at java.lang.Thread.run(Thread.java:662) [na:1.6.0_45]
>
> My elasticsearch.yml  reads as follows:
> index.merge.policy.max_merged_segment: 1gb
> index.merge.policy.segments_per_tier: 4
> index.merge.policy.max_merge_at_once: 4
> index.merge.policy.max_merge_at_once_explicit: 4
> index.merge.scheduler.max_thread_count: 1
> indices.memory.index_buffer_size: 33%
> indices.store.throttle.type: none
> threadpool.merge.type: fixed
> threadpool.merge.size: 4
> threadpool.merge.queue_size: 32
> threadpool.bulk.type: fixed
> threadpool.bulk.size: 12
> threadpool.bulk.queue_size: 32
> bootstrap.mlockall: true
> node.max_local_storage_nodes: 2
>
>
>



When indexing a few hours duration, a timeout error occurs.

2014-09-11 Thread Sephen Xu
Hello, 
I have a program that continually submits bulk requests to Elasticsearch; each bulk has 
5000 documents and the number of concurrent requests is 8. After several hours of 
this, the program hit a timeout error:

2014-09-12 09:23:29,092 INFO [org.elasticsearch.client.transport] - <[James 
Jaspers] failed to get node info for 
[#transport#-1][node1][inet[/192.60.9.4:9300]], disconnecting...>
org.elasticsearch.transport.ReceiveTimeoutTransportException: 
[][inet[/192.60.9.4:9300]][cluster/nodes/info] request_id [202756] timed 
out after [5000ms]
at 
org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356)
 
~[elasticsearch-1.1.2.jar:na]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
 
[na:1.6.0_45]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) 
[na:1.6.0_45]
at java.lang.Thread.run(Thread.java:662) [na:1.6.0_45]

My elasticsearch.yml  reads as follows:
index.merge.policy.max_merged_segment: 1gb
index.merge.policy.segments_per_tier: 4
index.merge.policy.max_merge_at_once: 4
index.merge.policy.max_merge_at_once_explicit: 4
index.merge.scheduler.max_thread_count: 1
indices.memory.index_buffer_size: 33%
indices.store.throttle.type: none
threadpool.merge.type: fixed
threadpool.merge.size: 4
threadpool.merge.queue_size: 32
threadpool.bulk.type: fixed
threadpool.bulk.size: 12
threadpool.bulk.queue_size: 32
bootstrap.mlockall: true
node.max_local_storage_nodes: 2




Re: Why all replicas are unassigned?

2014-09-11 Thread Sephen Xu
Hi,
Not set .
:)

2014-09-11 14:19 GMT+08:00 Jun Ohtani :

> Hi
>
> Did you use some properties of prefix
> "cluster.routing.allocation.awareness"?
>
> 2014-09-11 14:48 GMT+09:00 Sephen Xu :
>
>> uh... 'other cluster' means that another 2 machines run a four-node
>> cluster, and it works fine with cluster.routing.allocation.same_shard.host:
>> true.
>>
>> And the problem cluster does work fine with 
>> cluster.routing.allocation.same_shard.host:
>> false.
>>
>> Now I got the results I wanted, but I fear a shard and its replicas will
>> be assigned to the same machine if I don't set 
>> cluster.routing.allocation.same_shard.host
>> to true.
>>
>> On Thursday, 11 September 2014 at 13:26:10 UTC+8, Pablo Musa wrote:
>>>
>>> > But on other cluster,
>>>
>>> Here you mean cluster or node?
>>>
>>> I could not understand, Is everything working as you wanted?
>>>
>>> > This setting only applies if multiple nodes are started on the same
>>> machine.
>>>
>>> Just by curiosity, are you running nodes on the same machine?
>>>
>>> Regards,
>>> Pablo
>>>
>>> 2014-09-11 2:00 GMT-03:00 Sephen Xu :
>>>
 Hi,Pablo,
 I have tried settings replicas to zero and putting it back to 1, it
 does not work as you say.

 And finally, I found that when I changed cluster.routing.allocation.same_shard.host
 from true to false, the replicas were assigned fine. But on the other cluster this
 setting makes no difference. The official documentation describes this setting as:
 cluster.routing.allocation.same_shard.host: Allows to perform a check to
 prevent allocation of multiple instances of the same shard on a single
 host, based on host name and host address. Defaults to false, meaning
 that no check is performed by default. This setting only applies if
 multiple nodes are started on the same machine.
 Why is this so?

 (Sorry for my bad English : )

 On Thursday, 11 September 2014 at 11:12:09 UTC+8, Pablo Musa wrote:
>
> googled: elasticsearch java 1.6.0_45 shard unassigned
>
> https://github.com/elasticsearch/elasticsearch/issues/3145 (search
> for 1.6)
> https://groups.google.com/forum/#!msg/elasticsearch/MSrKvfgK
> wy0/Tfk6nhlqYxYJ
>
> For all that I have researched, it points to a Java version problem.
> You could try some things such as setting replicas to 0 and putting it
> back to 1, or forcing allocation (I do not remember the exact command, but
> google for unassigned shards and you will find it), but I do not think
> that
> they will work.
>
> I really would try installing a new version of Java and running
> Elasticsearch using it.
>
> Regards,
> Pablo
>
> 2014-09-11 0:03 GMT-03:00 Sephen Xu :
>
>> Thank you for your reply, the java version on each machine are same
>> -- 1.6.0_45, and the elasticsearch version is 1.1.2.
>>
>>
>>
>> On Thursday, 11 September 2014 at 10:48:44 UTC+8, pabli...@gmail.com wrote:
>>
>>> I would check for the Java version on each machine.
>>> I had the same problem on a running cluster when adding a node, and
>>> unfortunately the last node had Java 1.7.0_65 instead of 1.7.0_55
>>> (recommended version and the version of my other machines).
>>>
>>> I did not have the time to create a post explaining the whole
>>> problem. But, in summary, I ran a default install script using apt-get 
>>> and,
>>> by default, they use the "latest" Java version.
>>>
>>> One big problem for me is that they do not support versioned jdk
>>> installation and I could not find a deb package for 1.7.0_55. Maybe 
>>> someone
>>> here can help with this.
>>>
>>> The "problematic" command:
>>> apt-get install openjdk-7-jre-headless -y
>>>
>>> Regards,
>>> Pablo Musa
>>>
>>> On Wednesday, September 10, 2014 10:31:24 PM UTC-3, Sephen Xu wrote:

 Hello,

 I start up 4 nodes on 2 machines, and when I create an index, all
 replicas are unassigned.

 {
   "cluster_name" : "elasticsearch_log",
   "status" : "yellow",
   "timed_out" : false,
   "number_of_nodes" : 4,
   "number_of_data_nodes" : 4,
   "active_primary_shards" : 22,
   "active_shards" : 22,
   "relocating_shards" : 0,
   "initializing_shards" : 0,
   "unassigned_shards" : 22
 }

 How can I do?


Re: Aggregations across values returned by term then date histogram

2014-09-11 Thread vineeth mohan
Hello ,

I didn't get your question completely , but then i feel a simple date
histogram query should do the trick.

  "aggs" : {

"{{time_interval}}": {
  "date_histogram": {
"field": "time",
"interval": "{{time_interval}}",
"min_doc_count": 0
  }
}
  }

Let me know if this doesn't fit your need and if so , what other data
you are looking for .

Thanks
 Vineeth


On Thu, Sep 11, 2014 at 11:38 PM, ppearcy  wrote:

> I haven't been able to figure out how to do this and it may not be
> possible, but figured I'd ask.
>
> I have a query with multiple aggregations that looks like this:
> https://gist.github.com/ppearcy/0c6a86ebf32a0bbcb1fc
>
> This returns a time series of data per user:
> https://gist.github.com/ppearcy/7ceac858da2e647ff341
>
> I want to do a stats aggregation across all the values for each week to
> provided per weekly statistical view of things.
>
> Currently, I am doing these computations client side and it works pretty
> well, but have performance concerns around merging lots of time series
> streams.
>
> Any help or ideas would be much appreciated.
>
> Thanks!
> Paul
>
>



Re: Context in Native Scripts

2014-09-11 Thread vineeth mohan
Hello ,

Can you give a more elaborate explanation of the scoring behavior you
want?
I don't see any direct way to achieve this.

Also, re-scoring might interest you -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-rescore.html
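A rough sketch of how a rescore phase would wrap a native script (the field
name, script name and weights are placeholders):

{
  "query": { "match": { "title": "some query" } },
  "rescore": {
    "window_size": 50,
    "query": {
      "rescore_query": {
        "function_score": {
          "script_score": { "script": "my_native_script", "lang": "native" }
        }
      },
      "query_weight": 1.0,
      "rescore_query_weight": 1.5
    }
  }
}

It only runs the expensive scoring over the top window_size hits per shard,
which can be enough when the custom score only needs to reorder the head of
the result list.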

Thanks
   Vineeth

On Thu, Sep 11, 2014 at 11:19 PM,  wrote:

>
> Hello everyone,
>
>  I've been playing with native scripts and have a few questions:
>
>  Is there any notion of context for native scripts?
>
> For example, is there a way to know that a method "runAsDouble", for
> example, is called for the last time?
> I might, for instance, like to send some sort of statistics after a
> search is done.
>
> Is there any way to know how many documents the search produced,
> beforehand?
> I might want to do some pre calculations based on this number before
> the actual scoring begins.
>
> Is there any way to get all the documents (or ids) somehow to process
> (score) them in bulk?
> My scoring might depend on the search result, I might want to
> calculate an average of a search result field and base my scores on this
> number.
>
> I apologize in advance, if some of my questions are uninformed. I'm
> new to ES, trying to switch from Solr.
>
> Thank you,
>
> ZS
>
>
>
>
>



disk usage not balanced

2014-09-11 Thread AALISHE
Hi,

we have a cluster composed of 2 nodes ... I am new to ES ... but I noticed 
that data directories assigned to each node are not holding the same size 
of data inside 

the disk usage on node01 is :

$ sudo du -sh *
1.7G    d1
488M    d2
5.1G    d3
1.7G    d4
9.3G    d5
27G     d6

on node02:

$ sudo du -sh *
3.9G    d1
299M    d2
584M    d3
5.8G    d4
11G     d5
24G     d6

our version is 0.20.3


how do I rebalance the disk usage safely on a production cluster?

btw I built other, stronger nodes and am preparing to move the 
indexes/indices to the new machines 


cheers!



Function score with nested docs stops working after adding attribute with not unique name

2014-09-11 Thread Gosia
Hi guys,

I am trying to calculate the `_score` of the document as a sum of IDs of my 
nested docs.

Here is the mapping:

```
PUT /test/property/_mapping
{
   "property": {
  "properties": {
 "destination_id": {
"type": "long"
 },
 "rates": {
"type": "nested",
"properties": {
   "id": {
  "type": "long"
   },
   "end_date": {
  "type": "date",
  "format": "dateOptionalTime"
   },
   "start_date": {
  "type": "date",
  "format": "dateOptionalTime"
   }
}
 }
  }
   }
} 
```

and here is my query:

```
GET /test/property/_search
{
   "query": {
  "filtered": {
 "query": {
"nested": {
   "path": "rates",
   "score_mode": "sum",
   "query": {
  "function_score": {
 "functions": [
{
   "script_score": {
  "script": "doc['id'].value"
   }
}
 ]
  }
   }
}
 }
  }
   }
}
```

It works fine and score is calculated properly when I don't have field `id` 
in my main (top level) document.
Something unexpected happens though as soon as I add document constructed 
that way:

```
POST /test/property/3
{
   "destination_id": 1,
   "id": 12,
   "rates": [
  {
 "id": 9,
 "end_date": "2014-10-15"
  },
  {
 "id": 10,
 "end_date": "2014-10-20"
  }
   ]
}
```
after that, all my `_scores` are reduced to 0.
It seems like the top-level attribute kind of 'overshadows' the nested ones... 
which shouldn't happen anyway, since I explicitly use the nested path?
I wonder why that is happening and how I can make sure that one incorrectly 
indexed document will not have such an impact on the search query?
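One thing that might be worth trying (a guess, not verified against this
mapping): nested fields are indexed under their full path, so referring to the
field explicitly inside the script may stop the new top-level id from shadowing
it:

   "script": "doc['rates.id'].value"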

Thank you,
Gosia



Re: elasticsearch hadoop, dynamically decide index name too (not just type name), is it possible?

2014-09-11 Thread Jinyuan Zhou
This is great. Thanks,

Jinyuan (Jack) Zhou

On Thu, Sep 11, 2014 at 1:56 PM, Costin Leau  wrote:

> Yes, simply use a different pattern - {my-index-pattern}/{my-type}-foobar
>
> On 9/11/14 9:56 PM, Jinyuan Zhou wrote:
>
>> I saw the hadoop documentation regarding setting up the index for a mapreduce job
>> sent to EsOutputFormat. Below is the part regarding
>> dynamically deciding the document type
>> (http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/
>> current/mapreduce.html). My question is: is it possible to
>> parameterize the "my-collection" part?
>> Thanks,
>> Jack
>>
>>
>> Writing to dynamic/multi-resources
>>
>> As expected, the difference between the old and new API is minimal
>> (to be read: non-existing) in this case as well:
>>
>> Configuration  conf=  new  Configuration();
>> conf.set("es.resource.write","my-collection/{media-type}");
>> ...
>>
>>
>
> --
> Costin
>
>



Re: elasticsearch hadoop, dynamically decide index name too (not just type name), is it possible?

2014-09-11 Thread Costin Leau

Yes, simply use a different pattern - {my-index-pattern}/{my-type}-foobar
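A rough sketch of what that looks like in the job configuration; the field
names in braces are placeholders and must exist in every document written:

Configuration conf = new Configuration();
conf.set("es.resource.write", "{media-index}/{media-type}");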

On 9/11/14 9:56 PM, Jinyuan Zhou wrote:

I saw the hadoop documentation regarding setting up the index for a mapreduce job 
sent to EsOutputFormat. Below is the part regarding
dynamically deciding the document type
(http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/mapreduce.html).
 My question is: is it possible to
parameterize the "my-collection" part?
Thanks,
Jack


Writing to dynamic/multi-resources

As expected, the difference between the old and new API is minimal (to be
read: non-existing) in this case as well:

Configuration  conf=  new  Configuration();
conf.set("es.resource.write","my-collection/{media-type}");
...



--
Costin



control primary/replica placement?

2014-09-11 Thread slushi
I have a setup where I have some machines with SSD drives and others with 
slower spinning disks. I have some indices that I would like to place on 
the SSD machines so those requests are served more quickly. I have used 
allocation filtering thus far and it's worked well. However I am running 
out of space on the SSD drives and would still like to put more data there. 
I noticed there is a "preference" setting in searches that allow requests 
to prefer primary shards. So I was thinking that perhaps I could place only 
primary shards on the SSD machines and have the replicas on the spinning 
disks (other than disk space, the machines are not close to capacity at 
all). I looked through the documentation and I don't see any configuration 
settings to allow this. The closest thing I found was awareness, but that's 
not quite it: I could keep a full copy of the data would be on the SSD 
machines, but not necessarily the primary shards. And the search request 
"preference" also doesn't seem to allow for restricting the search to a set 
of nodes. Is there some way to do this? The only other thing I could think 
of was perhaps periodically running some cluster re-route script to try to 
keep the primary shards on the SSD machines but that seems pretty hacky and 
I'm not sure it would actually work reliably.
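For reference, the pieces mentioned above look roughly like this (attribute
and index names are placeholders); note this pins whole indices, primaries and
replicas alike, which is exactly the limitation in question:

# elasticsearch.yml on the SSD machines
node.disk_type: ssd

# pin an index to those nodes
curl -XPUT 'localhost:9200/my_index/_settings' -d '{
  "index.routing.allocation.include.disk_type": "ssd"
}'

# search-side preference for primaries
curl 'localhost:9200/my_index/_search?preference=_primary_first'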



Elasticsearch parse failure error

2014-09-11 Thread shriyansh jain
Hi,

I am using the ELK stack and have a cluster of 2 Elasticsearch nodes. When I 
query Elasticsearch from Kibana, I get the following error 
message in the Elasticsearch log file.

http://pastebin.com/sD539SNZ

I am not able to figure out what is causing the error. Any input 
will be greatly appreciated.

Thank you.
-Shriyansh



Re: Move Elasticsearch indexes from /auto/abc to /auto/def

2014-09-11 Thread shriyansh jain
Thank you sir for your reply back. I appreciate it.

Thank you.
-Shriyansh

On Thursday, September 11, 2014 1:07:46 PM UTC-7, Mark Walkom wrote:
>
> You should be able to just copy them across and then start ES with your 
> new path value, yes.
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com 
> web: www.campaignmonitor.com
>
>
> On 12 September 2014 03:37, shriyansh jain  > wrote:
>
>> Can someone help me out with this question? Is there any ambiguity with 
>> the question?
>>
>> Thank you.
>> Shriyansh
>>
>> On Wednesday, September 10, 2014 6:30:30 PM UTC-7, shriyansh jain wrote:
>>>
>>> Hi,
>>>
>>> I need advice on migrating all Elasticsearch indexes from one 
>>> partition to another partition. Currently, I am using a cluster of 2 nodes 
>>> with Elasticsearch.
>>> And both the nodes are pointing to the same partition, which is 
>>> /auto/abc. How can I point both the nodes to the partition /auto/def and 
>>> keep all the indexes as they were before? 
>>> Will copying all the indexes from /auto/abc to /auto/def and pointing both 
>>> the Elasticsearch nodes' data path to /auto/def work? Or will I have to 
>>> make some other changes?
>>>
>>> Thank you,
>>> Shriyash
>>>
>>>
>>>
>>>
>>
>
>



Re: Which UI tool is good for adjusting templates for the future indices?

2014-09-11 Thread Mark Walkom
There isn't a GUI tool for editing templates that I have found, though I
imagine if you added a feature request to kopf/elastichq/etc you might get
it filled.
And it's hard to go past curl for the CLI or even
https://github.com/elasticsearch/es2unix for *nix, but Windows is another
world in that regard.
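For the curl route, shard counts and mappings for future daily indices are set
with an index template; a sketch, where the template name, pattern and shard
count are placeholders (it only affects indices created after it is put in
place):

curl -XPUT 'localhost:9200/_template/logstash_custom' -d '{
  "template": "logstash-*",
  "order": 1,
  "settings": {
    "index.number_of_shards": 2
  }
}'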

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 12 September 2014 05:03, Konstantin Erman  wrote:

> We have different indexes for different log types and one index of each
> type is generated every day.
>
> What I'm looking for is some convenient to use user interface to see
> things like "How many shards are allocated for particular index" (a ton of
> tools can show me that) and then I want to create (or adjust) template for
> that index type to allocate different number of shards.
>
> Another example - I want to see how all the types are mapped and again
> adjust that for future indices.
>
> So far all the tools I could find only show something, but are not
> suitable for changing the template.
>
> Besides GUI we probably need some command line tooling too, in order to
> preserve the settings and be able to configure new cluster from scratch.
> The complication is that we run everything on Windows.
>
> Any tooling advice would be very much appreciated.
>
> Thank you!
> Konstantin
>
>



Re: Move Elasticsearch indexes from /auto/abc to /auto/def

2014-09-11 Thread Mark Walkom
You should be able to just copy them across and then start ES with your new
path value, yes.
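A minimal sketch of that, assuming the nodes are stopped while the copy runs:

# copy the data directory as-is
rsync -a /auto/abc/ /auto/def/

# elasticsearch.yml on both nodes
path.data: /auto/def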

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 12 September 2014 03:37, shriyansh jain  wrote:

> Can someone help me out with this question? Is there any ambiguity with
> the question?
>
> Thank you.
> Shriyansh
>
> On Wednesday, September 10, 2014 6:30:30 PM UTC-7, shriyansh jain wrote:
>>
>> Hi,
>>
>> I need advice on migrating all Elasticsearch indexes from one
>> partition to another partition. Currently, I am using a cluster of 2 nodes
>> with Elasticsearch.
>> And both the nodes are pointing to the same partition, which is
>> /auto/abc. How can I point both the nodes to the partition /auto/def and
>> keep all the indexes as they were before?
>> Will copying all the indexes from /auto/abc to /auto/def and pointing both
>> the Elasticsearch nodes' data path to /auto/def work? Or will I have to
>> make some other changes?
>>
>> Thank you,
>> Shriyash
>>
>>
>>
>>
>



What is 'marked shard as started, but shard has not been created, mark shard as failed"?

2014-09-11 Thread Jeffrey Zhou
We're using Elasticsearch 1.2.2 in our cluster, and have 3 dedicated master 
nodes. Last night, two of these master nodes (ES-SEARCH-64 and 
ES-SEARCH-65) alternately became the elected master for a period, and we found 
quite a few 'marked shard as started, but shard has not been created, mark 
shard as failed' messages in the logs; see an example below. Does anyone know what 
this means, and why it happened? 

[2014-09-10 06:58:15,567][DEBUG][action.admin.cluster.health] 
[ES-SEARCH-64-master] connection exception while trying to forward request 
to master node 
[[ES-SEARCH-65-master][LF0n3IxmSFGcIG6FoqTqCw][es-search-65][inet[/10.0.4.5:9300]]{data=false,
 
AvailabilitySet=master, master=true}], scheduling a retry. Error: 
[org.elasticsearch.transport.SendRequestTransportException: 
[ES-SEARCH-65-master][inet[/10.0.4.5:9300]][cluster/health]; 
org.elasticsearch.transport.NodeNotConnectedException: 
[ES-SEARCH-65-master][inet[/10.0.4.5:9300]] Node not connected]
[2014-09-10 06:58:16,130][DEBUG][action.admin.cluster.health] 
[ES-SEARCH-64-master] connection exception while trying to forward request 
to master node 
[[ES-SEARCH-65-master][LF0n3IxmSFGcIG6FoqTqCw][es-search-65][inet[/10.0.4.5:9300]]{data=false,
 
AvailabilitySet=master, master=true}], scheduling a retry. Error: 
[org.elasticsearch.transport.SendRequestTransportException: 
[ES-SEARCH-65-master][inet[/10.0.4.5:9300]][cluster/health]; 
org.elasticsearch.transport.NodeNotConnectedException: 
[ES-SEARCH-65-master][inet[/10.0.4.5:9300]] Node not connected]
[2014-09-10 06:58:17,045][DEBUG][action.admin.cluster.health] 
[ES-SEARCH-64-master] observer: timeout notification from cluster service. 
timeout setting [30s], time since start [30s]
[2014-09-10 06:58:18,342][DEBUG][action.admin.cluster.health] 
[ES-SEARCH-64-master] connection exception while trying to forward request 
to master node 
[[ES-SEARCH-65-master][LF0n3IxmSFGcIG6FoqTqCw][es-search-65][inet[/10.0.4.5:9300]]{data=false,
 
AvailabilitySet=master, master=true}], scheduling a retry. Error: 
[org.elasticsearch.transport.SendRequestTransportException: 
[ES-SEARCH-65-master][inet[/10.0.4.5:9300]][cluster/health]; 
org.elasticsearch.transport.NodeNotConnectedException: 
[ES-SEARCH-65-master][inet[/10.0.4.5:9300]] Node not connected]
[2014-09-10 06:58:18,734][DEBUG][action.admin.cluster.health] 
[ES-SEARCH-64-master] connection exception while trying to forward request 
to master node 
[[ES-SEARCH-65-master][LF0n3IxmSFGcIG6FoqTqCw][es-search-65][inet[/10.0.4.5:9300]]{data=false,
 
AvailabilitySet=master, master=true}], scheduling a retry. Error: 
[org.elasticsearch.transport.SendRequestTransportException: 
[ES-SEARCH-65-master][inet[/10.0.4.5:9300]][cluster/health]; 
org.elasticsearch.transport.NodeNotConnectedException: 
[ES-SEARCH-65-master][inet[/10.0.4.5:9300]] Node not connected]
[2014-09-10 06:58:20,093][WARN ][cluster.action.shard ] 
[ES-SEARCH-64-master] [doc-v5][3] received shard failed for [doc-v5][3], 
node[8-q2fmItSOyde6Yk2yUbWw], [R], s[STARTED], indexUUID 
[fw1ocDrpSTejbY_m3WQP6Q], reason [master 
[ES-SEARCH-64-master][_zIl76rXQVGzTKVs4LbsZQ][es-search-64][inet[/10.0.4.4:9300]]{data=false,
 
AvailabilitySet=master, master=true} *marked shard as started, but shard 
has not been created, mark shard as failed*]
[2014-09-10 06:58:20,093][WARN ][cluster.action.shard ] 
[ES-SEARCH-64-master] [pub-v5][2] received shard failed for [pub-v5][2], 
node[8-q2fmItSOyde6Yk2yUbWw], [P], s[STARTED], indexUUID 
[4QlLH670Q_Kd6rb1WXhgaQ], reason [master 
[ES-SEARCH-64-master][_zIl76rXQVGzTKVs4LbsZQ][es-search-64][inet[/10.0.4.4:9300]]{data=false,
 
AvailabilitySet=master, master=true} *marked shard as started, but shard 
has not been created, mark shard as failed*]
[2014-09-10 06:58:20,109][WARN ][cluster.action.shard ] 
[ES-SEARCH-64-master] [doc-v5][23] received shard failed for [doc-v5][23], 
node[_uM3zeD6QXaj3eWEzagADw], [R], s[STARTED], indexUUID 
[fw1ocDrpSTejbY_m3WQP6Q], reason [master 
[ES-SEARCH-64-master][_zIl76rXQVGzTKVs4LbsZQ][es-search-64][inet[/10.0.4.4:9300]]{data=false,
 
AvailabilitySet=master, master=true} marked shard as started, but shard has 
not been created, mark shard as failed]
[2014-09-10 06:58:20,109][WARN ][cluster.action.shard ] 
[ES-SEARCH-64-master] [ppub-v5][16] received shard failed for 
[ppub-v5][16], node[_uM3zeD6QXaj3eWEzagADw], [P], s[STARTED], indexUUID 
[4QlLH670Q_Kd6rb1WXhgaQ], reason [master 
[ES-SEARCH-64-master][_zIl76rXQVGzTKVs4LbsZQ][es-search-64][inet[/10.0.4.4:9300]]{data=false,
 
AvailabilitySet=master, master=true} marked shard as started, but shard has 
not been created, mark shard as failed]
[2014-09-10 06:58:20,155][WARN ][cluster.action.shard ] 
[ES-SEARCH-64-master] [doc-v5][20] received shard failed for [doc-v5][20], 
node[3fvBrmUcQmyhQ5XniKs7lg], [R], s[STARTED], indexUUID 
[fw1ocDrpSTejbY_m3WQP6Q], reason [master 
[ES-SEARCH-64-master][_zIl76rXQVGzTKVs4LbsZQ][es-search-64][inet[/10.0.4.4:9300]]{data=false,
 
AvailabilitySet=master, maste

Re: Do I need the JDBC driver

2014-09-11 Thread Employ
Like this: 
http://stackoverflow.com/questions/13576703/indexing-documents-in-elasticsearch-with-php-curl
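
In essence the client just issues an HTTP call along these lines (index, type and 
document below are made up, only for illustration):

curl -XPUT 'http://localhost:9200/myindex/product/1' -d '{
  "name": "example product",
  "active": 1
}'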

Sent from my iPhone

> On 11 Sep 2014, at 19:52, "joergpra...@gmail.com"  
> wrote:
> 
> I do not know the PHP client in particular, but this is just another one of 
> the official Elasticsearch clients, like there are Elasticsearch clients for 
> other language, Perl, Python, Ruby, etc.
> 
> With an Elasticsearch client, you can use Elasticsearch, not an RDBMS 
> database.
> 
> Jörg
> 
>> On Thu, Sep 11, 2014 at 6:57 PM, Employ  wrote:
>> Thank you, that answers a lot of my questions. There is still the point of 
>> using the Php library for elastic search, where I can send documents 
>> directly to elastic search in JSON format without needing a JDBC driver. Is 
>> this not a good option?
>> 
>> Sent from my iPhone
>> 
>>> On 11 Sep 2014, at 16:33, "joergpra...@gmail.com"  
>>> wrote:
>>> 
>>> Synchronization of data is a very broad question. This is just because the 
>>> data organization in an RDBMS is very different from ES. You surely know 
>>> that. See also object-relational impedance mismatch 
>>> http://en.wikipedia.org/wiki/Object-relational_impedance_mismatch
>>> 
>>> The JDBC river plugin allows you to define SQL statements so you can easily 
>>> construct JSON out if it, for indexing into ES. 
>>> 
>>> If you can map identifiers from your RDBMS to JSON doc IDs and allocate the 
>>> _id field in the JDBC river plugin, you are lucky. In that case you can 
>>> just overwrite existing docs in ES to keep up with the most recent version.
>>> 
>>> Synchronization also includes modifications and deletions to avoid stale 
>>> docs, and transactional ACID properties. I have no general solution for 
>>> this. The best approach is to provide timewindowed indices and drop indices 
>>> that are too old, similar to what Logstash does.
>>> 
>>> Jörg
>>> 
 On Thu, Sep 11, 2014 at 3:39 PM, James  wrote:
 Hi Jorg,
 
 Thank you for the reply. Yes I meant the elasticsearch river. Simply put, 
 I want to syncronize the entries in my SQL database with my elasticsearch, 
 so I can use elasicsearch for searching and not doing fulltext search. I 
 want to know that when a new item gets added or removed from that database 
 that it also gets added / removed from elasicsearch.
 
 My understand, which might be wrong, is I can either use the PHP 
 elasticsearch library to push updates (adds / removes) to elasticsearch 
 when new items are added to SQL:
 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-index_.html
 
 Or I can use the river JDBC river plugin for elasticsearch to connect to 
 my database directly and syncronize elasticsearch with the SQL database.
 
 My two questions are:
 
 1. Is my understanding above correct
 2. Does one option have advantages over the other
 
 - James
 
> On Wednesday, September 10, 2014 10:59:18 AM UTC+1, James wrote:
> Hi,
> 
> I'm setting up a system where I have a main SQL database which is synced 
> with elasticsearch. My plan is to use the main PHP library for 
> elasticsearch. 
> 
> I was going to have a cron run every thirty minuets to check for items in 
> my database that not only have an "active" flag but that also do not have 
> an "indexed" flag, that means I need to add them to the index. Then I was 
> going to add that item to the index. Since I am using taking this path, 
> it doesn't seem like I need the JDBC driver, as I can add items to 
> elasticsearch using the PHP library.
> 
> So, my question is, can I get away without using the JDBC driver?
> 
> James
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 "elasticsearch" group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit 
 https://groups.google.com/d/msgid/elasticsearch/1d5fe901-fd0e-4663-9c68-5f7cf8092cf1%40googlegroups.com.
 
 For more options, visit https://groups.google.com/d/optout.
>>> 
>>> -- 
>>> You received this message because you are subscribed to a topic in the 
>>> Google Groups "elasticsearch" group.
>>> To unsubscribe from this topic, visit 
>>> https://groups.google.com/d/topic/elasticsearch/0dzSMbARlks/unsubscribe.
>>> To unsubscribe from this group and all its topics, send an email to 
>>> elasticsearch+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/elasticsearch/CAKdsXoFoUMAv3R0b1NN%2B9B7eBNCcqpjg7tTMuNQPzCgGGupkQw%40mail.gmail.com.
>>> For more options, visit https://groups.google.com/d/optout.
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group an

Re: Do I need the JDBC driver

2014-09-11 Thread Employ
I'm sorry, I'm sure this is clear to a lot of people but not to me. Can I not 
use the official Elasticsearch PHP client to add documents to Elasticsearch? 
In which case my PHP website can use this library to grab data from the 
database, convert it to the right format and send it to Elasticsearch to be 
indexed?

Sent from my iPhone

> On 11 Sep 2014, at 19:52, "joergpra...@gmail.com"  
> wrote:
> 
> I do not know the PHP client in particular, but this is just another one of 
> the official Elasticsearch clients, like there are Elasticsearch clients for 
> other language, Perl, Python, Ruby, etc.
> 
> With an Elasticsearch client, you can use Elasticsearch, not an RDBMS 
> database.
> 
> Jörg
> 
>> On Thu, Sep 11, 2014 at 6:57 PM, Employ  wrote:
>> Thank you, that answers a lot of my questions. There is still the point of 
>> using the Php library for elastic search, where I can send documents 
>> directly to elastic search in JSON format without needing a JDBC driver. Is 
>> this not a good option?
>> 
>> Sent from my iPhone
>> 
>>> On 11 Sep 2014, at 16:33, "joergpra...@gmail.com"  
>>> wrote:
>>> 
>>> Synchronization of data is a very broad question. This is just because the 
>>> data organization in an RDBMS is very different from ES. You surely know 
>>> that. See also object-relational impedance mismatch 
>>> http://en.wikipedia.org/wiki/Object-relational_impedance_mismatch
>>> 
>>> The JDBC river plugin allows you to define SQL statements so you can easily 
>>> construct JSON out if it, for indexing into ES. 
>>> 
>>> If you can map identifiers from your RDBMS to JSON doc IDs and allocate the 
>>> _id field in the JDBC river plugin, you are lucky. In that case you can 
>>> just overwrite existing docs in ES to keep up with the most recent version.
>>> 
>>> Synchronization also includes modifications and deletions to avoid stale 
>>> docs, and transactional ACID properties. I have no general solution for 
>>> this. The best approach is to provide timewindowed indices and drop indices 
>>> that are too old, similar to what Logstash does.
>>> 
>>> Jörg
>>> 
 On Thu, Sep 11, 2014 at 3:39 PM, James  wrote:
 Hi Jorg,
 
 Thank you for the reply. Yes I meant the elasticsearch river. Simply put, 
 I want to syncronize the entries in my SQL database with my elasticsearch, 
 so I can use elasicsearch for searching and not doing fulltext search. I 
 want to know that when a new item gets added or removed from that database 
 that it also gets added / removed from elasicsearch.
 
 My understand, which might be wrong, is I can either use the PHP 
 elasticsearch library to push updates (adds / removes) to elasticsearch 
 when new items are added to SQL:
 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-index_.html
 
 Or I can use the river JDBC river plugin for elasticsearch to connect to 
 my database directly and syncronize elasticsearch with the SQL database.
 
 My two questions are:
 
 1. Is my understanding above correct
 2. Does one option have advantages over the other
 
 - James
 
> On Wednesday, September 10, 2014 10:59:18 AM UTC+1, James wrote:
> Hi,
> 
> I'm setting up a system where I have a main SQL database which is synced 
> with elasticsearch. My plan is to use the main PHP library for 
> elasticsearch. 
> 
> I was going to have a cron run every thirty minuets to check for items in 
> my database that not only have an "active" flag but that also do not have 
> an "indexed" flag, that means I need to add them to the index. Then I was 
> going to add that item to the index. Since I am using taking this path, 
> it doesn't seem like I need the JDBC driver, as I can add items to 
> elasticsearch using the PHP library.
> 
> So, my question is, can I get away without using the JDBC driver?
> 
> James
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 "elasticsearch" group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit 
 https://groups.google.com/d/msgid/elasticsearch/1d5fe901-fd0e-4663-9c68-5f7cf8092cf1%40googlegroups.com.
 
 For more options, visit https://groups.google.com/d/optout.
>>> 
>>> -- 
>>> You received this message because you are subscribed to a topic in the 
>>> Google Groups "elasticsearch" group.
>>> To unsubscribe from this topic, visit 
>>> https://groups.google.com/d/topic/elasticsearch/0dzSMbARlks/unsubscribe.
>>> To unsubscribe from this group and all its topics, send an email to 
>>> elasticsearch+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/elasticsearch/CAKdsXoFoUMAv3R0b1NN%2B9B7eBNCcqpjg7tTMuNQPzCgGGupkQw%40mail.gmail.com.
>>> F

Which UI tool is good for adjusting templates for the future indices?

2014-09-11 Thread Konstantin Erman
We have different indexes for different log types and one index of each 
type is generated every day. 

What I'm looking for is a convenient user interface to see things 
like "how many shards are allocated for a particular index" (a ton of tools 
can show me that) and then create (or adjust) the template for that 
index type to allocate a different number of shards.

Another example - I want to see how all the types are mapped and again 
adjust that for future indices.

So far all the tools I could find only display this information, but are not suitable 
for changing the template.

Besides a GUI we probably need some command-line tooling too, in order to 
preserve the settings and be able to configure a new cluster from scratch. 
The complication is that we run everything on Windows.
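
For the command-line part I assume the index templates REST API is the way to go; 
a minimal sketch (template name and values below are just examples, and on Windows 
the same calls work with curl or PowerShell's Invoke-RestMethod):

# list the templates currently installed
curl -s 'http://localhost:9200/_template?pretty'

# create or overwrite a template applied to future daily indices
curl -s -XPUT 'http://localhost:9200/_template/logs_custom' -d '{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}'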

Any tooling advice would be very much appreciated.

Thank you!
Konstantin

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/0e03de6d-764a-42d4-9068-6a7c6ab02544%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


elasticsearch hadoop, dynamically decide index name too (not just type name), is it possible?

2014-09-11 Thread Jinyuan Zhou
I saw the elasticsearch-hadoop documentation on setting up the index for a MapReduce 
job with EsOutputFormat. Below is the part about dynamically deciding the 
document type
(http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/mapreduce.html).

My question is: is it possible to parameterize "my-collection" (the index name) as well? 
Thanks,
Jack
Writing to dynamic/multi-resources


As expected, the difference between the old and new API are minimal (to be 
read non-existing) in this case as well:

Configuration conf = new Configuration();
conf.set("es.resource.write","my-collection/{media-type}");
...
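
For what it's worth, the docs seem to suggest the same {field} pattern is accepted in 
the index part of the resource too, so the index name can also be derived from a 
document field - worth verifying against the es-hadoop docs for your version. A sketch 
of passing it as ordinary job configuration from the command line (jar and class names 
below are made up, and this assumes the job uses ToolRunner/GenericOptionsParser):

hadoop jar my-es-job.jar com.example.MyEsJob \
  -D es.nodes=localhost:9200 \
  -D "es.resource.write=my-collection-{media-type}/{media-type}"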

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/3270d2e8-8bbb-4bd0-b10a-f889e4cbc61a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Do I need the JDBC driver

2014-09-11 Thread joergpra...@gmail.com
I do not know the PHP client in particular, but this is just another one of
the official Elasticsearch clients, like there are Elasticsearch clients
for other language, Perl, Python, Ruby, etc.

With an Elasticsearch client, you can use Elasticsearch, not an RDBMS
database.

Jörg

On Thu, Sep 11, 2014 at 6:57 PM, Employ  wrote:

> Thank you, that answers a lot of my questions. There is still the point of
> using the Php library for elastic search, where I can send documents
> directly to elastic search in JSON format without needing a JDBC driver. Is
> this not a good option?
>
> Sent from my iPhone
>
> On 11 Sep 2014, at 16:33, "joergpra...@gmail.com" 
> wrote:
>
> Synchronization of data is a very broad question. This is just because the
> data organization in an RDBMS is very different from ES. You surely know
> that. See also object-relational impedance mismatch
> http://en.wikipedia.org/wiki/Object-relational_impedance_mismatch
>
> The JDBC river plugin allows you to define SQL statements so you can
> easily construct JSON out if it, for indexing into ES.
>
> If you can map identifiers from your RDBMS to JSON doc IDs and allocate
> the _id field in the JDBC river plugin, you are lucky. In that case you can
> just overwrite existing docs in ES to keep up with the most recent version.
>
> Synchronization also includes modifications and deletions to avoid stale
> docs, and transactional ACID properties. I have no general solution for
> this. The best approach is to provide timewindowed indices and drop indices
> that are too old, similar to what Logstash does.
>
> Jörg
>
> On Thu, Sep 11, 2014 at 3:39 PM, James  wrote:
>
>> Hi Jorg,
>>
>> Thank you for the reply. Yes I meant the elasticsearch river. Simply put,
>> I want to syncronize the entries in my SQL database with my elasticsearch,
>> so I can use elasicsearch for searching and not doing fulltext search. I
>> want to know that when a new item gets added or removed from that database
>> that it also gets added / removed from elasicsearch.
>>
>> My understand, which might be wrong, is I can either use the PHP
>> elasticsearch library to push updates (adds / removes) to elasticsearch
>> when new items are added to SQL:
>>
>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-index_.html
>>
>> Or I can use the river JDBC river plugin for elasticsearch to connect to
>> my database directly and syncronize elasticsearch with the SQL database.
>>
>> My two questions are:
>>
>> 1. Is my understanding above correct
>> 2. Does one option have advantages over the other
>>
>> - James
>>
>> On Wednesday, September 10, 2014 10:59:18 AM UTC+1, James wrote:
>>>
>>> Hi,
>>>
>>> I'm setting up a system where I have a main SQL database which is synced
>>> with elasticsearch. My plan is to use the main PHP library for
>>> elasticsearch.
>>>
>>> I was going to have a cron run every thirty minuets to check for items
>>> in my database that not only have an "active" flag but that also do not
>>> have an "indexed" flag, that means I need to add them to the index. Then I
>>> was going to add that item to the index. Since I am using taking this path,
>>> it doesn't seem like I need the JDBC driver, as I can add items to
>>> elasticsearch using the PHP library.
>>>
>>> So, my question is, can I get away without using the JDBC driver?
>>>
>>> James
>>>
>>>
>>>  --
>> You received this message because you are subscribed to the Google Groups
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/1d5fe901-fd0e-4663-9c68-5f7cf8092cf1%40googlegroups.com
>> 
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to a topic in the
> Google Groups "elasticsearch" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/elasticsearch/0dzSMbARlks/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAKdsXoFoUMAv3R0b1NN%2B9B7eBNCcqpjg7tTMuNQPzCgGGupkQw%40mail.gmail.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web

Re: [ANN] Elasticsearch Simple Action Plugin

2014-09-11 Thread joergpra...@gmail.com
It was a quick fix. new version is checked in.

Thanks for reminding,

Jörg

On Thu, Sep 11, 2014 at 7:53 PM, 'Sandeep Ramesh Khanzode' via
elasticsearch  wrote:

> Hi Jorg,
>
> Sure. Thanks,
>
> Just wondering what changed so much in 1.3? Is there sort of a quick fix?
> Or else, will just wait for an update from you.
>
> Thanks,
> Sandeep
>
>
> On Wednesday, 10 September 2014 15:20:57 UTC+5:30, Jörg Prante wrote:
>>
>> The plugin is for 1.2, I have to update the simple action plugin to
>> Elasticsearch 1.3
>>
>> Thanks for the reminder
>>
>> Jörg
>>
>>
>> On Wed, Sep 10, 2014 at 11:08 AM, 'Sandeep Ramesh Khanzode' via
>> elasticsearch  wrote:
>>
>>> Hi Jorg,
>>>
>>> I was trying to install this plugin on ES v1.3.1. I am getting the
>>> errors similar to below. Can you please tell me what has changed and how I
>>> can rectify? Thanks,
>>>
>>> 4) No implementation for 
>>> java.util.Map>> org.elasticsearch.action.support.TransportAction> was bound.
>>>   while locating java.util.Map>> org.elasticsearch.action.support.TransportAction>
>>> for parameter 1 at org.elasticsearch.client.node.
>>> NodeClusterAdminClient.(Unknown Source)
>>>   while locating org.elasticsearch.client.node.NodeClusterAdminClient
>>> for parameter 1 at 
>>> org.elasticsearch.client.node.NodeAdminClient.(Unknown
>>> Source)
>>>   while locating org.elasticsearch.client.node.NodeAdminClient
>>> for parameter 2 at 
>>> org.elasticsearch.client.node.NodeClient.(Unknown
>>> Source)
>>>   at org.elasticsearch.client.node.NodeClientModule.configure(
>>> NodeClientModule.java:38)
>>>
>>> 5) No implementation for 
>>> java.util.Map>> org.elasticsearch.action.support.TransportAction> was bound.
>>>   while locating java.util.Map>> org.elasticsearch.action.support.TransportAction>
>>> for parameter 1 at org.elasticsearch.client.node.
>>> NodeIndicesAdminClient.(Unknown Source)
>>>   at org.elasticsearch.client.node.NodeClientModule.configure(
>>> NodeClientModule.java:36)
>>>
>>> 6) No implementation for 
>>> java.util.Map>> org.elasticsearch.action.support.TransportAction> was bound.
>>>   while locating java.util.Map>> org.elasticsearch.action.support.TransportAction>
>>> for parameter 1 at org.elasticsearch.client.node.
>>> NodeIndicesAdminClient.(Unknown Source)
>>>   while locating org.elasticsearch.client.node.NodeIndicesAdminClient
>>> for parameter 2 at 
>>> org.elasticsearch.client.node.NodeAdminClient.(Unknown
>>> Source)
>>>   at org.elasticsearch.client.node.NodeClientModule.configure(
>>> NodeClientModule.java:37)
>>>
>>> 7) No implementation for 
>>> java.util.Map>> org.elasticsearch.action.support.TransportAction> was bound.
>>>   while locating java.util.Map>> org.elasticsearch.action.support.TransportAction>
>>> for parameter 1 at org.elasticsearch.client.node.
>>> NodeIndicesAdminClient.(Unknown Source)
>>>   while locating org.elasticsearch.client.node.NodeIndicesAdminClient
>>> for parameter 2 at 
>>> org.elasticsearch.client.node.NodeAdminClient.(Unknown
>>> Source)
>>>   while locating org.elasticsearch.client.node.NodeAdminClient
>>> for parameter 2 at 
>>> org.elasticsearch.client.node.NodeClient.(Unknown
>>> Source)
>>>   at org.elasticsearch.client.node.NodeClientModule.configure(
>>> NodeClientModule.java:38)
>>>
>>> 8) No implementation for org.elasticsearch.action.GenericAction
>>> annotated with @org.elasticsearch.common.inject.multibindings.Element(
>>> setNam
>>> e=,uniqueId=275) was bound.
>>>   at org.elasticsearch.action.ActionModule.configure(
>>> ActionModule.java:304)
>>>
>>> 9) An exception was caught and reported. Message: null
>>>   at org.elasticsearch.common.inject.InjectorShell$Builder.
>>> build(InjectorShell.java:130)
>>>
>>> 9 errors
>>> at org.elasticsearch.common.inject.internal.Errors.
>>> throwCreationExceptionIfErrorsExist(Errors.java:344)
>>> at org.elasticsearch.common.inject.InjectorBuilder.
>>> initializeStatically(InjectorBuilder.java:151)
>>> at org.elasticsearch.common.inject.InjectorBuilder.build(
>>> InjectorBuilder.java:102)
>>> at org.elasticsearch.common.inject.Guice.createInjector(
>>> Guice.java:93)
>>> at org.elasticsearch.common.inject.Guice.createInjector(
>>> Guice.java:70)
>>> at org.elasticsearch.common.inject.ModulesBuilder.
>>> createInjector(ModulesBuilder.java:59)
>>> at org.elasticsearch.node.internal.InternalNode.(
>>> InternalNode.java:192)
>>> at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.
>>> java:159)
>>> at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.
>>> java:70)
>>> at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:
>>> 203)
>>> at org.elasticsearch.bootstrap.Elasticsearch.main(
>>> Elasticsearch.java:32)
>>> Caused by: java.lang.reflect.MalformedParameterizedTypeException
>>> at sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.
>>> validateConstructorArguments(ParameterizedTypeImpl.java:58

Pagination on unique data

2014-09-11 Thread jigish thakar
Hey Guys,
I am building a logging and monitoring product for my employer and using 
ES as the backend.
Finding the unique values of each/any attribute is a core part of the business 
logic I have in hand.

Let's say I want unique dst_ip values. To achieve that,
- I have used "index":"not_analyzed" for the selected fields
- API used to get the unique count:
   http://127.0.0.1:9200/es-server/Events/_search -d 
'{"aggs":{"dst_ip_count":{"cardinality":{"field":"dst_ip"}}},"size":0}'
- API used to fetch those values:
   http://127.0.0.1:9200/es-server/Events/_search -d 
'{"fields":["dst_ip"],"facets":{"terms":{"terms":{"field":"dst_ip","size":1116,"order":"count"}}},"size":1116}'

  Here 1116 is the count received from the first API. In this example the count is 
very small, but in the production environment it grows beyond 200,000 (2 lakh), which 
results in slow query responses.

Is there any other way to fetch such values with built-in pagination, like 
size and from in a search query?
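
For reference, the aggregation form of the second call would be roughly the following 
(facets are deprecated anyway); as far as I can tell there is no built-in from/size 
paging over terms buckets in this version, so the usual workarounds are a larger size 
or narrowing the terms with an include pattern or a filter:

curl -s 'http://127.0.0.1:9200/es-server/Events/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "unique_dst_ip": {
      "terms": {
        "field": "dst_ip",
        "size": 1000,
        "order": { "_term": "asc" }
      }
    }
  }
}'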

Please suggest, thanks in advance.


-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/b11eaa9f-ba52-4e0a-ba21-3cfb6e669a58%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Search Plugin to intercept search response

2014-09-11 Thread joergpra...@gmail.com
Yes. I have checked in some code for a simple action plugin.

https://github.com/jprante/elasticsearch-simple-action-plugin

The plugin implements a simple "match_all" search action, by reusing much
of the code of the search action.

Best,

Jörg



On Thu, Sep 11, 2014 at 7:55 PM, Sandeep Ramesh Khanzode <
k.sandee...@gmail.com> wrote:

> Thanks for bearing with me till now :) Please provide one final input on
> this issue.
>
> Is there any example for a custom search action? If not, can you please
> provide some details on how I can implement one?
>
> Thanks,
> Sandeep
>
>
> On Thu, Sep 11, 2014 at 4:53 PM, joergpra...@gmail.com <
> joergpra...@gmail.com> wrote:
>
>> You can not intercept the SearchResponse on the ES server itself.
>> Instead, you must implement your custom search action.
>>
>> Jörg
>>
>> On Thu, Sep 11, 2014 at 10:00 AM, Sandeep Ramesh Khanzode <
>> k.sandee...@gmail.com> wrote:
>>
>>> When you say, 'receive the SearchResponse', is that in the ES Server
>>> node or the TransportClient node that spawned the request? I would want to
>>> intercept the SearchResponse when created at the ES Server itself, since I
>>> want to send the subset of Response to another process on the same node,
>>> and it would not be very efficient to have the response sent back to the
>>> client node only to be sent back again.
>>>
>>> Thanks,
>>> Sandeep
>>>
>>> On Thu, Sep 11, 2014 at 12:43 PM, joergpra...@gmail.com <
>>> joergpra...@gmail.com> wrote:
>>>
 You can receive the SearchResponse, process the response, and return
 the response with whatever format you want.

 Jörg

 On Wed, Sep 10, 2014 at 11:59 AM, Sandeep Ramesh Khanzode <
 k.sandee...@gmail.com> wrote:

> Hi Jorg,
>
> Thanks for the links. I was checking the sources. There are relevant
> to my functional use case. But I will be using the TransportClient Java
> API, not the REST client.
>
> Can you please tell me how I can find/modify these classes/sources to
> get the appropriate classes for inctercepting the Search Response when
> invoked from a TransportClient?
>
>
> Thanks,
> Sandeep
>
> On Wed, Aug 27, 2014 at 6:38 PM, joergpra...@gmail.com <
> joergpra...@gmail.com> wrote:
>
>> Have a look at array-format or csv plugin, they are processing the
>> SearchResponse to output it in another format:
>>
>> https://github.com/jprante/elasticsearch-arrayformat
>>
>> https://github.com/jprante/elasticsearch-csv
>>
>> Jörg
>>
>>
>> On Wed, Aug 27, 2014 at 3:05 PM, 'Sandeep Ramesh Khanzode' via
>> elasticsearch  wrote:
>>
>>> Hi,
>>>
>>> Is there any action/module that I can extend/register/add so that I
>>> can intercept the SearchResponse on the server node before the response 
>>> is
>>> sent back to the TransportClient on the calling box?
>>>
>>> Thanks,
>>> Sandeep
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it,
>>> send an email to elasticsearch+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/elasticsearch/559a5c68-4567-425f-9842-7f2fe6755095%40googlegroups.com
>>> 
>>> .
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  --
>> You received this message because you are subscribed to a topic in
>> the Google Groups "elasticsearch" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/elasticsearch/o6RZL4KwJVs/unsubscribe
>> .
>> To unsubscribe from this group and all its topics, send an email to
>> elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGJ_%3D5RnyFqMP_AX4744z6tdAp8cfLBi_OqzLM23_rqzw%40mail.gmail.com
>> 
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to the Google
> Groups "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAKnM90bENin_aU4AXa%3DTVHQ_SyTTn-89Rev5vjj3%3DoDikwstkQ%40mail.gmail.com
> 

Aggregations across values returned by term then date histogram

2014-09-11 Thread ppearcy
I haven't been able to figure out how to do this and it may not be 
possible, but figured I'd ask. 

I have a query with multiple aggregations that looks like this:
https://gist.github.com/ppearcy/0c6a86ebf32a0bbcb1fc

This returns a time series of data per user: 
https://gist.github.com/ppearcy/7ceac858da2e647ff341

I want to do a stats aggregation across all the values for each week to 
provide a per-week statistical view of things. 
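
In other words, something along these lines (field and index names below are made up), 
although this computes stats over the raw field values in each week rather than over 
the per-user aggregates, which as far as I know still needs client-side merging in 
this version:

curl -s 'localhost:9200/myindex/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "per_week": {
      "date_histogram": { "field": "timestamp", "interval": "week" },
      "aggs": {
        "value_stats": { "stats": { "field": "value" } }
      }
    }
  }
}'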

Currently, I am doing these computations client side and it works pretty 
well, but I have performance concerns around merging lots of time-series 
streams. 

Any help or ideas would be much appreciated. 

Thanks!
Paul

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/a83cc20d-8c9c-4a6b-b843-349a2669e580%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Search Plugin to intercept search response

2014-09-11 Thread Sandeep Ramesh Khanzode
Thanks for bearing with me till now :) Please provide one final input on
this issue.

Is there any example for a custom search action? If not, can you please
provide some details on how I can implement one?

Thanks,
Sandeep


On Thu, Sep 11, 2014 at 4:53 PM, joergpra...@gmail.com <
joergpra...@gmail.com> wrote:

> You can not intercept the SearchResponse on the ES server itself. Instead,
> you must implement your custom search action.
>
> Jörg
>
> On Thu, Sep 11, 2014 at 10:00 AM, Sandeep Ramesh Khanzode <
> k.sandee...@gmail.com> wrote:
>
>> When you say, 'receive the SearchResponse', is that in the ES Server node
>> or the TransportClient node that spawned the request? I would want to
>> intercept the SearchResponse when created at the ES Server itself, since I
>> want to send the subset of Response to another process on the same node,
>> and it would not be very efficient to have the response sent back to the
>> client node only to be sent back again.
>>
>> Thanks,
>> Sandeep
>>
>> On Thu, Sep 11, 2014 at 12:43 PM, joergpra...@gmail.com <
>> joergpra...@gmail.com> wrote:
>>
>>> You can receive the SearchResponse, process the response, and return the
>>> response with whatever format you want.
>>>
>>> Jörg
>>>
>>> On Wed, Sep 10, 2014 at 11:59 AM, Sandeep Ramesh Khanzode <
>>> k.sandee...@gmail.com> wrote:
>>>
 Hi Jorg,

 Thanks for the links. I was checking the sources. There are relevant to
 my functional use case. But I will be using the TransportClient Java API,
 not the REST client.

 Can you please tell me how I can find/modify these classes/sources to
 get the appropriate classes for inctercepting the Search Response when
 invoked from a TransportClient?


 Thanks,
 Sandeep

 On Wed, Aug 27, 2014 at 6:38 PM, joergpra...@gmail.com <
 joergpra...@gmail.com> wrote:

> Have a look at array-format or csv plugin, they are processing the
> SearchResponse to output it in another format:
>
> https://github.com/jprante/elasticsearch-arrayformat
>
> https://github.com/jprante/elasticsearch-csv
>
> Jörg
>
>
> On Wed, Aug 27, 2014 at 3:05 PM, 'Sandeep Ramesh Khanzode' via
> elasticsearch  wrote:
>
>> Hi,
>>
>> Is there any action/module that I can extend/register/add so that I
>> can intercept the SearchResponse on the server node before the response 
>> is
>> sent back to the TransportClient on the calling box?
>>
>> Thanks,
>> Sandeep
>>
>> --
>> You received this message because you are subscribed to the Google
>> Groups "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it,
>> send an email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/559a5c68-4567-425f-9842-7f2fe6755095%40googlegroups.com
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to a topic in the
> Google Groups "elasticsearch" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/elasticsearch/o6RZL4KwJVs/unsubscribe
> .
> To unsubscribe from this group and all its topics, send an email to
> elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGJ_%3D5RnyFqMP_AX4744z6tdAp8cfLBi_OqzLM23_rqzw%40mail.gmail.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

  --
 You received this message because you are subscribed to the Google
 Groups "elasticsearch" group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/CAKnM90bENin_aU4AXa%3DTVHQ_SyTTn-89Rev5vjj3%3DoDikwstkQ%40mail.gmail.com
 
 .

 For more options, visit https://groups.google.com/d/optout.

>>>
>>>  --
>>> You received this message because you are subscribed to a topic in the
>>> Google Groups "elasticsearch" group.
>>> To unsubscribe from this topic, visit
>>> https://groups.google.com/d/topic/elasticsearch/o6RZL4KwJVs/unsubscribe.
>>> To unsubscribe from this group and all its topics, send an email to
>>> e

Re: [ANN] Elasticsearch Simple Action Plugin

2014-09-11 Thread 'Sandeep Ramesh Khanzode' via elasticsearch
Hi Jorg,

Sure. Thanks,

Just wondering what changed so much in 1.3? Is there sort of a quick fix? 
Or else, will just wait for an update from you. 

Thanks,
Sandeep


On Wednesday, 10 September 2014 15:20:57 UTC+5:30, Jörg Prante wrote:
>
> The plugin is for 1.2, I have to update the simple action plugin to 
> Elasticsearch 1.3
>
> Thanks for the reminder
>
> Jörg
>
>
> On Wed, Sep 10, 2014 at 11:08 AM, 'Sandeep Ramesh Khanzode' via 
> elasticsearch > wrote:
>
>> Hi Jorg,
>>
>> I was trying to install this plugin on ES v1.3.1. I am getting the errors 
>> similar to below. Can you please tell me what has changed and how I can 
>> rectify? Thanks,
>>
>> 4) No implementation for 
>> java.util.Map> org.elasticsearch.action.support.TransportAction> was bound.
>>   while locating java.util.Map> org.elasticsearch.action.support.TransportAction>
>> for parameter 1 at 
>> org.elasticsearch.client.node.NodeClusterAdminClient.(Unknown Source)
>>   while locating org.elasticsearch.client.node.NodeClusterAdminClient
>> for parameter 1 at 
>> org.elasticsearch.client.node.NodeAdminClient.(Unknown Source)
>>   while locating org.elasticsearch.client.node.NodeAdminClient
>> for parameter 2 at 
>> org.elasticsearch.client.node.NodeClient.(Unknown Source)
>>   at 
>> org.elasticsearch.client.node.NodeClientModule.configure(NodeClientModule.java:38)
>>
>> 5) No implementation for 
>> java.util.Map> org.elasticsearch.action.support.TransportAction> was bound.
>>   while locating java.util.Map> org.elasticsearch.action.support.TransportAction>
>> for parameter 1 at 
>> org.elasticsearch.client.node.NodeIndicesAdminClient.(Unknown Source)
>>   at 
>> org.elasticsearch.client.node.NodeClientModule.configure(NodeClientModule.java:36)
>>
>> 6) No implementation for 
>> java.util.Map> org.elasticsearch.action.support.TransportAction> was bound.
>>   while locating java.util.Map> org.elasticsearch.action.support.TransportAction>
>> for parameter 1 at 
>> org.elasticsearch.client.node.NodeIndicesAdminClient.(Unknown Source)
>>   while locating org.elasticsearch.client.node.NodeIndicesAdminClient
>> for parameter 2 at 
>> org.elasticsearch.client.node.NodeAdminClient.(Unknown Source)
>>   at 
>> org.elasticsearch.client.node.NodeClientModule.configure(NodeClientModule.java:37)
>>
>> 7) No implementation for 
>> java.util.Map> org.elasticsearch.action.support.TransportAction> was bound.
>>   while locating java.util.Map> org.elasticsearch.action.support.TransportAction>
>> for parameter 1 at 
>> org.elasticsearch.client.node.NodeIndicesAdminClient.(Unknown Source)
>>   while locating org.elasticsearch.client.node.NodeIndicesAdminClient
>> for parameter 2 at 
>> org.elasticsearch.client.node.NodeAdminClient.(Unknown Source)
>>   while locating org.elasticsearch.client.node.NodeAdminClient
>> for parameter 2 at 
>> org.elasticsearch.client.node.NodeClient.(Unknown Source)
>>   at 
>> org.elasticsearch.client.node.NodeClientModule.configure(NodeClientModule.java:38)
>>
>> 8) No implementation for org.elasticsearch.action.GenericAction annotated 
>> with @org.elasticsearch.common.inject.multibindings.Element(setNam
>> e=,uniqueId=275) was bound.
>>   at 
>> org.elasticsearch.action.ActionModule.configure(ActionModule.java:304)
>>
>> 9) An exception was caught and reported. Message: null
>>   at 
>> org.elasticsearch.common.inject.InjectorShell$Builder.build(InjectorShell.java:130)
>>
>> 9 errors
>> at 
>> org.elasticsearch.common.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:344)
>> at 
>> org.elasticsearch.common.inject.InjectorBuilder.initializeStatically(InjectorBuilder.java:151)
>> at 
>> org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:102)
>> at 
>> org.elasticsearch.common.inject.Guice.createInjector(Guice.java:93)
>> at 
>> org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70)
>> at 
>> org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:59)
>> at 
>> org.elasticsearch.node.internal.InternalNode.(InternalNode.java:192)
>> at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:159)
>> at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:70)
>> at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:203)
>> at 
>> org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
>> Caused by: java.lang.reflect.MalformedParameterizedTypeException
>> at 
>> sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.validateConstructorArguments(ParameterizedTypeImpl.java:58)
>> at 
>> sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.(ParameterizedTypeImpl.java:51)
>> at 
>> sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.make(ParameterizedTypeImpl.java:92)
>> at 
>> sun.reflect.generics.factory.CoreReflectionFactory.makeParameterizedType(C

Context in Native Scripts

2014-09-11 Thread zeev . sands

Hello everyone,

 I've been playing with native scripts and have a few questions:

 Is there any notion of context for native scripts? 

For example, is there a way to know that a method such as "runAsDouble" 
is being called for the last time? 
I might, for instance, like to send some sort of statistics after a 
search is done.

Is there any way to know how many documents the search produced, 
beforehand? 
I might want to do some pre calculations based on this number before 
the actual scoring begins.
   
Is there any way to get all the documents (or ids) somehow to process 
(score) them in bulk? 
My scoring might depend on the search result, I might want to calculate 
an average of a search result field and base my scores on this number.

I apologize in advance if some of my questions are uninformed. I'm new 
to ES, trying to switch from Solr.

Thank you,

ZS




-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/48643754-67cc-497c-8c84-c1565dfcb867%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Cannot Get REST API Clients to See Server on Win 8

2014-09-11 Thread Jerry Burman
I downloaded ElasticSearch (ES) 1.3.2 and Java jre1.8.0_20 and attempted to 
follow the ES tutorial.  I am new to ES.  I installed ES on my Windows 8.1 
x64 laptop and it appears that the server is running in the prompt window.  
I also set up JAVA_HOME.  However, when I attempt to run the simple 
tutorials under either Chrome Sense or Mozilla, I get server errors (e.g. 
status 0).  I also attempted to run service install and service start (x64) 
from the ES directory and both failed.  How can I get the clients to see 
the server and make the service operate properly?
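
A quick sanity check that usually narrows this down is to hit the HTTP endpoint 
directly; the same URL in a browser should return a small JSON blob with the version:

curl http://localhost:9200/

If that works but Sense reports status 0, it is typically the browser blocking the 
cross-origin request rather than the server itself (there is an http.cors.enabled 
setting worth looking at, if I recall correctly); if it fails, check which host and 
port the node is actually bound to in elasticsearch.yml.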

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/ab07dc1e-b3a1-4134-88bb-02b3d544f4b9%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Move Elasticsearch index's from /auto/abc to /auto/def

2014-09-11 Thread shriyansh jain
Can someone help me out with this question? Is there any ambiguity in 
the question?

Thank you.
Shriyansh

On Wednesday, September 10, 2014 6:30:30 PM UTC-7, shriyansh jain wrote:
>
> Hi,
>
> I need an advice on migrating all Elasticsearch indexes from one partition 
> to another partition. Currently, I am using cluster of 2 nodes with 
> Elasticsearch.
> And both the nodes are pointing to the same partition, which is /auto/abc. 
> How can I point both the nodes to the partition /auto/def and keep all the 
> indexes as they were before? 
> Will copying all the indexes from /auto/abc to /auto/def and pointing both 
> Elasticsearch nodes' data path to /auto/def work? Or will I have to 
> make some other changes as well?
>
> Thank you,
> Shriyash
>
>
>
>
>
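
For what it's worth, the usual sequence looks roughly like this (a sketch, not a 
tested procedure; the paths assume the default layout under path.data, both nodes 
must be stopped while copying, and a backup first is a good idea):

# 1. stop elasticsearch on both nodes
# 2. copy the whole data directory, preserving permissions
cp -a /auto/abc/. /auto/def/
# 3. in elasticsearch.yml on both nodes, change
#      path.data: /auto/abc
#    to
#      path.data: /auto/def
# 4. start the nodes again and verify
curl -s 'localhost:9200/_cluster/health?pretty'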

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/8d397708-3681-41bc-b422-97fd8a546c00%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: complex nested query

2014-09-11 Thread yancey
Vineeth!

Thanks for your reply! I'll try your solution and hope it solves my problem.
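
For reference, here is roughly how I read the first step (assuming "detail" is mapped 
as a nested type; index and type names below are placeholders):

curl -s 'localhost:9200/myindex/mytype/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "detail": {
      "nested": { "path": "detail" },
      "aggs": {
        "per_day": {
          "date_histogram": { "field": "detail.date", "interval": "day" },
          "aggs": {
            "total_price": { "sum": { "field": "detail.price" } }
          }
        }
      }
    }
  }
}'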


Thanks && Best Regard!



On 12 September 2014, at 00:12, vineeth mohan  wrote:

> Hello , 
> 
> I don't feel you can do this in a single call.
> What i have in mind would be 
> 
> Run a two-level aggregation query with a date histogram aggregation on the date 
> field on the first level, and a term aggregation with a sum aggregation on the price 
> field on the second level. You might need to use a nested aggregation here as well.
> Once you get the results, choose the dates based on the criteria, i.e. sum 
> of price more than 300. With the dates you are interested in, fire the next 
> query with all the interesting date ranges as a range query. 
> Thanks
>  Vineeth
> 
> On Thu, Sep 11, 2014 at 5:04 PM, 闫旭  wrote:
> Anyone can help this?
> 
> Thanks && Best Regard!
> 
> On 11 September 2014, at 13:24, 闫旭  wrote:
> 
>> Thank you! But a nested bool query cannot sum up all the prices within the date 
>> range. How can I do this?
>> 
>> Thx again.
>> 
>> Thanks && Best Regard!
>> 
>> On 11 September 2014, at 12:04, vineeth mohan  wrote:
>> 
>>> Hello , 
>>> 
>>> 
>>> First you need to declare field details as nested. - 
>>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-nested-type.html#mapping-nested-type
>>> 
>>> Then do a bool query with the date range constrain and range constrain
>>> 
>>> Thanks
>>> Vineeth
>>> 
>>> On Thu, Sep 11, 2014 at 8:53 AM, 闫旭  wrote:
>>> Dear All!
>>> 
>>> I have a problem with a complex nested query
>>> the docs like this:
>>> _id:1
>>> {
>>> "detail":[
>>> {
>>> "date":"2014-09-01",
>>> "price”:50
>>> },
>>> {
>>> "date":"2014-09-02",
>>> "price”:100
>>> },
>>> {
>>> "date":"2014-09-03",
>>> "price":100
>>> },
>>> {
>>> "date":"2014-09-04",
>>> "price":200
>>> }
>>> ]
>>> 
>>> }
>>> _id:2
>>> {
>>> "detail":[
>>> {
>>> "date":"2014-09-01",
>>> "price":100
>>> },
>>> {
>>> "date":"2014-09-02",
>>> "price":200
>>> },
>>> {
>>> "date":"2014-09-03",
>>> "price":300
>>> },
>>> {
>>> "date":"2014-09-04",
>>> "price":200
>>> }
>>> ]
>>> 
>>> }
>>> I want to filter the docs with "date in [2014-09-01, 2014-09-03] and 
>>> sum(price) > 300".
>>> I have only found a way with "aggregation", but it can only compute the sum over 
>>> all docs.
>>> 
>>> How can I solve this problem? 
>>> 
>>> 
>>> Thanks && Best Regard!
>>> 
>>> 
>>> 
>>> -- 
>>> You received this message because you are subscribed to the Google Groups 
>>> "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an 
>>> email to elasticsearch+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/elasticsearch/A98354E4-9C9F-43B2-9310-6355DE3D6F85%40gmail.com.
>>> For more options, visit https://groups.google.com/d/optout.
>>> 
>>> 
>>> -- 
>>> You received this message because you are subscribed to the Google Groups 
>>> "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an 
>>> email to elasticsearch+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/elasticsearch/CAGdPd5kfRarPNNBctvYfHsk52tjD2rxv18aQGqq3Hz0i_2ZxVQ%40mail.gmail.com.
>>> For more options, visit https://groups.google.com/d/optout.
>> 
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/elasticsearch/F5848899-E506-470B-AA05-E6A2B1965986%40gmail.com.
> 
> For more options, visit https://groups.google.com/d/optout.
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/elasticsearch/CAGdPd5mH%2BSrgrkomvNh9-a-A5gDTjeXCO6DE8uWp32ruNbGPFA%40mail.gmail.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view

Marvel - Total Queries Per Day

2014-09-11 Thread Scott Decker
I must be missing something, or not be querying the dashboards correctly =(

How do I get a graph of the total queries being generated for nodes?
Basically, I'm just trying to see counts per hour/day/week of how many 
queries are being run against the ES cluster we have.

I know query_total can be used, but I'm not quite following how to get that 
displayed.
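
For reference, the raw counter is also available outside Marvel; query_total is a 
monotonically increasing counter, so a per-hour or per-day rate can at worst be 
computed by sampling it periodically and taking the difference:

# cluster-wide and per-index search stats, including query_total
curl -s 'localhost:9200/_stats/search?pretty'

# per-node view of the same counters
curl -s 'localhost:9200/_nodes/stats/indices/search?pretty'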

Any help would be appreciated.

Thanks,
Scott

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1fd01e35-9864-4d8d-a204-262324c28747%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Do I need the JDBC driver

2014-09-11 Thread Employ
Thank you, that answers a lot of my questions. There is still the point of 
using the PHP library for Elasticsearch, where I can send documents directly 
to Elasticsearch in JSON format without needing a JDBC driver. Is this not a 
good option?

Sent from my iPhone

> On 11 Sep 2014, at 16:33, "joergpra...@gmail.com"  
> wrote:
> 
> Synchronization of data is a very broad question. This is just because the 
> data organization in an RDBMS is very different from ES. You surely know 
> that. See also object-relational impedance mismatch 
> http://en.wikipedia.org/wiki/Object-relational_impedance_mismatch
> 
> The JDBC river plugin allows you to define SQL statements so you can easily 
> construct JSON out if it, for indexing into ES. 
> 
> If you can map identifiers from your RDBMS to JSON doc IDs and allocate the 
> _id field in the JDBC river plugin, you are lucky. In that case you can just 
> overwrite existing docs in ES to keep up with the most recent version.
> 
> Synchronization also includes modifications and deletions to avoid stale 
> docs, and transactional ACID properties. I have no general solution for this. 
> The best approach is to provide timewindowed indices and drop indices that 
> are too old, similar to what Logstash does.
> 
> Jörg
> 
>> On Thu, Sep 11, 2014 at 3:39 PM, James  wrote:
>> Hi Jorg,
>> 
>> Thank you for the reply. Yes I meant the elasticsearch river. Simply put, I 
>> want to synchronize the entries in my SQL database with my elasticsearch, so 
>> I can use elasticsearch for searching instead of doing SQL fulltext search. I want 
>> to know that when a new item gets added or removed from that database, it 
>> also gets added / removed from elasticsearch.
>> 
>> My understanding, which might be wrong, is that I can either use the PHP 
>> elasticsearch library to push updates (adds / removes) to elasticsearch when 
>> new items are added to SQL:
>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-index_.html
>> 
>> Or I can use the JDBC river plugin for elasticsearch to connect to my 
>> database directly and synchronize elasticsearch with the SQL database.
>> 
>> My two questions are:
>> 
>> 1. Is my understanding above correct
>> 2. Does one option have advantages over the other
>> 
>> - James
>> 
>>> On Wednesday, September 10, 2014 10:59:18 AM UTC+1, James wrote:
>>> Hi,
>>> 
>>> I'm setting up a system where I have a main SQL database which is synced 
>>> with elasticsearch. My plan is to use the main PHP library for 
>>> elasticsearch. 
>>> 
>>> I was going to have a cron run every thirty minutes to check for items in 
>>> my database that not only have an "active" flag but that also do not have 
>>> an "indexed" flag, which means I need to add them to the index. Then I was 
>>> going to add that item to the index. Since I am taking this path, it 
>>> doesn't seem like I need the JDBC driver, as I can add items to 
>>> elasticsearch using the PHP library.
>>> 
>>> So, my question is, can I get away without using the JDBC driver?
>>> 
>>> James
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/1d5fe901-fd0e-4663-9c68-5f7cf8092cf1%40googlegroups.com.
>> 
>> For more options, visit https://groups.google.com/d/optout.
> 
> -- 
> You received this message because you are subscribed to a topic in the Google 
> Groups "elasticsearch" group.
> To unsubscribe from this topic, visit 
> https://groups.google.com/d/topic/elasticsearch/0dzSMbARlks/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to 
> elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/elasticsearch/CAKdsXoFoUMAv3R0b1NN%2B9B7eBNCcqpjg7tTMuNQPzCgGGupkQw%40mail.gmail.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/9F779070-9B92-4A0E-A09A-71848F551214%40employ.com.
For more options, visit https://groups.google.com/d/optout.


Re: ElasticsearchIntegrationTest issue

2014-09-11 Thread Jean-Bernard Damiano
I found the problem with the code.
It is a test configuration issue.

The PluginsService class loads plugins from the plugin.types setting, but it 
also calls another method, 
tupleBuilder.addAll(loadPluginsFromClasspath(settings));

In my case loadPluginsFromClasspath finds my plugin a second time. 
That is the reason we encountered the error above.

Regards, 


Le mardi 9 septembre 2014 07:46:45 UTC+2, Jean-Bernard Damiano a écrit :
>
> Hello,
>
>  I'm encountered an issue when Elasticsearch 1.3.2 and the test.
>
> The test suite is configure like this:
> @ClusterScope(scope = ElasticsearchIntegrationTest.Scope.SUITE, 
> numDataNodes = 2)
> public abstract class AbstractRestActionTest extends 
> ElasticsearchIntegrationTest {
>
>
> @Override
> protected Settings nodeSettings(int nodeOrdinal) {
> Settings settings = ImmutableSettings.settingsBuilder()
> .put(super.nodeSettings(nodeOrdinal))
> .put("plugins." + PluginsService.
> LOAD_PLUGIN_FROM_CLASSPATH, true)
> .put("plugin.types", InOutPlugin.class.getName())
> .put("index.number_of_shards", defaultShardCount())
> .put("index.number_of_replicas", 0)
> .put("http.enabled", false)
> .build();
> return settings;
> }
>
>
> @Override
> public Settings indexSettings() {
> return settingsBuilder().put("index.number_of_shards", 
> defaultShardCount()).put("index.number_of_replicas", 0).build();
> } 
>
>
>public void setupTestIndexLikeUsers(String indexName, int shards, 
> boolean loadTestData) throws IOException {
> System.out.println("CreateIndex "+ indexName);
> assertAcked(prepareCreate(indexName).setSettings(settingsBuilder
> ().put("index.number_of_shards", shards).put("index.number_of_replicas", 0
> ))
> .addMapping("d", jsonBuilder().startObject()
> .startObject("d")
> .startObject("properties")
> .startObject("name")
> .field("type", "string")
> .field("index", "not_analyzed")
> .field("store", "yes")
> .endObject()
> .endObject()
> .endObject()));
> ensureGreen(indexName);
>
>
> if (loadTestData) {
> index(indexName, "d", "1", "name", "car");
> index(indexName, "d", "2", "name", "bike");
> index(indexName, "d", "3", "name", "train");
> index(indexName, "d", "4", "name", "bus");
> }
> refresh();
> waitForRelocation();
> }
>
>
> The plugins register RestModule and ActionModule like this:
>
> public void onModule(RestModule restModule) {
> restModule.addRestAction(RestExportAction.class);
> restModule.addRestAction(RestImportAction.class);
> restModule.addRestAction(RestSearchIntoAction.class);
> restModule.addRestAction(RestDumpAction.class);
> restModule.addRestAction(RestRestoreAction.class);
> restModule.addRestAction(RestReindexAction.class);
> }
> public void onModule(ActionModule module) {
> if (!settings.getAsBoolean("node.client", false)) {
> module.registerAction(ExportAction.INSTANCE, 
> TransportExportAction.class);
> module.registerAction(ImportAction.INSTANCE, 
> TransportImportAction.class);
> module.registerAction(SearchIntoAction.INSTANCE, 
> TransportSearchIntoAction.class);
> module.registerAction(DumpAction.INSTANCE, 
> TransportDumpAction.class);
> module.registerAction(RestoreAction.INSTANCE, 
> TransportRestoreAction.class);
> module.registerAction(ReindexAction.INSTANCE, 
> TransportReindexAction.class);
>}
> }
>
>
>
> Another module is registered for map binding like this in the plugin 
> code:
>
>
> @Override
> public Collection
> ...

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/0f093ce6-cfe2-48f1-a5ba-f6a0aad63bf8%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Simple query string does not work

2014-09-11 Thread vineeth mohan
Hello Dan ,

The format of the entire query is wrong.
You need to at least specify the type of query that you are using.
In this case, a wildcard query would be the best fit.
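
A minimal sketch of what that could look like, keeping your filters unchanged
and only swapping the query clause (the field name and values are taken from
your request):

{
  "from": 0,
  "size": 10,
  "query": {
    "wildcard": { "tags": "*blaat*" }
  },
  "filter": {
    "and": [
      { "term": { "representative": 1 } },
      { "term": { "is_gift": 0 } },
      { "term": { "active": 1 } },
      { "terms": { "website_ids": [1], "execution": "and" } }
    ]
  }
}

Keep in mind that a leading wildcard like *blaat* can be slow on large indices.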

Thanks
  Vineeth

On Thu, Sep 11, 2014 at 9:42 PM, Dan  wrote:

> Hi Vineeth,
>
>
> Thanks for your reply.
>
>
> {"from":0,"size":10,"query":{"field":{"tags":"*blaat*"}},"filter":{"and":[{"term":{"representative":1}},{"term":{"is_gift":0}},{"term":{"active":1}},{"terms":{"website_ids":[1],"execution":"and"}}]}}
>
>
> Is this enough?
>
>
> Thanks!
>
>
> On Thursday, September 11, 2014 15:08:05 UTC+2, vineeth mohan wrote:
>>
>> Hello Dan ,
>>
>> Can you paste the above as JSON?
>> I am not exactly able to make out what the query is.
>>
>> Thanks
>>   Vineeth
>>
>> On Thu, Sep 11, 2014 at 5:50 PM, Dan  wrote:
>>
>>> Nobody? :(
>>>
>>> On Wednesday, September 10, 2014 21:24:19 UTC+2, Dan wrote:
>>>
 Hi Guys,

 I have a simple query which is not working. I am using the same query
 on another server with the same mapping; where it does work.
 Everything else is working like a charm.

 I am talking about the following query.
 The problem is to be found when I use the query > field > tags query.
 When I do not use this part, everything works fine.

 Array
 (
 [from] => 0
 [size] => 10
 [query] => Array
 (
  *   [field] => Array
 (
 [tags] => *blaat*
 )
 *
 )

 [filter] => Array
 (
 [and] => Array
 (
 [0] => Array
 (
 [term] => Array
 (
 [representative] => 1
 )

 )

 [1] => Array
 (
 [term] => Array
 (
 [is_gift] => 0
 )

 )

 [2] => Array
 (
 [term] => Array
 (
 [active] => 1
 )

 )

 [3] => Array
 (
 [terms] => Array
 (
 [website_ids] => Array
 (
 [0] => 1
 )

 [execution] => and
 )

 )

 )

 )

 )


 The mapping is as follows:


   "product" : {
 "properties" : {
   "action" : {
 "type" : "string"
   },
   "active" : {
 "type" : "string"
   },
   "brand_ids" : {
 "type" : "string"
   },*  "tags" : {
 "type" : "string"
   },*
 .


 When I index an item I am using the following part:

 Array
 (
 [2359] => Array
 (
 
 *[tags] => blaat, another blaat, etc*
   

 Maybe an installation configuration issue?

 Does anyone have a clue?

 Thanks!

  --
>>> You received this message because you are subscribed to the Google
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to elasticsearc...@googlegroups.com.
>>> To view this discussion on the web visit https://groups.google.com/d/
>>> msgid/elasticsearch/574351f6-a7ea-4b60-8dd4-3a5dea2f4d3b%
>>> 40googlegroups.com
>>> 
>>> .
>>>
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/94e06349-b707-4b79-acd6-51ea41358a24%40googlegroups.com
> 
> .
>
> For more opt

Re: complex nested query

2014-09-11 Thread vineeth mohan
Hello ,

I don't feel you can do this in a single call.
What I have in mind would be


   1. Run a two-level aggregation query with a date histogram aggregation on
   the first level and a terms aggregation on the second, with a sum
   aggregation on the price field at that level (see the sketch below). You
   might need to use a nested aggregation here as well.
   2. Once you get the results, choose the dates based on the criteria,
   i.e. sum of price more than 300. With the dates you are interested in,
   fire the next query with all the interesting date ranges as range queries.
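
A minimal sketch of the aggregation part, assuming the detail objects are
mapped as nested and using a made-up index name (bookings); slot a terms level
in between if you need per-document buckets:

curl -XPOST "http://localhost:9200/bookings/_search?pretty" -d '{
  "size": 0,
  "aggs": {
    "details": {
      "nested": { "path": "detail" },
      "aggs": {
        "per_day": {
          "date_histogram": { "field": "detail.date", "interval": "day" },
          "aggs": {
            "total_price": { "sum": { "field": "detail.price" } }
          }
        }
      }
    }
  }
}'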

Thanks
 Vineeth

On Thu, Sep 11, 2014 at 5:04 PM, 闫旭  wrote:

> Anyone can help this?
>
> Thanks && Best Regard!
>
On September 11, 2014, at 13:24, 闫旭 wrote:
>
> Thank you! But a nested bool query cannot sum all the prices within the date
> range. How can I do this?
>
> Thx again.
>
> Thanks && Best Regard!
>
> On September 11, 2014, at 12:04, vineeth mohan wrote:
>
> Hello ,
>
>
> First you need to declare field details as nested. -
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-nested-type.html#mapping-nested-type
>
> Then do a bool query with the date range constrain and range constrain
>
> Thanks
> Vineeth
>
> On Thu, Sep 11, 2014 at 8:53 AM, 闫旭  wrote:
>
>> Dear All!
>>
>> I have a problem with a complex nested query
>> the docs like this:
>> _id:1
>> {
>> "detail":[
>> {
>> "date":"2014-09-01",
>> "price”:50
>> },
>> {
>> "date":"2014-09-02",
>> "price”:100
>> },
>> {
>> "date":"2014-09-03",
>> "price":100
>> },
>> {
>> "date":"2014-09-04",
>> "price":200
>> }
>> ]
>>
>> }
>> _id:2
>> {
>> "detail":[
>> {
>> "date":"2014-09-01",
>> "price":100
>> },
>> {
>> "date":"2014-09-02",
>> "price":200
>> },
>> {
>> "date":"2014-09-03",
>> "price":300
>> },
>> {
>> "date":"2014-09-04",
>> "price":200
>> }
>> ]
>>
>> }
>> I will filter the docs with “date in [2014-09-01, 2014-09-03] and
>> sum(price) > 300”.
>> I only find some way with “aggregation”, but it can only stat the sum of
>> all docs.
>>
>> How Can I solve the problem??
>>
>>
>> Thanks && Best Regard!
>>
>>
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/A98354E4-9C9F-43B2-9310-6355DE3D6F85%40gmail.com
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAGdPd5kfRarPNNBctvYfHsk52tjD2rxv18aQGqq3Hz0i_2ZxVQ%40mail.gmail.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>
>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/F5848899-E506-470B-AA05-E6A2B1965986%40gmail.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAGdPd5mH%2BSrgrkomvNh9-a-A5gDTjeXCO6DE8uWp32ruNbGPFA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Simple query string does not work

2014-09-11 Thread Dan


Hi Vineeth,


Thanks for your reply.


{"from":0,"size":10,"query":{"field":{"tags":"*blaat*"}},"filter":{"and":[{"term":{"representative":1}},{"term":{"is_gift":0}},{"term":{"active":1}},{"terms":{"website_ids":[1],"execution":"and"}}]}}


Is this enough?


Thanks!


On Thursday, September 11, 2014 15:08:05 UTC+2, vineeth mohan wrote:
>
> Hello Dan , 
>
> Can you paste the above as JSON?
> I am not exactly able to make out what the query is.
>
> Thanks
>   Vineeth
>
> On Thu, Sep 11, 2014 at 5:50 PM, Dan > wrote:
>
>> Nobody? :(
>>
>> On Wednesday, September 10, 2014 21:24:19 UTC+2, Dan wrote:
>>
>>> Hi Guys,
>>>
>>> I have a simple query which is not working. I am using the same query on 
>>> another server with the same mapping; where it does work.
>>> Everything else is working like a charm.
>>>
>>> I am talking about the following query.
>>> The problem is to be found when I use the query > field > tags query. 
>>> When I do not use this part, everything works fine.
>>>
>>> Array
>>> (
>>> [from] => 0
>>> [size] => 10
>>> [query] => Array
>>> (
>>>  *   [field] => Array
>>> (
>>> [tags] => *blaat*
>>> )
>>> *
>>> )
>>>
>>> [filter] => Array
>>> (
>>> [and] => Array
>>> (
>>> [0] => Array
>>> (
>>> [term] => Array
>>> (
>>> [representative] => 1
>>> )
>>>
>>> )
>>>
>>> [1] => Array
>>> (
>>> [term] => Array
>>> (
>>> [is_gift] => 0
>>> )
>>>
>>> )
>>>
>>> [2] => Array
>>> (
>>> [term] => Array
>>> (
>>> [active] => 1
>>> )
>>>
>>> )
>>>
>>> [3] => Array
>>> (
>>> [terms] => Array
>>> (
>>> [website_ids] => Array
>>> (
>>> [0] => 1
>>> )
>>>
>>> [execution] => and
>>> )
>>>
>>> )
>>>
>>> )
>>>
>>> )
>>>
>>> )
>>>
>>>
>>> The mapping is as follows:
>>>
>>>
>>>   "product" : {
>>> "properties" : {
>>>   "action" : {
>>> "type" : "string"
>>>   },
>>>   "active" : {
>>> "type" : "string"
>>>   },
>>>   "brand_ids" : {
>>> "type" : "string"
>>>   },*  "tags" : {
>>> "type" : "string"
>>>   },*
>>> .
>>>
>>>
>>> When I index an item I am using the following part:
>>>
>>> Array
>>> (
>>> [2359] => Array
>>> (
>>> 
>>> *[tags] => blaat, another blaat, etc*
>>>   
>>>
>>> Maybe an installation configuration issue?
>>>
>>> Does anyone have a clue?
>>>
>>> Thanks!
>>>
>>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/574351f6-a7ea-4b60-8dd4-3a5dea2f4d3b%40googlegroups.com
>>  
>> 
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/94e06349-b707-4b79-acd6-51ea41358a24%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: searching across multiple types returns doesn't find all documents matching

2014-09-11 Thread ben
Thank you!

On Wednesday, September 10, 2014 8:59:31 PM UTC-7, vineeth mohan wrote:
>
> Hello Ben , 
>
>
> This is the type/field ambiguity bug - 
> https://github.com/elasticsearch/elasticsearch/issues/4081
>
> Basically, if you use the same name for a field and a type, this might 
> come up.
> Make the two different and it should work.
>
> Thanks
> Vineeth
>
> On Thu, Sep 11, 2014 at 4:17 AM, ben > 
> wrote:
>
>> I include a bash script that recreates the situation.
>>
>> #!/bin/sh
>>
>> curl -XDELETE "http://localhost:9200/test";
>> curl -XPUT "http://localhost:9200/test";
>>
>> echo
>>
>> curl -XPUT "http://localhost:9200/test/foo/_mapping"; -d '{
>> "foo" : { 
>> "properties" : {
>> "id": {
>> "type" : "multi_field",
>> "path": "full",
>> "fields" : {
>> "foo_id_in_another_field" : {"type" : "long", 
>> include_in_all:false },
>> "id" : {"type" : "long"}
>>}
>> }
>> }
>> }
>> }'
>>
>> echo
>>
>> #foo is basically a duplicate of the foo document to support search use 
>> cases
>> curl -XPUT "http://localhost:9200/test/bar/_mapping"; -d '{
>> "bar" : {
>> "properties" : {
>> "id": {
>> "type" : "multi_field",
>> "path": "full",
>> "fields" : {
>> "bar_id_in_another_field" : {"type" : "long", 
>> include_in_all:false },
>> "id" : {"type" : "long"}
>>}
>> },
>> "foo": {
>> "properties": {
>> "id": {
>> "type" : "multi_field",
>> "path": "full",
>> "fields" : {
>> "foo_id_in_another_field" : {"type" : "long", 
>> include_in_all:false },
>> "id" : {"type" : "long"}
>> }
>> }
>> }
>> }
>> }
>> }
>> }'
>>
>> echo
>>
>> curl -XPUT "http://localhost:9200/test/foo/1?refresh=true"; -d '{
>> "foo": {
>> "id": 1
>> }
>> }'
>>
>> echo
>>
>> #failure case appears even when not including the following JSON
>> # "bar": {
>> #   "id": 2,
>> #   "foo": {
>> # "id": 3
>> #   }
>> # }
>> curl -XPUT "http://localhost:9200/test/bar/2?refresh=true"; -d '{
>> "bar": {
>> "id": 2
>> }
>> }'
>>
>> echo
>>
>> #expect two results, get one (FAIL)
>> curl -XPOST "http://localhost:9200/test/foo,bar/_search?pretty=true"; -d '{
>>   "size": 10,
>>   "query": {
>> "query_string": {
>>   "query": "foo.id:1 OR bar.id:2"
>> }
>>   }
>> }'
>>
>> echo
>>
>> #expect one result, get one (PASS)
>> curl -XPOST "http://localhost:9200/test/bar/_search?pretty=true"; -d '{
>>   "size": 10,
>>   "query": {
>> "query_string": {
>>   "query": "foo.id:1 OR bar.id:2"
>> }
>>   }
>> }'
>>
>> echo
>>
>> #expect one result, get one result (PASS)
>> curl -XPOST "http://localhost:9200/test/foo/_search?pretty=true"; -d '{
>>   "size": 10,
>>   "query": {
>> "query_string": {
>>   "query": "foo.id:1 OR bar.id:2"
>> }
>>   }
>> }'
>>
>> echo
>>
>> #expect two results, get two results (PASS)
>> curl -XPOST "http://localhost:9200/test/_search?pretty=true"; -d '{
>>   "size": 10,
>>   "query": {
>> "query_string": {
>>   "query": "foo.id:1 OR bar.id:2"
>> }
>>   }
>> }'
>>
>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/1d8f1ea8-db1b-425c-b6ee-153f5f369f43%40googlegroups.com
>>  
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/0a550ac5-4ed9-4a6e-89b5-100b2be813a3%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Do I need the JDBC driver

2014-09-11 Thread joergpra...@gmail.com
Synchronization of data is a very broad question. This is just because the
data organization in an RDBMS is very different from ES. You surely know
that. See also object-relational impedance mismatch
http://en.wikipedia.org/wiki/Object-relational_impedance_mismatch

The JDBC river plugin allows you to define SQL statements so you can easily
construct JSON out of it, for indexing into ES.

If you can map identifiers from your RDBMS to JSON doc IDs and allocate the
_id field in the JDBC river plugin, you are lucky. In that case you can
just overwrite existing docs in ES to keep up with the most recent version.
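
A minimal sketch of such a river definition, with made-up connection details
and table/column names; the relevant part is the "as _id" alias, which maps the
RDBMS identifier onto the Elasticsearch document _id:

curl -XPUT "http://localhost:9200/_river/my_jdbc_river/_meta" -d '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/shop",
    "user" : "shopuser",
    "password" : "secret",
    "sql" : "select id as _id, name, price from products"
  }
}'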

Synchronization also includes modifications and deletions to avoid stale
docs, and transactional ACID properties. I have no general solution for
this. The best approach is to provide timewindowed indices and drop indices
that are too old, similar to what Logstash does.
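
Dropping an index that has fallen out of the retention window is then a single
call (the index name here is just an example of a daily naming pattern):

curl -XDELETE "http://localhost:9200/products-2014.08.01"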

Jörg

On Thu, Sep 11, 2014 at 3:39 PM, James  wrote:

> Hi Jorg,
>
> Thank you for the reply. Yes I meant the elasticsearch river. Simply put,
> I want to synchronize the entries in my SQL database with elasticsearch,
> so I can use elasticsearch for searching instead of doing full-text search in
> SQL. I want to know that when a new item gets added or removed from that
> database it also gets added / removed from elasticsearch.
>
> My understanding, which might be wrong, is that I can either use the PHP
> elasticsearch library to push updates (adds / removes) to elasticsearch
> when new items are added to SQL:
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-index_.html
>
> Or I can use the JDBC river plugin for elasticsearch to connect to
> my database directly and synchronize elasticsearch with the SQL database.
>
> My two questions are:
>
> 1. Is my understanding above correct
> 2. Does one option have advantages over the other
>
> - James
>
> On Wednesday, September 10, 2014 10:59:18 AM UTC+1, James wrote:
>>
>> Hi,
>>
>> I'm setting up a system where I have a main SQL database which is synced
>> with elasticsearch. My plan is to use the main PHP library for
>> elasticsearch.
>>
>> I was going to have a cron run every thirty minutes to check for items in
>> my database that not only have an "active" flag but that also do not have
>> an "indexed" flag, which means I need to add them to the index. Then I was
>> going to add that item to the index. Since I am taking this path, it
>> doesn't seem like I need the JDBC driver, as I can add items to
>> elasticsearch using the PHP library.
>>
>> So, my question is, can I get away without using the JDBC driver?
>>
>> James
>>
>>
>>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/1d5fe901-fd0e-4663-9c68-5f7cf8092cf1%40googlegroups.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKdsXoFoUMAv3R0b1NN%2B9B7eBNCcqpjg7tTMuNQPzCgGGupkQw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Does the wares transport support the plugin sites?

2014-09-11 Thread John Smith
I installed wares and it seems to work fine, though it doesn't serve the 
plugin sites.

When I tried to access the _plugin URL I got a "No handler registered 
for _plugin" response.

I haven't touched servlets for ages, so I assume it may be doable with a 
filter...

So map the ES servlet to /es and use the default servlet to load static site 
content, though this requires copying the plugins folder into WEB-INF or 
something like that...

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/7b28a528-e93d-4b71-9a6f-db20bd8e9c5d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Do I need the JDBC driver

2014-09-11 Thread James
Hi Jorg,

Thank you for the reply. Yes I meant the elasticsearch river. Simply put, I 
want to synchronize the entries in my SQL database with elasticsearch, so I 
can use elasticsearch for searching instead of doing full-text search in SQL. 
I want to know that when a new item gets added or removed from that database 
it also gets added / removed from elasticsearch.

My understanding, which might be wrong, is that I can either use the PHP 
elasticsearch library to push updates (adds / removes) to elasticsearch 
when new items are added to SQL:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-index_.html
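
In other words, each push from the cron job would boil down to something like
this (the index/type names and the document body are just placeholders):

curl -XPUT "http://localhost:9200/shop/product/123" -d '{
  "name": "Example product",
  "active": true
}'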

Or I can use the JDBC river plugin for elasticsearch to connect to my 
database directly and synchronize elasticsearch with the SQL database.

My two questions are:

1. Is my understanding above correct
2. Does one option have advantages over the other

- James

On Wednesday, September 10, 2014 10:59:18 AM UTC+1, James wrote:
>
> Hi,
>
> I'm setting up a system where I have a main SQL database which is synced 
> with elasticsearch. My plan is to use the main PHP library for 
> elasticsearch. 
>
> I was going to have a cron run every thirty minutes to check for items in 
> my database that not only have an "active" flag but that also do not have 
> an "indexed" flag, which means I need to add them to the index. Then I was 
> going to add that item to the index. Since I am taking this path, it 
> doesn't seem like I need the JDBC driver, as I can add items to 
> elasticsearch using the PHP library.
>
> So, my question is, can I get away without using the JDBC driver?
>
> James
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1d5fe901-fd0e-4663-9c68-5f7cf8092cf1%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Error on adding the new field mapping in already existing mapping

2014-09-11 Thread David Pilato
Probably at some point you did send something to elasticsearch which created 
the username field.

Try to get the actual mapping for your YsFact type using:

GET /index/YsFact/_mapping 
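
As a curl call that would be (replace index with your real index name):

curl -XGET "http://localhost:9200/index/YsFact/_mapping?pretty"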

And you should see the existing definition.
Maybe you are using templates?

-- 
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr


On September 11, 2014 at 12:05:33, Narinder Kaur (narinder.k...@izap.in) wrote:

Hi,

           We already have a type in our system, YsFact. Now I needed to add a 
new field to this type, username, and username has the following mapping.

"username":{"type":"string","index":"not_analyzed"}

We have a script in the system that executes the mappings automatically, and 
during the mapping execution I am getting the following error.

YsFact :: MergeMappingException[Merge failed with failures {[mapper [username] 
has different index values, mapper [username] has different `norms.enabled` 
values, mapper [username] has different tokenize values, mapper [username] has 
different index_analyzer]}]

Although we have not added anything against username in the YsFact type, I do 
not think Elasticsearch should have created a default mapping for it, as no data 
was indexed against this field. This was the response the first time we 
introduced this field in the mappings.

One thing I want to point out is that we have username fields in other types 
which have data against the username. Could that conflict with YsFact?
Also, our production Elasticsearch cluster consists of 3 nodes and 2 replicas. 
Could something related to the distribution be at play there?

Does anyone have any idea what happened here?

Thanks in advance for all kind of details on it.


--
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/7f5d6215-6388-489d-9810-9cc2fc0972f9%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/etPan.5411a508.1190cde7.3c6%40MacBook-Air-de-David.local.
For more options, visit https://groups.google.com/d/optout.


Re: Do I need the JDBC driver

2014-09-11 Thread joergpra...@gmail.com
How do you mean "PHP elasticsearch library can convert SQL to JSON"? How
can this be? It is only for Elasticsearch.

As a matter of fact, there is no "JDBC driver used by elasticsearch", there is
a plugin elasticsearch-river-jdbc, a community effort - I assume you mean
this implementation?

What do you mean by "better" implementation, in regard to what requirements?

Jörg

On Thu, Sep 11, 2014 at 2:16 PM, James  wrote:

> I'm sorry but that doesn't answer my question. It's elasticsearch that is
> Java. I need to sync elasticsearch with my SQL DB. I'm stuck between these
> two scenarios:
>
> Scenario 1:
> PHP website adds data to the SQL DB
> JDBC driver used by elasticsearch to grab values from SQL DB into index
>
> Scenario 2:
> PHP website adds data to SQL
> CRON job uses PHP elasticsearch library to convert SQL to JSON and send it
> to elasticsearch to be indexed.
>
> Which one is the better implementation?
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAPng%3D3e2ykmb4S2aqaXiLbV1h39ZE_ZWvHkX4OmVEQV2hPK3_A%40mail.gmail.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKdsXoG0p6P9TCSsjToewsEVdHnHY6jAWZPu0kP346vnxwVzMQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: What is better - create several document types or several indices?

2014-09-11 Thread smonasco
Every index has a minimum of one shard.  Multiple types can live in the same 
shard.  Shards both have maintenance overheads and slow down queries.  However, 
if you have a lot of targeted queries you can more easily reduce the shards 
accessed by reducing indexes than you could if you had multi-tenancy.  I could 
be missing something but I don't think you can have multiple routing values in 
a query, but someone may want to query multiple log types.
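
For example (index names are hypothetical), a targeted query can be restricted
to just the indices it needs instead of hitting every shard of one big shared
index:

curl -XPOST "http://localhost:9200/weblog-2014.09.10,weblog-2014.09.11/_search" -d '{
  "query": { "match_all": {} }
}'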

So it depends. 

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/ed078e55-b9e3-4c82-8285-a08ba5f90e21%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Simple query string does not work

2014-09-11 Thread vineeth mohan
Hello Dan ,

Can you paste the above as JSON?
I am not exactly able to make out what the query is.

Thanks
  Vineeth

On Thu, Sep 11, 2014 at 5:50 PM, Dan  wrote:

> Nobody? :(
>
> On Wednesday, September 10, 2014 21:24:19 UTC+2, Dan wrote:
>
>> Hi Guys,
>>
>> I have a simple query which is not working. I am using the same query on
>> another server with the same mapping; where it does work.
>> Everything else is working like a charm.
>>
>> I am talking about the following query.
>> The problem is to be found when I use the query > field > tags query.
>> When I do not use this part, everything works fine.
>>
>> Array
>> (
>> [from] => 0
>> [size] => 10
>> [query] => Array
>> (
>>  *   [field] => Array
>> (
>> [tags] => *blaat*
>> )
>> *
>> )
>>
>> [filter] => Array
>> (
>> [and] => Array
>> (
>> [0] => Array
>> (
>> [term] => Array
>> (
>> [representative] => 1
>> )
>>
>> )
>>
>> [1] => Array
>> (
>> [term] => Array
>> (
>> [is_gift] => 0
>> )
>>
>> )
>>
>> [2] => Array
>> (
>> [term] => Array
>> (
>> [active] => 1
>> )
>>
>> )
>>
>> [3] => Array
>> (
>> [terms] => Array
>> (
>> [website_ids] => Array
>> (
>> [0] => 1
>> )
>>
>> [execution] => and
>> )
>>
>> )
>>
>> )
>>
>> )
>>
>> )
>>
>>
>> The mapping is as follows:
>>
>>
>>   "product" : {
>> "properties" : {
>>   "action" : {
>> "type" : "string"
>>   },
>>   "active" : {
>> "type" : "string"
>>   },
>>   "brand_ids" : {
>> "type" : "string"
>>   },*  "tags" : {
>> "type" : "string"
>>   },*
>> .
>>
>>
>> When I index an item I am using the following part:
>>
>> Array
>> (
>> [2359] => Array
>> (
>> 
>> *[tags] => blaat, another blaat, etc*
>>   
>>
>> Maybe an installation configuration issue?
>>
>> Does anyone have a clue?
>>
>> Thanks!
>>
>>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/574351f6-a7ea-4b60-8dd4-3a5dea2f4d3b%40googlegroups.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAGdPd5%3DLDzK%2BDQJY-VTsfqq6_64yr%2Bm1y5oC915Hus%2BOONA4bw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: What is better - create several document types or several indices?

2014-09-11 Thread Konstantin Erman
Vineeth, thank you for your advice!

It seems we already do everything as you said. We name indices in Logstash 
style with the date. Is that what you are referring to as tailing?
Creating an index per hour would lead to hundreds of indices open. I wonder 
what the guidelines are regarding the number of indices vs. their size?
We also close less interesting logs and in a couple weeks delete them.
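
For reference, the close and the later delete are plain calls like these (the
dates are only examples):

curl -XPOST "http://localhost:9200/logstash-2014.08.28/_close"
curl -XDELETE "http://localhost:9200/logstash-2014.08.14"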

BUT STILL, with all that in place my original question still stands: 
different document types in the same index or rather different indices for 
different document types. What are the rules of thumb?

Konstantin

On Wednesday, September 10, 2014 9:27:05 PM UTC-7, vineeth mohan wrote:
>
> Hello , 
>
> My advice would be to keep all the logs in a single index , but apply 
> index tailing.
> That is write logs of a day or hour ( depending upon traffic) to each 
> index like logstash does.
> So the name of the index would be of the format logs-`yyyy-MM-dd` 
> This way, you won't be stuck with the fixed shard problem and dynamic 
> horizontal scaling can be achieved. 
> Also , it would be a wise idea to remove old logs using TTL facility OR 
> closing old index or even take a snapshot and remove the index.
>
> TTL - 
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-index_.html#index-ttl
> Index Close - 
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-open-close.html#indices-open-close
>
> Thanks
>   Vineeth
>
>
>
>
>
> On Thu, Sep 11, 2014 at 7:39 AM, Konstantin Erman  > wrote:
>
>> We use Elasticsearch to aggregate several types of logs - web server 
>> logs, application logs, windows event logs, statistics, etc.
>>
>> As far as I understand I can do one of the following:
>> 1, Send each log to its own index and when I need to combine them in 
>> query - specify several indices in Kibana settings;
>> 2. Send all logs to the same index (we turn them over every day) and give 
>> logs from different sources different document types;
>> 3. Do more or less nothing, push all documents together without 
>> distinguishing them explicitly;
>>
>> My question is - what are advantages and disadvantages of each approach? 
>> We have substantial amount of logs going in every second, but querying is 
>> rather rare, at least so far.
>>
>> Thank you!
>> Konstantin
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/e41e4959-6a45-417a-8ba6-856abcd33350%40googlegroups.com
>>  
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/5df677a5-46d9-4ecd-9bb9-a82f7897753b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Simple query string does not work

2014-09-11 Thread Dan
Nobody? :(

On Wednesday, September 10, 2014 21:24:19 UTC+2, Dan wrote:
>
> Hi Guys,
>
> I have a simple query which is not working. I am using the same query on 
> another server with the same mapping; where it does work.
> Everything else is working like a charm.
>
> I am talking about the following query.
> The problem is to be found when I use the query > field > tags query. 
> When I do not use this part, everything works fine.
>
> Array
> (
> [from] => 0
> [size] => 10
> [query] => Array
> (
>  *   [field] => Array
> (
> [tags] => *blaat*
> )
> *
> )
>
> [filter] => Array
> (
> [and] => Array
> (
> [0] => Array
> (
> [term] => Array
> (
> [representative] => 1
> )
>
> )
>
> [1] => Array
> (
> [term] => Array
> (
> [is_gift] => 0
> )
>
> )
>
> [2] => Array
> (
> [term] => Array
> (
> [active] => 1
> )
>
> )
>
> [3] => Array
> (
> [terms] => Array
> (
> [website_ids] => Array
> (
> [0] => 1
> )
>
> [execution] => and
> )
>
> )
>
> )
>
> )
>
> )
>
>
> The mapping is as follows:
>
>
>   "product" : {
> "properties" : {
>   "action" : {
> "type" : "string"
>   },
>   "active" : {
> "type" : "string"
>   },
>   "brand_ids" : {
> "type" : "string"
>   },*  "tags" : {
> "type" : "string"
>   },*
> .
>
>
> When I index an item I am using the following part:
>
> Array
> (
> [2359] => Array
> (
> 
> *[tags] => blaat, another blaat, etc*
>   
>
> Maybe an installation configuration issue?
>
> Does anyone have a clue?
>
> Thanks!
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/574351f6-a7ea-4b60-8dd4-3a5dea2f4d3b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Do I need the JDBC driver

2014-09-11 Thread James
I'm sorry but that doesn't answer my question. It's elasticsearch that is
Java. I need to sync elasticsearch with my SQL DB. I'm stuck between these
two scenarios:

Scenario 1:
PHP website adds data to the SQL DB
JDBC driver used by elasticsearch to grab values from SQL DB into index

Scenario 2:
PHP website adds data to SQL
CRON job uses PHP elasticsearch library to convert SQL to JSON and send it
to elasticsearch to be indexed.

Which one is the better implementation?

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAPng%3D3e2ykmb4S2aqaXiLbV1h39ZE_ZWvHkX4OmVEQV2hPK3_A%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: complex nested query

2014-09-11 Thread 闫旭
Anyone can help this?

Thanks && Best Regard!

On September 11, 2014, at 13:24, 闫旭 wrote:

> Thank you! But a nested bool query cannot sum all the prices within the date 
> range. How can I do this?
> 
> Thx again.
> 
> Thanks && Best Regard!
> 
> On September 11, 2014, at 12:04, vineeth mohan wrote:
> 
>> Hello , 
>> 
>> 
>> First you need to declare field details as nested. - 
>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-nested-type.html#mapping-nested-type
>> 
>> Then do a bool query with the date range constrain and range constrain
>> 
>> Thanks
>> Vineeth
>> 
>> On Thu, Sep 11, 2014 at 8:53 AM, 闫旭  wrote:
>> Dear All!
>> 
>> I have a problem with a complex nested query
>> the docs like this:
>> _id:1
>> {
>>  "detail":[
>>  {
>>  "date":"2014-09-01",
>>  "price”:50
>>  },
>>  {
>>  "date":"2014-09-02",
>>  "price”:100
>>  },
>>  {
>>  "date":"2014-09-03",
>>  "price":100
>>  },
>>  {
>>  "date":"2014-09-04",
>>  "price":200
>>  }
>>  ]
>> 
>> }
>> _id:2
>> {
>>  "detail":[
>>  {
>>  "date":"2014-09-01",
>>  "price":100
>>  },
>>  {
>>  "date":"2014-09-02",
>>  "price":200
>>  },
>>  {
>>  "date":"2014-09-03",
>>  "price":300
>>  },
>>  {
>>  "date":"2014-09-04",
>>  "price":200
>>  }
>>  ]
>> 
>> }
>> I will filter the docs with “date in [2014-09-01, 2014-09-03] and sum(price) 
>> > 300”.
>> I only find some way with “aggregation”, but it can only stat the sum of all 
>> docs.
>> 
>> How Can I solve the problem?? 
>> 
>> 
>> Thanks && Best Regard!
>> 
>> 
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/A98354E4-9C9F-43B2-9310-6355DE3D6F85%40gmail.com.
>> For more options, visit https://groups.google.com/d/optout.
>> 
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/CAGdPd5kfRarPNNBctvYfHsk52tjD2rxv18aQGqq3Hz0i_2ZxVQ%40mail.gmail.com.
>> For more options, visit https://groups.google.com/d/optout.
> 

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/F5848899-E506-470B-AA05-E6A2B1965986%40gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Ramifications of G1GC in ES1.3 with JDK 1.8

2014-09-11 Thread Mark Walkom
We have also used it in dev for months across various ES and Java 8 releases. I
have been considering rolling it out to a smaller prod cluster as well, since
we've had no problems at all.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 11 September 2014 21:30, joergpra...@gmail.com 
wrote:

> Nice to see another success of G1 GC :)
>
> Jörg
>
> On Thu, Sep 11, 2014 at 11:34 AM, Robert Gardam 
> wrote:
>
>> Yeah, we are seeing much better gc performance here too!
>>
>> We were experiencing the stop the world GC with CMS and then nodes would
>> time out.
>>
>> Our heap is 32gb, we run two nodes per system. IO doesn't seem to be an
>> issue here.
>> 20 nodes = 10 boxes, 128gb ram each
>>
>> Field cache is limited to 20%
>> We're bulk indexing around 10k events/s
>>
>> It seems to be much more stable and predictable in terms of GC. The GC
>> logs are showing a huge reduction
>>
>> 61043,866: Total time for which application threads were stopped:
>> 0,0140156 seconds
>> 61044,524: Total time for which application threads were stopped:
>> 0,0005284 seconds
>> 61045,801: Total time for which application threads were stopped:
>> 0,0006138 seconds
>> 61045,802: Total time for which application threads were stopped:
>> 0,0003635 seconds
>> 61045,802: Total time for which application threads were stopped:
>> 0,0002545 seconds
>> 61045,803: Total time for which application threads were stopped:
>> 0,0002944 seconds
>> 61045,804: Total time for which application threads were stopped:
>> 0,0002367 seconds
>> 61046,469: Total time for which application threads were stopped:
>> 0,0004653 seconds
>> 61048,172: Total time for which application threads were stopped:
>> 0,0004850 seconds
>> 61048,598: Total time for which application threads were stopped:
>> 0,0004937 seconds
>> 61049,197: Total time for which application threads were stopped:
>> 0,0004396 seconds
>> 61050,264: Total time for which application threads were stopped:
>> 0,0004587 seconds
>> 61051,593: Total time for which application threads were stopped:
>> 0,0004600 seconds
>> 61051,689: Total time for which application threads were stopped:
>> 0,0005021 seconds
>> 61053,822: Total time for which application threads were stopped:
>> 0,0004721 seconds
>> 61053,824: Total time for which application threads were stopped:
>> 0,0005323 seconds
>> 61053,825: Total time for which application threads were stopped:
>> 0,0003403 seconds
>> 61053,825: Total time for which application threads were stopped:
>> 0,0003301 seconds
>> 61053,826: Total time for which application threads were stopped:
>> 0,0003322 seconds
>> 61053,826: Total time for which application threads were stopped:
>> 0,0003364 seconds
>> 61059,265: Total time for which application threads were stopped:
>> 0,0004321 seconds
>> 61061,691: Total time for which application threads were stopped:
>> 0,0004619 seconds
>> 61062,595: Total time for which application threads were stopped:
>> 0,0004529 seconds
>> 61064,199: Total time for which application threads were stopped:
>> 0,0004587 seconds
>> 61070,267: Total time for which application threads were stopped:
>> 0,0004606 seconds
>> 61074,200: Total time for which application threads were stopped:
>> 0,0004508 seconds
>> 61076,693: Total time for which application threads were stopped:
>> 0,0004709 seconds
>> 61077,597: Total time for which application threads were stopped:
>> 0,0004698 seconds
>> 61079,268: Total time for which application threads were stopped:
>> 0,0004601 seconds
>> 61079,817: Total time for which application threads were stopped:
>> 0,0004535 seconds
>> 61081,818: Total time for which application threads were stopped:
>> 0,0004979 seconds
>> 61082,819: Total time for which application threads were stopped:
>> 0,0004817 seconds
>> 61089,204: Total time for which application threads were stopped:
>> 0,0011584 seconds
>> 61091,699: Total time for which application threads were stopped:
>> 0,0004501 seconds
>> 61092,599: Total time for which application threads were stopped:
>> 0,0004539 seconds
>> 61094,204: Total time for which application threads were stopped:
>> 0,0006452 seconds
>> 61095,271: Total time for which application threads were stopped:
>> 0,0006568 seconds
>> 61101,701: Total time for which application threads were stopped:
>> 0,0004679 seconds
>> 61102,601: Total time for which application threads were stopped:
>> 0,0004576 seconds
>> 61104,272: Total time for which application threads were stopped:
>> 0,0004474 seconds
>> 61114,207: Total time for which application threads were stopped:
>> 0,0005483 seconds
>> 61115,273: Total time for which application threads were stopped:
>> 0,0004848 seconds
>> 61117,604: Total time for which application threads were stopped:
>> 0,0008780 seconds
>> 61117,703: Total time for which application threads were stopped:
>> 0,0005068 seconds
>> 61124,274: Total time for which application threads were stopped:
>> 0,0004

Re: Ramifications of G1GC in ES1.3 with JDK 1.8

2014-09-11 Thread joergpra...@gmail.com
Nice to see another success of G1 GC :)

Jörg

On Thu, Sep 11, 2014 at 11:34 AM, Robert Gardam 
wrote:

> Yeah, we are seeing much better gc performance here too!
>
> We were experiencing the stop the world GC with CMS and then nodes would
> time out.
>
> Our heap is 32gb, we run two nodes per system. IO doesn't seem to be an
> issue here.
> 20 nodes = 10 boxes, 128gb ram each
>
> Field cache is limited to 20%
> We're bulk indexing around 10k events/s
>
> It seems to be much more stable and predictable in terms of GC. The GC
> logs are showing a huge reduction
>
> 61043,866: Total time for which application threads were stopped:
> 0,0140156 seconds
> 61044,524: Total time for which application threads were stopped:
> 0,0005284 seconds
> 61045,801: Total time for which application threads were stopped:
> 0,0006138 seconds
> 61045,802: Total time for which application threads were stopped:
> 0,0003635 seconds
> 61045,802: Total time for which application threads were stopped:
> 0,0002545 seconds
> 61045,803: Total time for which application threads were stopped:
> 0,0002944 seconds
> 61045,804: Total time for which application threads were stopped:
> 0,0002367 seconds
> 61046,469: Total time for which application threads were stopped:
> 0,0004653 seconds
> 61048,172: Total time for which application threads were stopped:
> 0,0004850 seconds
> 61048,598: Total time for which application threads were stopped:
> 0,0004937 seconds
> 61049,197: Total time for which application threads were stopped:
> 0,0004396 seconds
> 61050,264: Total time for which application threads were stopped:
> 0,0004587 seconds
> 61051,593: Total time for which application threads were stopped:
> 0,0004600 seconds
> 61051,689: Total time for which application threads were stopped:
> 0,0005021 seconds
> 61053,822: Total time for which application threads were stopped:
> 0,0004721 seconds
> 61053,824: Total time for which application threads were stopped:
> 0,0005323 seconds
> 61053,825: Total time for which application threads were stopped:
> 0,0003403 seconds
> 61053,825: Total time for which application threads were stopped:
> 0,0003301 seconds
> 61053,826: Total time for which application threads were stopped:
> 0,0003322 seconds
> 61053,826: Total time for which application threads were stopped:
> 0,0003364 seconds
> 61059,265: Total time for which application threads were stopped:
> 0,0004321 seconds
> 61061,691: Total time for which application threads were stopped:
> 0,0004619 seconds
> 61062,595: Total time for which application threads were stopped:
> 0,0004529 seconds
> 61064,199: Total time for which application threads were stopped:
> 0,0004587 seconds
> 61070,267: Total time for which application threads were stopped:
> 0,0004606 seconds
> 61074,200: Total time for which application threads were stopped:
> 0,0004508 seconds
> 61076,693: Total time for which application threads were stopped:
> 0,0004709 seconds
> 61077,597: Total time for which application threads were stopped:
> 0,0004698 seconds
> 61079,268: Total time for which application threads were stopped:
> 0,0004601 seconds
> 61079,817: Total time for which application threads were stopped:
> 0,0004535 seconds
> 61081,818: Total time for which application threads were stopped:
> 0,0004979 seconds
> 61082,819: Total time for which application threads were stopped:
> 0,0004817 seconds
> 61089,204: Total time for which application threads were stopped:
> 0,0011584 seconds
> 61091,699: Total time for which application threads were stopped:
> 0,0004501 seconds
> 61092,599: Total time for which application threads were stopped:
> 0,0004539 seconds
> 61094,204: Total time for which application threads were stopped:
> 0,0006452 seconds
> 61095,271: Total time for which application threads were stopped:
> 0,0006568 seconds
> 61101,701: Total time for which application threads were stopped:
> 0,0004679 seconds
> 61102,601: Total time for which application threads were stopped:
> 0,0004576 seconds
> 61104,272: Total time for which application threads were stopped:
> 0,0004474 seconds
> 61114,207: Total time for which application threads were stopped:
> 0,0005483 seconds
> 61115,273: Total time for which application threads were stopped:
> 0,0004848 seconds
> 61117,604: Total time for which application threads were stopped:
> 0,0008780 seconds
> 61117,703: Total time for which application threads were stopped:
> 0,0005068 seconds
> 61124,274: Total time for which application threads were stopped:
> 0,0004519 seconds
> 61127,605: Total time for which application threads were stopped:
> 0,0004786 seconds
> 61129,249: [GC pause (G1 Evacuation Pause) (young)
> Desired survivor size 1291845632 bytes, new threshold 15 (max 15)
> - age   1:   19633616 bytes,   19633616 total
> - age   2:3428232 bytes,   23061848 total
> - age   3:1362152 bytes,   24424000 total
> - age   4:1443728 bytes,   25867728 total
> - age   5: 996840 bytes,   26864568 total
> - age   6:1584

Re: Do I need the JDBC driver

2014-09-11 Thread joergpra...@gmail.com
I think I answered the question, or I do not fully understand.

JDBC is Java Database Connectivity; it is Java only and has nothing to do
with PHP. So if you choose PHP, it is obvious that you cannot use it.

Jörg

On Thu, Sep 11, 2014 at 12:08 PM, James  wrote:

> I've also put this question up on stackoverflow for anyone who might be
> able to help me understand.
>
>
> http://stackoverflow.com/questions/25763997/elasticsearch-do-i-need-the-jdbc-driver
>
> On Wednesday, September 10, 2014 10:59:18 AM UTC+1, James wrote:
>>
>> Hi,
>>
>> I'm setting up a system where I have a main SQL database which is synced
>> with elasticsearch. My plan is to use the main PHP library for
>> elasticsearch.
>>
>> I was going to have a cron run every thirty minutes to check for items in
>> my database that not only have an "active" flag but that also do not have
>> an "indexed" flag, which means I need to add them to the index. Then I was
>> going to add that item to the index. Since I am taking this path, it
>> doesn't seem like I need the JDBC driver, as I can add items to
>> elasticsearch using the PHP library.
>>
>> So, my question is, can I get away without using the JDBC driver?
>>
>> James
>>
>>
>>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/3d07653f-a62b-4e16-bab0-8a9b0008730d%40googlegroups.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKdsXoHY8cj8YoSh5wJN6MVK6UzjoJJEDdC7iDehBRqwxNjK-w%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Search Plugin to intercept search response

2014-09-11 Thread joergpra...@gmail.com
You cannot intercept the SearchResponse on the ES server itself. Instead,
you must implement your own custom search action.

Jörg

On Thu, Sep 11, 2014 at 10:00 AM, Sandeep Ramesh Khanzode <
k.sandee...@gmail.com> wrote:

> When you say, 'receive the SearchResponse', is that in the ES Server node
> or the TransportClient node that spawned the request? I would want to
> intercept the SearchResponse when created at the ES Server itself, since I
> want to send the subset of Response to another process on the same node,
> and it would not be very efficient to have the response sent back to the
> client node only to be sent back again.
>
> Thanks,
> Sandeep
>
> On Thu, Sep 11, 2014 at 12:43 PM, joergpra...@gmail.com <
> joergpra...@gmail.com> wrote:
>
>> You can receive the SearchResponse, process the response, and return the
>> response with whatever format you want.
>>
>> Jörg
>>
>> On Wed, Sep 10, 2014 at 11:59 AM, Sandeep Ramesh Khanzode <
>> k.sandee...@gmail.com> wrote:
>>
>>> Hi Jorg,
>>>
>>> Thanks for the links. I was checking the sources. There are relevant to
>>> my functional use case. But I will be using the TransportClient Java API,
>>> not the REST client.
>>>
>>> Can you please tell me how I can find/modify these classes/sources to
>>> get the appropriate classes for inctercepting the Search Response when
>>> invoked from a TransportClient?
>>>
>>>
>>> Thanks,
>>> Sandeep
>>>
>>> On Wed, Aug 27, 2014 at 6:38 PM, joergpra...@gmail.com <
>>> joergpra...@gmail.com> wrote:
>>>
 Have a look at array-format or csv plugin, they are processing the
 SearchResponse to output it in another format:

 https://github.com/jprante/elasticsearch-arrayformat

 https://github.com/jprante/elasticsearch-csv

 Jörg


 On Wed, Aug 27, 2014 at 3:05 PM, 'Sandeep Ramesh Khanzode' via
 elasticsearch  wrote:

> Hi,
>
> Is there any action/module that I can extend/register/add so that I
> can intercept the SearchResponse on the server node before the response is
> sent back to the TransportClient on the calling box?
>
> Thanks,
> Sandeep
>
> --
> You received this message because you are subscribed to the Google
> Groups "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/559a5c68-4567-425f-9842-7f2fe6755095%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>

  --
 You received this message because you are subscribed to a topic in the
 Google Groups "elasticsearch" group.
 To unsubscribe from this topic, visit
 https://groups.google.com/d/topic/elasticsearch/o6RZL4KwJVs/unsubscribe
 .
 To unsubscribe from this group and all its topics, send an email to
 elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGJ_%3D5RnyFqMP_AX4744z6tdAp8cfLBi_OqzLM23_rqzw%40mail.gmail.com
 
 .

 For more options, visit https://groups.google.com/d/optout.

>>>
>>>  --
>>> You received this message because you are subscribed to the Google
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to elasticsearch+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/elasticsearch/CAKnM90bENin_aU4AXa%3DTVHQ_SyTTn-89Rev5vjj3%3DoDikwstkQ%40mail.gmail.com
>>> 
>>> .
>>>
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  --
>> You received this message because you are subscribed to a topic in the
>> Google Groups "elasticsearch" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/elasticsearch/o6RZL4KwJVs/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGWm8upDW9De7OvkM0cps%2BEyn3goo7Tgy3jyqJ8Jz5Khw%40mail.gmail.com
>> 
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>

Re: More Like This - Results Is Empty

2014-09-11 Thread phenrigomes
I am making an application for product comparison, but the "more like this" query
is not adequate. Is a fuzzy query adequate for matching products from the same
vendor?
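
For reference, the kind of query I have been trying looks roughly like this (a
sketch via the Python client; the index, type and field names are placeholders,
not my real mapping):

from elasticsearch import Elasticsearch

es = Elasticsearch("localhost:9200")

# Minimal more_like_this query: find products whose name/description resemble
# a given piece of text (ES 1.x takes the input as "like_text").
body = {
    "query": {
        "more_like_this": {
            "fields": ["name", "description"],
            "like_text": "Acme 27in LED monitor",
            "min_term_freq": 1,
            "min_doc_freq": 1
        }
    }
}

result = es.search(index="products", doc_type="product", body=body)
for hit in result["hits"]["hits"]:
    print(hit["_id"], hit["_score"])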



--
View this message in context: 
http://elasticsearch-users.115913.n3.nabble.com/More-Like-This-Results-Is-Empty-tp4063176p4063262.html
Sent from the ElasticSearch Users mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1410350643491-4063262.post%40n3.nabble.com.
For more options, visit https://groups.google.com/d/optout.


Re: Do I need the JDBC driver

2014-09-11 Thread James
I've also put this question up on stackoverflow for anyone who might be 
able to help me understand.

http://stackoverflow.com/questions/25763997/elasticsearch-do-i-need-the-jdbc-driver
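
For reference, the sync flow I have in mind looks roughly like this (sketched in
Python for brevity; my real code will use the PHP client, and the table, column
and index names here are illustrative, not my actual schema):

import sqlite3  # stand-in for the real SQL database

from elasticsearch import Elasticsearch

es = Elasticsearch("localhost:9200")
db = sqlite3.connect("shop.db")

# Pick up rows that are active but have not been indexed yet.
rows = db.execute(
    "SELECT id, title, description FROM items WHERE active = 1 AND indexed = 0"
).fetchall()

for item_id, title, description in rows:
    # Index the document straight through the client; no JDBC driver is involved.
    es.index(index="items", doc_type="item", id=item_id,
             body={"title": title, "description": description})
    # Mark the row so the next cron run skips it.
    db.execute("UPDATE items SET indexed = 1 WHERE id = ?", (item_id,))

db.commit()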

On Wednesday, September 10, 2014 10:59:18 AM UTC+1, James wrote:
>
> Hi,
>
> I'm setting up a system where I have a main SQL database which is synced 
> with elasticsearch. My plan is to use the main PHP library for 
> elasticsearch. 
>
> I was going to have a cron run every thirty minutes to check for items in 
> my database that not only have an "active" flag but that also do not have 
> an "indexed" flag, that means I need to add them to the index. Then I was 
> going to add that item to the index. Since I am using taking this path, it 
> doesn't seem like I need the JDBC driver, as I can add items to 
> elasticsearch using the PHP library.
>
> So, my question is, can I get away without using the JDBC driver?
>
> James
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/3d07653f-a62b-4e16-bab0-8a9b0008730d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Error on adding the new field mapping in already existing mapping

2014-09-11 Thread Narinder Kaur
Hi,

   We already have a type in our system, *YsFact*. Now I need to add a 
new field to this type, *username*, and username has the following 
mapping.

*"username":{"type":"string","index":"not_analyzed"}*

We have a script in the system that applies the mappings automatically, 
and during the mapping update I am getting the following errors.

YsFact :: MergeMappingException[Merge failed with failures {[mapper 
[username] has different index values, mapper [username] has different 
`norms.enabled` values, mapper [username] has different tokenize values, 
mapper [username] has different index_analyzer]}]

We have not added anything against username in the YsFact type, so I do not 
think Elasticsearch should have created a default (dynamic) mapping for it, 
as no data has been indexed against this field. This error appeared the very 
first time we introduced the field in the mappings.

One thing I want to point out is that we have username fields in other types 
which do hold data. Could those conflict with YsFact?
Also, our production Elasticsearch cluster consists of 3 nodes and 2 
replicas. Could this be related to how the mapping is distributed?
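
For reference, this is roughly the kind of call our script makes, and how I have
been checking what Elasticsearch already holds for the field (illustrated with the
Python client; our real script may differ, and "myindex" stands in for our index
name):

from elasticsearch import Elasticsearch

es = Elasticsearch("localhost:9200")

# Inspect what is already mapped for YsFact; a dynamically created or
# conflicting 'username' mapping would show up here.
print(es.indices.get_mapping(index="myindex", doc_type="YsFact"))

# The new field exactly as we are trying to add it.
new_field = {
    "YsFact": {
        "properties": {
            "username": {"type": "string", "index": "not_analyzed"}
        }
    }
}

# This is the call that comes back with MergeMappingException when the existing
# mapping for 'username' does not agree with the new definition.
es.indices.put_mapping(index="myindex", doc_type="YsFact", body=new_field)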

Does anyone have any idea what happened here?

Thanks in advance for any details on it.


-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/7f5d6215-6388-489d-9810-9cc2fc0972f9%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: No results in REST API

2014-09-11 Thread Nimit Jain
Hi
Could you please help me connect my server xx.xxx.xx.xxx:12345 with the 
elasticsearch REST API in Python? I have tried a lot but am still not able to 
establish the connection.

On Wednesday, 21 May 2014 09:59:29 UTC+5:30, Pratik Poddar wrote:
>
> When I am doing es.search using pyelasticsearch I am able to get results, 
> but doing REST API, I do not get results. Please help. Thanks
>
> searchdoc yields results but rest api gives 0 hits.
>
> py elastic search settings:
> bodyquery = {
> "custom_score": {
> "script" : "_score * 
> ("+str(1.0+recency)+"**(doc['articleid'].value*5.0/"+maxarticleid+"))",
> "query": {
> "query_string": {"query": "Arvind", "fields": 
> ["text", "title^3", "domain"]}
> }
> }
> }
>
>
> REST API query: 
> http://46.137.209.142:9200/article-index/article/_search?q=Arvind
>
>
> My indexing settings are: 
>
> 'settings': {
> 'analysis': {
> 'analyzer': {
> 'my_ngram_analyzer' : {
> 'tokenizer' : 
> 'my_ngram_tokenizer',
> 'filter': 
> ['my_synonym_filter']
> }
> },
> 'filter': {
> 'my_synonym_filter': {
> 'type': 'synonym',
> 'format': 'wordnet',
> 'synonyms_path': 
> 'analysis/wn_s.pl'
> }
> },
> 'tokenizer' : {
> 'my_ngram_tokenizer' : {
> 'type' : 'nGram',
> 'min_gram' : '1',
> 'max_gram' : '50'
> }
> }
> }
> },
> 'mappings': {
> 'article': {
> '_all': {
> 'enabled': False
> },
> '_source': {
> 'compressed': True
> },
> 'properties': {
> 'tags': {
> 'type': 'string',
> 'index': 
> 'not_analyzed'
> }
> }
> }
> }
>
> Thanks. Please advise.
>
> -- 
> Pratik Poddar
> www.linkedin.com/in/pratikpoddar
> http://www.cseblog.com
> http://pratikpoddar.wordpress.com/
>  

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/de380e4f-85c1-4959-b42e-9dde9e520a37%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Ramifications of G1GC in ES1.3 with JDK 1.8

2014-09-11 Thread Robert Gardam
Yeah, we are seeing much better gc performance here too!

We were experiencing stop-the-world GC pauses with CMS, and then nodes would
time out.

Our heap is 32gb, we run two nodes per system. IO doesn't seem to be an
issue here.
20 nodes = 10 boxes, 128gb ram each

Field cache is limited to 20%
We're bulk indexing around 10k events/s

It seems to be much more stable and predictable in terms of GC. The GC logs
are showing a huge reduction in pause times:

61043,866: Total time for which application threads were stopped: 0,0140156
seconds
61044,524: Total time for which application threads were stopped: 0,0005284
seconds
61045,801: Total time for which application threads were stopped: 0,0006138
seconds
61045,802: Total time for which application threads were stopped: 0,0003635
seconds
61045,802: Total time for which application threads were stopped: 0,0002545
seconds
61045,803: Total time for which application threads were stopped: 0,0002944
seconds
61045,804: Total time for which application threads were stopped: 0,0002367
seconds
61046,469: Total time for which application threads were stopped: 0,0004653
seconds
61048,172: Total time for which application threads were stopped: 0,0004850
seconds
61048,598: Total time for which application threads were stopped: 0,0004937
seconds
61049,197: Total time for which application threads were stopped: 0,0004396
seconds
61050,264: Total time for which application threads were stopped: 0,0004587
seconds
61051,593: Total time for which application threads were stopped: 0,0004600
seconds
61051,689: Total time for which application threads were stopped: 0,0005021
seconds
61053,822: Total time for which application threads were stopped: 0,0004721
seconds
61053,824: Total time for which application threads were stopped: 0,0005323
seconds
61053,825: Total time for which application threads were stopped: 0,0003403
seconds
61053,825: Total time for which application threads were stopped: 0,0003301
seconds
61053,826: Total time for which application threads were stopped: 0,0003322
seconds
61053,826: Total time for which application threads were stopped: 0,0003364
seconds
61059,265: Total time for which application threads were stopped: 0,0004321
seconds
61061,691: Total time for which application threads were stopped: 0,0004619
seconds
61062,595: Total time for which application threads were stopped: 0,0004529
seconds
61064,199: Total time for which application threads were stopped: 0,0004587
seconds
61070,267: Total time for which application threads were stopped: 0,0004606
seconds
61074,200: Total time for which application threads were stopped: 0,0004508
seconds
61076,693: Total time for which application threads were stopped: 0,0004709
seconds
61077,597: Total time for which application threads were stopped: 0,0004698
seconds
61079,268: Total time for which application threads were stopped: 0,0004601
seconds
61079,817: Total time for which application threads were stopped: 0,0004535
seconds
61081,818: Total time for which application threads were stopped: 0,0004979
seconds
61082,819: Total time for which application threads were stopped: 0,0004817
seconds
61089,204: Total time for which application threads were stopped: 0,0011584
seconds
61091,699: Total time for which application threads were stopped: 0,0004501
seconds
61092,599: Total time for which application threads were stopped: 0,0004539
seconds
61094,204: Total time for which application threads were stopped: 0,0006452
seconds
61095,271: Total time for which application threads were stopped: 0,0006568
seconds
61101,701: Total time for which application threads were stopped: 0,0004679
seconds
61102,601: Total time for which application threads were stopped: 0,0004576
seconds
61104,272: Total time for which application threads were stopped: 0,0004474
seconds
61114,207: Total time for which application threads were stopped: 0,0005483
seconds
61115,273: Total time for which application threads were stopped: 0,0004848
seconds
61117,604: Total time for which application threads were stopped: 0,0008780
seconds
61117,703: Total time for which application threads were stopped: 0,0005068
seconds
61124,274: Total time for which application threads were stopped: 0,0004519
seconds
61127,605: Total time for which application threads were stopped: 0,0004786
seconds
61129,249: [GC pause (G1 Evacuation Pause) (young)
Desired survivor size 1291845632 bytes, new threshold 15 (max 15)
- age   1:   19633616 bytes,   19633616 total
- age   2:3428232 bytes,   23061848 total
- age   3:1362152 bytes,   24424000 total
- age   4:1443728 bytes,   25867728 total
- age   5: 996840 bytes,   26864568 total
- age   6:1584400 bytes,   28448968 total
- age   7:1697168 bytes,   30146136 total
- age   8: 578056 bytes,   30724192 total
- age   9:1166056 bytes,   31890248 total
- age  10:   8904 bytes,   31899152 total
- age  11:  31640 bytes,   31930792 total
- age  12: 979168 bytes,   32909960 total
- age  13: 108016 byt

Re: I need to call my server xxx.xx.xx.xxx:xxxxx using elasticsearch api in python

2014-09-11 Thread Honza Král
Hi,

the code you have here should work, what do you get when you try:

from elasticsearch import Elasticsearch

es = Elasticsearch("10.120.xx.xxx:6xxx8")
print(es.info())

Thanks

On Thu, Sep 11, 2014 at 11:14 AM, Nimit Jain  wrote:
> Hi All,
> I need I need to call my server xxx.xx.xx.xxx:x using elasticsearch api
> in python but I am not able to get the proper code to run that. Below is
> that I have done yet.
>
> from datetime import datetime
> from elasticsearch import Elasticsearch
>
> es = Elasticsearch("10.120.xx.xxx:6xxx8")
> print(es.cluster)
> print(es.cat)
> print(es.indices)
> print(es.nodes)
> print(es.snapshot)
>
>
> # but not deserialized
> es.get(index="logstash-2014.09.11", doc_type="syslog",
> id='iFP2D8nHSKeqevBWrm1Hgg')['_source']
> {u'any': u'data', u'timestamp': u'2013-05-12T19:45:31.804229'}
>
> print(es)
>
>
> Please tell me what to do from here so I am doing wrong.
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/d33976fd-b4ea-4ce3-8e97-cd019b77b0f7%40googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CABfdDir0PTP%3D%3DW14ikyMk33zBj%3DJgqcfokxZBPiXg8wM8oDFsQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Copying fields to a geopoint type ?

2014-09-11 Thread Kushal Zamkade
Hello,

I have created a location field using the code below:

 if [latitude] and [longitude] {

mutate {
  rename => [ "latitude", "[location][lat]", "longitude", "[location][lon]" 
]

}
  }

But when I check the location field's type, it has not been created as geo_point.

When I try to search on the geo_point, I get the error below:
 QueryParsingException[[logstash-2014.09.11] failed to find geo_point field 
[location1]]; 
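
For reference, my understanding is that the rename alone does not make the field a
geo_point; the index mapping (or an index template) has to declare it. A minimal
sketch of what I believe is needed, via the Python client (the template name and
index pattern are my guesses):

from elasticsearch import Elasticsearch

es = Elasticsearch("localhost:9200")

# Template applied to new logstash-* indices so 'location' is mapped as geo_point.
template = {
    "template": "logstash-*",
    "mappings": {
        "_default_": {
            "properties": {
                "location": {"type": "geo_point", "lat_lon": True}
            }
        }
    }
}

es.indices.put_template(name="logstash-geopoint", body=template)

# Note: this only affects indices created after the template exists, so existing
# daily indices would still need to be reindexed.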

Can you help me resolve this?



On Thursday, April 10, 2014 2:42:22 AM UTC+5:30, Pascal VINCENT wrote:
>
> Hi,
>
> I have included logstash in my stack and started to play with it. I'm sure 
> it can do the trick I was looking for, and much more.
> Thank you ... 
>
> [waiting for your blog post :)] 
>
> Pascal. 
>
>
> On Mon, Apr 7, 2014 at 9:38 AM, Alexander Reelsen  > wrote:
>
>> Hey,
>>
>> I dont know about your stack, but maybe logstash would be a good idea to 
>> add it in there. It is more flexible than the csv river and features a CSV 
>> input as well. You can easily change the structure of the data you want to 
>> index. This is how the logstash config would look like
>>
>>   if [latitude] and [longitude] {
>>
>> mutate {
>>   rename => [ "latitude", "[location][lat]", "longitude", 
>> "[location][lon]" ]
>>
>> }
>>   }
>>
>> I am currently working on a blog post how to utilize elasticsearch, 
>> logstash and kibana on CSV based data and hope to release it soonish on the 
>> .org blog - which covers exactly this. Stay tuned! :-)
>>
>>
>> --Alex
>>
>>
>>
>> On Thu, Apr 3, 2014 at 12:21 AM, Pascal VINCENT > > wrote:
>>
>>> Hi,
>>>
>>> I'm new to elasticsearch. My usecase is to load a csv file containing 
>>> some agencies with geo location, each lines are like :
>>>
>>> id;label;address;zipcode;city;region;*latitude*;*longitude*;(and some 
>>> others fields)+
>>>
>>> I'm using the csv river plugin to index the file.
>>>
>>> My mapping is :
>>>
>>> {
>>>   "office": {
>>> "properties": {
>>>
>>> *(first fields omitted...)*
>>>
>>>   "*latitude*": {
>>> "type": "double",
>>>   },
>>>   "*longitude*": {
>>> "type": "double",
>>>   },
>>>   "*location*": {
>>> "type": "geo_point",
>>> "lat_lon": "true"
>>>   }
>>> }  
>>> }
>>>
>>> I'd like to index the location .lon and .lat value from the latitude and 
>>> longitude fields. I tried the copy_to function with no success :
>>>   "latitude": {
>>> "type": "double",
>>> "copy_to": "location.lat"
>>>   },
>>>   "longitude": {
>>> "type": "double",
>>> "copy_to": "location.lon"
>>>   },
>>>
>>> Is there any way to feed the "location" property from latitude and 
>>> longitude fields at indexation ?
>>>
>>> My point is that I don't want to modify the input csv file to adapt it 
>>> to the GeoJSON format (i.e concat lat and lon in one field in the csv file).
>>>
>>> Thank you for any hints.
>>>
>>> Pascal.
>>>
>>>  -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to elasticsearc...@googlegroups.com .
>>>
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/elasticsearch/6e12ced7-5b1a-4142-93d1-a3d22d7138a2%40googlegroups.com
>>>  
>>> 
>>> .
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  -- 
>> You received this message because you are subscribed to a topic in the 
>> Google Groups "elasticsearch" group.
>> To unsubscribe from this topic, visit 
>> https://groups.google.com/d/topic/elasticsearch/QaI1fj74RlM/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to 
>> elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/CAGCwEM-uHKT74qVbDT%3D8qg5Cv4vH0y%3DOzC8hGyO2uq_sY3sJ8g%40mail.gmail.com
>>  
>> 
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/d31447ff-ec4b-4273-a35c-fc5134aaedf0%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


I need to call my server xxx.xx.xx.xxx:xxxxx using elasticsearch api in python

2014-09-11 Thread Nimit Jain
Hi All,
I need to call my server xxx.xx.xx.xxx:x using the elasticsearch API 
in Python, but I am not able to get the proper code to run. Below is 
what I have done so far.

from datetime import datetime
from elasticsearch import Elasticsearch

es = Elasticsearch("10.120.xx.xxx:6xxx8")
print(es.cluster)
print(es.cat)
print(es.indices)
print(es.nodes)
print(es.snapshot)


# but not deserialized
es.get(index="logstash-2014.09.11", doc_type="syslog", 
id='iFP2D8nHSKeqevBWrm1Hgg')['_source']
{u'any': u'data', u'timestamp': u'2013-05-12T19:45:31.804229'}

print(es)


Please tell me what to do from here, or what I am doing wrong.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/d33976fd-b4ea-4ce3-8e97-cd019b77b0f7%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Preparing for ElasticSearch in production

2014-09-11 Thread David Pilato
Sure. If you don't care about replication (and failover) at the beginning of 
your production rollout, that's perfectly fine.
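
For what it's worth, a minimal sketch of that approach with the Python client
(the index name and shard counts are just an example):

from elasticsearch import Elasticsearch

es = Elasticsearch("localhost:9200")

# Over-allocate shards up front so the index can spread out once more nodes join;
# start with zero replicas because a single node has nowhere to put them.
es.indices.create(index="myindex", body={
    "settings": {"number_of_shards": 5, "number_of_replicas": 0}
})

# Later, when a second node has joined the cluster, replicas can be switched on:
es.indices.put_settings(index="myindex", body={"number_of_replicas": 1})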

-- 
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr


On 11 September 2014 at 09:17:44, Simon Forsberg (simon.fo...@gmail.com) wrote:

Hello,

I am wondering if it's a valid approach to start with a single-noded 
ElasticSearch cluster and then scale out when needed?

This would of course involve a proper shard management.

Thanks,

--
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/6826063f-c7a2-4782-a260-23881cde0c8d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/etPan.541161a3.74b0dc51.3c6%40MacBook-Air-de-David.local.
For more options, visit https://groups.google.com/d/optout.


Re: Search Plugin to intercept search response

2014-09-11 Thread Sandeep Ramesh Khanzode
When you say, 'receive the SearchResponse', is that in the ES Server node
or the TransportClient node that spawned the request? I would want to
intercept the SearchResponse when created at the ES Server itself, since I
want to send the subset of Response to another process on the same node,
and it would not be very efficient to have the response sent back to the
client node only to be sent back again.

Thanks,
Sandeep

On Thu, Sep 11, 2014 at 12:43 PM, joergpra...@gmail.com <
joergpra...@gmail.com> wrote:

> You can receive the SearchResponse, process the response, and return the
> response with whatever format you want.
>
> Jörg
>
> On Wed, Sep 10, 2014 at 11:59 AM, Sandeep Ramesh Khanzode <
> k.sandee...@gmail.com> wrote:
>
>> Hi Jorg,
>>
>> Thanks for the links. I was checking the sources. They are relevant to
>> my functional use case. But I will be using the TransportClient Java API,
>> not the REST client.
>>
>> Can you please tell me how I can find/modify these classes/sources to get
>> the appropriate classes for intercepting the Search Response when invoked
>> from a TransportClient?
>>
>>
>> Thanks,
>> Sandeep
>>
>> On Wed, Aug 27, 2014 at 6:38 PM, joergpra...@gmail.com <
>> joergpra...@gmail.com> wrote:
>>
>>> Have a look at array-format or csv plugin, they are processing the
>>> SearchResponse to output it in another format:
>>>
>>> https://github.com/jprante/elasticsearch-arrayformat
>>>
>>> https://github.com/jprante/elasticsearch-csv
>>>
>>> Jörg
>>>
>>>
>>> On Wed, Aug 27, 2014 at 3:05 PM, 'Sandeep Ramesh Khanzode' via
>>> elasticsearch  wrote:
>>>
 Hi,

 Is there any action/module that I can extend/register/add so that I can
 intercept the SearchResponse on the server node before the response is sent
 back to the TransportClient on the calling box?

 Thanks,
 Sandeep

 --
 You received this message because you are subscribed to the Google
 Groups "elasticsearch" group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to elasticsearch+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/elasticsearch/559a5c68-4567-425f-9842-7f2fe6755095%40googlegroups.com
 
 .
 For more options, visit https://groups.google.com/d/optout.

>>>
>>>  --
>>> You received this message because you are subscribed to a topic in the
>>> Google Groups "elasticsearch" group.
>>> To unsubscribe from this topic, visit
>>> https://groups.google.com/d/topic/elasticsearch/o6RZL4KwJVs/unsubscribe.
>>> To unsubscribe from this group and all its topics, send an email to
>>> elasticsearch+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGJ_%3D5RnyFqMP_AX4744z6tdAp8cfLBi_OqzLM23_rqzw%40mail.gmail.com
>>> 
>>> .
>>>
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  --
>> You received this message because you are subscribed to the Google Groups
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/CAKnM90bENin_aU4AXa%3DTVHQ_SyTTn-89Rev5vjj3%3DoDikwstkQ%40mail.gmail.com
>> 
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to a topic in the
> Google Groups "elasticsearch" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/elasticsearch/o6RZL4KwJVs/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGWm8upDW9De7OvkM0cps%2BEyn3goo7Tgy3jyqJ8Jz5Khw%40mail.gmail.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKnM90aQiQO-cFzVb3vi_h7Buf1N

Re: Java Client integration with Jetty Plugin

2014-09-11 Thread joergpra...@gmail.com
There are so many authentication layers out there. I do not see much sense
in reinventing the wheel.

You should run ES cluster in private subnets only where nobody has access
to the machines, except your frontend service which is open to the web
(HTTP port 80/443). Such a frontend service can be an nginx reverse proxy,
Java EE container, whatever. The authentication has to take place there, at
the frontend only, not in the low level ES API, where it simply does not
fit the purpose.

Jörg

On Wed, Sep 10, 2014 at 12:52 PM, Mihir M  wrote:

> Hi
>
> I am using Elasticsearch version 1.2.1. I was looking for ways to secure
> access to Elasticsearch when I found the Jetty plugin. It lets me create
> users based on roles and is satisfying my requirements.
> However it only restricts HTTP requests.
>
> I was using Java Transport client for talking to Elasticsearch and since it
> uses Transport layer protocol while connecting to ES at 9300, the Jetty
> plugin has no effect on it.
>
> So as an alternative to the Transport Client I have tried using the Jest
> Client. But I have not found any API for sending authentication credentials
> while inserting data or reading or when creating a client.
>
> Is there a way in which I can pass authentication credentials in the Jest
> client while sending requests?
> If Jest Client does not support this, are there any alternative clients
> which will allow me to do so?
>
> Any help on this would be appreciated.
>
> Regards
> Mihir
>
>
>
> -
> Regards
> --
> View this message in context:
> http://elasticsearch-users.115913.n3.nabble.com/Java-Client-integration-with-Jetty-Plugin-tp4063253.html
> Sent from the ElasticSearch Users mailing list archive at Nabble.com.
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/1410346337441-4063253.post%40n3.nabble.com
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKdsXoEsaWbpCBAhOOu44vVLoMa0GWEtNSV-FAWKk1GFGn9UmA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: cluster can't recover after upgrade from 1.1.1 to 1.3.2 due to MaxBytesLengthExceededException

2014-09-11 Thread Jilles van Gurp
You are running into this 
problem: 
http://elasticsearch-users.115913.n3.nabble.com/encoding-is-longer-than-the-max-length-32766-td4056738.html

You need to change the mapping and define a maximum token length in your 
analyzer. Unfortunately, you would need to do that before you migrate and I 
don't think you'll be able to fix the shards without this mapping change in 
place.

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-length-tokenfilter.html#analysis-length-tokenfilter
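
A minimal sketch of what I mean, via the Python client (the index and analyzer
names are placeholders, and I am assuming providerEntity is effectively a
single-term field, so a keyword tokenizer plus a length filter keeps it as one
term while dropping oversized values):

from elasticsearch import Elasticsearch

es = Elasticsearch("localhost:9200")

body = {
    "settings": {
        "analysis": {
            "filter": {
                # The length filter counts characters; Lucene's 32766 limit is in
                # UTF-8 bytes, so pick a max comfortably below it if needed.
                "max_term_length": {"type": "length", "min": 0, "max": 32766}
            },
            "analyzer": {
                "capped_keyword": {
                    "tokenizer": "keyword",
                    "filter": ["max_term_length"]
                }
            }
        }
    },
    "mappings": {
        "doc": {
            "properties": {
                "providerEntity": {"type": "string", "analyzer": "capped_keyword"}
            }
        }
    }
}

# The analyzer has to exist when the index is created, i.e. before the oversized
# documents are (re)indexed. For not_analyzed fields, 'ignore_above' is another option.
es.indices.create(index="myindex", body=body)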

Jilles

On Thursday, September 11, 2014 12:46:46 AM UTC+2, omar wrote:
>
> After doing a rolling upgrade from 1.1.1 to 1.3.2 some shards are failing 
> to recover. 
> I have two nodes with 8 shards and 1 replica. The index is a daily rolling 
> index, after the upgrade, the old indices recovered fine. The error is only 
> happening in today's index. I didn't stop indexing during the upgrade. From 
> the stack trace below it seems that I have reached the maximum limit for 
> an unanalyzed field; but this field's length is always greater than 32766. 
> I searched Lucene's open bugs in 4.9 but didn't find anything. 
> My main concern now is how to recover the cluster without losing the 
> shards that are failing to start. Also, will this limit always be enforced, 
> and why did it just start showing up now?
>
> Here is the full stack trace of the exception:
> Enter code here...[2014-09-10 18:39:03,045][WARN ][indices.cluster   
>] [qldbtrindex1.qa.cyveillance.com] [transient_2014_09_10][7] failed 
> to start shard
>
> org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: 
> [transient_2014_09_10][7] failed to recover shard
>
> at 
> org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:269)
>
> at 
> org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:132)
>
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>
> at java.lang.Thread.run(Thread.java:722)
>
> Caused by: java.lang.IllegalArgumentException: Document contains at least 
> one immense term in field="providerEntity" (whose UTF8 encoding is longer 
> than the max length 32766), all of which were skipped.  Please correct the 
> analyzer to not produce such terms.  The prefix of the first immense term 
> is: '[123, 34, 119, 105, 107, 105, 112, 101, 100, 105, 97, 34, 58, 123, 34, 
> 101, 120, 116, 101, 114, 110, 97, 108, 108, 105, 110, 107, 115, 34, 
> 58]...', original message: bytes can be at most 32766 in length; got 249537
>
> at 
> org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:671)
>
> at 
> org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:342)
>
> at 
> org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:301)
>
> at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:222)
>
> at 
> org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:450)
>
> at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1507)
>
> at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1222)
>
> at 
> org.elasticsearch.index.engine.internal.InternalEngine.innerIndex(InternalEngine.java:563)
>
> at 
> org.elasticsearch.index.engine.internal.InternalEngine.index(InternalEngine.java:492)
>
> at 
> org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:769)
>
> at 
> org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:250)
>
> ... 4 more
>
> Caused by: 
> org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes 
> can be at most 32766 in length; got 249537
>
> at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:284)
>
> at 
> org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:151)
>
> at 
> org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:645)
> ... 14 more
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/ba34a600-2d60-4421-9b2d-e9506c0f08f3%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Error while reading elasticsearch data in hadoop program

2014-09-11 Thread Costin Leau

Unfortunately there's no clear answer since each distro and version (in 
particular 1.x vs 2.x) added more options to
configure the classpath and thus configure the cluster and its jobs.
This is actually the reason why in the docs we don't recommend one big way - 
the classpath config is fragile (the order of the arguments matters, one 
setting can override another, a different env variable may be read instead of 
another), etc.

You could try following up with the distro (Apache or not) docs and potentially 
the mailing list. Additionally you can try checking out the sources, in 
particular the start-up scripts...

On 9/11/14 9:13 AM, gaurav redkar wrote:

Hi Costin,

Thanks for your inputs. I was able to get it running after I copied the 
elasticsearch-hadoop-2.0.0.jar to the "lib"
directory of my hadoop installation. The reason why I was stuck with this issue 
is because I had already packaged this
es-hadoop jar into my application and had built a jar. So when I was running 
the example as follows :-

hadoop jar es2.jar Es2

where Es2 is the name of the runner class which contains the main() function, I was 
expecting the program to find the required
classes  since I had already bundled the es-hadoop jar within the project jar.

Also in the instructions on the elasticsearch-hadoop documentation at

http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/mapreduce.html#CO14-2

it was mentioned to add the jar to the HADOOP_CLASSPATH. I first added the path 
to the es-hadoop.jar to the
HADOOP_CLASSPATH, but it gave the same error. Later I added the jar to one of 
the paths mentioned within HADOOP_CLASSPATH, and the program executed. Can you 
explain why it works in the second case and not the first? Or am I doing 
something wrong?

Anyway thanks for your guidance.

Regards,
Gaurav

On Wed, Sep 10, 2014 at 1:54 AM, Costin Leau mailto:costin.l...@gmail.com>> wrote:

If by error you mean the ClassNotFoundException, you need to check again 
your classpath. Also be sure to add
es-hadoop to your job classpath (typically pack it with the jar) - the 
documentation
describes some of the options available [1]

[1] 
http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/2.1.Beta/mapreduce.html#_installation




On 9/9/14 10:26 PM, gaurav redkar wrote:

Hi Costin,

I had downloaded the elasticsearch-hadoop-2.1.0.Beta1.zip file and 
used all the jars from that for the
program. Later I
even  tried replacing all the jars in my program with jars from with 
elasticsearch-hadoop-2.0.0.zip file, but still
facing the same error.

On Tue, Sep 9, 2014 at 6:52 PM, Costin Leau mailto:costin.l...@gmail.com>
>__> wrote:

 Most likely you have a classpath conflict caused by multiple 
versions of es-hadoop. Can you double check
you only
 have one version (2.1.0.Beta1) available?
 Based on the error, I'm guessing you have some 1.3 Mx or the RC 
somewhere in there...

 On 9/9/14 4:06 PM, gaurav redkar wrote:

 Hi Costin,

 Thanks for the heads up regarding gist. I will try to follow 
the guidelines in the future. As for my
program, I
 am using
 Elasticsearch Hadoop v2.1.0.Beta1 . I tried your suggestion 
and changed the output value class to
 LinkedMapWritable. but
 now I am getting the following error.

https://gist.github.com/gauravub/7d55bc6b10cb63935eb8

>

 Any idea why is this happening ? I even tried using the v2.0.0 
of es-hadoop but am still getting the
same error.

 On Tue, Sep 9, 2014 at 4:02 PM, Costin Leau mailto:costin.l...@gmail.com>
>
  
>__>__> wrote:

  Hi,

  What version of es-hadoop are you using? The problem 
stems from the difference in the types
mentioned on your
  Mapper, namely the output value class:


conf.setMapOutputValueClass(MapWritable.class);


  to MapWritable while LinkedMapWritable is returned. The 
latest versions automatically detect this
and use
 the proper
  type so I recommend upgrading.
  If that's not an option, use LinkedMapWritable.

  

Preparing for ElasticSearch in production

2014-09-11 Thread Simon Forsberg
Hello,

I am wondering if it's a valid approach to start with a single-noded 
ElasticSearch cluster and then scale out when needed?

This would of course involve a proper shard management.

Thanks,

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/6826063f-c7a2-4782-a260-23881cde0c8d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Ramifications of G1GC in ES1.3 with JDK 1.8

2014-09-11 Thread joergpra...@gmail.com
Java 8 / G1GC works well here, what issues do you have?

Jörg

On Wed, Sep 10, 2014 at 8:13 PM, Robert Gardam 
wrote:

> I had been hitting my head up against heap issues until this afternoon
> after enabling G1GC.
>
> What are the known issues with this type of GC?
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/5e158da0-fd01-4dd1-8483-cf2671c675b2%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGBvjnVve4V1gbzuULR%2BYtb4OD3AgukxfX1q_6wDg%3DOYg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Search Plugin to intercept search response

2014-09-11 Thread joergpra...@gmail.com
You can receive the SearchResponse, process the response, and return the
response with whatever format you want.

Jörg

On Wed, Sep 10, 2014 at 11:59 AM, Sandeep Ramesh Khanzode <
k.sandee...@gmail.com> wrote:

> Hi Jorg,
>
> Thanks for the links. I was checking the sources. They are relevant to my
> functional use case. But I will be using the TransportClient Java API, not
> the REST client.
>
> Can you please tell me how I can find/modify these classes/sources to get
> the appropriate classes for intercepting the Search Response when invoked
> from a TransportClient?
>
>
> Thanks,
> Sandeep
>
> On Wed, Aug 27, 2014 at 6:38 PM, joergpra...@gmail.com <
> joergpra...@gmail.com> wrote:
>
>> Have a look at array-format or csv plugin, they are processing the
>> SearchResponse to output it in another format:
>>
>> https://github.com/jprante/elasticsearch-arrayformat
>>
>> https://github.com/jprante/elasticsearch-csv
>>
>> Jörg
>>
>>
>> On Wed, Aug 27, 2014 at 3:05 PM, 'Sandeep Ramesh Khanzode' via
>> elasticsearch  wrote:
>>
>>> Hi,
>>>
>>> Is there any action/module that I can extend/register/add so that I can
>>> intercept the SearchResponse on the server node before the response is sent
>>> back to the TransportClient on the calling box?
>>>
>>> Thanks,
>>> Sandeep
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to elasticsearch+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/elasticsearch/559a5c68-4567-425f-9842-7f2fe6755095%40googlegroups.com
>>> 
>>> .
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  --
>> You received this message because you are subscribed to a topic in the
>> Google Groups "elasticsearch" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/elasticsearch/o6RZL4KwJVs/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGJ_%3D5RnyFqMP_AX4744z6tdAp8cfLBi_OqzLM23_rqzw%40mail.gmail.com
>> 
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAKnM90bENin_aU4AXa%3DTVHQ_SyTTn-89Rev5vjj3%3DoDikwstkQ%40mail.gmail.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGWm8upDW9De7OvkM0cps%2BEyn3goo7Tgy3jyqJ8Jz5Khw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


MVEL script and checks on boolean type fields

2014-09-11 Thread Florian B.
Hello everybody,

i have a field in my mapping like
"some_flag" : {
  "type" : "boolean",
  "store" : "yes"
}

and I want to bias results that have the flag set to false. First I ran into 
the issue that apparently the field is treated as a string (I was getting 
type errors).

Then it at least executed, but I can't find out how the booleans are 
represented as strings. I had found a Stack Overflow entry suggesting that 
internally they are represented as 'T' and 'F' due to the way the JSON is 
interpreted, but

(doc['some_flag']=='T') ? 1 : 2;

does not work either.
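
For context, this is roughly how I am invoking it (a sketch via the Python client;
the index name is a placeholder, and the .value variant is just my next guess):

from elasticsearch import Elasticsearch

es = Elasticsearch("localhost:9200")

body = {
    "query": {
        "function_score": {
            "query": {"match_all": {}},
            # Guess: the doc-values wrapper needs .value before comparing to 'T'/'F'.
            "script_score": {
                "script": "(doc['some_flag'].value == 'T') ? 1 : 2"
            }
        }
    }
}

print(es.search(index="my_index", body=body))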

What's the trick I am missing?

elasticsearch version: 1.3.1

thx,
_f

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/0db9a4c6-426f-4ba6-bd67-0d6431322f4e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.