IndexQueryParserModule Custom Filter with Shard awareness

2014-09-10 Thread 'Sandeep Ramesh Khanzode' via elasticsearch
Hi,

If I need to create my own query or filter parser, I can use a plugin that 
adds a new processor using the IndexQueryParserModule. It contains a 
QueryParserContext argument which holds the Index name. Is there any way I 
can be aware of the shardId to which I am being routed at that time inside 
the FilterParser class? Since the parser is invoked once for every shard, 
it would be good if that information could be passed along in the 
argument to the filter/query parser.

Thanks,
Sandeep

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/c1cde10e-3cf9-4c5c-9177-4f972c663208%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Why all replicas are unassigned?

2014-09-10 Thread Jun Ohtani
Hi

Did you set any properties with the prefix
"cluster.routing.allocation.awareness"?

2014-09-11 14:48 GMT+09:00 Sephen Xu :

> uh... by 'other cluster' I mean that another 2 machines form a four-node
> cluster, and it works fine with cluster.routing.allocation.same_shard.host:
> true.
>
> And the problem cluster works fine with 
> cluster.routing.allocation.same_shard.host:
> false.
>
> Now I get the results I wanted, but I fear a shard and its replicas will be
> assigned to the same machine if I don't set 
> cluster.routing.allocation.same_shard.host
> to true.
>
> On Thursday, September 11, 2014 at 1:26:10 PM UTC+8, Pablo Musa wrote:
>>
>> > But on other cluster,
>>
>> Do you mean cluster or node here?
>>
>> I could not understand. Is everything working as you wanted?
>>
>> > This setting only applies if multiple nodes are started on the same
>> > machine.
>>
>> Just out of curiosity, are you running nodes on the same machine?
>>
>> Regards,
>> Pablo
>>
>> 2014-09-11 2:00 GMT-03:00 Sephen Xu :
>>
>>> Hi, Pablo,
>>> I have tried setting replicas to zero and putting it back to 1; it does
>>> not work, as you said.
>>>
>>> And finally, I found that when I change
>>> cluster.routing.allocation.same_shard.host from true to false, the replicas
>>> are assigned correctly. But on the other cluster, this setting has no
>>> effect. The official documentation describes this setting as:
>>> cluster.routing.allocation.same_shard.host: Allows to perform a check to
>>> prevent allocation of multiple instances of the same shard on a single
>>> host, based on host name and host address. Defaults to false, meaning
>>> that no check is performed by default. This setting only applies if
>>> multiple nodes are started on the same machine.
>>> Why is this so?
>>>
>>> (Sorry for my bad English : )
>>>
>>> On Thursday, September 11, 2014 at 11:12:09 AM UTC+8, Pablo Musa wrote:

 googled: elasticsearch java 1.6.0_45 shard unassigned

 https://github.com/elasticsearch/elasticsearch/issues/3145 (search for
 1.6)
 https://groups.google.com/forum/#!msg/elasticsearch/MSrKvfgK
 wy0/Tfk6nhlqYxYJ

 Everything I have researched points to a Java version problem.
 You could try some things such as setting replicas to 0 and putting it back
 to 1, or forcing allocation (I do not remember the exact command, but google
 for unassigned shards and you will find it), but I do not think they
 will work.

 I really would try installing a new version of Java and running
 Elasticsearch using it.

 Regards,
 Pablo

 2014-09-11 0:03 GMT-03:00 Sephen Xu :

> Thank you for your reply; the Java version on each machine is the same --
> 1.6.0_45, and the Elasticsearch version is 1.1.2.
>
>
>
> On Thursday, September 11, 2014 at 10:48:44 AM UTC+8, pabli...@gmail.com wrote:
>
>> I would check for the Java version on each machine.
>> I had the same problem on a running cluster when adding a node, and
>> unfortunately the last node had Java 1.7.0_65 instead of 1.7.0_55
>> (recommended version and the version of my other machines).
>>
>> I did not have the time to create a post explaining the whole
>> problem. But, in summary, I ran a default install script using apt-get 
>> and,
>> by default, they use the "latest" Java version.
>>
>> One big problem for me is that they do not support versioned jdk
>> installation and I could not find a deb package for 1.7.0_55. Maybe 
>> someone
>> here can help with this.
>>
>> The "problematic" command:
>> apt-get install openjdk-7-jre-headless -y
>>
>> Regards,
>> Pablo Musa
>>
>> On Wednesday, September 10, 2014 10:31:24 PM UTC-3, Sephen Xu wrote:
>>>
>>> Hello,
>>>
>>> I start up 4 nodes on 2 machines, and when I create an index, all replicas
>>> are unassigned.
>>>
>>> {
>>>   "cluster_name" : "elasticsearch_log",
>>>   "status" : "yellow",
>>>   "timed_out" : false,
>>>   "number_of_nodes" : 4,
>>>   "number_of_data_nodes" : 4,
>>>   "active_primary_shards" : 22,
>>>   "active_shards" : 22,
>>>   "relocating_shards" : 0,
>>>   "initializing_shards" : 0,
>>>   "unassigned_shards" : 22
>>> }
>>>
>>> What can I do?
>>>

Re: Error while reading elasticsearch data in hadoop program

2014-09-10 Thread gaurav redkar
Hi Costin,

Thanks for your inputs. I was able to get it running after I copied the
elasticsearch-hadoop-2.0.0.jar to the "lib" directory of my hadoop
installation. The reason I was stuck on this issue is that I had
already packaged this es-hadoop jar into my application and built a
jar. So when I ran the example as follows:

hadoop jar es2.jar Es2

where Es2 is the name of the runner class containing the main() function, I was
expecting the program to find the required classes since I had already
bundled the es-hadoop jar within the project jar.

Also in the instructions on the elasticsearch-hadoop documentation at

http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/mapreduce.html#CO14-2


it is mentioned that you should add the jar to the HADOOP_CLASSPATH. I first
added the path to the es-hadoop jar to the HADOOP_CLASSPATH, but it gave the
same error. Later I added the jar to one of the paths mentioned within
HADOOP_CLASSPATH, and the program executed. Can you explain why it
works in the second case and not the first? Or am I doing something
wrong?

Anyway thanks for your guidance.

Regards,
Gaurav

On Wed, Sep 10, 2014 at 1:54 AM, Costin Leau  wrote:

> If by error you mean the ClassNotFoundException, you need to check again
> your classpath. Also be sure to add es-hadoop to your job classpath
> (typically pack it with the jar) - the documentation
> describes some of the options available [1]
>
> [1] http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/
> 2.1.Beta/mapreduce.html#_installation
>
>
> On 9/9/14 10:26 PM, gaurav redkar wrote:
>
>> Hi Costin,
>>
>> I had downloaded the elasticsearch-hadoop-2.1.0.Beta1.zip file and used
>> all the jars from that for the program. Later I
>> even tried replacing all the jars in my program with the jars from the
>> elasticsearch-hadoop-2.0.0.zip file, but I am still
>> facing the same error.
>>
>> On Tue, Sep 9, 2014 at 6:52 PM, Costin Leau wrote:
>>
>> Most likely you have a classpath conflict caused by multiple versions
>> of es-hadoop. Can you double check you only
>> have one version (2.1.0.Beta1) available?
>> Based on the error, I'm guessing you have some 1.3 Mx or the RC
>> somewhere in there...
>>
>> On 9/9/14 4:06 PM, gaurav redkar wrote:
>>
>> Hi Costin,
>>
>> Thanks for the heads up regarding gist. I will try to follow the
>> guidelines in the future. As for my program, I
>> am using
>> Elasticsearch Hadoop v2.1.0.Beta1. I tried your suggestion and
>> changed the output value class to
>> LinkedMapWritable, but
>> now I am getting the following error.
>>
>> https://gist.github.com/gauravub/7d55bc6b10cb63935eb8
>>
>> Any idea why is this happening ? I even tried using the v2.0.0 of
>> es-hadoop but am still getting the same error.
>>
>> On Tue, Sep 9, 2014 at 4:02 PM, Costin Leau <costin.l...@gmail.com> wrote:
>>
>>  Hi,
>>
>>  What version of es-hadoop are you using? The problem stems
>> from the difference in the types mentioned on your
>>  Mapper, namely the output value class:
>>
>>    conf.setMapOutputValueClass(MapWritable.class);
>>
>>
>>  to MapWritable while LinkedMapWritable is returned. The
>> latest versions automatically detect this and use
>> the proper
>>  type so I recommend upgrading.
>>  If that's not an option, use LinkedMapWritable.
>>
>>  Cheers,
>>
>>  P.S. Please don't post code and stracktraces on the mailing
>> list since it highly reduces the readability of
>> your
>>  email. Instead use gist or any other service
>>  to post the code as indicated in the docs [1]. Thanks
>>
>>  [1]
>> http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/2.1.Beta/troubleshooting.html#_where_do_i_post_my_information
>>
>>
>>
>>
>>  On 9/9/14 11:59 AM, gaurav redkar wrote:
>>
>>  Hi, I was following the example given on official
>> elasticsearch documentation to read data from
>> elasticsearch using
>>  hadoop but i am getting the following error.
>>
>>  java.lang.Exception: java.io.IOException: Type mismatch
>> 

Re: Why all replicas are unassigned?

2014-09-10 Thread Sephen Xu
uh... by 'other cluster' I mean that another 2 machines form a four-node
cluster, and it works fine with cluster.routing.allocation.same_shard.host:
true.

And the problem cluster works fine
with cluster.routing.allocation.same_shard.host: false.

Now I get the results I wanted, but I fear a shard and its replicas will be
assigned to the same machine if I don't
set cluster.routing.allocation.same_shard.host to true.
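For reference, the setting under discussion is a node-level setting; a minimal elasticsearch.yml fragment is below (it must be set on every node, and per the documentation quoted in this thread the ES 1.x default is false):

```yaml
# elasticsearch.yml -- prevent copies of the same shard (primary + replica)
# from being allocated on one physical host; hosts are matched by host name
# and host address. Only relevant when several nodes run on one machine.
cluster.routing.allocation.same_shard.host: true
```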

On Thursday, September 11, 2014 at 1:26:10 PM UTC+8, Pablo Musa wrote:
>
> > But on other cluster,
>
> Do you mean cluster or node here?
>
> I could not understand. Is everything working as you wanted?
>
> > This setting only applies if multiple nodes are started on the same
> > machine.
>
> Just out of curiosity, are you running nodes on the same machine?
>
> Regards,
> Pablo
>
> 2014-09-11 2:00 GMT-03:00 Sephen Xu >:
>
>> Hi, Pablo,
>> I have tried setting replicas to zero and putting it back to 1; it does
>> not work, as you said.
>>
>> And finally, I found that when I change
>> cluster.routing.allocation.same_shard.host from true to false, the replicas
>> are assigned correctly. But on the other cluster, this setting has no
>> effect. The official documentation describes this setting as:
>> cluster.routing.allocation.same_shard.host: Allows to perform a check to
>> prevent allocation of multiple instances of the same shard on a single
>> host, based on host name and host address. Defaults to false, meaning
>> that no check is performed by default. This setting only applies if
>> multiple nodes are started on the same machine.
>> Why is this so?
>>
>> (Sorry for my bad English : )
>>
>> On Thursday, September 11, 2014 at 11:12:09 AM UTC+8, Pablo Musa wrote:
>>>
>>> googled: elasticsearch java 1.6.0_45 shard unassigned
>>>
>>> https://github.com/elasticsearch/elasticsearch/issues/3145 (search for 
>>> 1.6)
>>> https://groups.google.com/forum/#!msg/elasticsearch/
>>> MSrKvfgKwy0/Tfk6nhlqYxYJ
>>>
>>> Everything I have researched points to a Java version problem.
>>> You could try some things such as setting replicas to 0 and putting it back
>>> to 1, or forcing allocation (I do not remember the exact command, but google
>>> for unassigned shards and you will find it), but I do not think they
>>> will work.
>>>
>>> I really would try installing a new version of Java and running 
>>> Elasticsearch using it.
>>>
>>> Regards,
>>> Pablo
>>>
>>> 2014-09-11 0:03 GMT-03:00 Sephen Xu :
>>>
 Thank you for your reply; the Java version on each machine is the same --
 1.6.0_45, and the Elasticsearch version is 1.1.2.



 On Thursday, September 11, 2014 at 10:48:44 AM UTC+8, pabli...@gmail.com wrote:

> I would check for the Java version on each machine.
> I had the same problem on a running cluster when adding a node, and 
> unfortunately the last node had Java 1.7.0_65 instead of 1.7.0_55 
> (recommended version and the version of my other machines).
>
> I did not have the time to create a post explaining the whole problem. 
> But, in summary, I ran a default install script using apt-get and, by 
> default, they use the "latest" Java version.
>
> One big problem for me is that they do not support versioned jdk 
> installation and I could not find a deb package for 1.7.0_55. Maybe 
> someone 
> here can help with this.
>
> The "problematic" command:
> apt-get install openjdk-7-jre-headless -y
>
> Regards,
> Pablo Musa
>
> On Wednesday, September 10, 2014 10:31:24 PM UTC-3, Sephen Xu wrote:
>>
>> Hello,
>>
>> I start up 4 nodes on 2 machines, and when I create an index, all replicas
>> are unassigned.
>>
>> {
>>   "cluster_name" : "elasticsearch_log",
>>   "status" : "yellow",
>>   "timed_out" : false,
>>   "number_of_nodes" : 4,
>>   "number_of_data_nodes" : 4,
>>   "active_primary_shards" : 22,
>>   "active_shards" : 22,
>>   "relocating_shards" : 0,
>>   "initializing_shards" : 0,
>>   "unassigned_shards" : 22
>> }
>>
>> What can I do?
>>
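Pablo's quoted reply above mentions forcing allocation without recalling the exact command; in Elasticsearch 1.x that is the cluster reroute API's `allocate` command. A minimal Python sketch of building the request body follows; the index name "logs" and node name "node-2" are placeholders, not names from this thread:

```python
import json

# Sketch: body for POST /_cluster/reroute to force-allocate one unassigned
# replica shard. All identifiers below are hypothetical examples.
body = {
    "commands": [
        {
            "allocate": {
                "index": "logs",        # index owning the unassigned shard
                "shard": 0,             # shard number (see /_cat/shards)
                "node": "node-2",       # node to place the replica on
                "allow_primary": False  # never force a primary this way
            }
        }
    ]
}
print(json.dumps(body))
```

The body would then be sent with something like `curl -XPOST 'localhost:9200/_cluster/reroute' -d '...'`. Forcing a primary with `allow_primary: true` can lose data, so it is left false here.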

Re: Backup and restore using snapshots

2014-09-10 Thread Alex Harvey
I could still use feedback on this plan.

On Sunday, August 31, 2014 9:08:12 PM UTC+10, Alex Harvey wrote:
>
> Hi all
>
> I could use some help getting my head around the snapshot and restore 
> functionality in ES.
>
> I have a requirement to do incremental daily tape backups and full backups 
> weekly using EMC's Avamar backup software.
>
> I'd really appreciate if someone can tell me if the following plan is 
> going to work -
>
> 1)  Export an NFS filesystem from the storage node to both ES data nodes, 
> and mount that as /mnt/backup on both nodes.
>
> 2)  From one of the ES nodes register this directory as the shared 
> repository: curl -XPUT 'http://localhost:9200/_snapshot/backup' -d 
> '{"type": "fs","settings": {"location": "/mnt/backup"}}'
>
> 3)  On Saturday do a full backup:
>
> i. Get a list of all snapshots using: curl -XGET 
> 'localhost:9200/_snapshot/_status'
> ii. For each of these delete using a command like: curl -XDELETE 
> 'localhost:9200/_snapshot/backup/snapshot_20140830'
> iii.  Create a full backup using:  curl -XPUT 
> "localhost:9200/_snapshot/backup/snapshot_$(date 
> +%Y%m%d)?wait_for_completion=true"
> iv.  Copy the /mnt/backup directory to tape telling Avamar to take a full 
> backup
>
> 4)  On Sunday to Friday do incremental backups based on the Saturday 
> backup:
>
> i.  Simply run: curl -XPUT 
> "localhost:9200/_snapshot/backup/snapshot_$(date 
> +%d%m%Y)?wait_for_completion=true"
> ii.  Copy /mnt/backup to tape telling Avamar to take an incremental backup
>
> Is this plan going to work?  Is there a better way?
>
> Thanks very much in advance.
>
> Best regards,
> Alex
>
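One detail worth double-checking in the plan above: step 3.iii names snapshots with `date +%Y%m%d` while step 4.i uses `date +%d%m%Y`, so weekday and weekend snapshot names would not sort or match consistently. A small Python sketch of generating one consistent name (the repository name "backup" follows the plan; the host is an assumption):

```python
from datetime import date

def snapshot_name(d: date) -> str:
    """Build a sortable snapshot name, e.g. snapshot_20140830."""
    return "snapshot_{:%Y%m%d}".format(d)

def snapshot_url(d: date, host: str = "localhost:9200") -> str:
    """URL to PUT (creates the snapshot; blocks until done)."""
    return "http://{}/_snapshot/backup/{}?wait_for_completion=true".format(
        host, snapshot_name(d))

print(snapshot_url(date(2014, 8, 30)))
# → http://localhost:9200/_snapshot/backup/snapshot_20140830?wait_for_completion=true
```

Using the same `%Y%m%d` format every day also keeps names unique year over year, which matters when old snapshots are deleted by name as in step 3.ii.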



Re: Why all replicas are unassigned?

2014-09-10 Thread Pablo Musa
> But on other cluster,

Do you mean cluster or node here?

I could not understand. Is everything working as you wanted?

> This setting only applies if multiple nodes are started on the same
> machine.

Just out of curiosity, are you running nodes on the same machine?

Regards,
Pablo

2014-09-11 2:00 GMT-03:00 Sephen Xu :

> Hi, Pablo,
> I have tried setting replicas to zero and putting it back to 1; it does
> not work, as you said.
>
> And finally, I found that when I change
> cluster.routing.allocation.same_shard.host from true to false, the replicas
> are assigned correctly. But on the other cluster, this setting has no
> effect. The official documentation describes this setting as:
> cluster.routing.allocation.same_shard.host: Allows to perform a check to
> prevent allocation of multiple instances of the same shard on a single
> host, based on host name and host address. Defaults to false, meaning
> that no check is performed by default. This setting only applies if
> multiple nodes are started on the same machine.
> Why is this so?
>
> (Sorry for my bad English : )
>
> 在 2014年9月11日星期四UTC+8上午11时12分09秒,Pablo Musa写道:
>>
>> googled: elasticsearch java 1.6.0_45 shard unassigned
>>
>> https://github.com/elasticsearch/elasticsearch/issues/3145 (search for
>> 1.6)
>> https://groups.google.com/forum/#!msg/elasticsearch/
>> MSrKvfgKwy0/Tfk6nhlqYxYJ
>>
>> Everything I have researched points to a Java version problem.
>> You could try some things such as setting replicas to 0 and putting it back
>> to 1, or forcing allocation (I do not remember the exact command, but google
>> for unassigned shards and you will find it), but I do not think they
>> will work.
>>
>> I really would try installing a new version of Java and running
>> Elasticsearch using it.
>>
>> Regards,
>> Pablo
>>
>> 2014-09-11 0:03 GMT-03:00 Sephen Xu :
>>
>>> Thank you for your reply; the Java version on each machine is the same --
>>> 1.6.0_45, and the Elasticsearch version is 1.1.2.
>>>
>>>
>>>
>>> On Thursday, September 11, 2014 at 10:48:44 AM UTC+8, pabli...@gmail.com wrote:
>>>
 I would check for the Java version on each machine.
 I had the same problem on a running cluster when adding a node, and
 unfortunately the last node had Java 1.7.0_65 instead of 1.7.0_55
 (recommended version and the version of my other machines).

 I did not have the time to create a post explaining the whole problem.
 But, in summary, I ran a default install script using apt-get and, by
 default, they use the "latest" Java version.

 One big problem for me is that they do not support versioned jdk
 installation and I could not find a deb package for 1.7.0_55. Maybe someone
 here can help with this.

 The "problematic" command:
 apt-get install openjdk-7-jre-headless -y

 Regards,
 Pablo Musa

 On Wednesday, September 10, 2014 10:31:24 PM UTC-3, Sephen Xu wrote:
>
> Hello,
>
> I start up 4 nodes on 2 machines, and when I create an index, all replicas
> are unassigned.
>
> {
>   "cluster_name" : "elasticsearch_log",
>   "status" : "yellow",
>   "timed_out" : false,
>   "number_of_nodes" : 4,
>   "number_of_data_nodes" : 4,
>   "active_primary_shards" : 22,
>   "active_shards" : 22,
>   "relocating_shards" : 0,
>   "initializing_shards" : 0,
>   "unassigned_shards" : 22
> }
>
> What can I do?
>


Re: complex nested query

2014-09-10 Thread 闫旭
Thank you! But a nested bool query cannot sum all the prices within the date range.
How can I do this?

Thx again.

Thanks && Best Regard!

On September 11, 2014, at 12:04, vineeth mohan wrote:

> Hello , 
> 
> 
> First you need to declare field details as nested. - 
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-nested-type.html#mapping-nested-type
> 
> Then do a bool query with the date range constraint and the price range constraint.
> 
> Thanks
> Vineeth
> 
> On Thu, Sep 11, 2014 at 8:53 AM, 闫旭  wrote:
> Dear All!
> 
> I have a problem with a complex nested query
> the docs like this:
> _id:1
> {
>   "detail":[
>   {
>   "date":"2014-09-01",
>   "price":50
>   },
>   {
>   "date":"2014-09-02",
>   "price":100
>   },
>   {
>   "date":"2014-09-03",
>   "price":100
>   },
>   {
>   "date":"2014-09-04",
>   "price":200
>   }
>   ]
> 
> }
> _id:2
> {
>   "detail":[
>   {
>   "date":"2014-09-01",
>   "price":100
>   },
>   {
>   "date":"2014-09-02",
>   "price":200
>   },
>   {
>   "date":"2014-09-03",
>   "price":300
>   },
>   {
>   "date":"2014-09-04",
>   "price":200
>   }
>   ]
> 
> }
> I want to filter the docs with "date in [2014-09-01, 2014-09-03] and
> sum(price) > 300".
> I have only found a way using "aggregations", but that can only compute the
> sum across all docs.
> 
> How can I solve this problem?
> 
> 
> Thanks && Best Regard!
> 
> 
> 

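As the follow-up in this thread notes, a nested bool query matches individual nested entries and cannot sum prices across the date range. For clarity, here is a plain-Python sketch of the intended predicate, usable for post-filtering results on the client side; the two documents mirror the examples quoted above:

```python
# The two example documents from the thread, keyed by _id.
docs = {
    1: [{"date": "2014-09-01", "price": 50}, {"date": "2014-09-02", "price": 100},
        {"date": "2014-09-03", "price": 100}, {"date": "2014-09-04", "price": 200}],
    2: [{"date": "2014-09-01", "price": 100}, {"date": "2014-09-02", "price": 200},
        {"date": "2014-09-03", "price": 300}, {"date": "2014-09-04", "price": 200}],
}

def matches(detail, start, end, min_total):
    """Sum prices of entries whose date falls in [start, end].
    ISO-8601 date strings compare correctly as plain strings."""
    total = sum(e["price"] for e in detail if start <= e["date"] <= end)
    return total > min_total

hits = [doc_id for doc_id, detail in docs.items()
        if matches(detail, "2014-09-01", "2014-09-03", 300)]
print(hits)  # → [2]  (doc 1 sums to 250, doc 2 to 600)
```

Inside Elasticsearch itself this per-document conditional sum would need a script filter (or, in later versions, scripted aggregations); a plain nested bool query cannot express it.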


Re: Why all replicas are unassigned?

2014-09-10 Thread Sephen Xu
Hi, Pablo,
I have tried setting replicas to zero and putting it back to 1; it does
not work, as you said.

And finally, I found that when I change
cluster.routing.allocation.same_shard.host from true to false, the replicas
are assigned correctly. But on the other cluster, this setting has no
effect. The official documentation describes this setting as:
cluster.routing.allocation.same_shard.host: Allows to perform a check to
prevent allocation of multiple instances of the same shard on a single
host, based on host name and host address. Defaults to false, meaning that
no check is performed by default. This setting only applies if multiple
nodes are started on the same machine.
Why is this so?

(Sorry for my bad English : )

On Thursday, September 11, 2014 at 11:12:09 AM UTC+8, Pablo Musa wrote:
>
> googled: elasticsearch java 1.6.0_45 shard unassigned
>
> https://github.com/elasticsearch/elasticsearch/issues/3145 (search for 
> 1.6)
>
> https://groups.google.com/forum/#!msg/elasticsearch/MSrKvfgKwy0/Tfk6nhlqYxYJ
>
> Everything I have researched points to a Java version problem.
> You could try some things such as setting replicas to 0 and putting it back to
> 1, or forcing allocation (I do not remember the exact command, but google for
> unassigned shards and you will find it), but I do not think they will
> work.
>
> I really would try installing a new version of Java and running 
> Elasticsearch using it.
>
> Regards,
> Pablo
>
> 2014-09-11 0:03 GMT-03:00 Sephen Xu >:
>
>> Thank you for your reply; the Java version on each machine is the same --
>> 1.6.0_45, and the Elasticsearch version is 1.1.2.
>>
>>
>>
>> On Thursday, September 11, 2014 at 10:48:44 AM UTC+8, pabli...@gmail.com wrote:
>>
>>> I would check for the Java version on each machine.
>>> I had the same problem on a running cluster when adding a node, and 
>>> unfortunately the last node had Java 1.7.0_65 instead of 1.7.0_55 
>>> (recommended version and the version of my other machines).
>>>
>>> I did not have the time to create a post explaining the whole problem. 
>>> But, in summary, I ran a default install script using apt-get and, by 
>>> default, they use the "latest" Java version.
>>>
>>> One big problem for me is that they do not support versioned jdk 
>>> installation and I could not find a deb package for 1.7.0_55. Maybe someone 
>>> here can help with this.
>>>
>>> The "problematic" command:
>>> apt-get install openjdk-7-jre-headless -y
>>>
>>> Regards,
>>> Pablo Musa
>>>
>>> On Wednesday, September 10, 2014 10:31:24 PM UTC-3, Sephen Xu wrote:

 Hello,

 I start up 4 nodes on 2 machines, and when I create an index, all replicas
 are unassigned.

 {
   "cluster_name" : "elasticsearch_log",
   "status" : "yellow",
   "timed_out" : false,
   "number_of_nodes" : 4,
   "number_of_data_nodes" : 4,
   "active_primary_shards" : 22,
   "active_shards" : 22,
   "relocating_shards" : 0,
   "initializing_shards" : 0,
   "unassigned_shards" : 22
 }

 What can I do?




Re: What is better - create several document types or several indices?

2014-09-10 Thread vineeth mohan
Hello ,

My advice would be to keep all the logs in a single index, but roll that
index over time.
That is, write the logs of each day or hour (depending on traffic) to its own
index, as Logstash does.
So the name of the index would be of the format logs-`yyyy-MM-dd`.
This way, you won't be stuck with the fixed-shard-count problem, and dynamic
horizontal scaling can be achieved.
Also, it would be wise to remove old logs using the TTL facility, by closing
old indices, or even by taking a snapshot and then removing the index.

TTL -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-index_.html#index-ttl
Index Close -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-open-close.html#indices-open-close

Thanks
  Vineeth
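The daily-index naming described above can be sketched as follows; the "logs" prefix and helper names are illustrative, not an Elasticsearch API:

```python
from datetime import datetime, timedelta

def index_for(ts: datetime, prefix: str = "logs") -> str:
    """Logstash-style daily index name, e.g. logs-2014-09-10."""
    return "{}-{:%Y-%m-%d}".format(prefix, ts)

def indices_for_range(start: datetime, days: int, prefix: str = "logs"):
    """Index names for a multi-index search covering `days` consecutive days."""
    return [index_for(start + timedelta(days=i), prefix) for i in range(days)]

print(index_for(datetime(2014, 9, 10)))  # → logs-2014-09-10
print(",".join(indices_for_range(datetime(2014, 9, 10), 3)))
# → logs-2014-09-10,logs-2014-09-11,logs-2014-09-12
```

The comma-joined list is the form a query (or Kibana) can pass as the index pattern, and old indices can then be closed or deleted by name.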





On Thu, Sep 11, 2014 at 7:39 AM, Konstantin Erman  wrote:

> We use Elasticsearch to aggregate several types of logs - web server logs,
> application logs, windows event logs, statistics, etc.
>
> As far as I understand I can do one of the following:
> 1. Send each log to its own index and, when I need to combine them in a query,
> specify several indices in the Kibana settings;
> 2. Send all logs to the same index (we turn them over every day) and give
> logs from different sources different document types;
> 3. Do more or less nothing, push all documents together without
> distinguishing them explicitly;
>
> My question is - what are advantages and disadvantages of each approach?
> We have substantial amount of logs going in every second, but querying is
> rather rare, at least so far.
>
> Thank you!
> Konstantin
>



Re: complex nested query

2014-09-10 Thread vineeth mohan
Hello ,


First you need to declare field details as nested. -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-nested-type.html#mapping-nested-type

Then do a bool query with the date range constraint and the price range constraint.

Thanks
Vineeth

On Thu, Sep 11, 2014 at 8:53 AM, 闫旭  wrote:

> Dear All!
>
> I have a problem with a complex nested query
> the docs like this:
> _id:1
> {
> "detail":[
> {
> "date":"2014-09-01",
> "price":50
> },
> {
> "date":"2014-09-02",
> "price":100
> },
> {
> "date":"2014-09-03",
> "price":100
> },
> {
> "date":"2014-09-04",
> "price":200
> }
> ]
>
> }
> _id:2
> {
> "detail":[
> {
> "date":"2014-09-01",
> "price":100
> },
> {
> "date":"2014-09-02",
> "price":200
> },
> {
> "date":"2014-09-03",
> "price":300
> },
> {
> "date":"2014-09-04",
> "price":200
> }
> ]
>
> }
> I want to filter the docs with "date in [2014-09-01, 2014-09-03] and
> sum(price) > 300".
> I have only found a way using "aggregations", but that can only compute the
> sum across all docs.
>
> How can I solve this problem?
>
>
> Thanks && Best Regard!
>
>



Re: searching across multiple types doesn't find all matching documents

2014-09-10 Thread vineeth mohan
Hello Ben ,


This is the type/field ambiguity bug -
https://github.com/elasticsearch/elasticsearch/issues/4081

Basically, if you use the same name for a field and a type, this might
come up.
Make the two names different and it should work.

Thanks
Vineeth

On Thu, Sep 11, 2014 at 4:17 AM, ben  wrote:

> I include a bash script that recreates the situation.
>
> #!/bin/sh
>
> curl -XDELETE "http://localhost:9200/test";
> curl -XPUT "http://localhost:9200/test";
>
> echo
>
> curl -XPUT "http://localhost:9200/test/foo/_mapping"; -d '{
> "foo" : {
> "properties" : {
> "id": {
> "type" : "multi_field",
> "path": "full",
> "fields" : {
> "foo_id_in_another_field" : {"type" : "long", 
> include_in_all:false },
> "id" : {"type" : "long"}
>}
> }
> }
> }
> }'
>
> echo
>
> #bar is basically a duplicate of the foo document to support search use
> cases
> curl -XPUT "http://localhost:9200/test/bar/_mapping"; -d '{
> "bar" : {
> "properties" : {
> "id": {
> "type" : "multi_field",
> "path": "full",
> "fields" : {
> "bar_id_in_another_field" : {"type" : "long", 
> include_in_all:false },
> "id" : {"type" : "long"}
>}
> },
> "foo": {
> "properties": {
> "id": {
> "type" : "multi_field",
> "path": "full",
> "fields" : {
> "foo_id_in_another_field" : {"type" : "long", 
> include_in_all:false },
> "id" : {"type" : "long"}
> }
> }
> }
> }
> }
> }
> }'
>
> echo
>
> curl -XPUT "http://localhost:9200/test/foo/1?refresh=true"; -d '{
> "foo": {
> "id": 1
> }
> }'
>
> echo
>
> #failure case appears even when not including the following JSON
> # "bar": {
> #   "id": 2,
> #   "foo": {
> # "id": 3
> #   }
> # }
> curl -XPUT "http://localhost:9200/test/bar/2?refresh=true"; -d '{
> "bar": {
> "id": 2
> }
> }'
>
> echo
>
> #expect two results, get one (FAIL)
> curl -XPOST "http://localhost:9200/test/foo,bar/_search?pretty=true"; -d '{
>   "size": 10,
>   "query": {
> "query_string": {
>   "query": "foo.id:1 OR bar.id:2"
> }
>   }
> }'
>
> echo
>
> #expect one result, get one (PASS)
> curl -XPOST "http://localhost:9200/test/bar/_search?pretty=true"; -d '{
>   "size": 10,
>   "query": {
> "query_string": {
>   "query": "foo.id:1 OR bar.id:2"
> }
>   }
> }'
>
> echo
>
> #expect one result, get one result (PASS)
> curl -XPOST "http://localhost:9200/test/foo/_search?pretty=true"; -d '{
>   "size": 10,
>   "query": {
> "query_string": {
>   "query": "foo.id:1 OR bar.id:2"
> }
>   }
> }'
>
> echo
>
> #expect two results, get two results (PASS)
> curl -XPOST "http://localhost:9200/test/_search?pretty=true"; -d '{
>   "size": 10,
>   "query": {
> "query_string": {
>   "query": "foo.id:1 OR bar.id:2"
> }
>   }
> }'
>



Re: Scripting and dates

2014-09-10 Thread vineeth mohan
Hello Michael ,

When you set the type of the field as date, it is indexed as a date (as an
epoch, to be more precise).
And by default, it accepts multiple date formats.
But the issue here is that the indexed data is not available to you while
updating.
Only the stored data, which is the date string, is available.

So there are 2 options here:

   1. While declaring a field as type date, you can also specify the
   format in which the date string comes in. Any other format is rejected.
   This way, hard-coding the format in the script should work fine.
   2. Store the epoch representation instead of the date string. This
   won't in any way hinder date operations like bucketing or range
   selection, but at the same time your update operations would be smooth.
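A sketch of option 1, assuming a type named `logs` with the `start`/`end` fields from this thread (the exact format string is an illustration; any Joda-Time pattern works):

```json
{
  "logs": {
    "properties": {
      "start": { "type": "date", "format": "yyyy-MM-dd'T'HH:mm:ss" },
      "end":   { "type": "date", "format": "yyyy-MM-dd'T'HH:mm:ss" }
    }
  }
}
```

With this mapping, documents whose date strings do not match the pattern are rejected at index time, so an update script can safely hard-code the same pattern when parsing _source.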

Thanks
Vineeth



On Thu, Sep 11, 2014 at 5:16 AM, Michael Giagnocavo 
wrote:

>  Thank you, I did have to resort to parsing the date string. The next
> problem is that dates don’t seem to come back in a fixed format. If I post
> a date with just the date string, that’s what I get back. Likewise for
> fractional seconds. So I do some string work with substring first.
>
>
>
> How can I determine if ES has actually indexed a value as a date?
>
>
>
> -Michael
>
>
>
> *From:* elasticsearch@googlegroups.com [mailto:
> elasticsearch@googlegroups.com] *On Behalf Of *vineeth mohan
> *Sent:* Wednesday, September 10, 2014 2:13 PM
>
> *To:* elasticsearch@googlegroups.com
> *Subject:* Re: Scripting and dates
>
>
>
> Hello Michael ,
>
>
>
> This should work for you -
>
>
>
> cat scr
>
> {
>
> "script": "sdf = new
> java.text.SimpleDateFormat('yyyy-MM-dd\\'T\\'HH:mm:ss');startDate =
> sdf.parse(ctx._source.start);endDate = sdf.parse(ctx._source.end);
> ctx._source.diff = endDate.getTime() - startDate.getTime();"
>
> }
>
> curl  -XPOST http://localhost:9200/test/logs/1/_update -d @scr
>
> {"_index":"test","_type":"logs","_id":"1","_version":10
>
>
>
>
>
> I was under the impression that one can access the doc object for updating, but
> it seems we can only access _source here.
>
> In that case, you need to parse the string into a date object and then do the
> rest.
>
> The above works for me perfectly.
>
>
>
> Thanks
>
>Vineeth
>
>
>
>
>
>
>
> On Thu, Sep 11, 2014 at 12:12 AM, Michael Giagnocavo 
> wrote:
>
>  BTW I found the problem with referring to a script by name. If the
> script has an error, then it fails on compile, written to error log. It’s
> then not considered a script. ES might want to change that behavior, so if
> you use “script” : “brokenscript” you get an error indicating what’s
> actually wrong. Of course if you know about this behavior I guess it’s not
> a big deal.
>
>
>
> *From:* elasticsearch@googlegroups.com [mailto:
> elasticsearch@googlegroups.com] *On Behalf Of *vineeth mohan
> *Sent:* Wednesday, September 10, 2014 5:14 AM
> *To:* elasticsearch@googlegroups.com
> *Subject:* Re: Scripting and dates
>
>
>
> Hello Michael ,
>
>
>
> Please find the answers in the order of questions you have asked -
>
>
>
>1. Referencing script from file system is explained here. It has very
>well worked for me , please double check on it -
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html
>2. I feel you haven't declared that field as date type in the schema .
>If you had done that , you will recieve the epoch instead of string. -
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-core-types.html#date
>3. Dates are internally stored as epoch. So it should handle that
>second fraction too. More on the format can be seen here -
>
> http://joda-time.sourceforge.net/api-release/org/joda/time/format/DateTimeFormat.html
>4. What exactly do you want to do with the duration ? If its range
>aggregation , it does have script support -
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-range-aggregation.html#search-aggregations-bucket-range-aggregation
>
>  Thanks
>
>   Vineeth
>
>
>
> On Wed, Sep 10, 2014 at 11:32 AM, Michael Giagnocavo 
> wrote:
>
> I'm trying to work with dates inside a script. I've got a few questions:
>
> 1. How do I reference a script that I have in the scripts directory?
> Simply POSTing to /index/type/id/_update with { "script": "scriptname" }
> does not seem to work. "No such property: scriptname for class: ScriptN",
> where N starts at 3 (I have two .groovy files in my scripts directory).
>
> 2: How can I get actual date objects from the source?
> ctx._source.fieldname always returns a type string, even if I just created
> the field with ctx._source.fieldname = new Date(). Right now I'm parsing
> the string output in Groovy, which seems suboptimal.
>
> 3: Are ISO8601 dates not fully supported, as far as arbitrary fractional
> second decimals? (Not just 3 or another fixed number?) Any suggestions on
> handling JSON input from multipl

complex nested query

2014-09-10 Thread 闫旭
Dear All!

I have a problem with a complex nested query
the docs like this:
_id:1
{
"detail":[
{
"date":"2014-09-01",
"price":50
},
{
"date":"2014-09-02",
"price":100
},
{
"date":"2014-09-03",
"price":100
},
{
"date":"2014-09-04",
"price":200
}
]

}
_id:2
{
"detail":[
{
"date":"2014-09-01",
"price":100
},
{
"date":"2014-09-02",
"price":200
},
{
"date":"2014-09-03",
"price":300
},
{
"date":"2014-09-04",
"price":200
}
]

}
I want to filter the docs with "date in [2014-09-01, 2014-09-03] and sum(price) > 
300".
I have only found a way with "aggregation", but it can only compute the sum over 
all docs.

How can I solve the problem?


Thanks && Best Regard!




Re: Why all replicas are unassigned?

2014-09-10 Thread Pablo Musa
googled: elasticsearch java 1.6.0_45 shard unassigned

https://github.com/elasticsearch/elasticsearch/issues/3145 (search for 1.6)
https://groups.google.com/forum/#!msg/elasticsearch/MSrKvfgKwy0/Tfk6nhlqYxYJ

Everything I have researched points to a Java version problem.
You could try some things such as setting replicas to 0 and putting it back to
1, or forcing allocation (I do not remember the exact command, but google for
unassigned shards and you will find it), but I do not think they will work.
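For reference, the replica reset mentioned above is just an index settings update; a sketch of the body to PUT to /<your-index>/_settings (the index name is yours), first with 0 and then again with 1 to restore the replicas:

```json
{ "index": { "number_of_replicas": 0 } }
```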

I really would try installing a new version of Java and running
Elasticsearch using it.

Regards,
Pablo

2014-09-11 0:03 GMT-03:00 Sephen Xu :

> Thank you for your reply, the Java version on each machine is the same --
> 1.6.0_45 -- and the Elasticsearch version is 1.1.2.
>
>
>
> On Thursday, September 11, 2014 at 10:48:44 AM UTC+8, pabli...@gmail.com wrote:
>
>> I would check for the Java version on each machine.
>> I had the same problem on a running cluster when adding a node, and
>> unfortunately the last node had Java 1.7.0_65 instead of 1.7.0_55
>> (recommended version and the version of my other machines).
>>
>> I did not have the time to create a post explaining the whole problem.
>> But, in summary, I ran a default install script using apt-get and, by
>> default, they use the "latest" Java version.
>>
>> One big problem for me is that they do not support versioned jdk
>> installation and I could not find a deb package for 1.7.0_55. Maybe someone
>> here can help with this.
>>
>> The "problematic" command:
>> apt-get install openjdk-7-jre-headless -y
>>
>> Regards,
>> Pablo Musa
>>
>> On Wednesday, September 10, 2014 10:31:24 PM UTC-3, Sephen Xu wrote:
>>>
>>> Hello,
>>>
>>> I started up 4 nodes on 2 machines, and when I create an index, all replicas
>>> are unassigned.
>>>
>>> {
>>>   "cluster_name" : "elasticsearch_log",
>>>   "status" : "yellow",
>>>   "timed_out" : false,
>>>   "number_of_nodes" : 4,
>>>   "number_of_data_nodes" : 4,
>>>   "active_primary_shards" : 22,
>>>   "active_shards" : 22,
>>>   "relocating_shards" : 0,
>>>   "initializing_shards" : 0,
>>>   "unassigned_shards" : 22
>>> }
>>>
>>> What can I do?
>>>
>>  --
> You received this message because you are subscribed to a topic in the
> Google Groups "elasticsearch" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/elasticsearch/kn3UHQgQKJk/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/8a7a7fcc-9a93-4ae8-9b26-8cdd9715eb7b%40googlegroups.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>



Re: Why all replicas are unassigned?

2014-09-10 Thread Sephen Xu
Thank you for your reply, the Java version on each machine is the same -- 
1.6.0_45 -- and the Elasticsearch version is 1.1.2.



On Thursday, September 11, 2014 at 10:48:44 AM UTC+8, pabli...@gmail.com wrote:
>
> I would check for the Java version on each machine.
> I had the same problem on a running cluster when adding a node, and 
> unfortunately the last node had Java 1.7.0_65 instead of 1.7.0_55 
> (recommended version and the version of my other machines).
>
> I did not have the time to create a post explaining the whole problem. 
> But, in summary, I ran a default install script using apt-get and, by 
> default, they use the "latest" Java version.
>
> One big problem for me is that they do not support versioned jdk 
> installation and I could not find a deb package for 1.7.0_55. Maybe someone 
> here can help with this.
>
> The "problematic" command:
> apt-get install openjdk-7-jre-headless -y
>
> Regards,
> Pablo Musa
>
> On Wednesday, September 10, 2014 10:31:24 PM UTC-3, Sephen Xu wrote:
>>
>> Hello,
>>
>> I started up 4 nodes on 2 machines, and when I create an index, all replicas 
>> are unassigned.
>>
>> {
>>   "cluster_name" : "elasticsearch_log",
>>   "status" : "yellow",
>>   "timed_out" : false,
>>   "number_of_nodes" : 4,
>>   "number_of_data_nodes" : 4,
>>   "active_primary_shards" : 22,
>>   "active_shards" : 22,
>>   "relocating_shards" : 0,
>>   "initializing_shards" : 0,
>>   "unassigned_shards" : 22
>> }
>>
>> What can I do?
>>
>



Re: Why all replicas are unassigned?

2014-09-10 Thread pablitomusa
I would check for the Java version on each machine.
I had the same problem on a running cluster when adding a node, and 
unfortunately the last node had Java 1.7.0_65 instead of 1.7.0_55 
(recommended version and the version of my other machines).

I did not have the time to create a post explaining the whole problem. But, 
in summary, I ran a default install script using apt-get and, by default, 
they use the "latest" Java version.

One big problem for me is that they do not support versioned jdk 
installation and I could not find a deb package for 1.7.0_55. Maybe someone 
here can help with this.

The "problematic" command:
apt-get install openjdk-7-jre-headless -y

Regards,
Pablo Musa

On Wednesday, September 10, 2014 10:31:24 PM UTC-3, Sephen Xu wrote:
>
> Hello,
>
> I started up 4 nodes on 2 machines, and when I create an index, all replicas 
> are unassigned.
>
> {
>   "cluster_name" : "elasticsearch_log",
>   "status" : "yellow",
>   "timed_out" : false,
>   "number_of_nodes" : 4,
>   "number_of_data_nodes" : 4,
>   "active_primary_shards" : 22,
>   "active_shards" : 22,
>   "relocating_shards" : 0,
>   "initializing_shards" : 0,
>   "unassigned_shards" : 22
> }
>
> What can I do?
>



What is better - create several document types or several indices?

2014-09-10 Thread Konstantin Erman
We use Elasticsearch to aggregate several types of logs - web server logs, 
application logs, windows event logs, statistics, etc.

As far as I understand I can do one of the following:
1. Send each log to its own index and, when I need to combine them in a query, 
specify several indices in the Kibana settings;
2. Send all logs to the same index (we turn them over every day) and give 
logs from different sources different document types;
3. Do more or less nothing, push all documents together without 
distinguishing them explicitly;

My question is: what are the advantages and disadvantages of each approach? We 
have a substantial amount of logs coming in every second, but querying is 
rather rare, at least so far.

Thank you!
Konstantin



Why all replicas are unassigned?

2014-09-10 Thread Sephen Xu
Hello,

I started up 4 nodes on 2 machines, and when I create an index, all replicas are 
unassigned.

{
  "cluster_name" : "elasticsearch_log",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 4,
  "number_of_data_nodes" : 4,
  "active_primary_shards" : 22,
  "active_shards" : 22,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 22
}

What can I do?



Move Elasticsearch indices from /auto/abc to /auto/def

2014-09-10 Thread shriyansh jain
Hi,

I need advice on migrating all Elasticsearch indices from one partition 
to another. Currently, I am using a cluster of 2 nodes with Elasticsearch, 
and both nodes are pointing to the same partition, which is /auto/abc. 
How can I point both nodes to the partition /auto/def and keep all the 
indices as they were before? 
Will copying all the indices from /auto/abc to /auto/def and pointing both 
Elasticsearch nodes' data path to /auto/def work? Or will I have to 
make some other changes?

Thank you,
Shriyash






RE: Scripting and dates

2014-09-10 Thread Michael Giagnocavo
Thank you, I did have to resort to parsing the date string. The next problem is 
that dates don’t seem to come back in a fixed format. If I post a date with 
just the date string, that’s what I get back. Likewise for fractional seconds. 
So I do some string work with substring first.

How can I determine if ES has actually indexed a value as a date?

-Michael

From: elasticsearch@googlegroups.com [mailto:elasticsearch@googlegroups.com] On 
Behalf Of vineeth mohan
Sent: Wednesday, September 10, 2014 2:13 PM
To: elasticsearch@googlegroups.com
Subject: Re: Scripting and dates

Hello Michael ,

This should work for you -

cat scr
{
"script": "sdf = new 
java.text.SimpleDateFormat('yyyy-MM-dd\\'T\\'HH:mm:ss');startDate = 
sdf.parse(ctx._source.start);endDate = sdf.parse(ctx._source.end); 
ctx._source.diff = endDate.getTime() - startDate.getTime();"
}
curl  -XPOST http://localhost:9200/test/logs/1/_update -d @scr
{"_index":"test","_type":"logs","_id":"1","_version":10


I was under the impression that one can access the doc object for updating, but 
it seems we can only access _source here.
In that case, you need to parse the string into a date object and then do the 
rest.
The above works for me perfectly.
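The same parse-then-diff idea, sketched in Python for clarity (the field names mirror the `start`/`end` fields above; the sample values are made up):

```python
from datetime import datetime

# An update script only sees _source, so parse the stored date strings
# and compute the difference, here in milliseconds.
fmt = "%Y-%m-%dT%H:%M:%S"
source = {"start": "2014-09-10T08:00:00", "end": "2014-09-10T09:30:00"}

start = datetime.strptime(source["start"], fmt)
end = datetime.strptime(source["end"], fmt)
diff_ms = int((end - start).total_seconds() * 1000)
print(diff_ms)  # 5400000
```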

Thanks
   Vineeth



On Thu, Sep 11, 2014 at 12:12 AM, Michael Giagnocavo 
mailto:m...@giagnocavo.net>> wrote:
BTW I found the problem with referring to a script by name. If the script has 
an error, then it fails on compile, written to error log. It’s then not 
considered a script. ES might want to change that behavior, so if you use 
“script” : “brokenscript” you get an error indicating what’s actually wrong. Of 
course if you know about this behavior I guess it’s not a big deal.

From: elasticsearch@googlegroups.com 
[mailto:elasticsearch@googlegroups.com] 
On Behalf Of vineeth mohan
Sent: Wednesday, September 10, 2014 5:14 AM
To: elasticsearch@googlegroups.com
Subject: Re: Scripting and dates

Hello Michael ,

Please find the answers in the order of questions you have asked -


  1.  Referencing a script from the file system is explained here. It has worked 
very well for me, please double-check it - 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html
  2.  I feel you haven't declared that field as date type in the schema . If 
you had done that, you will receive the epoch instead of a string. - 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-core-types.html#date
  3.  Dates are internally stored as epoch. So it should handle that second 
fraction too. More on the format can be seen here - 
http://joda-time.sourceforge.net/api-release/org/joda/time/format/DateTimeFormat.html
  4.  What exactly do you want to do with the duration ? If its range 
aggregation , it does have script support - 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-range-aggregation.html#search-aggregations-bucket-range-aggregation
Thanks
  Vineeth

On Wed, Sep 10, 2014 at 11:32 AM, Michael Giagnocavo 
mailto:m...@giagnocavo.net>> wrote:
I'm trying to work with dates inside a script. I've got a few questions:

1. How do I reference a script that I have in the scripts directory? Simply 
POSTing to /index/type/id/_update with { "script": "scriptname" } does not seem 
to work. "No such property: scriptname for class: ScriptN", where N starts at 3 
(I have two .groovy files in my scripts directory).

2: How can I get actual date objects from the source? ctx._source.fieldname 
always returns a type string, even if I just created the field with 
ctx._source.fieldname = new Date(). Right now I'm parsing the string output in 
Groovy, which seems suboptimal.

3: Are ISO8601 dates not fully supported, as far as arbitrary fractional second 
decimals? (Not just 3 or another fixed number?) Any suggestions on handling 
JSON input from multiple sources, some of which have high-precision?

4: Can I use a script to project the document into a scalar for aggregates? For 
instance, if I have Date fields "start" and "end", and want to calculate the 
average duration (start - end) in an aggregate. I see value-level scripts are 
allowed, and 1.4 has "scripted metric aggregation". For now am I best off just 
storing the duration in the document?

Thank you,
Michael


How to search in the first vector object

2014-09-10 Thread Waldemar Neto
Hi all.
I have a vector with one or more objects in Elasticsearch. When I need to 
check the first object of this vector, is that possible? What filter should I use?
Thanks all



searching across multiple types doesn't find all matching documents

2014-09-10 Thread ben


I include a bash script that recreates the situation.

#!/bin/sh

curl -XDELETE "http://localhost:9200/test";
curl -XPUT "http://localhost:9200/test";

echo

curl -XPUT "http://localhost:9200/test/foo/_mapping"; -d '{
"foo" : { 
"properties" : {
"id": {
"type" : "multi_field",
"path": "full",
"fields" : {
"foo_id_in_another_field" : {"type" : "long", 
include_in_all:false },
"id" : {"type" : "long"}
   }
}
}
}
}'

echo

#bar is basically a duplicate of the foo document to support search use cases
curl -XPUT "http://localhost:9200/test/bar/_mapping"; -d '{
"bar" : {
"properties" : {
"id": {
"type" : "multi_field",
"path": "full",
"fields" : {
"bar_id_in_another_field" : {"type" : "long", 
include_in_all:false },
"id" : {"type" : "long"}
   }
},
"foo": {
"properties": {
"id": {
"type" : "multi_field",
"path": "full",
"fields" : {
"foo_id_in_another_field" : {"type" : "long", 
include_in_all:false },
"id" : {"type" : "long"}
}
}
}
}
}
}
}'

echo

curl -XPUT "http://localhost:9200/test/foo/1?refresh=true"; -d '{
"foo": {
"id": 1
}
}'

echo

#failure case appears even when not including the following JSON
# "bar": {
#   "id": 2,
#   "foo": {
# "id": 3
#   }
# }
curl -XPUT "http://localhost:9200/test/bar/2?refresh=true"; -d '{
"bar": {
"id": 2
}
}'

echo

#expect two results, get one (FAIL)
curl -XPOST "http://localhost:9200/test/foo,bar/_search?pretty=true"; -d '{
  "size": 10,
  "query": {
"query_string": {
  "query": "foo.id:1 OR bar.id:2"
}
  }
}'

echo

#expect one result, get one (PASS)
curl -XPOST "http://localhost:9200/test/bar/_search?pretty=true"; -d '{
  "size": 10,
  "query": {
"query_string": {
  "query": "foo.id:1 OR bar.id:2"
}
  }
}'

echo

#expect one result, get one result (PASS)
curl -XPOST "http://localhost:9200/test/foo/_search?pretty=true"; -d '{
  "size": 10,
  "query": {
"query_string": {
  "query": "foo.id:1 OR bar.id:2"
}
  }
}'

echo

#expect two results, get two results (PASS)
curl -XPOST "http://localhost:9200/test/_search?pretty=true"; -d '{
  "size": 10,
  "query": {
"query_string": {
  "query": "foo.id:1 OR bar.id:2"
}
  }
}'



cluster can't recover after upgrade from 1.1.1 to 1.3.2 due to MaxBytesLengthExceededException

2014-09-10 Thread omar
After doing a rolling upgrade from 1.1.1 to 1.3.2, some shards are failing 
to recover. 
I have two nodes with 8 shards and 1 replica. The index is a daily rolling 
index; after the upgrade, the old indices recovered fine. The error is only 
happening in today's index. I didn't stop indexing during the upgrade. From 
the stack trace below, it seems that I have reached the maximum term length for 
an unanalyzed field, but this field's length has always been greater than 32766. I 
searched Lucene's open bugs in 4.9 but didn't find anything. 
My main concern now is how to recover the cluster without losing the shards 
that are failing to start. Also, will this limit always be enforced, and why 
did it just start showing up now?

Here is the full stack trace of the exception:
[2014-09-10 18:39:03,045][WARN ][indices.cluster  
] [qldbtrindex1.qa.cyveillance.com] [transient_2014_09_10][7] failed to 
start shard

org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: 
[transient_2014_09_10][7] failed to recover shard
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:269)
at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:132)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.IllegalArgumentException: Document contains at least
one immense term in field="providerEntity" (whose UTF8 encoding is longer
than the max length 32766), all of which were skipped.  Please correct the
analyzer to not produce such terms.  The prefix of the first immense term
is: '[123, 34, 119, 105, 107, 105, 112, 101, 100, 105, 97, 34, 58, 123, 34,
101, 120, 116, 101, 114, 110, 97, 108, 108, 105, 110, 107, 115, 34,
58]...', original message: bytes can be at most 32766 in length; got 249537
at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:671)
at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:342)
at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:301)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:222)
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:450)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1507)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1222)
at org.elasticsearch.index.engine.internal.InternalEngine.innerIndex(InternalEngine.java:563)
at org.elasticsearch.index.engine.internal.InternalEngine.index(InternalEngine.java:492)
at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:769)
at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:250)
... 4 more
Caused by:
org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes
can be at most 32766 in length; got 249537
at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:284)
at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:151)
at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:645)
... 14 more
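The underlying limit is Lucene's hard cap of 32766 bytes per indexed term. The durable fix belongs in the mapping (analyze the field so it is split into smaller terms, or set `ignore_above` on a not_analyzed string); as a stopgap, oversized values can also be trimmed client-side before indexing. A minimal Python sketch of that client-side guard (`truncate_utf8` is an illustrative helper, not an Elasticsearch API):

```python
# Sketch: guard against Lucene's 32766-byte hard limit on a single term
# by trimming the value before indexing. Illustrative only; the mapping-
# level fix (analysis or ignore_above) is preferable.
MAX_TERM_BYTES = 32766

def truncate_utf8(value: str, limit: int = MAX_TERM_BYTES) -> str:
    """Trim a string so its UTF-8 encoding fits within `limit` bytes,
    without splitting a multi-byte character."""
    encoded = value.encode("utf-8")
    if len(encoded) <= limit:
        return value
    # errors="ignore" drops a partially cut trailing character.
    return encoded[:limit].decode("utf-8", errors="ignore")

# Mirrors the 249537-byte value named in the stack trace above.
doc = {"providerEntity": "x" * 249537}
doc["providerEntity"] = truncate_utf8(doc["providerEntity"])
print(len(doc["providerEntity"].encode("utf-8")))  # 32766
```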


-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/326a43b5-62aa-4d60-a73e-77605d736242%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Specifying bucket values ordering in Range aggregation

2014-09-10 Thread Adrien Grand
The range aggregation indeed does not support ordering. I believe this is
something that would be easy to implement on the client side.

On Wed, Sep 10, 2014 at 2:13 AM, Raul, Jr. Martinez 
wrote:

> Hello,
>
> I'd like to add an "order" parameter to a range aggregation ( use case:
> sorting the age of records from latest added to oldest) but adding "order"
> : {"_terms":"asc"} or _count causes a search error.
>
> I'm trying to confirm if "range" aggregation doesn't really support
> "order" parameter at this point or am I just missing anything. Been into
> the documentation examples, mailing list and github issues but couldn't
> really confirm if this is really the case.
>
>
> Thanks
> Raul
>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/a0b4c6a2-69b0-4e70-96ed-f1497cd80f27%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Adrien Grand
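Adrien's client-side suggestion can be sketched as follows; the `response` dict below is a made-up example shaped like the `aggregations` section of a search response, not real data from this thread:

```python
# Sketch: order range-aggregation buckets on the client. Elasticsearch
# returns range buckets in the order the ranges were defined, so sorting
# by doc_count (analogous to "order": {"_count": "desc"} on a terms agg)
# is done after the fact.
response = {
    "aggregations": {
        "ages": {
            "buckets": [
                {"key": "0.0-30.0", "from": 0.0, "to": 30.0, "doc_count": 7},
                {"key": "30.0-60.0", "from": 30.0, "to": 60.0, "doc_count": 12},
                {"key": "60.0-*", "from": 60.0, "doc_count": 3},
            ]
        }
    }
}

buckets = response["aggregations"]["ages"]["buckets"]
by_count = sorted(buckets, key=lambda b: b["doc_count"], reverse=True)
print([b["key"] for b in by_count])  # ['30.0-60.0', '0.0-30.0', '60.0-*']
```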



Re: Scripting and dates

2014-09-10 Thread vineeth mohan
Hello Michael ,

This should work for you -

cat scr
{
"script": "sdf = new
java.text.SimpleDateFormat('yyyy-MM-dd\\'T\\'HH:mm:ss');startDate =
sdf.parse(ctx._source.start);endDate = sdf.parse(ctx._source.end);
ctx._source.diff = endDate.getTime() - startDate.getTime();"
}
curl -XPOST http://localhost:9200/test/logs/1/_update -d @scr
{"_index":"test","_type":"logs","_id":"1","_version":10}


I was under the impression that one could access the doc object for updates, but
it seems we can only access _source here.
In that case, you need to parse the strings into date objects and then compute
the difference.
The above works for me perfectly.
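For reference, the same start/end difference computed outside Elasticsearch in plain Python; the field values mirror the strings a script sees in ctx._source:

```python
from datetime import datetime

# Sketch: the same computation the Groovy update script performs. The
# values arrive as strings (as in ctx._source), so they must be parsed
# before any date arithmetic.
source = {"start": "2014-09-01T12:00:00", "end": "2014-09-02T12:00:00"}

fmt = "%Y-%m-%dT%H:%M:%S"
start = datetime.strptime(source["start"], fmt)
end = datetime.strptime(source["end"], fmt)

# Difference in milliseconds, matching Groovy's getTime() arithmetic.
source["diff"] = int((end - start).total_seconds() * 1000)
print(source["diff"])  # 86400000 (one day)
```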

Thanks
   Vineeth



On Thu, Sep 11, 2014 at 12:12 AM, Michael Giagnocavo 
wrote:

>  BTW I found the problem with referring to a script by name. If the
> script has an error, then it fails on compile, written to error log. It’s
> then not considered a script. ES might want to change that behavior, so if
> you use “script” : “brokenscript” you get an error indicating what’s
> actually wrong. Of course if you know about this behavior I guess it’s not
> a big deal.
>
>
>
> *From:* elasticsearch@googlegroups.com [mailto:
> elasticsearch@googlegroups.com] *On Behalf Of *vineeth mohan
> *Sent:* Wednesday, September 10, 2014 5:14 AM
> *To:* elasticsearch@googlegroups.com
> *Subject:* Re: Scripting and dates
>
>
>
> Hello Michael ,
>
>
>
> Please find the answers in the order of questions you have asked -
>
>
>
>1. Referencing script from file system is explained here. It has very
>well worked for me , please double check on it -
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html
>2. I feel you haven't declared that field as a date type in the schema.
>If you had done that, you would receive the epoch instead of a string. -
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-core-types.html#date
>3. Dates are internally stored as epoch. So it should handle that
>second fraction too. More on the format can be seen here -
>
> http://joda-time.sourceforge.net/api-release/org/joda/time/format/DateTimeFormat.html
>4. What exactly do you want to do with the duration ? If its range
>aggregation , it does have script support -
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-range-aggregation.html#search-aggregations-bucket-range-aggregation
>
>  Thanks
>
>   Vineeth
>
>
>
> On Wed, Sep 10, 2014 at 11:32 AM, Michael Giagnocavo 
> wrote:
>
> I'm trying to work with dates inside a script. I've got a few questions:
>
> 1. How do I reference a script that I have in the scripts directory?
> Simply POSTing to /index/type/id/_update with { "script": "scriptname" }
> does not seem to work. "No such property: scriptname for class: ScriptN",
> where N starts at 3 (I have two .groovy files in my scripts directory).
>
> 2: How can I get actual date objects from the source?
> ctx._source.fieldname always returns a type string, even if I just created
> the field with ctx._source.fieldname = new Date(). Right now I'm parsing
> the string output in Groovy, which seems suboptimal.
>
> 3: Are ISO8601 dates not fully supported, as far as arbitrary fractional
> second decimals? (Not just 3 or another fixed number?) Any suggestions on
> handling JSON input from multiple sources, some of which have
> high-precision?
>
> 4: Can I use a script to project the document into a scalar for
> aggregates? For instance, if I have Date fields "start" and "end", and want
> to calculate the average duration (start - end) in an aggregate. I see
> value-level scripts are allowed, and 1.4 has "scripted metric aggregation".
> For now am I best off just storing the duration in the document?
>
> Thank you,
> Michael
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/13d4cf783a83447a84b62206605ad312%40CO1PR07MB331.namprd07.prod.outlook.com
> .
> For more options, visit https://groups.google.com/d/optout.
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAGdPd5%3DBzthM14yz3SuzxvTz5QXOW4Gtt72rvsA1-dND5eP--A%40mail.gmail.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>

SearchParseException: Failed to parse source [_na_]

2014-09-10 Thread David Koblas
We're getting the following exception in our logs, which is taking down the 
whole cluster.  We don't think we've changed any code recently on our side, 
but the error has recently started showing up.

We're still investigating if this is a query we're building, or something 
else.  But, it doesn't correlate with anything on our side yet.

This is with version: 1.3.2


Help??

[2014-09-10 18:24:04,387][DEBUG][action.search.type   ] [i-3276901e] 
[intelligence_v1][118], node[NNDfAL3sS8GoLynod1mU8w], [P], s[STARTED]: 
Failed to execute [org.elasticsearch.action.search.SearchRequest@4f9438d5] 
lastShard [true]
org.elasticsearch.transport.RemoteTransportException: 
[i-cd99ee9e][inet[/10.0.2.33:9300]][search/phase/query]
Caused by: org.elasticsearch.search.SearchParseException: 
[intelligence_v1][118]: from[-1],size[-1]: Parse Failure [Failed to parse 
source [_na_]]
at 
org.elasticsearch.search.SearchService.parseSource(SearchService.java:664)
at 
org.elasticsearch.search.SearchService.createContext(SearchService.java:515)
at 
org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:487)
at 
org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:256)
at 
org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:688)
at 
org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:677)
at 
org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:275)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.ElasticsearchParseException: Failed to derive 
xcontent from 
org.elasticsearch.common.bytes.ChannelBufferBytesReference@7f9c402
at 
org.elasticsearch.common.xcontent.XContentFactory.xContent(XContentFactory.java:259)
at 
org.elasticsearch.search.SearchService.parseSource(SearchService.java:634)
... 9 more
[2014-09-10 18:24:04,385][DEBUG][action.search.type   ] [i-3276901e] 
[intelligence_v1][54], node[NNDfAL3sS8GoLynod1mU8w], [R], s[STARTED]: 
Failed to execute [org.elasticsearch.action.search.SearchRequest@4f9438d5] 
lastShard [true]
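"Failed to derive xcontent" at that spot typically means the search body bytes were not recognizable as JSON (or any other supported content type). A client-side guard is easy to add; note that the empty/non-JSON cause is an assumption here, not something confirmed in this thread, and `validate_search_body` is an illustrative helper:

```python
import json

# Sketch: refuse to send an empty or non-JSON search body, a common
# trigger (assumed, not confirmed here) for
# "Failed to derive xcontent ... Parse Failure [Failed to parse source [_na_]]".
def validate_search_body(body: str) -> dict:
    if not body or not body.strip():
        raise ValueError("refusing to send an empty search body")
    # json.loads raises a ValueError subclass if the body is not valid JSON.
    return json.loads(body)

parsed = validate_search_body('{"query": {"match_all": {}}}')
print(parsed)  # {'query': {'match_all': {}}}
```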



Simple query string does not work

2014-09-10 Thread Dan
Hi Guys,

I have a simple query which is not working. I am using the same query on 
another server with the same mapping; where it does work.
Everything else is working like a charm.

I am talking about the following query.
The problem appears when I use the query > field > tags part.
When I do not use this part, everything works fine.

Array
(
[from] => 0
[size] => 10
[query] => Array
(
   [field] => Array
(
[tags] => blaat
)
)

[filter] => Array
(
[and] => Array
(
[0] => Array
(
[term] => Array
(
[representative] => 1
)

)

[1] => Array
(
[term] => Array
(
[is_gift] => 0
)

)

[2] => Array
(
[term] => Array
(
[active] => 1
)

)

[3] => Array
(
[terms] => Array
(
[website_ids] => Array
(
[0] => 1
)

[execution] => and
)

)

)

)

)


The mapping is as follows:


  "product" : {
"properties" : {
  "action" : {
"type" : "string"
  },
  "active" : {
"type" : "string"
  },
  "brand_ids" : {
"type" : "string"
  },
  "tags" : {
"type" : "string"
  },
.


When I index an item I am using the following part:

Array
(
[2359] => Array
(

[tags] => blaat, another blaat, etc
  

Maybe an installation configuration issue?

Does anyone have a clue?

Thanks!
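One possible explanation (an assumption, since the thread does not state the server versions): the `field` query used above was deprecated in the 0.90 line and removed in 1.0, so a version difference between the two servers could produce exactly this behavior. A `match` query expresses the same search; here is a sketch of the equivalent request body as a plain dict:

```python
# Sketch: the deprecated {"field": {"tags": "blaat"}} clause rewritten as
# a match query, which is its documented replacement. The filter section
# is carried over unchanged from the original request.
query_body = {
    "from": 0,
    "size": 10,
    "query": {"match": {"tags": "blaat"}},
    "filter": {
        "and": [
            {"term": {"representative": 1}},
            {"term": {"is_gift": 0}},
            {"term": {"active": 1}},
            {"terms": {"website_ids": [1], "execution": "and"}},
        ]
    },
}
print(query_body["query"])  # {'match': {'tags': 'blaat'}}
```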



Re: HTTP Basic Auth + SSL: communication between nodes?

2014-09-10 Thread Julien Genestoux
Please disregard, I read more of these docs
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/distributed-cluster.html
and found that changing the HTTP interface does not affect inter-node
communication at all!


On Wednesday, September 10, 2014 4:56:05 PM UTC+2, Julien Genestoux wrote:
>
> Hello,
>
> We're evaluating ElasticSearch and for this we're trying to deploy a 
> cluster against our production data.
> For now, the goal is to index and see the performance on some obvious 
> queries.
>
> We have deployed 2 nodes and "secured" them using an NGINX proxy which 
> "hides" them behind HTTPS with Basic Auth. 
> Each node works well individually, but they fail to communicate with each
> other.
>
> I understand that multicast will not work since we use nginx, so we 
> configured the nodes using this:
> discovery.zen.minimum_master_nodes: 2
> discovery.zen.ping.multicast.enabled: false
> discovery.zen.ping.unicast.hosts: ,
>
> How can we specify which ports to use (8080), the fact that it needs to 
> use SSL and credentials for HTTP basic Auth?
>
> Is that even doable? If not how can we get both nodes to communicate 
> securely given that we Linode servers (XEN instances in a network that we 
> do not control).
>
> Thanks,
>



RE: Scripting and dates

2014-09-10 Thread Michael Giagnocavo
BTW I found the problem with referring to a script by name. If the script has 
an error, then it fails on compile, written to error log. It’s then not 
considered a script. ES might want to change that behavior, so if you use 
“script” : “brokenscript” you get an error indicating what’s actually wrong. Of 
course if you know about this behavior I guess it’s not a big deal.

From: elasticsearch@googlegroups.com [mailto:elasticsearch@googlegroups.com] On 
Behalf Of vineeth mohan
Sent: Wednesday, September 10, 2014 5:14 AM
To: elasticsearch@googlegroups.com
Subject: Re: Scripting and dates

Hello Michael ,

Please find the answers in the order of questions you have asked -


  1.  Referencing script from file system is explained here. It has very well 
worked for me , please double check on it - 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html
  2.  I feel you haven't declared that field as a date type in the schema. If
you had done that, you would receive the epoch instead of a string. -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-core-types.html#date
  3.  Dates are internally stored as epoch. So it should handle that second 
fraction too. More on the format can be seen here - 
http://joda-time.sourceforge.net/api-release/org/joda/time/format/DateTimeFormat.html
  4.  What exactly do you want to do with the duration ? If its range 
aggregation , it does have script support - 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-range-aggregation.html#search-aggregations-bucket-range-aggregation
Thanks
  Vineeth

On Wed, Sep 10, 2014 at 11:32 AM, Michael Giagnocavo 
mailto:m...@giagnocavo.net>> wrote:
I'm trying to work with dates inside a script. I've got a few questions:

1. How do I reference a script that I have in the scripts directory? Simply 
POSTing to /index/type/id/_update with { "script": "scriptname" } does not seem 
to work. "No such property: scriptname for class: ScriptN", where N starts at 3 
(I have two .groovy files in my scripts directory).

2: How can I get actual date objects from the source? ctx._source.fieldname 
always returns a type string, even if I just created the field with 
ctx._source.fieldname = new Date(). Right now I'm parsing the string output in 
Groovy, which seems suboptimal.

3: Are ISO8601 dates not fully supported, as far as arbitrary fractional second 
decimals? (Not just 3 or another fixed number?) Any suggestions on handling 
JSON input from multiple sources, some of which have high-precision?

4: Can I use a script to project the document into a scalar for aggregates? For 
instance, if I have Date fields "start" and "end", and want to calculate the 
average duration (start - end) in an aggregate. I see value-level scripts are 
allowed, and 1.4 has "scripted metric aggregation". For now am I best off just 
storing the duration in the document?

Thank you,
Michael



Ramifications of G1GC in ES1.3 with JDK 1.8

2014-09-10 Thread Robert Gardam
I had been hitting my head up against heap issues until this afternoon 
after enabling G1GC. 

What are the known issues with this type of GC?



Connecting Hbase to Elasticsearch

2014-09-10 Thread Alex Kamil
I posted step-by-step instructions here on using
Apache Hbase/Phoenix with the Elasticsearch JDBC River.

This might be useful to Elasticsearch users who want to use Hbase as a
primary data store, and to Hbase users who wish to enable full-text search
on their existing tables via Elasticsearch API.

Alex



RE: Scripting and dates

2014-09-10 Thread Michael Giagnocavo
Thanks for the reply Vineeth. Here’s a concise example showing the date type 
problem. Despite having a mapping, the script is getting the properties as 
strings. How can I verify how ElasticSearch is actually storing something?

$ cat /etc/elasticsearch/scripts/setdur.groovy
ctx._source.dur = (ctx._source.end.getTime() - ctx._source.start.getTime())

$ curl -i -XPUT http://localhost:9200/tesi/ -d '{\
   "mappings": {\
 "testt": { "properties": { "start": { "type": "date" }, "end": { "type": 
"date" } } }\
   }\
}'
HTTP/1.1 200 OK

$ curl -i -XPOST http://localhost:9200/tesi/testt/1 -d '{ "start": 
"2014-09-01T12:00:00", "end": "2014-09-02T12:00:00" }'
HTTP/1.1 201 Created

rooty@sofab-es1:~$ curl -i -XPOST http://localhost:9200/tesi/testt/1/_update -d 
'{"script": "setdur", "lang": "groovy"}'
HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=UTF-8
Content-Length: 399

{"error":"ElasticsearchIllegalArgumentException[failed to execute script]; 
nested: GroovyScriptExecutionException[MissingMethodException[No signature of 
method: java.lang.String.getTime() is applicable for argument types: () values: 
[]\nPossible solutions: getBytes(), trim(), getBytes(java.lang.String), 
getBytes(java.nio.charset.Charset), getAt(groovy.lang.IntRange), getAt(int)]]; 
","status":400}rooty@sofab-es1:~$

Any suggestions?
-Michael

From: elasticsearch@googlegroups.com [mailto:elasticsearch@googlegroups.com] On 
Behalf Of vineeth mohan
Sent: Wednesday, September 10, 2014 5:14 AM
To: elasticsearch@googlegroups.com
Subject: Re: Scripting and dates

Hello Michael ,

Please find the answers in the order of questions you have asked -


  1.  Referencing script from file system is explained here. It has very well 
worked for me , please double check on it - 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html
  2.  I feel you haven't declared that field as a date type in the schema. If
you had done that, you would receive the epoch instead of a string. -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-core-types.html#date
  3.  Dates are internally stored as epoch. So it should handle that second 
fraction too. More on the format can be seen here - 
http://joda-time.sourceforge.net/api-release/org/joda/time/format/DateTimeFormat.html
  4.  What exactly do you want to do with the duration ? If its range 
aggregation , it does have script support - 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-range-aggregation.html#search-aggregations-bucket-range-aggregation
Thanks
  Vineeth

On Wed, Sep 10, 2014 at 11:32 AM, Michael Giagnocavo 
mailto:m...@giagnocavo.net>> wrote:
I'm trying to work with dates inside a script. I've got a few questions:

1. How do I reference a script that I have in the scripts directory? Simply 
POSTing to /index/type/id/_update with { "script": "scriptname" } does not seem 
to work. "No such property: scriptname for class: ScriptN", where N starts at 3 
(I have two .groovy files in my scripts directory).

2: How can I get actual date objects from the source? ctx._source.fieldname 
always returns a type string, even if I just created the field with 
ctx._source.fieldname = new Date(). Right now I'm parsing the string output in 
Groovy, which seems suboptimal.

3: Are ISO8601 dates not fully supported, as far as arbitrary fractional second 
decimals? (Not just 3 or another fixed number?) Any suggestions on handling 
JSON input from multiple sources, some of which have high-precision?

4: Can I use a script to project the document into a scalar for aggregates? For 
instance, if I have Date fields "start" and "end", and want to calculate the 
average duration (start - end) in an aggregate. I see value-level scripts are 
allowed, and 1.4 has "scripted metric aggregation". For now am I best off just 
storing the duration in the document?

Thank you,
Michael


Re: elasticsearch Java API for function_score query

2014-09-10 Thread mramaprasad
It worked. Thank you Ivan.

On Tuesday, September 9, 2014 7:33:58 PM UTC-7, Ivan Brusic wrote:
>
> Malini, I would suggest starting a new thread instead of adding to an old 
> one.
>
> I find the Java API for the boost functions to be confusing, or at least, 
> not as clean as the rest of the Java API. I wonder if the Elasticsearch 
> team would accept a PR. Jörg's example above could be used as a skeleton 
> for your code. Something like
>
> new FunctionScoreQueryBuilder(existingFilteredQuery) 
> .add(termsFilter("abbrev", "computer"), factorFunction(-10f))
>
> -- 
> Ivan
>
>
> On Tue, Sep 9, 2014 at 4:28 PM, Malini  > wrote:
>
>> How do I implement the following query using Java ApI? Thanks!
>>
>> curl -XGET http://localhost:9200/cs/csdl/_search?pretty=true -d '
>> {
>> "query":{
>> "function_score": {
>> "functions": [
>> {
>> "boost_factor": "-10",
>> "filter": {
>>  "terms" : {"abbrev" : ["computer"] }   
>> }
>> }
>> ],
>> "query": {
>>   "filtered": {
>> "query" : {
>> "multi_match" : {
>> "fields" : ["title"],
>> "query" : ["computer"]
>> 
>> }
>> },
>> "filter": {
>>   "bool": {
>> "must": { "range": { 
>>  "pubdate": { 
>> "gte": "1890-09" ,
>> "lte":"2014-08"
>>   }
>>}
>>  },
>>
>>  "must" : {
>> "terms" : { 
>>"abbrev" : ["computer","annals","software"]
>>  }
>> }
>>   }
>> }
>>   }
>>  }
>> }
>> }
>> }'
>>
>>
>>
>> On Tuesday, June 10, 2014 1:39:57 PM UTC-7, Jörg Prante wrote:
>>>
>>> Try this
>>>
>>> import org.elasticsearch.action.search.SearchRequest;
>>> import org.elasticsearch.index.query.functionscore.
>>> FunctionScoreQueryBuilder;
>>>
>>> import java.util.Arrays;
>>>
>>> import static org.elasticsearch.client.Requests.searchRequest;
>>> import static org.elasticsearch.index.query.FilterBuilders.termsFilter;
>>> import static org.elasticsearch.index.query.QueryBuilders.matchQuery;
>>> import static org.elasticsearch.index.query.functionscore.
>>> ScoreFunctionBuilders.factorFunction;
>>> import static org.elasticsearch.search.builder.SearchSourceBuilder.
>>> searchSource;
>>>
>>> public class FunctionScoreTest {
>>>
>>> public void testFunctionScore() {
>>> SearchRequest searchRequest = searchRequest()
>>> .source(searchSource().query(new 
>>> FunctionScoreQueryBuilder(matchQuery("party_id", "12"))
>>> .add(termsFilter("course_cd", 
>>> Arrays.asList("writ100", "writ112", "writ113")), factorFunction(3.0f))));
>>> }
>>> }
>>>
>>> Jörg
>>>
>>>
>>> On Tue, Jun 10, 2014 at 11:16 AM, Jayanth Inakollu >> > wrote:
>>>
 I need to implement the below function_score query using Java APIs. I 
 couldn't find any official documentation for function_score query in the 
 Java API section of elasticsearch

 "function_score": {
 "functions": [
 {
 "boost_factor": "3",
 "filter": {
  "terms" : {"course_cd" : ["writ100", "writ112", 
 "writ113"] }   
 }
 }
 ],
 "query": {
   "match" : {
"party_id" : "12"
   }
  }
 }

 Please help!

 -- 
 You received this message because you are subscribed to the Google 
 Groups "elasticsearch" group.
 To unsubscribe from this group and stop receiving emails from it, send 
 an email to elasticsearc...@googlegroups.com.
 To view this discussion on the web visit https://groups.google.com/d/
 msgid/elasticsearch/56d92aab-a4d7-4757-9441-f248c5296b3c%
 40googlegroups.com 
 
 .
 For more options, visit https://groups.google.com/d/optout.

>>>
>>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/311c8492-8e78-4188-847c-44d7d115b464%40googlegroups.com
>>  
>> 
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>


Re: Elasticsearch 1.4.0 release data?

2014-09-10 Thread Ivan Brusic
I think this release might be their biggest one since 1.0. Lots of big
changes including a change in the consensus algorithm. It might take time,
but that is only a guess.

-- 
Ivan

On Wed, Sep 10, 2014 at 2:57 AM, joergpra...@gmail.com <
joergpra...@gmail.com> wrote:

> I use the Github issue tracker to watch the progress of the fabulous ES
> dev team
>
> https://github.com/elasticsearch/elasticsearch/labels/v1.4.0
>
> Today: 20 issues left, 4 blockers. Looks like it will still take some days.
>
> Jörg
>
>
> On Wed, Sep 10, 2014 at 11:39 AM, Dan Tuffery 
> wrote:
>
>> Is there are release date scheduled for ES 1.4.0? I need the child
>> aggregation for the project I'm working on at the moment.
>>
>> https://github.com/elasticsearch/elasticsearch/pull/6936
>>
>> Dan
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/0238c4fd-a702-4fca-8bcc-3dab6d71bc6f%40googlegroups.com
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGP%2Bq64F5FVAfjym9SvO6RM5dHOzuJMe7L8xFL4ekut%3Dg%40mail.gmail.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>



Completion suggester: Problems getting suggests of middle words using recommended disabling of preserve_position_increments

2014-09-10 Thread Tom
Hi,

Referencing
http://www.elasticsearch.org/guide/en/elasticsearch/reference/1.x/search-suggesters-completion.html
I tried to get suggestions for middle words, without success. (Besides the
"stop" analyzer shown below, I also tried "standard" and "simple".)
Setup and Request:

#!/bin/bash

DOMAIN='127.0.0.1'
PORT='9200'
INDEX='hotels'
TYPE='hotel'

curl -X PUT $DOMAIN:$PORT/hotels -d '
{
  "mappings": {
"hotel" : {
  "properties" : {
"name" : { "type" : "string" },
"city" : { "type" : "string" },
"name_suggest" : {
  "type" :"completion",
  "index_analyzer" :  "stop", # also tried standard, simple ...
  "search_analyzer" : "stop", # also tried standard, simple ...

  "preserve_position_increments": false,
  "preserve_separators": false
}
  }
}
  }
}'

curl -X PUT $DOMAIN:$PORT/hotels/hotel/1 -d '
{
  "name" : "Mercure Hotel Munich",
  "city" : "Munich",
  "name_suggest" : {
"input" :  [
  "Mercure Hotel Munich",
  "Mercure Munich"
]
  }
}'

curl -X PUT $DOMAIN:$PORT/hotels/hotel/2 -d '
{
  "name" : "Hotel Monaco",
  "city" : "Munich",
  "name_suggest" : {
"input" :  [
  "Monaco Munich",
  "Hotel Monaco"
]
  }
}'

curl -X PUT $DOMAIN:$PORT/hotels/hotel/3 -d '
{
  "name" : "Courtyard by Marriot Munich City",
  "city" : "Munich",
  "name_suggest" : {
"input" :  [
  "Courtyard by Marriot Munich City",
  "Marriot Munich City"
]
  }
}'

curl -XPOST $DOMAIN:$PORT/hotels/_refresh

curl -X POST $DOMAIN:$PORT/hotels/_suggest?pretty -d '
{
  "hotels" : {
"text" : "Munich",
"completion" : {
  "field" : "name_suggest"
}
  }
}'

Response:
{
  "_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
  },
  "hotels" : [ {
"text" : "Munich",
"offset" : 0,
"length" : 6,
"options" : [ ]
  } ]
}

Any suggestions? Checked this against ES 1.3.2.

Thx in advance,
Tom



HTTP Basic Auth + SSL: communication between nodes?

2014-09-10 Thread Julien Genestoux
Hello,

We're evaluating ElasticSearch and for this we're trying to deploy a 
cluster against our production data.
For now, the goal is to index and see the performance on some obvious 
queries.

We have deployed 2 nodes and "secured" them using an NGINX proxy which 
"hides" them behind HTTPS with Basic Auth. 
Each node works well individually, but they fail to communicate with each 
other.

I understand that multicast will not work since we use nginx, so we 
configured the nodes using this:
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ,
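
(For context: inter-node traffic uses Elasticsearch's binary transport protocol, by default on port 9300, rather than the HTTP port that nginx proxies, and unicast entries can carry an explicit transport port. A minimal sketch — the hostnames are placeholders:)

```yaml
# elasticsearch.yml -- hedged sketch; node1/node2 are placeholder hostnames
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1.example.com:9300", "node2.example.com:9300"]
```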

How can we specify which ports to use (8080), that SSL is required, and the 
credentials for HTTP Basic Auth?

Is that even doable? If not, how can we get both nodes to communicate 
securely, given that we use Linode servers (Xen instances in a network that we 
do not control)?

Thanks,



Re: elasticsearch dies every other day

2014-09-10 Thread Robert Gardam
Hey! Your graphs are really nice. That looks like Grafana. I was wondering 
how you're piping data there? I used the ES Graphite plugin and found that 
it flooded my Graphite with too much data.

Thanks


On Wednesday, July 16, 2014 9:26:46 AM UTC+2, Klavs Klavsen wrote:
>
> I updated the graph:
> http://blog.klavsen.info/ES-graphs-update.png
>
> I added an overview of how many threads were running, and it's apparent that 
> what peaked when it crashed (left side of the two graphs - two spikes where 
> it crashed) correlated with a peak in search threads.
> Also the change to G1GC for two of the nodes is very apparent in the 
> heap_mem_usage graph :)
>
> It's been stable for two days now.. nearing the record :) I did also move 
> a "culprit" who searched the main index for larger periods, to their own 
> index.. and change threadpool.index.queue_size: from -1 to 900.
> the index queue size does not seem to be hit at all, so I'm not sure that 
> made a change.
>
> Thank you for your input everyone.
>



Behavior of missing filter in 1.3.2

2014-09-10 Thread Drew Daugherty
Hi,

I am trying to determine what the behavior of the missing filter should be. 
 I have a one-node instance of ES 1.3.2 running on centos 6. 

I add the following simple document to an index:

{
  "testField": "",
  "testField2": ""
}


And ran the following query: 
{
  "query" : { "filtered" : {"filter": {
 "missing": {
   "field": "testField"
} } } } 
}

At first, running under JDK 1.7.0_60, I was not able to retrieve the 
document. When I upgraded to JDK 1.7.0_67, I was able to retrieve the 
document with the query above. This is repeatable: when I back-level the JDK, 
I am not able to find the document. I am aware of the open issue affecting 
the missing filter 
at https://github.com/elasticsearch/elasticsearch/issues/7348 but should I 
rely on the missing filter working with null strings under JDK 1.7.0_67?
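
(For reference, the 1.x missing filter also exposes explicit `existence` and `null_value` flags controlling whether absent fields and explicit nulls are matched; a hedged sketch of the same query with both set:)

```json
{
  "query": { "filtered": { "filter": {
    "missing": {
      "field": "testField",
      "existence": true,
      "null_value": true
    }
  } } }
}
```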

-drew



Re: Elasticsearch version upgrade issue -- CorruptIndexException

2014-09-10 Thread Scott Decker
We did not run into this issue when we upgraded from 0.20.6 to 1.3.1, but 
based on the upgrade docs we did a few things to try to protect against 
index corruption, which looks like what you ran into.

1 - we stopped any apps from writing to the indexes when we started our 
upgrade
2 - we flushed the cluster before bringing it down
3 - we disabled shard allocation/replication before bringing the cluster 
down (just to make sure all nodes brought back the indexes that were on the 
same machines.)
4 - when we brought everything back up, we ran optimize on each index.  
This was noted as a task to do because the indexes use new formats in the 
newer releases, so it was recommended to run optimize, which recreates all 
of the indexes. It was unclear whether older indexes would really work in an 
upgraded cluster; we did not take the chance and took the time hit to run 
optimize.
5 - re-enabled shard/replication allocations
6 - cluster was working just fine
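
(For reference, steps 2, 3, and 5 map roughly onto the cluster APIs like this on 1.x — a sketch only; the host is a placeholder, and the exact allocation setting name varies across versions:)

```sh
# 2 - flush before shutdown
curl -XPOST 'localhost:9200/_flush'
# 3 - disable shard allocation (1.x setting)
curl -XPUT 'localhost:9200/_cluster/settings' \
  -d '{"transient": {"cluster.routing.allocation.enable": "none"}}'
# ... upgrade and restart the nodes ...
# 5 - re-enable allocation
curl -XPUT 'localhost:9200/_cluster/settings' \
  -d '{"transient": {"cluster.routing.allocation.enable": "all"}}'
# 4 - optimize, which rewrites segments in the new format
curl -XPOST 'localhost:9200/_optimize'
```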

Hope our steps help you redo the cluster upgrade.

On Tuesday, September 9, 2014 4:18:54 PM UTC-7, Wei wrote:
>
> Hi All, 
>
> I'm working on an ES upgrade from v0.20.5 to v1.2.1
> I tested in a 2 node cluster, 3 indices, ~4 million docs, 18G file sizes, 
> 20 shards, 1 replicas
> However, after bumping the version and rebooting the cluster, I kept 
> seeing that some shards were damaged. The ES log said: 
> Caused by: org.apache.lucene.index.CorruptIndexException: did not read all 
> bytes from file: read 451 vs size 452 (resource: 
> BufferedChecksumIndexInput(MMapIndexInput(path="/18/index/_195c_i.del")))
>
> This badly blocked the version upgrade in my case. 
> Could anyone point me to the reason for this issue? 
> I'd greatly appreciate your help!
>
>
> Wei
>



Re: Problems searching ES from other machine

2014-09-10 Thread Andrew Lakes
Hi,

thx for response.

The remote server just does this request:

curl -XGET -s 
'http://elasticsearch-server:9200/logstash-2014.09.10,logstash-2014.09.09/_search?pretty'
 
-d '{
  "facets": {
"terms": {
  "terms": {
"field": "_type",
"size": 10,
"order": "count",
"exclude": []
  },
  "facet_filter": {
"fquery": {
  "query": {
"filtered": {
  "query": {
"bool": {
  "should": [
{
  "query_string": {
"query": "field:*"
  }
}
  ]
}
  },
  "filter": {
"bool": {
  "must": [
{
  "range": {
"@timestamp": {
  "from": "1410271471958",
  "to": "now"
}
  }
},
{
  "fquery": {
"query": {
  "query_string": {
"query": "logsource:(\"Servername\")"
  }
},
"_cache": true
  }
}
  ]
}
  }
}
  }
}
  }
}
  },
  "size": 0
}'


And this curl request looks like an HTTP request, doesn't it? 

Thanks.

Am Mittwoch, 10. September 2014 14:04:10 UTC+2 schrieb Jörg Prante:
>
> The message tells that on port 9200, the HTTP message could not be 
> understood. Do you send non-HTTP traffic to port 9200?
>
> Jörg
>
> On Wed, Sep 10, 2014 at 1:34 PM, Andrew Lakes  > wrote:
>
>> Hey Guys,
>>
>> today we encountered a problem while requesting some data from our ES DB 
>> from another server in our network.
>>
>> All that the other server does is executing the following request:
>>
>> curl -XGET 
>> 'https://elasticsearch-server:443/logstash-2014.09.10,logstash-2014.09.09/_search?pretty'
>>  -d '{
>>   "query": {
>> "filtered": {
>>   "query": {
>> "bool": {
>>   "should": [
>> {
>>   "query_string": {
>> "query": "field:*"
>>   }
>> }
>>   ]
>> }
>>   },
>>   "filter": {
>> "bool": {
>>   "must": [
>> {
>>   "range": {
>> "@timestamp": {
>>   "from": 1410261816133,
>>   "to": 1410348216133
>> }
>>   }
>> },
>> {
>>   "fquery": {
>> "query": {
>>   "query_string": {
>> "query": "logsource:(\"servername\")"
>>   }
>> },
>> "_cache": true
>>   }
>> }
>>   ]
>> }
>>   }
>> }
>>   },
>>   "highlight": {
>> "fields": {},
>> "fragment_size": 2147483647,
>> "pre_tags": [
>>   "@start-highlight@"
>> ],
>> "post_tags": [
>>   "@end-highlight@"
>> ]
>>   },
>>   "size": 100,
>>   "sort": [
>> {
>>   "@timestamp": {
>> "order": "desc",
>> "ignore_unmapped": true
>>   }
>> },
>> {
>>   "@timestamp": {
>> "order": "desc",
>> "ignore_unmapped": true
>>   }
>> }
>>   ]
>> }'
>>
>>
>> which simply counts how many events one server got over 24 hours.
>>
>>
>> But this request leads to some abnormal behavior of Elasticsearch; we get many 
>> of the following error messages in our es-log:
>>
>>
>> [2014-09-10 13:12:32,938][DEBUG][http.netty   ] [NodeName] 
>> Caught exception while handling client http traffic, closing connection [id: 
>> 0x5fd6fd9f, /:40784 :> /> java.nio.channels.ClosedChannelException
>> at 
>> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:433)
>> at 
>> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:128)
>> at 
>> org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:99)
>> at 
>> org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:36)
>> at 
>> org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:779)
>> at 
>> org.elasticsearch.common.netty.channel.Channels.write(Channels.java:725)
>> at 
>> org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71)
>> at 
>> org.elasticsearch.common.netty.handler.codec.o

is it possible to reference query/filer data in your aggregations

2014-09-10 Thread Mindaugas Verdingovas
In other words, can you create a bucket for each document in your query and 
then add some further aggregations using values of that document?

I also asked the same question in detail on Stack Overflow; here is a link 
if you need more details: 
http://stackoverflow.com/questions/25720027/elasticsearch-aggregations-is-it-possible-to-reference-filter-query-data-in-agg

If you don't understand something, please ask; I'll try my best to explain.

If it's not possible, could someone just tell me that? I've spent a lot of 
hours trying to find a way to achieve this without any luck.

Or maybe you have some other ideas on how I should represent my data to 
meet my needs?



Re: Getting ElasticsearchIntegrationTest teardown failures :: "Delete Index failed - not acked"

2014-09-10 Thread mooky
Anyone have any insights?

On Friday, 5 September 2014 17:09:15 UTC+1, mooky wrote:
>
>
> I am getting the following intermittent failure on random different tests 
> (I presume during the teardown) when the build is running on TeamCity.
> I can't seem to reproduce it locally. I get a failure about 1 in 10-20 test 
> runs.
> It's not clear to me why I am getting the failure. Anyone have any 
> suggestions of avenues of investigation?
> (it does seem to be the same agent most of the time - linux agent (Linux, 
> version 2.6.18-308.el5) )
>
> Cheers
>
>
> java.lang.AssertionError: Delete Index failed - not acked
> Expected: 
>  but: was 
> at __randomizedtesting.SeedInfo.seed([96D0029807FBF53F:
> BBD4E4A226B82353]:0)
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked
> (ElasticsearchAssertions.java:110)
> at org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked
> (ElasticsearchAssertions.java:106)
> at org.elasticsearch.test.ImmutableTestCluster.wipeIndices(
> ImmutableTestCluster.java:126)
> at org.elasticsearch.test.ImmutableTestCluster.wipe(
> ImmutableTestCluster.java:68)
> at org.elasticsearch.test.ElasticsearchIntegrationTest.afterInternal(
> ElasticsearchIntegrationTest.java:513)
> at org.elasticsearch.test.ElasticsearchIntegrationTest.after(
> ElasticsearchIntegrationTest.java:1364)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(
> RandomizedRunner.java:1618)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(
> RandomizedRunner.java:885)
> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(
> TestRuleSetupTeardownChained.java:50)
> at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(
> TestRuleFieldCacheSanity.java:51)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(
> AbstractBeforeAfterRule.java:46)
> at com.carrotsearch.randomizedtesting.rules.
> SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.
> java:55)
> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(
> TestRuleThreadAndTestName.java:49)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(
> TestRuleIgnoreAfterMaxFailures.java:65)
> at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(
> TestRuleMarkFailure.java:48)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(
> StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.
> ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl.
> forkTimeoutingTask(ThreadLeakControl.java:793)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(
> ThreadLeakControl.java:453)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(
> RandomizedRunner.java:836)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(
> RandomizedRunner.java:738)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(
> RandomizedRunner.java:772)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(
> RandomizedRunner.java:783)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(
> AbstractBeforeAfterRule.java:46)
> at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(
> TestRuleStoreClassName.java:42)
> at com.carrotsearch.randomizedtesting.rules.
> SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.
> java:55)
> at com.carrotsearch.randomizedtesting.rules.
> NoShadowingOrOverridesOnMethodsRule$1.evaluate(
> NoShadowingOrOverridesOnMethodsRule.java:39)
> at com.carrotsearch.randomizedtesting.rules.
> NoShadowingOrOverridesOnMethodsRule$1
> ...



How to intercept search request (in a pluign) without creating separate RestAction

2014-09-10 Thread Piotr Majewski
Is it possible to intercept the request (and save the payload in ES) in a 
plugin without creating a separate RestAction?

1. I want to avoid having to add separate url patterns 
(like https://github.com/jprante/elasticsearch-arrayformat)
2. I want to return the same response ES would return without the plugin.


What would be the best possible way to log search requests into Elasticsearch?
I know I can create a nginx proxy that would do that but I wanted to have a 
solution built in ES.

I was thinking of a service that would schedule putting the payload in ES 
so the plugin wouldn't block the response.



ES cluster queries are slow suddenly

2014-09-10 Thread Manish Garg
Hi,

We have an Azure Elasticsearch cluster with 9 VMs, created around two months
back; our application has been using it since then. There are around eight
indexes, with five shards each. Suddenly today all the queries became slow;
when we run those queries in Marvel they take a long time. We changed nothing
here. I have uploaded the _nodes/stats result in the gist below.
https://gist.github.com/mkaygarg/fc63b0b453249ec0eb20

We have done all we could and looked into every API where I could find a
lead. Please help us out here and guide us where to look.

Regards,

Manish



Re: Do I need the JDBC driver

2014-09-10 Thread James
I don't really see why, though; can't the PHP library send JSON data to
Elasticsearch?

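(For context: the HTTP API takes plain JSON, so any client that can issue HTTP requests can index documents; a minimal sketch, where the index, type, and field names are placeholders:)

```sh
# index one document over plain HTTP -- myindex/mytype and the fields are placeholders
curl -XPUT 'localhost:9200/myindex/mytype/1' -d '{"title": "hello", "indexed": true}'
```
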
On 10 September 2014 12:47, joergpra...@gmail.com 
wrote:

> You need JDBC (Java Database Connectivity) for Java.
>
> Jörg
>
> On Wed, Sep 10, 2014 at 1:30 PM, James  wrote:
>
>> It just seems I can send data to elasticsearch using the php library, so
>> I don't see the need to add the JDBC driver?
>>
>>
>> http://www.elasticsearch.org/guide/en/elasticsearch/client/php-api/current/_indexing_operations.html
>>
>> Use a CRON operation to check for items that aren't indexed, and then use
>> the PHP library to send the non-indexed data to ES to get indexed?
>>
>> On Wednesday, September 10, 2014 12:21:35 PM UTC+1, vineeth mohan wrote:
>>>
>>> Hello James ,
>>>
>>> I didn't fully understand your question , but i feel JDBC river might be
>>> of any use to you - https://github.com/jprante/elasticsearch-river-jdbc
>>>
>>> Thanks
>>>Vineeth
>>>
>>> On Wed, Sep 10, 2014 at 3:29 PM, James  wrote:
>>>
 Hi,

 I'm setting up a system where I have a main SQL database which is
 synced with elasticsearch. My plan is to use the main PHP library for
 elasticsearch.

 I was going to have a cron run every thirty minutes to check for items
 in my database that not only have an "active" flag but that also do not
 have an "indexed" flag, that means I need to add them to the index. Then I
 was going to add that item to the index. Since I am taking this path,
 it doesn't seem like I need the JDBC driver, as I can add items to
 elasticsearch using the PHP library.

 So, my question is, can I get away without using the JDBC driver?

 James





Re: Problems searching ES from other machine

2014-09-10 Thread joergpra...@gmail.com
The message tells that on port 9200, the HTTP message could not be
understood. Do you send non-HTTP traffic to port 9200?

Jörg

On Wed, Sep 10, 2014 at 1:34 PM, Andrew Lakes  wrote:

> Hey Guys,
>
> today we encountered a problem while requesting some data from our ES DB
> from another server in our network.
>
> All that the other server does is executing the following request:
>
> curl -XGET 
> 'https://elasticsearch-server:443/logstash-2014.09.10,logstash-2014.09.09/_search?pretty'
>  -d '{
>   "query": {
> "filtered": {
>   "query": {
> "bool": {
>   "should": [
> {
>   "query_string": {
> "query": "field:*"
>   }
> }
>   ]
> }
>   },
>   "filter": {
> "bool": {
>   "must": [
> {
>   "range": {
> "@timestamp": {
>   "from": 1410261816133,
>   "to": 1410348216133
> }
>   }
> },
> {
>   "fquery": {
> "query": {
>   "query_string": {
> "query": "logsource:(\"servername\")"
>   }
> },
> "_cache": true
>   }
> }
>   ]
> }
>   }
> }
>   },
>   "highlight": {
> "fields": {},
> "fragment_size": 2147483647,
> "pre_tags": [
>   "@start-highlight@"
> ],
> "post_tags": [
>   "@end-highlight@"
> ]
>   },
>   "size": 100,
>   "sort": [
> {
>   "@timestamp": {
> "order": "desc",
> "ignore_unmapped": true
>   }
> },
> {
>   "@timestamp": {
> "order": "desc",
> "ignore_unmapped": true
>   }
> }
>   ]
> }'
>
>
> which simply counts how many events one server got over 24 hours.
>
>
> But this request leads to some abnormal behavior of Elasticsearch; we get many 
> of the following error messages in our es-log:
>
>
> [2014-09-10 13:12:32,938][DEBUG][http.netty   ] [NodeName] Caught 
> exception while handling client http traffic, closing connection [id: 
> 0x5fd6fd9f, /:40784 :> / java.nio.channels.ClosedChannelException
> at 
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:433)
> at 
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:128)
> at 
> org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:99)
> at 
> org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:36)
> at 
> org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:779)
> at 
> org.elasticsearch.common.netty.channel.Channels.write(Channels.java:725)
> at 
> org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71)
> at 
> org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
> at 
> org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
> at 
> org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
> at 
> org.elasticsearch.common.netty.channel.Channels.write(Channels.java:704)
> at 
> org.elasticsearch.common.netty.channel.Channels.write(Channels.java:671)
> at 
> org.elasticsearch.common.netty.channel.AbstractChannel.write(AbstractChannel.java:248)
> at 
> org.elasticsearch.http.netty.NettyHttpChannel.sendResponse(NettyHttpChannel.java:173)
> at 
> org.elasticsearch.rest.action.support.RestResponseListener.processResponse(RestResponseListener.java:43)
> at 
> org.elasticsearch.rest.action.support.RestActionListener.onResponse(RestActionListener.java:49)
> at 
> org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.innerFinishHim(TransportSearchQueryThenFetchAction.java:157)
> at 
> org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.finishHim(TransportSearchQueryThenFetchAction.java:139)
> at 
> org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.moveToSecondPhase(TransportSearchQueryThenFetchAction.java:90)
> at 
> org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.innerMoveToSecondPhase(TransportSearchTypeAction.java:404)
> at 
> org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:198)
> at 
> org.elasticsearch.action.search.type

Re: Determine Shard Id based on routing key

2014-09-10 Thread Sandeep Ramesh Khanzode
Hi Adrien,

Is it possible to intercept the Search Response, on the server itself,
from the ES Server to the node hosting the TransportClient? I can use a
plugin to do that, but I have no idea which classes to extend and which
modules to register. Can you please provide some details?

Thanks,
Sandeep

On Mon, Sep 1, 2014 at 4:52 PM, Adrien Grand  wrote:

>
>
>
> On Mon, Sep 1, 2014 at 1:18 PM, 'Sandeep Ramesh Khanzode' via
> elasticsearch  wrote:
>
>> However, I am a little concerned with your comment on the equivalence of
>> 1 index with 20 shards and 20 indices with one shard each. You mentioned
>> that you would discourage the latter.
>>
>> Can you please explain why? Is it for management reasons or performance
>> overhead reasons? I can deal with the former but not the latter unless if
>> you have some pointers. Thanks,
>>
>
> Sorry for the confusion, what I would like to discourage is not having 20
> indices with one shard but trying to manage sharding manually instead of
> relying on elasticsearch's routing mechanism that abstracts the number of
> shards.
>
> --
> Adrien Grand
>
>
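
(For background: ES 1.x picks the shard as hash(routing) % number_of_primary_shards, by default with a DJB-style string hash of the routing value. A rough Python sketch of the idea — not the exact Elasticsearch implementation; it ignores Java's signed 32-bit arithmetic, so concrete shard numbers may differ from a real cluster:)

```python
def djb_hash(value: str) -> int:
    """DJB2-style string hash, kept to unsigned 32 bits."""
    h = 5381
    for ch in value:
        h = (h * 33 + ord(ch)) & 0xFFFFFFFF
    return h

def shard_id(routing: str, num_primary_shards: int) -> int:
    """Shard a routing value resolves to: hash(routing) % shards."""
    return djb_hash(routing) % num_primary_shards

# The same routing value always lands on the same shard, which is why
# the number of primary shards cannot change after index creation.
print(shard_id("user-42", 20))
```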



Re: Do I need the JDBC driver

2014-09-10 Thread joergpra...@gmail.com
You need JDBC (Java Database Connectivity) for Java.

Jörg

On Wed, Sep 10, 2014 at 1:30 PM, James  wrote:

> It just seems I can send data to elasticsearch using the php library, so I
> don't see the need to add the JDBC driver?
>
>
> http://www.elasticsearch.org/guide/en/elasticsearch/client/php-api/current/_indexing_operations.html
>
> Use a CRON operation to check for items that aren't indexed, and then use
> the PHP library to send the non-indexed data to ES to get indexed?
>
> On Wednesday, September 10, 2014 12:21:35 PM UTC+1, vineeth mohan wrote:
>>
>> Hello James ,
>>
>> I didn't fully understand your question , but i feel JDBC river might be
>> of any use to you - https://github.com/jprante/elasticsearch-river-jdbc
>>
>> Thanks
>>Vineeth
>>
>> On Wed, Sep 10, 2014 at 3:29 PM, James  wrote:
>>
>>> Hi,
>>>
>>> I'm setting up a system where I have a main SQL database which is synced
>>> with elasticsearch. My plan is to use the main PHP library for
>>> elasticsearch.
>>>
>>> I was going to have a cron run every thirty minutes to check for items
>>> in my database that not only have an "active" flag but that also do not
>>> have an "indexed" flag, that means I need to add them to the index. Then I
>>> was going to add that item to the index. Since I am taking this path,
>>> it doesn't seem like I need the JDBC driver, as I can add items to
>>> elasticsearch using the PHP library.
>>>
>>> So, my question is, can I get away without using the JDBC driver?
>>>
>>> James
>>>
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKdsXoF33zrKTRqFhr%3D%3DMG2fYFYd04kezBqEcoXgYrFuaEyT6w%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Problems searching ES from other machine

2014-09-10 Thread Andrew Lakes
Hey Guys,

Today we encountered a problem while requesting some data from our 
Elasticsearch database from another server in our network.

All that the other server does is executing the following request:

curl -XGET 
'https://elasticsearch-server:443/logstash-2014.09.10,logstash-2014.09.09/_search?pretty'
 -d '{
  "query": {
"filtered": {
  "query": {
"bool": {
  "should": [
{
  "query_string": {
"query": "field:*"
  }
}
  ]
}
  },
  "filter": {
"bool": {
  "must": [
{
  "range": {
"@timestamp": {
  "from": 1410261816133,
  "to": 1410348216133
}
  }
},
{
  "fquery": {
"query": {
  "query_string": {
"query": "logsource:(\"servername\")"
  }
},
"_cache": true
  }
}
  ]
}
  }
}
  },
  "highlight": {
"fields": {},
"fragment_size": 2147483647,
"pre_tags": [
  "@start-highlight@"
],
"post_tags": [
  "@end-highlight@"
]
  },
  "size": 100,
  "sort": [
{
  "@timestamp": {
"order": "desc",
"ignore_unmapped": true
  }
},
{
  "@timestamp": {
"order": "desc",
"ignore_unmapped": true
  }
}
  ]
}'


which simply counts how many events one server received over 24 hours.
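As a sanity check, the epoch-millisecond "from"/"to" values in the range filter above are exactly 24 hours apart:

```python
start_ms, end_ms = 1410261816133, 1410348216133  # "from"/"to" in the range filter
window_ms = end_ms - start_ms
print(window_ms)                        # 86400000 ms, i.e. exactly 24 hours
```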


But this request leads to some abnormal behavior of Elasticsearch; we see many of 
the following error messages in our ES log:


[2014-09-10 13:12:32,938][DEBUG][http.netty   ] [NodeName] Caught 
exception while handling client http traffic, closing connection [id: 
0x5fd6fd9f, /:40784 :> /


Re: Do I need the JDBC driver

2014-09-10 Thread James
It just seems I can send data to elasticsearch using the PHP library, so I 
don't see the need to add the JDBC driver?

http://www.elasticsearch.org/guide/en/elasticsearch/client/php-api/current/_indexing_operations.html

Use a CRON operation to check for items that aren't indexed, and then use 
the PHP library to send the non-indexed data to ES to get indexed?
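The cron step described above can be sketched as follows, using an in-memory SQLite table to stand in for the real database (table and column names are invented for illustration; the actual indexing call would go through the PHP client):

```python
import sqlite3

def rows_to_index(conn):
    """Return rows that are active but not yet flagged as indexed."""
    cur = conn.execute(
        "SELECT id, title FROM items WHERE active = 1 AND indexed = 0"
    )
    return cur.fetchall()

def mark_indexed(conn, row_ids):
    """Flip the indexed flag once the documents were sent to ES."""
    conn.executemany("UPDATE items SET indexed = 1 WHERE id = ?",
                     [(rid,) for rid in row_ids])

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, title TEXT, "
             "active INTEGER, indexed INTEGER)")
conn.executemany("INSERT INTO items VALUES (?, ?, ?, ?)", [
    (1, "first", 1, 0),
    (2, "second", 1, 1),   # already indexed, skipped
    (3, "third", 0, 0),    # inactive, skipped
])
pending = rows_to_index(conn)
print(pending)             # [(1, 'first')]
mark_indexed(conn, [rid for rid, _ in pending])
print(rows_to_index(conn)) # []
```

Each cron run would send `pending` to ES and only then flip the flags, so a failed bulk request gets retried on the next run.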

On Wednesday, September 10, 2014 12:21:35 PM UTC+1, vineeth mohan wrote:
>
> Hello James , 
>
> I didn't fully understand your question, but I feel the JDBC river might be 
> of use to you - https://github.com/jprante/elasticsearch-river-jdbc
>
> Thanks
>Vineeth
>
> On Wed, Sep 10, 2014 at 3:29 PM, James > 
> wrote:
>
>> Hi,
>>
>> I'm setting up a system where I have a main SQL database which is synced 
>> with elasticsearch. My plan is to use the main PHP library for 
>> elasticsearch. 
>>
>> I was going to have a cron run every thirty minutes to check for items in 
>> my database that not only have an "active" flag but that also do not have 
>> an "indexed" flag, which means I need to add them to the index. Then I was 
>> going to add that item to the index. Since I am taking this path, it 
>> doesn't seem like I need the JDBC driver, as I can add items to 
>> elasticsearch using the PHP library.
>>
>> So, my question is, can I get away without using the JDBC driver?
>>
>> James
>>
>>
>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/be682d05-bdad-45a4-8b00-2ecf26217534%40googlegroups.com
>>  
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/a9901f1a-2753-46d2-9113-056b8d996eb8%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Highlight works not always!

2014-09-10 Thread vineeth mohan
Hello Ramy ,

Can you show the mapping ( not just the index creation JSON).
I need to make sure you have applied ngram on the required field.

Thanks
 Vineeth
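For context while waiting for the mapping: an ngram tokenizer emits every substring of each word between min_gram and max_gram, so a literal gram like `tisch` is indexed for a word like `Tische`. A rough Python sketch of the idea (illustrative only, not the actual Lucene tokenizer):

```python
def ngram_tokens(text, min_gram=1, max_gram=20):
    """Rough sketch of what an ngram tokenizer emits for a single word
    (letters/digits only, lowercased) -- not the exact Lucene code."""
    word = "".join(ch for ch in text.lower() if ch.isalnum())
    return [word[i:i + n]
            for n in range(min_gram, min(max_gram, len(word)) + 1)
            for i in range(len(word) - n + 1)]

tokens = ngram_tokens("Tische", min_gram=1, max_gram=20)
print("tisch" in tokens)   # True: the query term is one of the grams
print(len(tokens))         # 21 grams for a 6-letter word
```

If some of the `*_de` fields are not mapped with this analyzer, their terms won't contain the gram, which is one common reason highlighting appears on only some fields.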

On Wed, Sep 10, 2014 at 1:11 PM, Ramy  wrote:

> Can someone tell me why the highlighting does not always work? What is my
> mistake?
>
> This is my mapping:
>
> curl -XPUT "http://localhost:9200/my_index"; -d'
> {
>   "settings": {
> "analysis": {
>   "analyzer": {
> "autocomplete": {
>   "type": "custom",
>   "tokenizer": "ngram_tokenizer",
>   "filter": [ "lowercase" ]
> }
>   },
>   "tokenizer": {
> "ngram_tokenizer": {
>   "type": "ngram",
>   "min_gram": 1,
>   "max_gram": 20,
>   "token_chars": ["letter", "digit"]
> }
>   }
> }
>   },
>   ...
> }'
>
>
> and here is my query:
>
> curl -XGET "http://localhost:9200/my_index/my_type/_search"; -d'
> {
>   "_source": false,
>   "size": 5,
>   "query": {
> "multi_match": {
>   "query": "*tisch*",
>   "fields": [
> "*_de.autocomplete"
>   ],
>   "operator": "and"
> }
>   },
>   "highlight": {
> "pre_tags": [
>   ""
> ],
> "post_tags": [
>   ""
> ],
> "fields": {
>   "*_de.autocomplete": {}
> }
>   }
> }'
>
>
> and this is my result:
>
> {
>"took": 220,
>"timed_out": false,
>"_shards": {
>   "total": 5,
>   "successful": 5,
>   "failed": 0
>},
>"hits": {
>   "total": 3649,
>   "max_score": 0.88375586,
>   "hits": [
>  {
> "_index": "my_index",
> "_type": "my_type",
> "_id": "1",
> "_score": 0.88375586,
> "highlight": {
>"group_name_3_de.autocomplete": [
>   "Konsol*tisch*e",
>   "Garten-Ess*tisch*e"
>],
>"group_name_2_de.autocomplete": [
>   "*Tisch*e",
>   "Gartentische"
>]
> }
>  },
>  {
> "_index": "architonic",
> "_type": "product",
> "_id": "2",
> "_score": 0.88375586,
> "highlight": {
>"group_name_3_de.autocomplete": [
>   "Schreib*tisch*e",
>   "Ess*tisch*e"
>],
>"group_name_2_de.autocomplete": [
>   "*Tisch*e"
>]
> }
>  },
>  {
> "_index": "architonic",
> "_type": "product",
> "_id": "3",
> "_score": 0.88375586,
> "highlight": {
>"group_name_3_de.autocomplete": [
>   "Einzel*tisch*e"
>],
>"group_name_2_de.autocomplete": [
>   "Büro*tisch*e"
>]
> }
>  },
>  ...
>   ]
>}
> }
>
>
> As you can see, Elasticsearch was able to highlight only some words. I marked
> them as *pink* and the others are marked as *red*.
>
> Where is the failure?
>
> Thank you
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/e72c34b1-be77-433b-8120-9c3c51af3186%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAGdPd5k%2BJ0oNgF%3DefUcAqZhC6mrZM601DaSYhTggv1xJJGxBKg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Do I need the JDBC driver

2014-09-10 Thread vineeth mohan
Hello James ,

I didn't fully understand your question, but I feel the JDBC river might be of
use to you - https://github.com/jprante/elasticsearch-river-jdbc

Thanks
   Vineeth

On Wed, Sep 10, 2014 at 3:29 PM, James  wrote:

> Hi,
>
> I'm setting up a system where I have a main SQL database which is synced
> with elasticsearch. My plan is to use the main PHP library for
> elasticsearch.
>
> I was going to have a cron run every thirty minutes to check for items in
> my database that not only have an "active" flag but that also do not have
> an "indexed" flag, which means I need to add them to the index. Then I was
> going to add that item to the index. Since I am taking this path, it
> doesn't seem like I need the JDBC driver, as I can add items to
> elasticsearch using the PHP library.
>
> So, my question is, can I get away without using the JDBC driver?
>
> James
>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/be682d05-bdad-45a4-8b00-2ecf26217534%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAGdPd5nFdiMxM7KDrN8-wZnpBtoBxX8BVbqiaSppanzKmNwz1g%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: More Like This Note

2014-09-10 Thread vineeth mohan
Hello ,

The _source field is enabled by default, though store and term vectors are
disabled, so you don't need to do anything.


   1. _source -
   
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-source-field.html#mapping-source-field
   2. Term vectors -
   
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-termvectors.html#docs-termvectors

Thanks
   Vineeth

On Tue, Sep 9, 2014 at 8:14 PM, phenrigomes  wrote:

> How do this "Note: In order to use the mlt feature a mlt_field needs to be
> either be stored, store term_vector or source needs to be enabled."?
>
>
>
> --
> View this message in context:
> http://elasticsearch-users.115913.n3.nabble.com/More-Like-This-Note-tp4063184.html
> Sent from the ElasticSearch Users mailing list archive at Nabble.com.
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/1410273863709-4063184.post%40n3.nabble.com
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAGdPd5m9g02VAaE-w7p2_OkR4Mza-p3gt32r%2BwT30zGvHz3JeA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Scripting and dates

2014-09-10 Thread vineeth mohan
Hello Michael ,

Please find the answers in the order of questions you have asked -


   1. Referencing a script from the file system is explained here. It has
   worked well for me; please double-check it -
   http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html
   2. I feel you haven't declared that field as a date type in the schema.
   If you had, you would receive the epoch instead of a string. -
   http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-core-types.html#date
   3. Dates are stored internally as epoch milliseconds, so fractional
   seconds should be handled too. More on the format can be seen here -
   http://joda-time.sourceforge.net/api-release/org/joda/time/format/DateTimeFormat.html
   4. What exactly do you want to do with the duration? If it's a range
   aggregation, it does have script support -
   http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-range-aggregation.html#search-aggregations-bucket-range-aggregation

Thanks
  Vineeth
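On points 3 and 4, one common client-side workaround is to normalize fractional seconds before indexing and to precompute the duration and store it in the document. A small Python sketch of that normalization (an illustrative workaround, not an Elasticsearch API):

```python
from datetime import datetime, timezone

def parse_iso(ts):
    """Parse an ISO-8601 UTC timestamp with any number of fractional-second
    digits by padding/truncating the fraction to microseconds."""
    ts = ts.rstrip("Z")
    if "." in ts:
        base, frac = ts.split(".")
        frac = (frac + "000000")[:6]          # normalize to 6 digits
        ts = "%s.%s" % (base, frac)
        fmt = "%Y-%m-%dT%H:%M:%S.%f"
    else:
        fmt = "%Y-%m-%dT%H:%M:%S"
    return datetime.strptime(ts, fmt).replace(tzinfo=timezone.utc)

start = parse_iso("2014-09-10T12:00:00.1234567Z")   # 7 fractional digits
end   = parse_iso("2014-09-10T12:00:30Z")
print((end - start).total_seconds())                # 29.876544
```

Storing that precomputed duration in the document keeps the average-duration aggregation a plain field aggregation instead of a value-level script.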

On Wed, Sep 10, 2014 at 11:32 AM, Michael Giagnocavo 
wrote:

> I'm trying to work with dates inside a script. I've got a few questions:
>
> 1. How do I reference a script that I have in the scripts directory?
> Simply POSTing to /index/type/id/_update with { "script": "scriptname" }
> does not seem to work. "No such property: scriptname for class: ScriptN",
> where N starts at 3 (I have two .groovy files in my scripts directory).
>
> 2: How can I get actual date objects from the source?
> ctx._source.fieldname always returns a type string, even if I just created
> the field with ctx._source.fieldname = new Date(). Right now I'm parsing
> the string output in Groovy, which seems suboptimal.
>
> 3: Are ISO8601 dates not fully supported, as far as arbitrary fractional
> second decimals? (Not just 3 or another fixed number?) Any suggestions on
> handling JSON input from multiple sources, some of which have
> high-precision?
>
> 4: Can I use a script to project the document into a scalar for
> aggregates? For instance, if I have Date fields "start" and "end", and want
> to calculate the average duration (start - end) in an aggregate. I see
> value-level scripts are allowed, and 1.4 has "scripted metric aggregation".
> For now am I best off just storing the duration in the document?
>
> Thank you,
> Michael
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/13d4cf783a83447a84b62206605ad312%40CO1PR07MB331.namprd07.prod.outlook.com
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAGdPd5%3DBzthM14yz3SuzxvTz5QXOW4Gtt72rvsA1-dND5eP--A%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Java Client integration with Jetty Plugin

2014-09-10 Thread Mihir M
Hi

I am using Elasticsearch version 1.2.1. I was looking for ways to secure
access to Elasticsearch when I found the Jetty plugin. It lets me create
users based on roles and is satisfying my requirements.
However it only restricts HTTP requests. 

I was using the Java Transport client for talking to Elasticsearch, and since it
uses the transport-layer protocol while connecting to ES on port 9300, the Jetty
plugin has no effect on it.

So as an alternative to the Transport Client I have tried using the Jest
Client. But I have not found any API for sending authentication credentials
while inserting data or reading or when creating a client.

Is there a way in which I can pass authentication credentials in the Jest
client while sending requests? 
If Jest Client does not support this, are there any alternative clients
which will allow me to do so?

Any help on this would be appreciated.

Regards
Mihir



-
Regards
--
View this message in context: 
http://elasticsearch-users.115913.n3.nabble.com/Java-Client-integration-with-Jetty-Plugin-tp4063253.html
Sent from the ElasticSearch Users mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1410346337441-4063253.post%40n3.nabble.com.
For more options, visit https://groups.google.com/d/optout.


Re: More Like This - Results Is Empty

2014-09-10 Thread vineeth mohan
Hello ,

We need more data to debug this.
Can you paste the reference document you are performing MLT on?

Also, I feel that if you set min_term_freq to 1 along with min_doc_freq=1, it
should work.

Thanks
  Vineeth
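For reference, the request with both thresholds lowered would be built like this (same index/type/id as in the question; plain Python only to show the resulting query string):

```python
from urllib.parse import urlencode

# min_term_freq lowered to 1 so terms appearing once still become query terms.
params = {"mlt_fields": "name", "min_doc_freq": 1, "min_term_freq": 1}
url = "/megacorp/deas/1/_mlt?" + urlencode(params)
print(url)  # /megacorp/deas/1/_mlt?mlt_fields=name&min_doc_freq=1&min_term_freq=1
```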

On Tue, Sep 9, 2014 at 7:39 PM, phenrigomes  wrote:

> Why are the _mlt results empty, even with data in the index?
>
> GET /megacorp/deas/1/_mlt?mlt_fields=name&min_doc_freq=1
>
> {
>"took": 3,
>"timed_out": false,
>"_shards": {
>   "total": 5,
>   "successful": 5,
>   "failed": 0
>},
>"hits": {
>   "total": 0,
>   "max_score": null,
>   "hits": []
>}
> }
>
> Type Mapping:
>
> {
>"megacorp": {
>   "mappings": {
>  "deas": {
> "properties": {
>"fields": {
>   "type": "string"
>},
>"ids": {
>   "type": "string"
>},
>"max_query_terms": {
>   "type": "long"
>},
>"min_term_freq": {
>   "type": "long"
>},
>"more_like_this": {
>   "properties": {
>  "docs": {
> "properties": {
>"_id": {
>   "type": "string"
>},
>"_index": {
>   "type": "string"
>},
>"_type": {
>   "type": "string"
>}
> }
>  },
>  "fields": {
> "type": "string"
>  },
>  "ids": {
> "type": "long"
>  },
>  "max_query_terms": {
> "type": "long"
>  },
>  "min_term_freq": {
> "type": "long"
>  }
>   }
>},
>"name": {
>   "type": "string",
>   "store": true
>}
> }
>  }
>   }
>}
> }
>
>
>
> --
> View this message in context:
> http://elasticsearch-users.115913.n3.nabble.com/More-Like-This-Results-Is-Empty-tp4063176.html
> Sent from the ElasticSearch Users mailing list archive at Nabble.com.
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/1410271748763-4063176.post%40n3.nabble.com
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAGdPd5%3DoB3QHmQ_dzbU0oJWQcLmhqT03PJdTwJqtRcT9EG5ERw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: ElasticSearch store Index data externally

2014-09-10 Thread a.aneiros
This is because in one project I'm going to have one "main" index, which is
going to be in the same place as Elasticsearch, but there are some other
indices (less important and less used) that are going to be queried about
once per week or less. I don't need these indices to be "optimal", and I want
to store their data on an external server to save some disk space.



-
I know that I know nothing.
--
View this message in context: 
http://elasticsearch-users.115913.n3.nabble.com/ElasticSearch-store-Index-data-externally-tp4062812p4063251.html
Sent from the ElasticSearch Users mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1410345813728-4063251.post%40n3.nabble.com.
For more options, visit https://groups.google.com/d/optout.


Re: Hit Counts within a Document

2014-09-10 Thread vineeth mohan
Hello Darren ,

I am glad that my solution worked for you.

The approach there is to use multi-fields:
in one sub-field, keep the raw data by setting "index": "not_analyzed".
An example is cited in this link -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-core-types.html#_multi_fields_3

Thanks
  Vineeth
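On the case-sensitivity point raised earlier: the terms stored in the index are the analyzer's output (typically lowercased, possibly stemmed), so the term passed to tf() must match that form. A toy Python sketch of the per-group sums from the earlier cat example (plain Python standing in for the terms aggregation plus sum script):

```python
from collections import Counter, defaultdict

def term_freq(text, term):
    """Case-insensitive term frequency on whitespace tokens -- a toy stand-in
    for what _index['field'][term].tf() returns per document."""
    return Counter(text.lower().split())[term.lower()]

docs = [
    {"group": 1, "text": "cat cat dog cat Cat cat"},   # tf(cat) = 5
    {"group": 2, "text": "cat dog cat cat"},           # tf(cat) = 3
    {"group": 1, "text": "Cat cat bird"},              # tf(cat) = 2
]
per_group = defaultdict(int)
for doc in docs:
    per_group[doc["group"]] += term_freq(doc["text"], "Cat")

print(dict(per_group))   # {1: 7, 2: 3}
```

With stemming, the same rule applies: look up the stemmed form of the term (e.g. what the analyzer turns "federalizing" into), not the raw user input.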

On Tue, Sep 9, 2014 at 9:27 PM, Darren Trzynka 
wrote:

> Vineeth,
> I just saw your response today and I came to the same conclusion yesterday
> after you gave me a nice working example!  I took it a step further doing
> the same grouping by the field that you did and it came out nicely.
> Something is sinking in with me anyway. :-)
>
> Besides some possible language support issues, the biggest thing I see for
> challenges could be if stemming is involved (you search for federal and
> hits are returned on federal, federalizing, etc... so if you just look for
> federal in the term count, it wouldn't find all the matches) and then
> dealing with case sensitivity when looking at the term frequencies (the
> user typed in "Federal cases" which matches by default on federal and
> cases) it seems you would need to lower case the lookup for the term
> frequencies.  What do you think about these cases?
>
> Thanks!
> Darren
>
> On Mon, Sep 8, 2014 at 11:28 AM, vineeth mohan 
> wrote:
>
>> Hello Darren ,
>>
>> Following query does what you have asked for ( replace FIELD with the
>> field you are looking for) -
>>
>> {
>>   "fields": [
>> "text"
>>   ],
>>   "query": {
>> "term": {
>>   "text": "god"
>> }
>>   },
>>   "script_fields": {
>> "tf": {
>>   "script": "_index['FIELD']['cat'].tf()"
>> }
>>   }
>> }
>>
>> For the second one , use -
>>
>> {
>>   "query": {
>> "term": {
>>   "FIELD": "CAT"
>> }
>>   },
>>   "aggs": {
>> "groupName": {
>>   "terms": {
>> "field": "GROUP_FIELD"
>>   },
>>   "aggs": {
>> "catStats": {
>>   "sum": {
>> "script": "_index['FIELD']['CAT'].tf()"
>>   }
>> }
>>   }
>> }
>>   }
>> }
>>
>> Thanks
>>Vineeth
>>
>>
>> On Mon, Sep 8, 2014 at 6:24 PM, Darren Trzynka 
>> wrote:
>>
>>> Vineeth,
>>> Thanks for responding.  What I am looking for is provided I perform a
>>> search for various terms, how given the search result can I understand the
>>> frequency of the hits within documents.  For example, I perform a full text
>>> search on cat.  5 documents are returned.  I could today get the terms that
>>> were found highlighted but that is of course quite nasty.  Instead what I
>>> would like returned is the documents but something like for each document
>>> saying:
>>> Document 1 (group: 1): cat - 5
>>> Document 2 (group: 2): cat - 3
>>> Document 3 (group: 1): cat - 2
>>> ...
>>> Document n - cat - #
>>>
>>> Also, there is other metadata that it would be nice to aggregate on too
>>> so I could get an answer for the above scenario:
>>> group : 1 - cat - 7
>>> group : 2 - cat - 3
>>>
>>> Thanks
>>> Darren
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to elasticsearch+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/elasticsearch/76044495-afc9-4c51-b3f3-6ea7e636bc01%40googlegroups.com
>>> 
>>> .
>>>
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  --
>> You received this message because you are subscribed to a topic in the
>> Google Groups "elasticsearch" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/elasticsearch/vRxbDxqjxVg/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/CAGdPd5%3DfRzwke0GK_pn8WxPBJ6c%2B97yOyDPmkXcWkQJf%3Dy5rfA%40mail.gmail.com
>> 
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAFkmSJ-8joV%3DzAu6rRNU9GQ100yzGYXqY4QG2gAod1pniu5qzw%40mail.gmail.com
> 
> .
>

MultiLanguage Index - StopWords

2014-09-10 Thread a.aneiros
Hello,

I'm developing a project where I'm using elasticsearch to index files (pdf,
doc, txt...). I have to index the content of these files, and they're written
in different languages.

My concern is about the stopwords filter. I've used it before, but with a
single-language index, so I didn't have any problems. Now, however, I'm going
to have from 7 to 12 languages (English, French, Spanish, Galician,
German...). I did some research and didn't find any relevant information.

My questions are:

- Is there any kind of stopwords filter for multi-language use?
- Should I use several stopwords filters? (That doesn't seem optimal to me.)
- Should I have one index per language, so each index has a different
mapping?
- Or maybe mix the stopword lists for all the languages I need to index?

I hope someone has faced this issue before and can point me to a successful
solution.

Thanks in advance
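One of the options above — mixing the stopword lists of all languages into a single filter — can be sketched in plain Python (the word lists here are tiny illustrative samples, not real Elasticsearch stopword sets):

```python
# Merge several per-language stopword sets into one combined filter.
STOPWORDS = {
    "en": {"the", "and", "of"},
    "es": {"el", "la", "de"},
    "de": {"der", "und", "von"},
}
merged = set().union(*STOPWORDS.values())

def strip_stopwords(tokens):
    """Drop any token that is a stopword in ANY of the configured languages."""
    return [t for t in tokens if t.lower() not in merged]

print(strip_stopwords(["der", "Hund", "and", "the", "perro", "de", "datos"]))
# ['Hund', 'perro', 'datos']
```

The trade-off: a merged list also removes words that are stopwords in one language but content words in another, which is why a separate index or field per language, each with its own analyzer, is often preferred.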



-
I know that I know nothing.
--
View this message in context: 
http://elasticsearch-users.115913.n3.nabble.com/MultiLanguage-Index-StopWords-tp4063249.html
Sent from the ElasticSearch Users mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1410345623216-4063249.post%40n3.nabble.com.
For more options, visit https://groups.google.com/d/optout.


Re: [hadoop] Pipelining Hadoop/Spark with ElasticSearch

2014-09-10 Thread aarthi ranganathan
Thanks for the reply Costin.

On Wednesday, 10 September 2014 14:23:57 UTC+5:30, Costin Leau wrote:
>
> One can specify the id for each document for quite some time now, through `
> es.mapping.id` parameter [1] - simply point 
> it to the field containing the ID and you're good to go. 
>
>
> http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/2.1.Beta/configuration.html#_mapping
>  
>
> On 9/10/14 9:02 AM, aarthi ranganathan wrote: 
> > Hi 
> > Just wanted to know if the code for changing auto generated id has been 
> committed or is it yet to be changed? I am using 
> > elasticsearch-spark_2.10.Beta 1 version. 
> > Thanks 
> > Aarthi 
> > Thanks 
> > On Friday, 25 October 2013 19:40:05 UTC+5:30, Costin Leau wrote: 
> > 
> > On 25/10/2013 4:26 PM, Han JU wrote: 
> > > Thanks. Seems like I misunderstand something. 
> > > 
> > > Now I managed to push documents to ES, and I'd like to know if 
> these are supported by current version of 
> > > elasticsearch-binding: 
> > > 
> > 
> > I assume you mean elasticsearch-hadoop. 
> > 
> > 
> > > - specifying id for index. Now the "_id" for the documents pushed 
> are auto generated 
> > > - the update api 
> > > 
> > 
> > This is being currently worked on and we should have something in 
> trunk by next week. 
> > 
> > > Thanks. 
> > > 
> > > On Thursday, 24 October 2013 at 19:05:31 UTC+2, Costin Leau wrote: 
> > > 
> > > Hi, 
> > > 
> > > I replied on IRC but you left. See the docs here [1]. The 
> value represents your document and since it might contain 
> > > multiple fields, ESOuputFormat expects a Map (MapWritable) 
> which contains the actual document. Say your doc is 
> > > something 
> > > like { foo: 123 } then your map would be [Text("foo"):new 
> LongWritable(123)]. 
> > > 
> > > The docs provides more information about the Writable types 
> supported (basically all of them) and their equivalent 
> > > ES types. 
> > > 
> > > [1]
> http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/mapreduce.html
>  
> > <
> http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/mapreduce.html>
>  
>
> > > <
> http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/mapreduce.html
>  
> > <
> http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/mapreduce.html>>
>  
>
> > > 
> > > On 24/10/2013 7:53 PM, Han JU wrote: 
> > > > Hi, 
> > > > 
> > > > I'm trying to write hadoop aggregation results to ES. 
> > > > Say I've K, V for key and value classes respectively. 
> According to elasticsearch-hadoop api/blog, the key is ignored and 
> > > > the value should be a Map. 
> > > > I'm a little bit confused here: do I need an extra map job 
> to convert my (K, V) to (Null, Map) ? 
> > > > Is there any complete examples of using hadoop and ES 
> together? 
> > > > 
> > > > Thanks. 
> > > > 
> > > > -- 
> > > > You received this message because you are subscribed to the 
> Google Groups "elasticsearch" group. 
> > > > To unsubscribe from this group and stop receiving emails 
> from it, send an email to 
> > > >elasticsearc...@googlegroups.com . 
> > > > For more options, visithttps://
> groups.google.com/groups/opt_out  
>  > >. 
> > > 
> > > -- 
> > > Costin 
> > > 
> > > -- 
> > > You received this message because you are subscribed to the Google 
> Groups "elasticsearch" group. 
> > > To unsubscribe from this group and stop receiving emails from it, 
> send an email to 
> > >elasticsearc...@googlegroups.com . 
> > > For more options, visithttps://groups.google.com/groups/opt_out <
> https://groups.google.com/groups/opt_out>. 
> > 
> > -- 
> > Costin 
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> Groups "elasticsearch" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an email to 
> > elasticsearc...@googlegroups.com   elasticsearch+unsubscr...@googlegroups.com >. 
> > To view this discussion on the web visit 
> > 
> https://groups.google.com/d/msgid/elasticsearch/3a82d348-066c-421e-996e-69b81455f175%40googlegroups.com
>  
> > <
> https://groups.google.com/d/msgid/elasticsearch/3a82d348-066c-421e-996e-69b81455f175%40googlegroups.com?utm_medium=email&utm_source=footer>.
>  
>
> > For more options, visit https://groups.google.com/d/optout. 
>
> -- 
> Costin 
>
>
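The `es.mapping.id` parameter quoted above tells es-hadoop which document field to use as the `_id`. A minimal plain-Python sketch of what that selection amounts to (illustrative names; not the es-hadoop API):

```python
conf = {"es.resource": "index/type", "es.mapping.id": "doc_id"}  # invented names

def with_id(doc, conf):
    """Pair each document with the id taken from the configured field."""
    return doc[conf["es.mapping.id"]], doc

doc = {"doc_id": "42", "body": "hello"}
print(with_id(doc, conf))  # ('42', {'doc_id': '42', 'body': 'hello'})
```

Re-indexing a document that carries the same id then updates it instead of creating a duplicate with an auto-generated id.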

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.

Data loss after network disconnect

2014-09-10 Thread Israel Tsadok
A temporary network disconnect of the master node caused a torrent of
RELOCATING shards, and then one shard remained UNASSIGNED and the cluster
state was left red.

Looking inside the index directory for the shard on disk, I found that
it was empty (i.e., the _state and translog dirs were there, but the index
dir had no files).

Looking at the log files, I see that the disconnect happened around
11:42:05, and a few minutes later I started seeing these error messages:

*[2014-09-10 11:45:33,341]*[WARN ][indices.cluster  ]
[buzzilla_data008] [el-2011-10-31-][0] failed to start shard
*[2014-09-10 11:45:33,342]*[WARN ][cluster.action.shard ]
[buzzilla_data008] [el-2011-10-31-][0] sending failed shard for
[el-2011-10-31-][0], node[RAR26zfuTiKl4mdbRVTtNA], [P],
s[INITIALIZING], indexUUID [_na_], reason [Failed to start shard, message
[IndexShardGatewayRecoveryException[[el-2011-10-31-][0] failed to fetch
index version after copying it over]; nested:
IndexShardGatewayRecoveryException[[el-2011-10-31-][0] shard allocated
for local recovery (post api), should exist, but doesn't, current files:
[]]; nested: IndexNotFoundException[no segments* file found in
store(least_used[rate_limited(mmapfs(/home/omgili/data/elasticsearch/data/buzzilla/nodes/0/indices/el-2011-10-31-/0/index),
type=MERGE, rate=20.0)]): files: []]; ]]

The relevant log files are at
https://gist.github.com/itsadok/97453743d6b211681aca
data009 is the original master, data017 is the new master, and data008 is
where I found the empty index directory.

I had to delete the unassigned index from the cluster to return to green
state.
I am running Elasticsearch 1.2.1 in a 20 node cluster.

How does this happen? What can I do to prevent this from happening again?
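For anyone hitting a similar red state on a 1.x cluster, a sketch of commands that can help diagnose and, as a last resort, clear it (the index and node names below are taken from the logs above and may need adjusting):

```sh
# Shard-level view of exactly which shard is unassigned and why
curl 'localhost:9200/_cluster/health?level=shards&pretty'

# Last resort: force-allocate the primary to a node.
# WARNING: with allow_primary=true an empty primary is created if no copy
# of the data exists anywhere, so only do this if the data is gone anyway.
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
  "commands": [{
    "allocate": {
      "index": "el-2011-10-31-", "shard": 0,
      "node": "buzzilla_data008", "allow_primary": true
    }
  }]
}'
```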



Elasticsearch restart task hangs up in Ansible Playbook

2014-09-10 Thread Roopendra Vishwakarma
I am using an Ansible playbook to install Elasticsearch and an Elasticsearch 
plugin. After a successful installation of Elasticsearch, I wrote one 
Ansible task to restart Elasticsearch. It restarts Elasticsearch, but the 
Ansible playbook hangs on this task. My Ansible task is:

- name: "Ensure Elasticsearch is Running"
  service: name=elasticsearch state=restarted

  
I also tried with `shell: sudo service elasticsearch restart` but no luck. 

**Elasticsearch Version** : 1.3.0  
**Ansible Version**   : 1.5.5

Verbose Output for the task is :

 ESTABLISH CONNECTION FOR USER: prod on PORT 22 TO 
app101.host.com
 REMOTE_MODULE service name=elasticsearch 
state=restarted
 EXEC /bin/sh -c 'mkdir -p 
$HOME/.ansible/tmp/ansible-tmp-1410327554.04-167734794521310 
   && chmod a+rx 
$HOME/.ansible/tmp/ansible-tmp-1410327554.04-167734794521310 && echo 
$HOME/.ansible/tmp/ansible-tmp-1410327554.04-167734794521310'
 PUT /tmp/tmpjIMUkF TO 
/home/prod/.ansible/tmp/ansible-tmp-1410327554.04-167734794521310/service
 EXEC /bin/sh -c 'sudo -k && sudo -H -S -p "[sudo via 
ansible, key=yeztwzmmsgyvjjqmmunnvtbopcplrbso] 
  password: " -u root /bin/sh -c '"'"'echo 
SUDO-SUCCESS-yeztwzmmsgyvjjqmmunnvtbopcplrbso; /usr/bin/python 
/home/prod/.ansible/tmp/ansible-tmp-1410327554.04-167734794521310/service;
  rm -rf 
/home/prod/.ansible/tmp/ansible-tmp-1410327554.04-167734794521310/ 
>/dev/null 2>&1'"'"''

Any Suggestion?

While starting, Elasticsearch now shows its log on the shell; in the earlier 
version it just showed [OK]. Could this be causing the problem in the Ansible playbook?
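One workaround sometimes suggested for service tasks that block (untested here; it assumes the restart itself succeeds and merely stops the play from waiting on a console-attached init script):

```yaml
- name: "Ensure Elasticsearch is Running"
  service: name=elasticsearch state=restarted
  async: 60   # give the restart up to 60 seconds
  poll: 5     # check back every 5 seconds instead of blocking on its stdout
```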



Do I need the JDBC driver

2014-09-10 Thread James
Hi,

I'm setting up a system where I have a main SQL database which is synced 
with elasticsearch. My plan is to use the main PHP library for 
elasticsearch. 

I was going to have a cron job run every thirty minutes to check for items in 
my database that have an "active" flag but do not yet have an 
"indexed" flag, meaning I need to add them to the index. Then I was 
going to index each of those items. Since I am taking this path, it 
doesn't seem like I need the JDBC driver, as I can add items to 
Elasticsearch using the PHP library.

So, my question is, can I get away without using the JDBC driver?
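The cron approach described above needs no JDBC driver at all. A rough sketch of the selection-and-bulk-body logic in Python (the table, column, and index names are invented for illustration; actually POSTing the body to `/_bulk` would be done with the PHP client mentioned above):

```python
import json
import sqlite3

def build_bulk_body(rows, index="items", doc_type="item"):
    """Build an Elasticsearch _bulk request body from DB rows."""
    lines = []
    for row_id, title in rows:
        # One action line plus one document line per row
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type, "_id": row_id}}))
        lines.append(json.dumps({"title": title}))
    return "\n".join(lines) + "\n"  # the bulk body must end with a newline

# Stand-in for the "main SQL database" (schema invented for the example)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT, active INTEGER, indexed INTEGER)")
db.executemany("INSERT INTO items VALUES (?, ?, ?, ?)",
               [(1, "first", 1, 0), (2, "second", 1, 1), (3, "third", 0, 0)])

# Select items that are active but not yet indexed
rows = db.execute("SELECT id, title FROM items WHERE active = 1 AND indexed = 0").fetchall()
body = build_bulk_body(rows)
# POST `body` to http://localhost:9200/_bulk here, then flag the rows as done:
db.executemany("UPDATE items SET indexed = 1 WHERE id = ?", [(r[0],) for r in rows])
```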

James




Re: Kibana ElasticSearch issue "Error Could not contact Elasticsearch at http:/address:9200. Please ensure that Elasticsearch is reachable from your system."

2014-09-10 Thread Jonathan H
Thanks very much Mark.

On Wednesday, 10 September 2014 10:45:39 UTC+1, Mark Walkom wrote:
>
> Because kibana connects from the users desktop to ES.
>
> You can reverse proxy this very easily; there are a few example configs in 
> the Kibana code to help, and it's worth doing.
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com 
> web: www.campaignmonitor.com
>
>
> On 10 September 2014 19:31, Jonathan H  > wrote:
>
>> Hey everyone. 
>>
>> I have installed logstash, elasticsearch and Kibana 3 successfully, 
>> however when I ask a member on my team to view kibana he see this "Error 
>> Could not contact Elasticsearch at http:/address:9200. Please ensure that 
>> Elasticsearch is reachable from your system. " (Please find image 
>> attached). 
>>
>> My understanding was that kibana was the one to query Elasticsearch. Why 
>> does the user need access to port 9200? Is there anyway around this issue 
>> with out opening the port up?
>>
>> Regards,
>> Jonathan Hickey 
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/0d3f7aa5-69e0-4a6d-b8fa-9c10069d96d0%40googlegroups.com
>>  
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Re: Search Plugin to intercept search response

2014-09-10 Thread Sandeep Ramesh Khanzode
Hi Jorg,

Thanks for the links. I was checking the sources. They are relevant to my
functional use case. But I will be using the TransportClient Java API, not
the REST client.

Can you please tell me how I can find/modify these classes/sources to get
the appropriate classes for intercepting the SearchResponse when invoked
from a TransportClient?


Thanks,
Sandeep

On Wed, Aug 27, 2014 at 6:38 PM, joergpra...@gmail.com <
joergpra...@gmail.com> wrote:

> Have a look at array-format or csv plugin, they are processing the
> SearchResponse to output it in another format:
>
> https://github.com/jprante/elasticsearch-arrayformat
>
> https://github.com/jprante/elasticsearch-csv
>
> Jörg
>
>
> On Wed, Aug 27, 2014 at 3:05 PM, 'Sandeep Ramesh Khanzode' via
> elasticsearch  wrote:
>
>> Hi,
>>
>> Is there any action/module that I can extend/register/add so that I can
>> intercept the SearchResponse on the server node before the response is sent
>> back to the TransportClient on the calling box?
>>
>> Thanks,
>> Sandeep
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/559a5c68-4567-425f-9842-7f2fe6755095%40googlegroups.com
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to a topic in the
> Google Groups "elasticsearch" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/elasticsearch/o6RZL4KwJVs/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAKdsXoGJ_%3D5RnyFqMP_AX4744z6tdAp8cfLBi_OqzLM23_rqzw%40mail.gmail.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>



Re: Elasticsearch 1.4.0 release data?

2014-09-10 Thread joergpra...@gmail.com
I use the Github issue tracker to watch the progress of the fabulous ES dev
team

https://github.com/elasticsearch/elasticsearch/labels/v1.4.0

Today: 20 issues left, 4 blockers. Looks like it will still take some days.

Jörg


On Wed, Sep 10, 2014 at 11:39 AM, Dan Tuffery  wrote:

> Is there a release date scheduled for ES 1.4.0? I need the child
> aggregation for the project I'm working on at the moment.
>
> https://github.com/elasticsearch/elasticsearch/pull/6936
>
> Dan
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/0238c4fd-a702-4fca-8bcc-3dab6d71bc6f%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>



Re: [ANN] Elasticsearch Simple Action Plugin

2014-09-10 Thread joergpra...@gmail.com
The plugin is for 1.2; I have to update the simple action plugin to
Elasticsearch 1.3.

Thanks for the reminder

Jörg


On Wed, Sep 10, 2014 at 11:08 AM, 'Sandeep Ramesh Khanzode' via
elasticsearch  wrote:

> Hi Jorg,
>
> I was trying to install this plugin on ES v1.3.1. I am getting the errors
> similar to below. Can you please tell me what has changed and how I can
> rectify? Thanks,
>
> 4) No implementation for
> java.util.Map org.elasticsearch.action.support.TransportAction> was bound.
>   while locating java.util.Map org.elasticsearch.action.support.TransportAction>
> for parameter 1 at
> org.elasticsearch.client.node.NodeClusterAdminClient.(Unknown Source)
>   while locating org.elasticsearch.client.node.NodeClusterAdminClient
> for parameter 1 at
> org.elasticsearch.client.node.NodeAdminClient.(Unknown Source)
>   while locating org.elasticsearch.client.node.NodeAdminClient
> for parameter 2 at
> org.elasticsearch.client.node.NodeClient.(Unknown Source)
>   at
> org.elasticsearch.client.node.NodeClientModule.configure(NodeClientModule.java:38)
>
> 5) No implementation for
> java.util.Map org.elasticsearch.action.support.TransportAction> was bound.
>   while locating java.util.Map org.elasticsearch.action.support.TransportAction>
> for parameter 1 at
> org.elasticsearch.client.node.NodeIndicesAdminClient.(Unknown Source)
>   at
> org.elasticsearch.client.node.NodeClientModule.configure(NodeClientModule.java:36)
>
> 6) No implementation for
> java.util.Map org.elasticsearch.action.support.TransportAction> was bound.
>   while locating java.util.Map org.elasticsearch.action.support.TransportAction>
> for parameter 1 at
> org.elasticsearch.client.node.NodeIndicesAdminClient.(Unknown Source)
>   while locating org.elasticsearch.client.node.NodeIndicesAdminClient
> for parameter 2 at
> org.elasticsearch.client.node.NodeAdminClient.(Unknown Source)
>   at
> org.elasticsearch.client.node.NodeClientModule.configure(NodeClientModule.java:37)
>
> 7) No implementation for
> java.util.Map org.elasticsearch.action.support.TransportAction> was bound.
>   while locating java.util.Map org.elasticsearch.action.support.TransportAction>
> for parameter 1 at
> org.elasticsearch.client.node.NodeIndicesAdminClient.(Unknown Source)
>   while locating org.elasticsearch.client.node.NodeIndicesAdminClient
> for parameter 2 at
> org.elasticsearch.client.node.NodeAdminClient.(Unknown Source)
>   while locating org.elasticsearch.client.node.NodeAdminClient
> for parameter 2 at
> org.elasticsearch.client.node.NodeClient.(Unknown Source)
>   at
> org.elasticsearch.client.node.NodeClientModule.configure(NodeClientModule.java:38)
>
> 8) No implementation for org.elasticsearch.action.GenericAction annotated
> with @org.elasticsearch.common.inject.multibindings.Element(setNam
> e=,uniqueId=275) was bound.
>   at org.elasticsearch.action.ActionModule.configure(ActionModule.java:304)
>
> 9) An exception was caught and reported. Message: null
>   at
> org.elasticsearch.common.inject.InjectorShell$Builder.build(InjectorShell.java:130)
>
> 9 errors
> at
> org.elasticsearch.common.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:344)
> at
> org.elasticsearch.common.inject.InjectorBuilder.initializeStatically(InjectorBuilder.java:151)
> at
> org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:102)
> at
> org.elasticsearch.common.inject.Guice.createInjector(Guice.java:93)
> at
> org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70)
> at
> org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:59)
> at
> org.elasticsearch.node.internal.InternalNode.(InternalNode.java:192)
> at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:159)
> at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:70)
> at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:203)
> at
> org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
> Caused by: java.lang.reflect.MalformedParameterizedTypeException
> at
> sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.validateConstructorArguments(ParameterizedTypeImpl.java:58)
> at
> sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.(ParameterizedTypeImpl.java:51)
> at
> sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.make(ParameterizedTypeImpl.java:92)
> at
> sun.reflect.generics.factory.CoreReflectionFactory.makeParameterizedType(CoreReflectionFactory.java:105)
> at
> sun.reflect.generics.visitor.Reifier.visitClassTypeSignature(Reifier.java:140)
> at
> sun.reflect.generics.tree.ClassTypeSignature.accept(ClassTypeSignature.java:49)
> at
> sun.reflect.generics.repository.ClassRepository.getSuperclass(ClassRepository.java:86)
> at java.lang.Class.getGenericSuperclass(Class.java:764)
>   

Re: Kibana ElasticSearch issue "Error Could not contact Elasticsearch at http:/address:9200. Please ensure that Elasticsearch is reachable from your system."

2014-09-10 Thread Mark Walkom
Because kibana connects from the users desktop to ES.

You can reverse proxy this very easily; there are a few example configs in
the Kibana code to help, and it's worth doing.
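For reference, a minimal nginx sketch of such a reverse proxy (the hostname and paths are placeholders; Kibana 3's config.js would then point its `elasticsearch` setting at the proxied path instead of port 9200):

```nginx
server {
    listen 80;
    server_name kibana.example.com;   # placeholder

    root /usr/share/kibana3;          # wherever Kibana 3 is installed

    # Only port 80 needs to be reachable from users' desktops;
    # Elasticsearch stays bound to localhost.
    location /es/ {
        proxy_pass http://127.0.0.1:9200/;
    }
}
```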

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 10 September 2014 19:31, Jonathan H  wrote:

> Hey everyone.
>
> I have installed logstash, elasticsearch and Kibana 3 successfully,
> however when I ask a member on my team to view kibana he see this "Error
> Could not contact Elasticsearch at http:/address:9200. Please ensure that
> Elasticsearch is reachable from your system. " (Please find image
> attached).
>
> My understanding was that kibana was the one to query Elasticsearch. Why
> does the user need access to port 9200? Is there anyway around this issue
> with out opening the port up?
>
> Regards,
> Jonathan Hickey
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/0d3f7aa5-69e0-4a6d-b8fa-9c10069d96d0%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>



Elasticsearch 1.4.0 release data?

2014-09-10 Thread Dan Tuffery
Is there a release date scheduled for ES 1.4.0? I need the child 
aggregation for the project I'm working on at the moment.

https://github.com/elasticsearch/elasticsearch/pull/6936

Dan



Kibana ElasticSearch issue "Error Could not contact Elasticsearch at http:/address:9200. Please ensure that Elasticsearch is reachable from your system."

2014-09-10 Thread Jonathan H
Hey everyone. 

I have installed Logstash, Elasticsearch and Kibana 3 successfully; I can 
view Kibana and see that all the logs are coming in. However, when I ask a 
member of my team to view Kibana, he sees this: "Error Could not contact 
Elasticsearch at http:/address:9200. Please ensure that Elasticsearch is 
reachable from your system." (Please find image attached.)

My understanding was that Kibana was the one to query Elasticsearch. Why 
does the user need access to port 9200? Is there any way around this issue 
without opening the port up?

Regards,
Jonathan





Re: Can I find a list of indices containing documents which contain a specified term?

2014-09-10 Thread Chris Lees
Apologies for the delay -- upgrading took a while to get through.

You are right -- 1.3.2 resolved this issue and I'm now able to run 
aggregations on _index.

Thanks for your help.
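For reference, restricting the aggregation with a term query yields only the indices that actually contain the selected client; a sketch against 1.3.x (the field name `client` and value `acme` are placeholders):

```sh
curl -XPOST 'localhost:9200/_search?search_type=count&pretty' -d '{
  "query": { "term": { "client": "acme" } },
  "aggregations": {
    "indices_with_client": { "terms": { "field": "_index" } }
  }
}'
```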

On Monday, September 1, 2014 4:13:31 PM UTC+1, vineeth mohan wrote:
>
> Hello Chris , 
>
> I am using ES 1.3.1
> Can you give it a try on that version.
>
> Thanks
>  Vineeth
>
>
> On Mon, Sep 1, 2014 at 6:59 PM, Chris Lees  > wrote:
>
>>
>> Same result I'm afraid... 
>>
>> {
>> "took": 62,
>> "timed_out": false,
>> "_shards": {
>> "total": 147,
>> "successful": 144,
>> "failed": 0
>> },
>> "hits": {
>> "total": 9975671,
>> "max_score": 1.0,
>> "hits": [
>> ... results removed as data is sensitive, but I see correct 
>> documents returned in here ...
>> ]
>> },
>> "aggregations": {
>> "aggs": {
>> "buckets": []
>> }
>> }
>> }
>>
>> I'm still running Elasticsearch 1.1.1 -- is this something that changed 
>> after that perhaps?
>>
>> Thanks for your help.
>>
>>
>> On Monday, September 1, 2014 2:23:17 PM UTC+1, vineeth mohan wrote:
>>
>>> Hello Chris , 
>>>
>>> That is strange , its working fine on my side.
>>>
>>> Can you run the below and paste the result - 
>>>
>>> curl -XPOST 'http://localhost:9200/_search' -d '{
>>>   "aggregations": {
>>> "aggs": {
>>>   "terms": {
>>> "field": "_index"
>>>   }
>>> }
>>>   }
>>> }'
>>> Thanks
>>>   Vineeth
>>>
>>>
>>> On Mon, Sep 1, 2014 at 6:17 PM, Chris Lees  wrote:
>>>
 Thanks Vineeth.

 Unfortunately it doesn't return any results in the aggregations result.

 Input query:
 GET _search

 {
   "aggregations": {
 "aggs": {
   "terms": {
 "field": "_index"
   }
 }
   }
 }

 Result JSON showing 26K hits (correct), but no index aggregations:
 {
"took": 4,
"timed_out": false,
"_shards": {
   "total": 57,
   "successful": 57,
   "failed": 0
},
"hits": {
   "total": 26622,
   "max_score": 1,
   "hits": [...]
},
"aggregations": {
   "aggs": {
  "buckets": []

   }
}
 }



 On Monday, September 1, 2014 1:40:00 PM UTC+1, vineeth mohan wrote:

> Hello Chris , 
>
> This should work - 
>
> {
> "query" : {
> // GIVE QUERY HERE
> },
>   "aggregations": {
>  "aggs": {
>   "terms": {
> "field": "_index"
>   }
> }
>   }
> }
>
> Thanks
>Vineeth
>
>
> On Mon, Sep 1, 2014 at 3:10 PM, Chris Lees  wrote:
>
>>
>> I'm building a simple app which presents the user with two drop-downs 
>> to easily filter data: one for day (mapping to my daily indices), and 
>> one 
>> for client (a term within documents).
>>
>> I'm currently finding indices using curl -XGET 
>> localhost:9200/_aliases, and a simple aggregation query to get a list of 
>> known clients over all indices. It works, but since not every client is 
>> present on every date it feels clunky when the client is known but the 
>> list 
>> of dates still contains all indices, many of which are irrelevant for 
>> the 
>> selected client.
>>
>> Can anyone recommend a good way of finding a list of indices in which 
>> there is at least one document containing a specified term please? Thank 
>> you very much.
>>  
>> -- 
>> You received this message because you are subscribed to the Google 
>> Groups "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, 
>> send an email to elasticsearc...@googlegroups.com.
>>
>> To view this discussion on the web visit https://groups.google.com/d/
>> msgid/elasticsearch/3dcf46da-3eeb-4503-a348-365e3f0fd7a0%40goo
>> glegroups.com 
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  -- 
 You received this message because you are subscribed to the Google 
 Groups "elasticsearch" group.
 To unsubscribe from this group and stop receiving emails from it, send 
 an email to elasticsearc...@googlegroups.com.
 To view this discussion on the web visit https://groups.google.com/d/
 msgid/elasticsearch/a485bf59-4ab6-43a3-b3df-172b8d09e7ba%
 40googlegroups.com 
 
 .

 For more options, visit https://groups.google.com/d/optout.

>>>

Re: [ANN] Elasticsearch Simple Action Plugin

2014-09-10 Thread 'Sandeep Ramesh Khanzode' via elasticsearch
Hi Jorg,

I was trying to install this plugin on ES v1.3.1. I am getting errors 
similar to those below. Can you please tell me what has changed and how I can 
rectify them? Thanks,

4) No implementation for 
java.util.Map was bound.
  while locating java.util.Map
for parameter 1 at 
org.elasticsearch.client.node.NodeClusterAdminClient.(Unknown Source)
  while locating org.elasticsearch.client.node.NodeClusterAdminClient
for parameter 1 at 
org.elasticsearch.client.node.NodeAdminClient.(Unknown Source)
  while locating org.elasticsearch.client.node.NodeAdminClient
for parameter 2 at 
org.elasticsearch.client.node.NodeClient.(Unknown Source)
  at 
org.elasticsearch.client.node.NodeClientModule.configure(NodeClientModule.java:38)

5) No implementation for 
java.util.Map was bound.
  while locating java.util.Map
for parameter 1 at 
org.elasticsearch.client.node.NodeIndicesAdminClient.(Unknown Source)
  at 
org.elasticsearch.client.node.NodeClientModule.configure(NodeClientModule.java:36)

6) No implementation for 
java.util.Map was bound.
  while locating java.util.Map
for parameter 1 at 
org.elasticsearch.client.node.NodeIndicesAdminClient.(Unknown Source)
  while locating org.elasticsearch.client.node.NodeIndicesAdminClient
for parameter 2 at 
org.elasticsearch.client.node.NodeAdminClient.(Unknown Source)
  at 
org.elasticsearch.client.node.NodeClientModule.configure(NodeClientModule.java:37)

7) No implementation for 
java.util.Map was bound.
  while locating java.util.Map
for parameter 1 at 
org.elasticsearch.client.node.NodeIndicesAdminClient.(Unknown Source)
  while locating org.elasticsearch.client.node.NodeIndicesAdminClient
for parameter 2 at 
org.elasticsearch.client.node.NodeAdminClient.(Unknown Source)
  while locating org.elasticsearch.client.node.NodeAdminClient
for parameter 2 at 
org.elasticsearch.client.node.NodeClient.(Unknown Source)
  at 
org.elasticsearch.client.node.NodeClientModule.configure(NodeClientModule.java:38)

8) No implementation for org.elasticsearch.action.GenericAction annotated 
with @org.elasticsearch.common.inject.multibindings.Element(setNam
e=,uniqueId=275) was bound.
  at org.elasticsearch.action.ActionModule.configure(ActionModule.java:304)

9) An exception was caught and reported. Message: null
  at 
org.elasticsearch.common.inject.InjectorShell$Builder.build(InjectorShell.java:130)

9 errors
at 
org.elasticsearch.common.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:344)
at 
org.elasticsearch.common.inject.InjectorBuilder.initializeStatically(InjectorBuilder.java:151)
at 
org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:102)
at 
org.elasticsearch.common.inject.Guice.createInjector(Guice.java:93)
at 
org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70)
at 
org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:59)
at 
org.elasticsearch.node.internal.InternalNode.(InternalNode.java:192)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:159)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:70)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:203)
at 
org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
Caused by: java.lang.reflect.MalformedParameterizedTypeException
at 
sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.validateConstructorArguments(ParameterizedTypeImpl.java:58)
at 
sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.(ParameterizedTypeImpl.java:51)
at 
sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl.make(ParameterizedTypeImpl.java:92)
at 
sun.reflect.generics.factory.CoreReflectionFactory.makeParameterizedType(CoreReflectionFactory.java:105)
at 
sun.reflect.generics.visitor.Reifier.visitClassTypeSignature(Reifier.java:140)
at 
sun.reflect.generics.tree.ClassTypeSignature.accept(ClassTypeSignature.java:49)
at 
sun.reflect.generics.repository.ClassRepository.getSuperclass(ClassRepository.java:86)
at java.lang.Class.getGenericSuperclass(Class.java:764)
at 
org.elasticsearch.common.inject.internal.MoreTypes.getGenericSupertype(MoreTypes.java:390)
at 
org.elasticsearch.common.inject.TypeLiteral.getSupertype(TypeLiteral.java:262)
at 
org.elasticsearch.common.inject.spi.InjectionPoint.addInjectionPoints(InjectionPoint.java:341)
at 
org.elasticsearch.common.inject.spi.InjectionPoint.forInstanceMethodsAndFields(InjectionPoint.java:287)
at 
org.elasticsearch.common.inject.spi.InjectionPoint.forInstanceMethodsAndFields(InjectionPoint.java:309)
at 
org.elasticsearch.common.inject.internal.BindingBuilder.toInstance(BindingBuilder.java:78)
at 
org.elasticsearch.action.ActionModule.configure(ActionModule.java:304)
at 
org.elasticsearch.common.inject.Abs

Re: [hadoop] Pipelining Hadoop/Spark with ElasticSearch

2014-09-10 Thread Costin Leau
One can specify the id for each document for quite some time now, through `es.mapping.id` parameter [1] - simply point 
it to the field containing the ID and you're good to go.


http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/2.1.Beta/configuration.html#_mapping
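The `es.mapping.id` setting is passed like any other es-hadoop/es-spark property; a sketch (the index, type, and field names are placeholders):

```properties
# target index/type (placeholder)
es.resource = myindex/mytype
# document field whose value becomes the Elasticsearch _id (placeholder field name)
es.mapping.id = uuid
```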

On 9/10/14 9:02 AM, aarthi ranganathan wrote:

Hi
Just wanted to know if the code for changing auto generated id has been 
committed or is it yet to be changed? I am using
elasticsearch-spark_2.10.Beta 1 version.
Thanks
Aarthi
Thanks
On Friday, 25 October 2013 19:40:05 UTC+5:30, Costin Leau wrote:

On 25/10/2013 4:26 PM, Han JU wrote:
> Thanks. Seems like I misunderstand something.
>
> Now I managed to push documents to ES, and I'd like to know if these are 
supported by current version of
> elasticsearch-binding:
>

I assume you mean elasticsearch-hadoop.


> - specifying id for index. Now the "_id" for the documents pushed are 
auto generated
> - the update api
>

This is being currently worked on and we should have something in trunk by 
next week.

> Thanks.
>
> 在 2013年10月24日星期四UTC+2下午7时05分31秒,Costin Leau写道:
>
> Hi,
>
> I replied on IRC but you left. See the docs here [1]. The value 
represents your document and since it might contain
> multiple fields, ESOuputFormat expects a Map (MapWritable) which 
contains the actual document. Say your doc is
> something
> like { foo: 123 } then your map would be [Text("foo"):new 
LongWritable(123)].
>
> The docs provides more information about the Writable types supported 
(basically all of them) and their equivalent
> ES types.
>
> 
[1]http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/mapreduce.html


> 
>
>
> On 24/10/2013 7:53 PM, Han JU wrote:
> > Hi,
> >
> > I'm trying to write hadoop aggregation results to ES.
> > Say I've K, V for key and value classes respectively. According to 
elasticsearch-hadoop api/blog, the key is ignored and
> > the value should be a Map.
> > I'm a little bit confused here: do I need an extra map job to convert my 
(K, V) to (Null, Map) ?
> > Is there any complete examples of using hadoop and ES together?
> >
> > Thanks.
> >
> > --
> > You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
> > To unsubscribe from this group and stop receiving emails from it, 
send an email to
> >elasticsearc...@googlegroups.com .
> > For more options, visit https://groups.google.com/groups/opt_out.
>
> --
> Costin
>

--
Costin



--
Costin



update parent(_parent) from JSON rather than query

2014-09-10 Thread Santosh B
Usually the parent field is set using:


POST /dsdemo/content/child11787?parent=parent1
{
"title": "The 111 document",
"body": "This 111 could be huge #14"
}


Instead of this, can we set the parent from the JSON body, like this:

POST /dsdemo/content/child11787
{
"title": "The 111 document",
"body": "This 111 could be huge #14",
"parent":"parent1"
}

or maybe:

POST /dsdemo/content/child11787
{
"title": "The 111 document",
"body": "This 111 could be huge #14",
"_parent":"parent1"
}
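I don't believe the index API honors a `_parent` field in the document body; as far as I know the parent id has to go into the `parent` query parameter (the bulk API accepts a `_parent` field in the action metadata instead). A minimal sketch of building such a request, assuming a node on localhost:9200:

```python
import json

ES = "http://localhost:9200"  # assumed local node

def build_index_request(index, doc_type, doc_id, doc, parent=None):
    """Build the URL and JSON body for an index request.

    The parent id goes into the `parent` query parameter rather than
    into the document body.
    """
    url = f"{ES}/{index}/{doc_type}/{doc_id}"
    if parent is not None:
        url += f"?parent={parent}"
    return url, json.dumps(doc)

url, body = build_index_request(
    "dsdemo", "content", "child11787",
    {"title": "The 111 document", "body": "This 111 could be huge #14"},
    parent="parent1",
)
print(url)  # http://localhost:9200/dsdemo/content/child11787?parent=parent1
```

Any HTTP client can then POST `body` to `url`.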




Re: stuck thread problem?

2014-09-10 Thread Martijn v Groningen
Hieu: The stack trace you shared is from a search request with an mvel
script in a function_score query. I don't think this is what causes
threads to get stuck; it just shows up here because, in general, a query
with scripts takes more cpu time. You can easily verify this: if you stop
search requests temporarily, the shared stack trace shouldn't appear in
the hot threads output anymore. If a node gets slower over time, there may
be a different issue, for example the jvm slowly running out of memory and
garbage collections taking longer.
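To make that check concrete, one rough way to see whether the mvel frames disappear from the `GET /_nodes/hot_threads` response once search traffic stops is to split the output per thread and look for the substring. A sketch (the hot_threads text format is not a stable API, so treat the parsing as approximate):

```python
def stacks_mentioning(hot_threads_text, needle="mvel"):
    """Split a hot_threads response into per-thread sections and return
    those whose frames mention the given substring.

    The split is approximate: each section starts right after the
    "cpu usage by thread" marker and runs up to the next one.
    """
    sections = hot_threads_text.split("cpu usage by thread")
    # sections[0] is the node header; the rest each begin with a thread name
    return [s for s in sections[1:] if needle in s]

# Trimmed-down sample in the shape of the output quoted in this thread.
sample = """::: [node1]
   85.8% (857.7ms out of 1s) cpu usage by thread 'elasticsearch[cluster1][search][T#3]'
     org.elasticsearch.common.mvel2.MVELRuntime.execute(MVELRuntime.java:169)
   10.0% (100ms out of 1s) cpu usage by thread 'elasticsearch[cluster1][bulk][T#1]'
     org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:100)
"""
print(len(stacks_mentioning(sample)))  # 1: only the search thread runs mvel
```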


On 10 September 2014 05:26, Hieu Nguyen  wrote:

> Thanks for the response, Martijn! We'll consider upgrading, but it'd be
> great to root cause the issue.
>
> On Tuesday, September 9, 2014 12:03:57 AM UTC-7, Martijn v Groningen wrote:
>>
>> Patrick: I have never seen this, but it means the openjdk on FreeBSD
>> doesn't support cpu sampling of threads. Not sure, but maybe you can try
>> with an Oracle jdk/jre? Otherwise jstack should be used in order to figure
>> out which threads ES gets stuck on.
>>
>> Hieu: This doesn't look like the issue that was fixed, since the threads
>> are in mvel (scripting) code and not in scan/scroll code. 0.90.12 is an old
>> version and I would recommend upgrading. Also, mvel is deprecated and will
>> only be available via a plugin:
>> https://github.com/elasticsearch/elasticsearch/pull/6571 and
>> https://github.com/elasticsearch/elasticsearch/pull/6610
>>
>>
>> On 6 September 2014 16:12, Hieu Nguyen  wrote:
>>
>>> We have seen a similar issue in our cluster (CPU usage and search time
>>> suddenly went up slowly for the master node over a period of one day, until
>>> we restarted). Is there a easy way to confirm that it's indeed the same
>>> issue mentioned here?
>>>
>>> Below is the output of our hot threads on this node (version 0.90.12):
>>>
>>>85.8% (857.7ms out of 1s) cpu usage by thread
>>> 'elasticsearch[cluster1][search][T#3]'
>>>  8/10 snapshots sharing following 30 elements
>>>java.lang.ThreadLocal$ThreadLocalMap.set(ThreadLocal.java:429)
>>>java.lang.ThreadLocal$ThreadLocalMap.access$100(
>>> ThreadLocal.java:261)
>>>java.lang.ThreadLocal.set(ThreadLocal.java:183)
>>>org.elasticsearch.common.mvel2.optimizers.OptimizerFactory.
>>> clearThreadAccessorOptimizer(OptimizerFactory.java:114)
>>>org.elasticsearch.common.mvel2.MVELRuntime.execute(
>>> MVELRuntime.java:169)
>>>org.elasticsearch.common.mvel2.compiler.CompiledExpression.
>>> getDirectValue(CompiledExpression.java:123)
>>>org.elasticsearch.common.mvel2.compiler.
>>> CompiledExpression.getValue(CompiledExpression.java:119)
>>>org.elasticsearch.script.mvel.MvelScriptEngineService$
>>> MvelSearchScript.run(MvelScriptEngineService.java:191)
>>>org.elasticsearch.script.mvel.MvelScriptEngineService$
>>> MvelSearchScript.runAsDouble(MvelScriptEngineService.java:206)
>>>org.elasticsearch.common.lucene.search.function.
>>> ScriptScoreFunction.score(ScriptScoreFunction.java:54)
>>>org.elasticsearch.common.lucene.search.function.
>>> FunctionScoreQuery$CustomBoostFactorScorer.score(
>>> FunctionScoreQuery.java:175)
>>>org.apache.lucene.search.TopScoreDocCollector$
>>> OutOfOrderTopScoreDocCollector.collect(TopScoreDocCollector.java:140)
>>>org.apache.lucene.search.TimeLimitingCollector.collect(
>>> TimeLimitingCollector.java:153)
>>>org.apache.lucene.search.Scorer.score(Scorer.java:65)
>>>org.apache.lucene.search.IndexSearcher.search(
>>> IndexSearcher.java:621)
>>>org.elasticsearch.search.internal.ContextIndexSearcher.
>>> search(ContextIndexSearcher.java:162)
>>>org.apache.lucene.search.IndexSearcher.search(
>>> IndexSearcher.java:491)
>>>org.apache.lucene.search.IndexSearcher.search(
>>> IndexSearcher.java:448)
>>>org.apache.lucene.search.IndexSearcher.search(
>>> IndexSearcher.java:281)
>>>org.apache.lucene.search.IndexSearcher.search(
>>> IndexSearcher.java:269)
>>>org.elasticsearch.search.query.QueryPhase.execute(
>>> QueryPhase.java:117)
>>>org.elasticsearch.search.SearchService.executeQueryPhase(
>>> SearchService.java:244)
>>>org.elasticsearch.search.action.SearchServiceTransportAction.
>>> sendExecuteQuery(SearchServiceTransportAction.java:202)
>>>org.elasticsearch.action.search.type.
>>> TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(
>>> TransportSearchQueryThenFetchAction.java:80)
>>>org.elasticsearch.action.search.type.TransportSearchTypeAction$
>>> BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:216)
>>>org.elasticsearch.action.search.type.TransportSearchTypeAction$
>>> BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:203)
>>>org.elasticsearch.action.search.type.TransportSearchTypeAction$
>>> BaseAsyncAction$2.run(TransportSearchTypeAction.java:186)
>>>java.util.concurrent.Thr

Re: Hit Counts within a Document

2014-09-10 Thread Darren Trzynka
Vineeth,
I just saw your response today, and I came to the same conclusion yesterday
after you gave me a nice working example!  I took it a step further, doing
the same grouping by the field that you did, and it came out nicely.
Something is sinking in with me anyway :-)

Besides some possible language-support issues, the biggest challenge I see
is stemming: if stemming is involved (you search for federal and hits are
returned on federal, federalizing, etc.), then just looking for federal in
the term counts wouldn't find all the matches. The other is case
sensitivity when looking at the term frequencies (the user typed in
"Federal cases", which matches by default on federal and cases), so it
seems you would need to lowercase the lookup for the term frequencies.
What do you think about these cases?

Thanks!
Darren
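For the case-sensitivity part, lowercasing the term before building both the `term` query and the `.tf()` lookup should line up with what the default (lowercasing) analyzer stores; stemming would additionally require the looked-up term to be analyzed the same way the field was. A sketch of building Vineeth's request with that normalization (field and term values here are just examples):

```python
import json

def tf_query(field, term):
    """Build the term-frequency request from this thread, lowercasing the
    user's term so the _index[field][term].tf() lookup matches what the
    default analyzer actually indexed."""
    term = term.lower()
    return json.dumps({
        "fields": [field],
        "query": {"term": {field: term}},
        "script_fields": {
            "tf": {"script": f"_index['{field}']['{term}'].tf()"}
        }
    })

body = tf_query("text", "Federal")
```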

On Mon, Sep 8, 2014 at 11:28 AM, vineeth mohan 
wrote:

> Hello Darren,
>
> The following query does what you asked for (replace FIELD with the
> field you are looking for):
>
> {
>   "fields": [
> "text"
>   ],
>   "query": {
> "term": {
>   "text": "god"
> }
>   },
>   "script_fields": {
> "tf": {
>   "script": "_index['FIELD']['cat'].tf()"
> }
>   }
> }
>
> For the second one, use:
>
> {
>   "query": {
> "term": {
>   "FIELD": "CAT"
> }
>   },
>   "aggs": {
> "groupName": {
>   "terms": {
> "field": "GROUP_FIELD"
>   },
>   "aggs": {
> "catStats": {
>   "sum": {
> "script": "_index['FIELD']['CAT'].tf()"
>   }
> }
>   }
> }
>   }
> }
>
> Thanks
>Vineeth
>
>
> On Mon, Sep 8, 2014 at 6:24 PM, Darren Trzynka 
> wrote:
>
>> Vineeth,
>> Thanks for responding.  What I am looking for: given a search result, how
>> can I understand the frequency of the hits within each document?  For
>> example, I perform a full-text search on cat and 5 documents are returned.
>> Today I could get the matched terms highlighted, but that is of course
>> quite nasty.  Instead I would like the documents returned with something
>> like the following for each document:
>> Document 1 (group: 1): cat - 5
>> Document 2 (group: 2): cat - 3
>> Document 3 (group: 1): cat - 2
>> ...
>> Document n - cat - #
>>
>> Also, there is other metadata that it would be nice to aggregate on too
>> so I could get an answer for the above scenario:
>> group : 1 - cat - 7
>> group : 2 - cat - 3
>>
>> Thanks
>> Darren
>>
>>
>
>



More Like This Note

2014-09-10 Thread phenrigomes
How do I satisfy this note: "In order to use the mlt feature a mlt_field needs 
to either be stored, store term_vector, or _source needs to be enabled"?
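Any one of those three options is enough. For example, a mapping that stores term vectors for a field might look like this (the index, type, and field names are placeholders):

```
curl -XPUT "http://localhost:9200/my_index/my_type/_mapping" -d'
{
  "my_type": {
    "properties": {
      "name": {
        "type": "string",
        "term_vector": "yes"
      }
    }
  }
}'
```

Alternatively, setting `"store": true` on the field, or simply leaving `_source` enabled (the default), should also satisfy the note.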



--
View this message in context: 
http://elasticsearch-users.115913.n3.nabble.com/More-Like-This-Note-tp4063184.html
Sent from the ElasticSearch Users mailing list archive at Nabble.com.



More Like This - Results Is Empty

2014-09-10 Thread phenrigomes
Why are the _mlt results empty, even though there is data in the index?

GET /megacorp/deas/1/_mlt?mlt_fields=name&min_doc_freq=1

{
   "took": 3,
   "timed_out": false,
   "_shards": {
  "total": 5,
  "successful": 5,
  "failed": 0
   },
   "hits": {
  "total": 0,
  "max_score": null,
  "hits": []
   }
}

Type Mapping:

{
   "megacorp": {
  "mappings": {
 "deas": {
"properties": {
   "fields": {
  "type": "string"
   },
   "ids": {
  "type": "string"
   },
   "max_query_terms": {
  "type": "long"
   },
   "min_term_freq": {
  "type": "long"
   },
   "more_like_this": {
  "properties": {
 "docs": {
"properties": {
   "_id": {
  "type": "string"
   },
   "_index": {
  "type": "string"
   },
   "_type": {
  "type": "string"
   }
}
 },
 "fields": {
"type": "string"
 },
 "ids": {
"type": "long"
 },
 "max_query_terms": {
"type": "long"
 },
 "min_term_freq": {
"type": "long"
 }
  }
   },
   "name": {
  "type": "string",
  "store": true
   }
}
 }
  }
   }
}






Highlight works not always!

2014-09-10 Thread Ramy
Can someone tell me why the highlighting does not always work? What is my 
mistake?

This is my mapping:

curl -XPUT "http://localhost:9200/my_index" -d'
{
  "settings": {
"analysis": {
  "analyzer": {
"autocomplete": {
  "type": "custom",
  "tokenizer": "ngram_tokenizer",
  "filter": [ "lowercase" ]
}
  },
  "tokenizer": {
"ngram_tokenizer": {
  "type": "ngram",
  "min_gram": 1,
  "max_gram": 20,
  "token_chars": ["letter", "digit"]
}
  }
}
  },
  ...
}'


and here is my query:

curl -XGET "http://localhost:9200/my_index/my_type/_search" -d'
{
  "_source": false,
  "size": 5,
  "query": {
"multi_match": {
  "query": "*tisch*",
  "fields": [
"*_de.autocomplete"
  ],
  "operator": "and"
}
  },
  "highlight": {
"pre_tags": [
  ""
],
"post_tags": [
  ""
],
"fields": {
  "*_de.autocomplete": {}
}
  }
}'


and this is my result:

{
   "took": 220,
   "timed_out": false,
   "_shards": {
  "total": 5,
  "successful": 5,
  "failed": 0
   },
   "hits": {
  "total": 3649,
  "max_score": 0.88375586,
  "hits": [
 {
"_index": "my_index",
"_type": "my_type",
"_id": "1",
"_score": 0.88375586,
"highlight": {
   "group_name_3_de.autocomplete": [
  "Konsol*tisch*e",
  "Garten-Ess*tisch*e"
   ],
   "group_name_2_de.autocomplete": [
  "*Tisch*e",
  "Gartentische"
   ]
}
 },
 {
"_index": "my_index",
"_type": "my_type",
"_id": "2",
"_score": 0.88375586,
"highlight": {
   "group_name_3_de.autocomplete": [
  "Schreib*tisch*e",
  "Ess*tisch*e"
   ],
   "group_name_2_de.autocomplete": [
  "*Tisch*e"
   ]
}
 },
 {
"_index": "my_index",
"_type": "my_type",
"_id": "3",
"_score": 0.88375586,
"highlight": {
   "group_name_3_de.autocomplete": [
  "Einzel*tisch*e"
   ],
   "group_name_2_de.autocomplete": [
  "Büro*tisch*e"
   ]
}
 },
 ...
  ]
   }
}


As you can see, Elasticsearch was able to highlight some words but not 
others. I marked the highlighted ones as *pink* and the missed ones as *red*.

Where is the mistake?

Thank you
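I can't tell from this alone why some occurrences are missed, but it may help to see what the `ngram_tokenizer` above actually indexes: every 1–20 character substring of each token, which is why a word like "Gartentische" legitimately matches the query even where no highlight appears. A rough Python emulation of the tokenizer (ignoring the `token_chars` splitting):

```python
def ngrams(text, min_gram=1, max_gram=20):
    """Rough emulation of the ngram tokenizer in the mapping above:
    every substring of the lowercased token whose length lies between
    min_gram and max_gram."""
    token = text.lower()
    out = []
    for start in range(len(token)):
        for length in range(min_gram, max_gram + 1):
            if start + length > len(token):
                break
            out.append(token[start:start + length])
    return out

# The inner substring "tisch" is indexed for both words, so both match.
print("tisch" in ngrams("Gartentische"))
print("tisch" in ngrams("Tische"))
```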





How to sort fuzzy result by Levenshtein distance?

2014-09-10 Thread Артем Прокопенко
I want exact matches to have more weight in the search results.

Example documents:
dest
test
tesd
desd

For query = test I want to see:
test (Levenshtein distance = 0)
dest (Levenshtein distance = 1)
tesd (Levenshtein distance = 1)
desd (Levenshtein distance = 2)

But instead I see:
dest
desd
test
tesd
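As far as I understand, fuzzy matching in Elasticsearch scores the expanded terms by TF/IDF rather than purely by edit distance, so exact matches are not guaranteed to rank first. If the candidate set is small, one option is to re-sort client-side by Levenshtein distance (a script-based rescore would be the server-side analogue). A sketch using the standard dynamic-programming distance:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

docs = ["dest", "test", "tesd", "desd"]
# Re-rank by distance to the query term; ties keep their original order.
print(sorted(docs, key=lambda d: levenshtein("test", d)))
# ['test', 'dest', 'tesd', 'desd']
```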
