Finding exact value with hyphen on not analyzed field is not working

2015-04-28 Thread asmierzchalski
Hello everyone,

I'm using Logstash in combination with Elasticsearch. I have a field of
type string that is not_analyzed, and its values contain hyphens. When
I try to filter with a term for this field and such a value, I get no
results.

After reading
http://www.elastic.co/guide/en/elasticsearch/guide/master/_finding_exact_values.html
I believe this should be possible. I tried the example from this answer
http://stackoverflow.com/questions/11566838/elastic-search-hyphen-issue-with-term-filter
and it works, so I'm not sure whether the problem is in the index created
by Logstash or in my filter.

Field description in the index:
"objectId":{"type":"string","norms":{"enabled":false},"fields":{"raw{"type":"string","index":"not_analyzed","ignore_above":256}}}

My filter query:
curl -XGET http://localhost:9200/_all/_search -d '{
   "filter" : {
      "term" : {
         "objectId" : "8c9a3c20-744e-44d7-8467-621a9b461002"
      }
   }
}'
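Given the mapping above, the likely culprit is the field being queried: the top-level `objectId` is analyzed (the UUID gets split on its hyphens at index time), and only the `objectId.raw` sub-field is `not_analyzed`. A sketch of the corrected request body (Python stdlib only; assumes the standard Logstash `.raw` multi-field template shown in the mapping):

```python
import json

# Target the not_analyzed sub-field; a term filter on the analyzed
# top-level "objectId" never matches the full hyphenated UUID.
query = {
    "filter": {
        "term": {
            "objectId.raw": "8c9a3c20-744e-44d7-8467-621a9b461002"
        }
    }
}

# The serialized body can then be sent exactly as before:
#   curl -XGET http://localhost:9200/_all/_search -d '<body>'
body = json.dumps(query)
```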




--
View this message in context: 
http://elasticsearch-users.115913.n3.nabble.com/Finding-exact-value-with-hyphen-on-not-analyzed-field-is-not-working-tp4074265.html
Sent from the Elasticsearch Users mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1430290404341-4074265.post%40n3.nabble.com.
For more options, visit https://groups.google.com/d/optout.


Re: Problem with "long" type field

2015-04-28 Thread Stabaoc
So I can't use the Java API?

On Wednesday, April 29, 2015 at 11:43:42 AM UTC+8, Igor Motov wrote:
>
> It typically happens if you put data into elasticsearch using Sense or 
> some other JavaScript-based applications. Large numbers like this one are 
> getting rounded in JavaScript before they reach Elasticsearch. Please see 
> https://github.com/elastic/elasticsearch/issues/5518#issuecomment-38540645 
> for more information about the issue.
>
> On Tuesday, 28 April 2015 23:02:16 UTC-4, Stabaoc wrote:
>>
>> I meet a problem . When I index an value , for example {"id": 
>> -8848340816900692111},
>> then i search it ,it shows that "id": -8848340816900692000. 
>> Anyone can help? I want know why does elasticsearch do this and how can i 
>> deal with.
>> Thanks.
>>
>



Re: Problem with "long" type field

2015-04-28 Thread Stabaoc
Thanks a lot! That helps a lot!

On Wednesday, April 29, 2015 at 11:43:42 AM UTC+8, Igor Motov wrote:
>
> It typically happens if you put data into elasticsearch using Sense or 
> some other JavaScript-based applications. Large numbers like this one are 
> getting rounded in JavaScript before they reach Elasticsearch. Please see 
> https://github.com/elastic/elasticsearch/issues/5518#issuecomment-38540645 
> for more information about the issue.
>
> On Tuesday, 28 April 2015 23:02:16 UTC-4, Stabaoc wrote:
>>
>> I meet a problem . When I index an value , for example {"id": 
>> -8848340816900692111},
>> then i search it ,it shows that "id": -8848340816900692000. 
>> Anyone can help? I want know why does elasticsearch do this and how can i 
>> deal with.
>> Thanks.
>>
>



Re: More memory or more CPU cores help better performance?

2015-04-28 Thread Ishafizan Ishak
The question is subjective. The good thing about ES is that you can scale
out and add servers as needed. Query performance also depends on your index
settings, mappings, and replicas.
I have a cluster instance at DigitalOcean: https://www.digitalocean.com/pricing/

no of nodes: 10
- master: 3 (1gb 1 core)
- client: 5 (512mb 1 core)
- data: 3 (8gb 4 core)

total shards: 82
~160M docs

On Wednesday, April 29, 2015 at 10:17:04 AM UTC+8, Xudong You wrote:
>
> hi,
> I am building ES on cloud Virtual machines, the cloud platform provides 
> different tier VMs to choose, say, 4 CPU cores, 28G memory, or 8 CPU cores, 
> 14G memory etc. Different kind VM has different cost. To save our cost, I 
> want to choose the VM whose cost not exceed our budget and has best 
> performance or query.
> So, from query performance point of view, should I choose VM with more CPU 
> cores or more memory? Anyone has experience on the best combination of CPU 
> & Memory for ES performance?
>
>
>



Re: Problem with "long" type field

2015-04-28 Thread Igor Motov
It typically happens if you put data into elasticsearch using Sense or some 
other JavaScript-based applications. Large numbers like this one are 
getting rounded in JavaScript before they reach Elasticsearch. Please see 
https://github.com/elastic/elasticsearch/issues/5518#issuecomment-38540645 
for more information about the issue.
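The rounding described here is plain IEEE-754 behavior rather than anything Elasticsearch does: JavaScript stores every number as a 64-bit double, which represents integers exactly only up to 2**53. Python's `float` is the same 64-bit type, so the effect can be reproduced locally:

```python
# JavaScript stores every number as an IEEE-754 double, exact for
# integers only up to 2**53; Python's float is the same type.
original = -8848340816900692111

rounded = int(float(original))      # round-trip through a double

print(original == rounded)                             # False: precision lost
print(abs(original) > 2 ** 53)                         # True: beyond exact range
print(float(original) == float(-8848340816900692000))  # True: the value JS prints
                                                       # denotes the same double
```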

On Tuesday, 28 April 2015 23:02:16 UTC-4, Stabaoc wrote:
>
> I meet a problem . When I index an value , for example {"id": 
> -8848340816900692111},
> then i search it ,it shows that "id": -8848340816900692000. 
> Anyone can help? I want know why does elasticsearch do this and how can i 
> deal with.
> Thanks.
>



Re: More memory or more CPU cores help better performance?

2015-04-28 Thread Mark Walkom
Depends - you will want to do some tests to see what sort of resources your
use case requires.
Start with smaller machines and go from there.

On 29 April 2015 at 12:17, Xudong You  wrote:

> hi,
> I am building ES on cloud Virtual machines, the cloud platform provides
> different tier VMs to choose, say, 4 CPU cores, 28G memory, or 8 CPU cores,
> 14G memory etc. Different kind VM has different cost. To save our cost, I
> want to choose the VM whose cost not exceed our budget and has best
> performance or query.
> So, from query performance point of view, should I choose VM with more CPU
> cores or more memory? Anyone has experience on the best combination of CPU
> & Memory for ES performance?
>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/e449f0bb-5c92-4aee-84f5-285171e8070c%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>



Problem with "long" type field

2015-04-28 Thread Stabaoc
I've run into a problem. When I index a value, for example {"id": 
-8848340816900692111},
and then search for it, it comes back as "id": -8848340816900692000.
Can anyone help? I want to know why Elasticsearch does this and how I can 
deal with it.
Thanks.



More memory or more CPU cores help better performance?

2015-04-28 Thread Xudong You
Hi,
I am building ES on cloud virtual machines. The cloud platform offers 
different VM tiers to choose from, say 4 CPU cores with 28 GB of memory, or 
8 CPU cores with 14 GB, each tier at a different cost. To stay within 
budget, I want to choose the VM with the best query performance whose cost 
does not exceed our limit.
So, from a query-performance point of view, should I choose a VM with more 
CPU cores or more memory? Does anyone have experience with the best 
combination of CPU and memory for ES performance?




Re: possible networking problem?

2015-04-28 Thread Mark Walkom
KB4 logs to stdout, so once you start the binary you should see lots of
output in your command prompt.

On 29 April 2015 at 09:23, Colleen Roe  wrote:

> I've searched the Kibana  installation directories and don't see any log
> files.
>
> On Tue, Apr 28, 2015 at 3:39 PM, Mark Walkom  wrote:
>
>> What do your Kibana logs show?
>>
>> On 29 April 2015 at 07:53, Sitka  wrote:
>>
>>> I have installed elasticsearch and kibana.   I started elasticsearch and
>>> did a GET to test it out.  Everything worked.  I installed kibana next.
>>> When I test it doing "http://localhost:5061"; it fails. I inserted a TCP
>>> monitoring tool to see the traffic in  both directions.  Kibana is refusing
>>> the connection with "java.net.ConnectException: Connection refused:
>>> connect".  I also run netstat and indeed port 5061 is up and listening.
>>>
>>> Anyone got any ideas on this?  BTW, everything is running on my desktop.
>>>
>>> Thanks.
>>>
>>> Kibana 4.0.2-windows
>>> elasticsearch 1.5.2
>>>
>>>  --
>>> You received this message because you are subscribed to the Google
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to elasticsearch+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/elasticsearch/b6f1a7e4-20eb-4176-97a0-50f2a1f93911%40googlegroups.com
>>> 
>>> .
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  --
>> You received this message because you are subscribed to a topic in the
>> Google Groups "elasticsearch" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/elasticsearch/TWft6sC0E9U/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/CAEYi1X-LcCu_99SeWYOOaLdNF%3DbTEYsxoWAfb_reOxb%2Bbs_YWg%40mail.gmail.com
>> 
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAChkSb%2BOOkHPEBObSEeW_oFfx%3D_PJ2-D%3DJY_gY-%2BTMErAz_3FQ%40mail.gmail.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>



Re: Cluster falling into YELLOW state during use of Snapshot API

2015-04-28 Thread Mark Walkom
What do your ES logs show?

On 29 April 2015 at 10:02, Steven B  wrote:

> Our Elasticsearch cluster is comprised of 4 very large instances running
> Elasticsearch 1.4.0.
> The Snapshot API is set up to take backups once a day in the evening. We
> maintain 5 snapshots at a time, expiring the old ones as they drop off.
>
> Note that we have a few thousand indices in the cluster.
>
> While the Snapshot API is in progress, many times the cluster will fall
> into a yellow state. Here's an example of the output from GET
> _cluster/health?pretty=true when this happens:
>
> {
>"cluster_name": "elasticsearch-cluster",
>"status": "yellow",
>"timed_out": false,
>"number_of_nodes": 4,
>"number_of_data_nodes": 4,
>...
>"relocating_shards": 0,
>"initializing_shards": 1,
>"unassigned_shards": 1
> }
>
> It would appear the Snapshot API is somehow interfering with shards
> initialization and/or allocation while the Snapshot is in progress. Once
> the cluster falls into the YELLOW state during the Snapshot, it stays in
> the YELLOW state until the snapshot completes. When the Snapshot completes,
> the cluster immediately switches back to a GREEN state after all unassigned
> shards are allocated to nodes.
>
> Has anyone else experienced this behavior with Elasticsearch Snapshots?
> If so, are there thoughts on what might be causing it? Was there a
> resolution?
>
> Thanks,
> Steven
>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/90740d3e-67b5-4094-ae82-a2085425c5bf%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>



Cluster falling into YELLOW state during use of Snapshot API

2015-04-28 Thread Steven B
Our Elasticsearch cluster is comprised of 4 very large instances running 
Elasticsearch 1.4.0.
The Snapshot API is set up to take backups once a day in the evening. We 
maintain 5 snapshots at a time, expiring the old ones as they drop off.

Note that we have a few thousand indices in the cluster.

While the Snapshot API is in progress, many times the cluster will fall 
into a yellow state. Here's an example of the output from GET 
_cluster/health?pretty=true when this happens:

{
   "cluster_name": "elasticsearch-cluster",
   "status": "yellow",
   "timed_out": false,
   "number_of_nodes": 4,
   "number_of_data_nodes": 4,
   ...
   "relocating_shards": 0,
   "initializing_shards": 1,
   "unassigned_shards": 1
}

It would appear the Snapshot API is somehow interfering with shards 
initialization and/or allocation while the Snapshot is in progress. Once 
the cluster falls into the YELLOW state during the Snapshot, it stays in 
the YELLOW state until the snapshot completes. When the Snapshot completes, 
the cluster immediately switches back to a GREEN state after all unassigned 
shards are allocated to nodes.

Has anyone else experienced this behavior with Elasticsearch Snapshots?  If 
so, are there thoughts on what might be causing it? Was there a resolution?

Thanks,
Steven
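While a snapshot runs, the health fields quoted above can be polled and summarized to track how long shards stay in flux. A minimal sketch (field names as in the response above; actually fetching the JSON from `GET _cluster/health` is left out):

```python
import json

def health_summary(health):
    """Reduce a _cluster/health response to (status, shards_in_flux)."""
    in_flux = (health.get("relocating_shards", 0)
               + health.get("initializing_shards", 0)
               + health.get("unassigned_shards", 0))
    return health["status"], in_flux

# The response quoted in the post (elided fields omitted):
health = json.loads("""{
    "cluster_name": "elasticsearch-cluster",
    "status": "yellow",
    "timed_out": false,
    "number_of_nodes": 4,
    "number_of_data_nodes": 4,
    "relocating_shards": 0,
    "initializing_shards": 1,
    "unassigned_shards": 1
}""")

print(health_summary(health))  # ('yellow', 2)
```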




Re: possible networking problem?

2015-04-28 Thread Colleen Roe
I've searched the Kibana  installation directories and don't see any log
files.

On Tue, Apr 28, 2015 at 3:39 PM, Mark Walkom  wrote:

> What do your Kibana logs show?
>
> On 29 April 2015 at 07:53, Sitka  wrote:
>
>> I have installed elasticsearch and kibana.   I started elasticsearch and
>> did a GET to test it out.  Everything worked.  I installed kibana next.
>> When I test it doing "http://localhost:5061"; it fails. I inserted a TCP
>> monitoring tool to see the traffic in  both directions.  Kibana is refusing
>> the connection with "java.net.ConnectException: Connection refused:
>> connect".  I also run netstat and indeed port 5061 is up and listening.
>>
>> Anyone got any ideas on this?  BTW, everything is running on my desktop.
>>
>> Thanks.
>>
>> Kibana 4.0.2-windows
>> elasticsearch 1.5.2
>>
>>  --
>> You received this message because you are subscribed to the Google Groups
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/b6f1a7e4-20eb-4176-97a0-50f2a1f93911%40googlegroups.com
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to a topic in the
> Google Groups "elasticsearch" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/elasticsearch/TWft6sC0E9U/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAEYi1X-LcCu_99SeWYOOaLdNF%3DbTEYsxoWAfb_reOxb%2Bbs_YWg%40mail.gmail.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>



What is the best practice around filtering out search results with curse words

2015-04-28 Thread varun kumar
Hey all,
I want to filter docs containing hate words out of my search results. 
Currently we add a bool filter to every search query with the full list of 
words, and this results in tons of slow queries, since the list of hate 
words is long (so much hatred around :( ).

I was wondering what the best practices are for this kind of spam/hate-word 
filtering.

Here is what we are considering:
1. Pre-process: scan the doc prior to indexing and either mark it bad or 
skip indexing it.
    Problem: the documents are indexed from several processes, and it is 
difficult to enforce the rule on any new component someone writes.

2. Create a percolator and run it periodically (not sure of the best 
frequency and timing) to tag all documents containing bad words with 
"badDoc": true, and then add a corresponding filter to all queries.
    Problem: not sure of the performance impact of running the percolator 
periodically; and it raises the same discipline problem of making every 
query exclude badDoc.

Personally I would favor a pure ES solution. I am sure this is not a new 
problem, so I am seeking expert guidance and best practices.
Any guidance/links would be helpful!

Thanks and Regards
Varun
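The pre-index scan in option 1 is itself cheap; the cost of the long word list is paid once per document instead of once per query, and queries then only need a single term filter on the tag. A minimal sketch of such a tagging step (the word list and field name here are hypothetical):

```python
import re

HATE_WORDS = {"badword1", "badword2"}  # hypothetical list

def tag_doc(doc, text_field="body"):
    """Mark a document as badDoc before indexing if it contains a listed word.

    Tagging at index time replaces the per-query bool filter over the
    whole word list with a single cheap term filter on "badDoc".
    """
    tokens = set(re.findall(r"\w+", doc.get(text_field, "").lower()))
    doc["badDoc"] = not tokens.isdisjoint(HATE_WORDS)
    return doc

print(tag_doc({"body": "nothing to see"})["badDoc"])      # False
print(tag_doc({"body": "some badword1 here"})["badDoc"])  # True
```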





Re: possible networking problem?

2015-04-28 Thread Mark Walkom
What do your Kibana logs show?

On 29 April 2015 at 07:53, Sitka  wrote:

> I have installed elasticsearch and kibana.   I started elasticsearch and
> did a GET to test it out.  Everything worked.  I installed kibana next.
> When I test it doing "http://localhost:5061"; it fails. I inserted a TCP
> monitoring tool to see the traffic in  both directions.  Kibana is refusing
> the connection with "java.net.ConnectException: Connection refused:
> connect".  I also run netstat and indeed port 5061 is up and listening.
>
> Anyone got any ideas on this?  BTW, everything is running on my desktop.
>
> Thanks.
>
> Kibana 4.0.2-windows
> elasticsearch 1.5.2
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/b6f1a7e4-20eb-4176-97a0-50f2a1f93911%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>



Re: Unable to get elasticsearch-hadoop working with Hive/Beeline

2015-04-28 Thread Costin Leau

Hi,

It seems you are running into a classpath problem. The class mentioned in the exception 
(org/elasticsearch/hadoop/serialization/dto/Node) is part of the elasticsearch-hadoop-hive-XXX.jar - you can verify 
this yourself.
The fact that it is not found at runtime suggests that a different or incomplete jar is used instead. This can occur, 
for example, if a different jar available in the Hive/Hadoop classpath is picked up automatically and overrides 
the one you use in your script.


So first, double-check the existing classpath - in the vast majority of Hive problems this was the issue (an 
old version was picked up instead). You can also verify this by trying to register the table - you should get an 
exception right away. Once that's done, try different ways of adding the jar to your script's classpath - it might be 
that Beeline has a different mechanism than vanilla Hive.
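That classpath check can be partly scripted: scan every jar visible to Hive/Hadoop for the missing class and see which copies shadow each other. A stdlib-only sketch (the jar paths passed in are whatever your classpath lists):

```python
import zipfile

def jars_containing(jar_paths, class_name):
    """Return the jars whose entries include the given class.

    A NoClassDefFoundError for a class that should be in the es-hadoop
    jar usually means an older or incomplete copy earlier on the
    classpath shadows the jar added in the script.
    """
    entry = class_name.replace(".", "/") + ".class"
    hits = []
    for path in jar_paths:
        with zipfile.ZipFile(path) as jar:
            if entry in jar.namelist():
                hits.append(path)
    return hits
```

Run it over the jars on HADOOP_CLASSPATH and the Hive aux path, looking for `org.elasticsearch.hadoop.serialization.dto.Node`; more than one hit, or a hit in an unexpected location, points at the shadowing jar.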


Hope this helps,

On 4/29/15 12:58 AM, Rasmus Aveskogh wrote:


Hi!

I've followed the various guides to get going with the elasticsearch-hadoop-integration in Hive, but I run into some 
issue:


> add jar hdfs://host:9000//lib/elasticsearch-hadoop-hive-2.1.0.Beta4.jar;
INFO  : converting to local hdfs://host:9000//lib/elasticsearch-hadoop-hive-2.1.0.Beta4.jar
INFO  : Added [/tmp/15207d6b-e4b5-446b-bbe2-cff282056983_resources/elasticsearch-hadoop-hive-2.1.0.Beta4.jar] to classpath
INFO  : Added resources: [hdfs://host:9000//lib/elasticsearch-hadoop-hive-2.1.0.Beta4.jar]
No rows affected (0.122 seconds)


Then I am able to create an external table:

> CREATE EXTERNAL table estest (field STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'hadoop/hadoop', 'es.index.auto.create' = 'false');

No rows affected (0.094 seconds)

However, when I try to interact I get this error:

> select * from estest;
Error: java.lang.NoClassDefFoundError: org/elasticsearch/hadoop/serialization/dto/Node (state=,code=0)

As you can see I've followed the recommendation to put the jar file in HDFS, and it seems like the jar is picked up in 
the classpath since without the 'add jar' I get another error stating that the EsStorageHandler can't be found. Any 
clues as to why this is happening?


-ra
--
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to 
elasticsearch+unsubscr...@googlegroups.com .
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/9c88299a-8646-4aa0-ba65-aa834d542dff%40googlegroups.com 
.

For more options, visit https://groups.google.com/d/optout.



--
Costin



Unable to get elasticsearch-hadoop working with Hive/Beeline

2015-04-28 Thread Rasmus Aveskogh

Hi!

I've followed the various guides to get going with the 
elasticsearch-hadoop integration in Hive, but I've run into an issue:

> add jar hdfs://host:9000//lib/elasticsearch-hadoop-hive-2.1.0.Beta4.jar;
INFO  : converting to local hdfs:
//host:9000//lib/elasticsearch-hadoop-hive-2.1.0.Beta4.jar

INFO  : Added [/tmp/15207d6b-e4b5-446b-bbe2-cff282056983_resources/
elasticsearch-hadoop-hive-2.1.0.Beta4.jar] to class path

INFO  : Added resources: [hdfs:
//host:9000//lib/elasticsearch-hadoop-hive-2.1.0.Beta4.jar]

No rows affected (0.122 seconds)


Then I am able to create an external table:

> CREATE EXTERNAL table estest (field STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'hadoop/hadoop', 'es.index.auto.create' = 
'false') ;

No rows affected (0.094 seconds)

However, when I try to interact I get this error:

> select * from estest;
Error: java.lang.NoClassDefFoundError: org/elasticsearch/hadoop/
serialization/dto/Node (state=,code=0)

As you can see I've followed the recommendation to put the jar file in 
HDFS, and it seems like the jar is picked up in the classpath since without 
the 'add jar' I get another error stating that the EsStorageHandler can't 
be found. Any clues as to why this is happening?

-ra



possible networking problem?

2015-04-28 Thread Sitka
I have installed Elasticsearch and Kibana. I started Elasticsearch and 
did a GET to test it out; everything worked. I installed Kibana next. 
When I test it by browsing to "http://localhost:5061" it fails. I inserted 
a TCP monitoring tool to see the traffic in both directions. Kibana is 
refusing the connection with "java.net.ConnectException: Connection 
refused: connect". I also ran netstat, and indeed port 5061 is up and listening.

Anyone got any ideas on this?  BTW, everything is running on my desktop.

Thanks.

Kibana 4.0.2-windows
elasticsearch 1.5.2
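One thing worth double-checking: Kibana 4 listens on port 5601 by default, while the URL and netstat check above use 5061, so the listener seen on 5061 may not be Kibana at all. A quick stdlib probe to compare the two candidates:

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Compare the two candidate ports on the desktop running Kibana:
# tcp_port_open("localhost", 5601)  # Kibana 4's default port
# tcp_port_open("localhost", 5061)  # the port tested in the post
```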



Re: Elasticsearch puppet module's problem

2015-04-28 Thread Mark Walkom
You don't want to set that in the init script; use the init_defaults hash
instead.
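With the elasticsearch puppet module, the init_defaults hash writes entries into /etc/default/elasticsearch (or /etc/sysconfig/elasticsearch), which the stock init script sources, so no init script needs to be distributed. A sketch, with the 4g heap value purely illustrative:

```puppet
class { 'elasticsearch':
  init_defaults => {
    'ES_HEAP_SIZE' => '4g',
  },
}
```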

On 29 April 2015 at 00:09, Sergey Zemlyanoy  wrote:

> And guys,
>
> how can I increase heap size using this module?  I don't see how to
> control parameter ES_HEAP_SIZE located in init script rather then
> distribute full init script by puppet
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/76ebd44a-441b-45d0-afaf-51347d9ee7bc%40googlegroups.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>



Connecting Kibana to Elasticsearch on Kubernetes

2015-04-28 Thread Satnam Singh
Hello,

I've upgraded to Elasticsearch 1.5.2 and Kibana 4.0.2 which I am deploying 
in a Kubernetes  cluster.
Specifically, I am running Elasticsearch in one "pod" (a Kubernetes 
container with its own IP) and I am running Kibana in another pod (again 
with a distinct IP address).
The Kubernetes cluster runs a DNS service which will map the name 
"elasticsearch-logging.default:9200" to the pod running Elasticsearch.
This works fine: I can exec into any Docker container in a pod, run 
"curl http://elasticsearch-logging.default:9200", and the right thing 
happens.
I've configured Kibana to let it know where Elasticsearch is running:

elasticsearch_url: "http://elasticsearch-logging.default:9200"
elasticsearch_preserve_host: true

Since I want to access Kibana from outside the cluster I use a proxy 
running on the master node of the cluster (after adding certificates to my 
browser for the SSL connection) e.g.

https://104.197.26.147/api/v1beta3/proxy/namespaces/default/services/kibana-logging/

Sadly, this does not work. I get the error:

Error: Unable to check for Kibana index ".kibana"
Error: unknown error
at respond 
(https://104.197.26.147/api/v1beta3/proxy/namespaces/default/services/kibana-logging/index.js?_b=6004:81693:15)
at checkRespForFailure 
(https://104.197.26.147/api/v1beta3/proxy/namespaces/default/services/kibana-logging/index.js?_b=6004:81659:7)
at 
https://104.197.26.147/api/v1beta3/proxy/namespaces/default/services/kibana-logging/index.js?_b=6004:80322:7
at deferred.promise.then.wrappedErrback 
(https://104.197.26.147/api/v1beta3/proxy/namespaces/default/services/kibana-logging/index.js?_b=6004:20897:78)
at deferred.promise.then.wrappedErrback 
(https://104.197.26.147/api/v1beta3/proxy/namespaces/default/services/kibana-logging/index.js?_b=6004:20897:78)
at deferred.promise.then.wrappedErrback 
(https://104.197.26.147/api/v1beta3/proxy/namespaces/default/services/kibana-logging/index.js?_b=6004:20897:78)
at 
https://104.197.26.147/api/v1beta3/proxy/namespaces/default/services/kibana-logging/index.js?_b=6004:21030:76
at Scope.$get.Scope.$eval 
(https://104.197.26.147/api/v1beta3/proxy/namespaces/default/services/kibana-logging/index.js?_b=6004:22017:28)
at Scope.$get.Scope.$digest 
(https://104.197.26.147/api/v1beta3/proxy/namespaces/default/services/kibana-logging/index.js?_b=6004:21829:31)
at Scope.$get.Scope.$apply 
(https://104.197.26.147/api/v1beta3/proxy/namespaces/default/services/kibana-logging/index.js?_b=6004:22121:24)


And this is what the console shows:

[console screenshot not included in the archive]

Visiting 
https://104.197.26.147/api/v1beta3/proxy/namespaces/default/services/kibana-logging/elasticsearch
 
works (i.e. returns the status of Elasticsearch).
So does visiting Elasticsearch via the proxy i.e. 
https://104.197.26.147/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging/

I wonder if anyone has some advice about what is going wrong here?
Thank you kindly.

Cheers,

Satnam
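
The in-cluster name resolution described above follows Kubernetes' `<service>.<namespace>:<port>` DNS pattern. As a small sketch (service names and ports taken from the post; this only composes the URL, it does not resolve or contact it):

```python
# Sketch: composing Kubernetes in-cluster DNS service URLs of the form
# http://<service>.<namespace>:<port>, as used for the Elasticsearch pod above.
def service_url(service, namespace="default", port=9200, scheme="http"):
    """Build an in-cluster URL for a Kubernetes service."""
    return f"{scheme}://{service}.{namespace}:{port}"

# The Elasticsearch URL configured in kibana.yml above:
es_url = service_url("elasticsearch-logging")
print(es_url)  # http://elasticsearch-logging.default:9200
```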
 




Re: mlockall needs to be documented for systemd

2015-04-28 Thread Mark Walkom
This has been raised in https://github.com/elastic/elasticsearch/issues/9357

On 29 April 2015 at 05:04, Karl Putland  wrote:

>
> drwxr-xr-x. 26 root root 12288 Apr 28 14:57 .
> [root@node8 system]# diff -u elasticsearch.service.2015-04-28\@14\:57~
> elasticsearch.service
> --- elasticsearch.service.2015-04-28@14:57~ 2015-04-27
> 05:34:43.0 -0400
> +++ elasticsearch.service   2015-04-28 14:57:21.606106000 -0400
> @@ -12,7 +12,7 @@
>  # See MAX_OPEN_FILES in sysconfig
>  LimitNOFILE=65535
>  # See MAX_LOCKED_MEMORY in sysconfig, use "infinity" when
> MAX_LOCKED_MEMORY=unlimited and using bootstrap.mlockall: true
> -#LimitMEMLOCK=infinity
> +LimitMEMLOCK=infinity
>  # Shutdown delay in seconds, before process is tried to be killed with
> KILL (if configured)
>  TimeoutStopSec=20
>
>
>
> Karl Putland
> Senior Engineer
> *SimpleSignal*
> Anywhere: 303-242-8608
> 
>



How to Boost

2015-04-28 Thread GWired
I am attempting to boost values for queries.

I'm searching across all fields and tables and returning 25 results for 
each type.

This is working fine; however, I need to boost when the field named Name or the 
field named ID contains the searched value.

I'm using ElasticSearchClient and sending this search.

search = new
{
    query = new
    {
        query_string = new
        {
            query = keyword,
            default_field = "_all"
        }
    },
    from = 0,
    size = limitAllTypes,
    aggs = new
    {
        top_types = new
        {
            terms = new
            {
                field = "_type"
            },
            aggs = new
            {
                top_type_hits = new
                {
                    top_hits = new
                    {
                        size = limitPerType
                    }
                }
            }
        }
    }
};

ElasticsearchResponse searchResponse =
    client.Search("jdbc", search, null);

How do I tell this to boost the Name and ID fields over all other fields?

If I'm searching for "My Searched Company" and that value is in the Name field, 
I want that document at the top of the list, rather than matches in notes, 
addresses, or other columns.
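
One possible direction (a sketch only — the `Name` and `ID` field names and the boost factors are assumptions that would need to match the actual mapping) is to list the boosted fields explicitly in the `query_string` instead of relying on `_all` alone:

```python
import json

# Sketch: per-field boosts in a query_string query. "Name^5" and "ID^3"
# are assumed field names/weights; "_all" keeps every other field
# searchable at the default weight.
keyword = "My Searched Company"
search = {
    "query": {
        "query_string": {
            "query": keyword,
            "fields": ["Name^5", "ID^3", "_all"],
        }
    },
    "from": 0,
    "size": 25,
}
print(json.dumps(search, indent=2))
```

The same dictionary shape should translate directly to the C# anonymous-object style used above.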




Re: Extracting fuzzy match terms

2015-04-28 Thread mark
All Lucene queries implement extractTerms [1] and this API is used by 
highlighter implementations to get the expanded set of terms in 
wildcards/fuzzy etc.
This set of terms isn't exposed directly in elasticsearch today but you may 
be able to hack something together using scripts or a custom Java plugin - 
look at SearchContext.current().query().extractTerms().

Cheers
Mark

[1] 
http://lucene.apache.org/core/5_1_0/core/org/apache/lucene/search/Query.html#extractTerms(java.util.Set)
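
As a rough sketch of the client-side parsing suggested earlier in the thread — collecting the expanded terms out of highlight fragments (the fragment strings below are invented examples, and `<em>` is Elasticsearch's default highlight tag):

```python
import re

# Sketch: recover the terms a fuzzy/wildcard query actually matched by
# parsing highlighter output. The fragments here are invented; the default
# pre/post tags in Elasticsearch highlighting are <em>...</em>.
fragments = [
    "signed by <em>graeme</em> turner",
    "met <em>grahum</em> and <em>grayam</em> yesterday",
]

def expanded_terms(fragments, tag="em"):
    """Return the de-duplicated set of highlighted terms, lowercased."""
    pattern = re.compile(r"<{0}>(.*?)</{0}>".format(tag))
    terms = set()
    for frag in fragments:
        terms.update(m.lower() for m in pattern.findall(frag))
    return terms

print(sorted(expanded_terms(fragments)))  # ['graeme', 'grahum', 'grayam']
```

As noted above, this is per-hit and not exhaustive, but it yields a de-duped term list the UI could present for include/exclude.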


On Tuesday, April 28, 2015 at 12:00:49 PM UTC+1, Graham Turner wrote:
>
> Thanks Mark.
>
> I did wonder about the highlighter, but using it would mean potentially 
> retrieving every hit and parsing it, which feels pretty impractical for 
> large searches.  
>
> Presumably the fuzzy query has to identify a full list of matching terms 
> internally - is there any way we could somehow hook into this, or retrieve 
> the list separately to the query results?  A mechanism similar to the 
> suggester, just accepting a single fuzzy term or a wildcard term would be 
> perfect.  I appreciate this probably isn't a common request, but I'm sure 
> it would have other use cases.  Something to consider for a future release 
> perhaps?  :-)
>
> Cheers
>
> Graham
>
>
> On Monday, 27 April 2015 17:41:17 UTC+1, ma...@elastic.co wrote:
>>
>> Hi Graham,
>> If you were to use the highlighter functionality you would essentially 
>> "see what the search engine saw".
>> With some client-side coding you could parse out the expanded search 
>> terms because they would be surrounded by tags in matching docs.
>> Of course this wouldn't provide a de-duped list of terms and would be 
>> inefficient to return an exhaustive list of all expansions used but may be 
>> an approach to investigate. 
>>
>> Cheers
>> Mark
>>
>> On Monday, April 27, 2015 at 5:08:55 PM UTC+1, Graham Turner wrote:
>>>
>>> Hi,
>>>
>>> I'm working on a proof-of-concept for a client, replacing an existing 
>>> legacy search system with an elastic based alternative.  One of the 
>>> requirements that comes from the existing system is that, when performing a 
>>> fuzzy or wildcard search, the user can view all the matching terms, and 
>>> include/exclude them manually from the subsequent search.
>>>
>>> Thus, if a fuzzy search for 'graham' is submitted (or a wildcard like 
>>> 'gr*m*'), it might match grayam, graeme, grahum, grahem, etc.  The users 
>>> want to be able to see this list of matched terms, then, for instance, 
>>> exclude 'grayam' from the expanded terms list, so that all the other 
>>> expansions are used, but not the specifically excluded one. 
>>>
>>> I’m struggling to retrieve this list of terms in the first place.  
>>> Ideally I’d like to submit a simple query for a fuzzy or wildcard term, and 
>>> have it return just the possible matching terms (up to a given limit).
>>>
>>> I’ve had reasonable success using the term suggester for fuzzy-type 
>>> responses, but can’t use this for wildcard expansions. 
>>>
>>> Is there a good way to do this using 'out-of-the-box' elastic 
>>> functionality?  
>>>
>>> Any advice / hints gratefully accepted!
>>>
>>> Thanks
>>>
>>> Graham
>>>
>>



Re: How to process "Lat" & "Long" fields using default Logstash config and mapping to use in Kibana 4 tile map

2015-04-28 Thread Rodger Moore
Merci beaucoup, even in English!

Nice blog, this helps a lot :)

Cheers

On Tuesday, April 28, 2015 at 8:45:12 PM UTC+2, David Pilato wrote:
>
> Lucky you! I just blogged about it :)
>
> http://david.pilato.fr/blog/2015/04/28/exploring-capitaine-train-dataset/
>
> --
> David ;-)
> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>
> On 28 Apr 2015, at 18:07, Rodger Moore wrote:
>
> Hi David,
>
> Thanks again for your answer. For some reason I am doing something wrong 
> and its driving me nuts. I've tried your method but the tile map is showing 
> me no results whatsoever. How did you define your template in Elasticsearch 
> for this "location" field? 
>
> Thanks,
>
> Rodger
>
>> On Sunday, April 26, 2015 at 6:34:01 PM UTC+2, David Pilato wrote:
>>
>> It's not an issue IMO but just a default configuration.
>>
>> FYI here is a sample config file I just used to parse some CSV data:
>>
>> input {
>>   stdin {}
>> }
>>
>>
>> filter {
>>   csv {
>> separator => ";"
>> columns => [
>>   "id","name","slug","uic","uic8_sncf","longitude","latitude",
>> "parent_station_id","is_city","country",
>>   "is_main_station","time_zone","is_suggestable","sncf_id",
>> "sncf_is_enabled","idtgv_id","idtgv_is_enabled",
>>   "db_id","db_is_enabled","idbus_id","idbus_is_enabled","ouigo_id",
>> "ouigo_is_enabled",
>>   "trenitalia_id","trenitalia_is_enabled","ntv_id","ntv_is_enabled",
>> "info_fr",
>>   "info_en","info_de","info_it","same_as"
>> ]
>>   }
>>
>>
>>   if [id] == "id" {
>> drop { }
>>   } else {
>> mutate {
>>   convert => { "longitude" => "float" }
>>   convert => { "latitude" => "float" }
>> }
>>
>>
>> mutate {
>>   rename => {
>> "longitude" => "[location][lon]" 
>> "latitude" => "[location][lat]" 
>>   }
>> }
>>
>>
>> mutate {
>>   remove_field => [ "message", "host", "@timestamp", "@version" ]
>> }
>>   }
>> }
>>
>>
>> output {
>> #  stdout { codec => rubydebug }
>>   stdout { codec => dots }
>>   elasticsearch {
>> protocol => "http"
>> host => "localhost"
>> index => "sncf"
>> index_type => "gare"
>> template => "sncf_template.json"
>> template_name => "sncf"
>> document_id => "%{id}"
>>   }
>> }
>>
>>
>> Hope this helps
>>
>> On Sunday, April 26, 2015 at 1:50:54 PM UTC+2, Rodger Moore wrote:
>>>
>>> Hi there again!
>>>
>>> This problem is caused by, what I believe, a bug in Logstash or 
>>> Elasticsearch. I used a very small test csv file with only 1 or 2 records 
>>> per date. The default Logstash template creates 1 index per date. For some 
>>> reason the creation of indices goes wrong when it comes to field types and 
>>> very few records per index. After I changed the index creation template in 
>>> the output config to:
>>>
>>> output {
>>>
>>>   elasticsearch {
>>> protocol => "http"
>>> index => "logstash-%{+YYYY.MM}"
>>> }
>>> }
>>>
>>> thus creating only 1 index per month the problem with wrong field types 
>>> was gone. If the folks from Elastic want to reproduce this, I enclosed the 
>>> config files and test file. Changed status to solved.
>>>
>>> Cheers,
>>>
>>> Rodger.
>>>
>>> On Saturday, April 25, 2015 at 10:13:45 PM UTC+2, Rodger Moore wrote:

 Hi there!

 My question is fairly simple but I'm having trouble finding a solution. 
 I have a csv file containing Lat and Lon coordinates in separate fields 
 named "Latitude" and "Longitude". Most of the info I found on the net is 
 focussed on GeoIP (which is great functionality btw) but besides some 
 posts  
 in 
 Google Groups I failed finding a good tutorial for this use-case.

 What is the simplest way of getting separate Long / Lat fields into a 
 geo_point and putting these coordinates on a Tile Map in Kibana 4 using 
 the 
 default Logstash (mapping) - ES - Kibana settings? I am using logstash 
 1.4.2 | Elasticsearch 1.5.0. and Kibana 4.0.1. 

 Summary: --> csv containing Long / Lat in separate fields --> Logstash 
 --> ES --> Kibana4?

 Any help very much appreciated!

 Cheers,

 Rodger



Possible approaches for indexing field as both analyzed and not_analyzed

2015-04-28 Thread Chris Marino
We have a State field in our documents that contains the full name of a 
U.S. state or territory. We want to use it as a filter with some queries, but 
we also want the names of states to be searchable via full text searches. 
Thus, we know we'll need to have multiple versions of this field - one that 
is analyzed/tokenized for full text searches and one that is not analyzed 
for filtering.

Our first option is to set the State field as "not_analyzed" and use the 
_all field for full text searches. 

Our second option is to leverage multi fields (State, State.raw) and map 
the State field like:

"State" : {
"type" : "string",
"index" : "analyzed",
"fields" : {
  "raw" : {"type" : "string", "index" : "not_analyzed"}
}
  }

Are there any considerations we need to take into account before going with 
one over the other?

Thanks,
Chris
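
With the multi-field option, the two use cases map onto two different query shapes — roughly like this (a sketch against the mapping above, using the 1.x `filtered`-query syntax; the example value is invented):

```python
import json

# Sketch: full-text search hits the analyzed "State" field, while exact
# filtering targets the not_analyzed "State.raw" sub-field from the
# multi-field mapping above. "filtered" is the pre-2.0 query syntax.
full_text_query = {
    "query": {"match": {"State": "new york"}}
}

exact_filter = {
    "query": {
        "filtered": {
            "filter": {"term": {"State.raw": "New York"}}
        }
    }
}

for body in (full_text_query, exact_filter):
    print(json.dumps(body))
```

Note the term filter must carry the exact original string (case and all), since `State.raw` is never analyzed.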



Re: Convert bulk request to json document and publish that document to ES as a seperate task

2015-04-28 Thread Manjula Piyumal
Hi Jörg,

Thanks for the quick response. I'll try it in that way.

Thanks
Manjula

On Wed, Apr 29, 2015 at 12:32 AM, joergpra...@gmail.com <
joergpra...@gmail.com> wrote:

> You are using the binary stream protocol of ES in the writeTo() method
> which is not appropriate for writing to files.
>
> Once you added requests to a bulk request, you can not get your content
> back as JSON.
>
> A better approach is to use an XContentBuilder with an OutputStream, and
> add the content to it, independent of BulkRequestBuilder.
>
> Jörg
>
> On Tue, Apr 28, 2015 at 8:21 PM, Manjula Piyumal  > wrote:
>
>> Hi,
>>
>> I'm expecting to implement ES back statistic collecting application. For
>> that I'm expecting to publish data using bulk API. I want to separate out
>> request generating and request publishing tasks. First task is creating
>> bulk request using JAVA API and write that request to a temp file as a JSON
>> document and then request publisher publishes the requests which are in the
>> files.
>> I have tried to write created bulk request using writeTo method as
>> follows. It writes the request as a semi-json document containing some
>> random weird bytes(resulting file is attached).
>>
>> FileOutputStream fileOutputStream = new FileOutputStream(new
>> File("request.txt"));
>> OutputStreamStreamOutput streamOutput = new
>> OutputStreamStreamOutput(fileOutputStream);
>> bulkRequestBuilder.request().writeTo(streamOutput);
>>
>> I'm not sure whether I have missed something here. I want to understand
>> what goes wrong while writing the request and is it possible to implement
>> my application by separating request generating and publishing tasks as
>> mentioned above. Any help would be appreciated.
>>
>> Thanks
>> Manjula
>>
>



-- 
*Manjula Piyumal De Silva*

Software Engineer,
AdroitLogic Private Ltd,



mlockall needs to be documented for systemd

2015-04-28 Thread Karl Putland
drwxr-xr-x. 26 root root 12288 Apr 28 14:57 .
[root@node8 system]# diff -u elasticsearch.service.2015-04-28\@14\:57~
elasticsearch.service
--- elasticsearch.service.2015-04-28@14:57~ 2015-04-27
05:34:43.0 -0400
+++ elasticsearch.service   2015-04-28 14:57:21.606106000 -0400
@@ -12,7 +12,7 @@
 # See MAX_OPEN_FILES in sysconfig
 LimitNOFILE=65535
 # See MAX_LOCKED_MEMORY in sysconfig, use "infinity" when
MAX_LOCKED_MEMORY=unlimited and using bootstrap.mlockall: true
-#LimitMEMLOCK=infinity
+LimitMEMLOCK=infinity
 # Shutdown delay in seconds, before process is tried to be killed with
KILL (if configured)
 TimeoutStopSec=20



Karl Putland
Senior Engineer
*SimpleSignal*
Anywhere: 303-242-8608




Re: Convert bulk request to json document and publish that document to ES as a seperate task

2015-04-28 Thread joergpra...@gmail.com
You are using the binary stream protocol of ES in the writeTo() method
which is not appropriate for writing to files.

Once you added requests to a bulk request, you can not get your content
back as JSON.

A better approach is to use an XContentBuilder with an OutputStream, and
add the content to it, independent of BulkRequestBuilder.

Jörg
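
Whichever builder is used on the Java side, the file the publisher task ultimately needs is just the bulk API's newline-delimited JSON. A language-neutral sketch of that on-disk format (index, type, and field names are invented placeholders):

```python
import json
import tempfile

# Sketch: the bulk API wire format is plain newline-delimited JSON --
# one action line followed by one source line per document.
docs = [
    {"metric": "cpuUsage", "value": 0.42},
    {"metric": "openFD", "value": 482},
]

lines = []
for doc in docs:
    lines.append(json.dumps({"index": {"_index": "stats", "_type": "sample"}}))
    lines.append(json.dumps(doc))
bulk_body = "\n".join(lines) + "\n"  # a bulk request must end with a newline

# The generator task writes the file; a separate publisher task can later
# POST its contents to /_bulk.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write(bulk_body)
print(bulk_body, end="")
```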

On Tue, Apr 28, 2015 at 8:21 PM, Manjula Piyumal 
wrote:

> Hi,
>
> I'm expecting to implement ES back statistic collecting application. For
> that I'm expecting to publish data using bulk API. I want to separate out
> request generating and request publishing tasks. First task is creating
> bulk request using JAVA API and write that request to a temp file as a JSON
> document and then request publisher publishes the requests which are in the
> files.
> I have tried to write created bulk request using writeTo method as
> follows. It writes the request as a semi-json document containing some
> random weird bytes(resulting file is attached).
>
> FileOutputStream fileOutputStream = new FileOutputStream(new
> File("request.txt"));
> OutputStreamStreamOutput streamOutput = new
> OutputStreamStreamOutput(fileOutputStream);
> bulkRequestBuilder.request().writeTo(streamOutput);
>
> I'm not sure whether I have missed something here. I want to understand
> what goes wrong while writing the request and is it possible to implement
> my application by separating request generating and publishing tasks as
> mentioned above. Any help would be appreciated.
>
> Thanks
> Manjula
>



Re: How to process "Lat" & "Long" fields using default Logstash config and mapping to use in Kibana 4 tile map

2015-04-28 Thread David Pilato
Lucky you! I just blogged about it :)

http://david.pilato.fr/blog/2015/04/28/exploring-capitaine-train-dataset/

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
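
On the template question quoted below: the actual `sncf_template.json` is not included in the thread, but a minimal index template mapping the renamed `location` field as `geo_point` might look like this (a sketch; the template pattern and type name are assumptions based on the config in this thread):

```python
import json

# Sketch of an index template that maps "location" as geo_point so the
# Kibana tile map can use it. Template pattern and type name ("gare") are
# assumptions; the real sncf_template.json is not shown in the thread.
template = {
    "template": "sncf*",
    "mappings": {
        "gare": {
            "properties": {
                "location": {"type": "geo_point"}
            }
        }
    },
}
print(json.dumps(template, indent=2))
```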

> On 28 Apr 2015, at 18:07, Rodger Moore wrote:
> 
> Hi David,
> 
> Thanks again for your answer. For some reason I am doing something wrong and 
> its driving me nuts. I've tried your method but the tile map is showing me no 
> results whatsoever. How did you define your template in Elasticsearch for 
> this "location" field? 
> 
> Thanks,
> 
> Rodger
> 
>> On Sunday, April 26, 2015 at 6:34:01 PM UTC+2, David Pilato wrote:
>> 
>> It's not an issue IMO but just a default configuration.
>> 
>> FYI here is a sample config file I just used to parse some CSV data:
>> 
>> input {
>>   stdin {}
>> }
>> 
>> 
>> filter {
>>   csv {
>> separator => ";"
>> columns => [
>>   
>> "id","name","slug","uic","uic8_sncf","longitude","latitude","parent_station_id","is_city","country",
>>   
>> "is_main_station","time_zone","is_suggestable","sncf_id","sncf_is_enabled","idtgv_id","idtgv_is_enabled",
>>   
>> "db_id","db_is_enabled","idbus_id","idbus_is_enabled","ouigo_id","ouigo_is_enabled",
>>   
>> "trenitalia_id","trenitalia_is_enabled","ntv_id","ntv_is_enabled","info_fr",
>>   "info_en","info_de","info_it","same_as"
>> ]
>>   }
>> 
>> 
>>   if [id] == "id" {
>> drop { }
>>   } else {
>> mutate {
>>   convert => { "longitude" => "float" }
>>   convert => { "latitude" => "float" }
>> }
>> 
>> 
>> mutate {
>>   rename => {
>> "longitude" => "[location][lon]" 
>> "latitude" => "[location][lat]" 
>>   }
>> }
>> 
>> 
>> mutate {
>>   remove_field => [ "message", "host", "@timestamp", "@version" ]
>> }
>>   }
>> }
>> 
>> 
>> output {
>> #  stdout { codec => rubydebug }
>>   stdout { codec => dots }
>>   elasticsearch {
>> protocol => "http"
>> host => "localhost"
>> index => "sncf"
>> index_type => "gare"
>> template => "sncf_template.json"
>> template_name => "sncf"
>> document_id => "%{id}"
>>   }
>> }
>> 
>> 
>> Hope this helps
>> 
>> On Sunday, April 26, 2015 at 1:50:54 PM UTC+2, Rodger Moore wrote:
>>> 
>>> Hi there again!
>>> 
>>> This problem is caused by, what I believe, a bug in Logstash or 
>>> Elasticsearch. I used a very small test csv file with only 1 or 2 records 
>>> per date. The default Logstash template creates 1 index per date. For some 
>>> reason the creation of indices goes wrong when it comes to field types and 
>>> very few records per index. After I changed the index creation template in 
>>> the output config to:
>>> 
>>> output {
>>> 
>>>   elasticsearch {
>>> protocol => "http"
>>> index => "logstash-%{+YYYY.MM}"
>>> }
>>> }
>>> 
>>> thus creating only 1 index per month the problem with wrong field types was 
>>> gone. If the folks from Elastic want to reproduce this, I enclosed the 
>>> config files and test file. Changed status to solved.
>>> 
>>> Cheers,
>>> 
>>> Rodger.
>>> 
>>> On Saturday, April 25, 2015 at 10:13:45 PM UTC+2, Rodger Moore wrote:
 
 Hi there!
 
 My question is fairly simple but I'm having trouble finding a solution. I 
 have a csv file containing Lat and Lon coordinates in separate fields 
 named "Latitude" and "Longitude". Most of the info I found on the net is 
 focussed on GeoIP (which is great functionality btw) but besides some 
 posts in Google Groups I failed finding a good tutorial for this use-case.
 
 What is the simplest way of getting separate Long / Lat fields into a 
 geo_point and putting these coordinates on a Tile Map in Kibana 4 using 
 the default Logstash (mapping) - ES - Kibana settings? I am using logstash 
 1.4.2 | Elasticsearch 1.5.0. and Kibana 4.0.1. 
 
 Summary: --> csv containing Long / Lat in separate fields --> Logstash --> 
 ES --> Kibana4?
 
 Any help very much appreciated!
 
 Cheers,
 
 Rodger
> 



Re: Transfer Kibana 4 dashboards

2015-04-28 Thread David Pilato
I'm using the Elasticsearch snapshot feature for that.

I save the .kibana index.

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
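
That approach could be sketched as two request bodies — register a filesystem repository, then snapshot only the `.kibana` index (repository name and path are invented; depending on the Elasticsearch version, the path may also need to be allowed on the nodes):

```python
import json

# Sketch: request bodies for snapshotting only the .kibana index.
# Repository name and path are invented. These would be PUT to
#   /_snapshot/kibana_backup            (register repository)
#   /_snapshot/kibana_backup/snap_1     (take snapshot)
register_repo = {
    "type": "fs",
    "settings": {"location": "/mnt/backups/kibana"},
}
take_snapshot = {
    "indices": ".kibana",
    "include_global_state": False,
}
print(json.dumps(register_repo))
print(json.dumps(take_snapshot))
```

Restoring the snapshot on the QA cluster would then carry the dashboards over.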

> On 28 Apr 2015, at 19:55, Kellan Strong wrote:
> 
> Hello All,
> 
> In Kibana 3 you were able to save the dashboards you created to your desktop 
> and able to up load them somewhere else. Is this possible in Kibana4 like I 
> would like to transfer my dashboards from INT->QA. I have copied the index 
> over but my QA boxes don't seem to pick up the index as like it does it INT.
> 
> Any help would be appreciated.
> 
> Thanks,



Convert bulk request to json document and publish that document to ES as a seperate task

2015-04-28 Thread Manjula Piyumal
Hi,

I'm expecting to implement an ES-backed statistics collecting application. For 
that I'm expecting to publish data using the bulk API. I want to separate the 
request-generating and request-publishing tasks. The first task creates a 
bulk request using the Java API and writes that request to a temp file as a JSON 
document; the request publisher then publishes the requests that are in the 
files. 
I have tried to write the created bulk request using the writeTo method as follows. 
It writes the request as a semi-JSON document containing some random weird 
bytes (the resulting file is attached). 
 
FileOutputStream fileOutputStream = new FileOutputStream(new 
File("request.txt"));
OutputStreamStreamOutput streamOutput = new 
OutputStreamStreamOutput(fileOutputStream);
bulkRequestBuilder.request().writeTo(streamOutput);

I'm not sure whether I have missed something here. I want to understand 
what goes wrong while writing the request and is it possible to implement 
my application by separating request generating and publishing tasks as 
mentioned above. Any help would be appreciated.

Thanks
Manjula


[attached request.txt: the JSON payload {"usedMemoryGauge":9.517179699145299E7,"openFDGauge":482.0,"cpuUsageGauge":0.0,"dataPointCount":117,"streamEntityName":"sy_es$system","activeThreadsGauge":75.0,"streamEntityType":"SystemMetrics"} wrapped in raw binary framing bytes]

Transfer Kibana 4 dashboards

2015-04-28 Thread Kellan Strong
Hello All,

In Kibana 3 you were able to save the dashboards you created to your 
desktop and upload them somewhere else. Is this possible in Kibana 4? 
I would like to transfer my dashboards from INT->QA. I have copied the 
index over, but my QA boxes don't seem to pick up the index the way 
INT does.

Any help would be appreciated.

Thanks,



Re: simple query string with flags returns no results

2015-04-28 Thread Daniel Nill
I'm not.  Sorry, copy and paste fail.

On Tuesday, April 28, 2015 at 10:31:58 AM UTC-7, Roger de Cordova Farias 
wrote:
>
> Are you actually using a comma after the "firstname^1.3"? It is invalid 
> JSON in both cases
>
> 2015-04-28 14:15 GMT-03:00 Daniel Nill >:
>
>> curl -XPUT "http://0.0.0.0:9200/users"; -d '{
>>   "first_name": "daniel",
>>   "last_name": "nill"
>> }'
>>
>>
>> curl -XGET 'http://0.0.0.0:9200/users/_search"; -d '{
>>   "query": {
>> "bool": {
>>   "must": {
>> "simple_query_string": {
>>
>>   "query":"daniel nill",
>>   "fields":[
>> "lastname^6.5",
>> "firstname^1.3",
>>   ],
>>
>>   "default_operator":"and",
>>   "flags":"AND|OR|NOT|PHRASE|PRECEDENCE"
>> }
>>   }
>> }
>>   }
>> }'
>>
>> this returns no results
>>
>> However,
>>
>> curl -XGET 'http://0.0.0.0:9200/users/_search"; -d '{
>>   "query": {
>> "bool": {
>>   "must": {
>> "simple_query_string": {
>>
>>   "query":"daniel nill",
>>   "fields":[
>> "lastname^6.5",
>> "firstname^1.3",
>>   ],
>>
>>   "default_operator":"and"
>> }
>>   }
>> }
>>   }
>> }'
>>
>> This returns results.
>>
>> Any idea what I'm missing?
>>
>> This is on 1.5.1
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/83d32c16-80b7-4428-904b-4d5bc9055be0%40googlegroups.com
>>  
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/2e675e28-e586-444b-a969-ec6aa98cccfb%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


IndexMissingException when indexing document on non-existent index

2015-04-28 Thread dbakarcic
Hello, 
 
 I'm running into an IndexMissingException when indexing a document on a 
non-existent index on my application tests. I'm using elasticsearch 1.4.1 
running as a local node for testing purposes as following:

private lazy val node = 
nodeBuilder().clusterName(clusterName).local(true).settings(settings).build

The index is supposed to be auto-created when indexing a document for the 
first time. The curious thing is that I can reproduce this issue 
systematically only on our Jenkins server, which runs the application 
tests. On my local development environment, the exception is rarely 
reproduced. This makes me think of a race condition somewhere between 
the index creation and the actual document indexing.

Below is the complete exception stack trace:
 
org.elasticsearch.indices.IndexMissingException: [index-name] missing
at 
org.elasticsearch.cluster.metadata.MetaData.concreteIndices(MetaData.java:768) 
~[elasticsearch-1.4.1.jar:na]
at 
org.elasticsearch.cluster.metadata.MetaData.concreteIndices(MetaData.java:691) 
~[elasticsearch-1.4.1.jar:na]
at 
org.elasticsearch.cluster.metadata.MetaData.concreteSingleIndex(MetaData.java:748)
 
~[elasticsearch-1.4.1.jar:na]
at 
org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.doStart(TransportShardReplicationOperationAction.java:361)
 
~[elasticsearch-1.4.1.jar:na]
at 
org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.start(TransportShardReplicationOperationAction.java:342)
 
~[elasticsearch-1.4.1.jar:na]
at 
org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:97)
 
~[elasticsearch-1.4.1.jar:na]
at 
org.elasticsearch.action.index.TransportIndexAction.innerExecute(TransportIndexAction.java:134)
 
~[elasticsearch-1.4.1.jar:na]
at 
org.elasticsearch.action.index.TransportIndexAction.access$000(TransportIndexAction.java:60)
 
~[elasticsearch-1.4.1.jar:na]
at 
org.elasticsearch.action.index.TransportIndexAction$1.onResponse(TransportIndexAction.java:94)
 
~[elasticsearch-1.4.1.jar:na]
at 
org.elasticsearch.action.index.TransportIndexAction$1.onResponse(TransportIndexAction.java:91)
 
~[elasticsearch-1.4.1.jar:na]
at 
org.elasticsearch.action.support.TransportAction$ThreadedActionListener$1.run(TransportAction.java:113)
 
~[elasticsearch-1.4.1.jar:na]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_40]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
~[na:1.7.0_40]
at java.lang.Thread.run(Thread.java:724) ~[na:1.7.0_40]

Has anyone run into this issue before? Can you help me figure out what 
might be the problem?

Thanks in advance,
Damián.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/5bcd5922-7991-4897-8c70-ed780dcff7b6%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Dont want to match on stop words

2015-04-28 Thread jim ross
Got an index with about a thousand movie names. If I search for a movie 
that doesn't yet exist in the index, say "The Avengers: Age of Ultron", it 
will come back with matches for things like "Sponge Bob: The Movie", 
presumably because it matches on the word "the" even though it is 
completely irrelevant in this case and I do NOT want to show these as 
results; it looks dumb! I don't want to completely eliminate stop words 
either, since a movie name like "Her" should still be valid and find a 
match. I tried using cutoff_frequency = 0.001 but it doesn't make any 
difference here. 
My query is like:
   "query": {
      "filtered": {
         "query": {
            "match": {
               "movie_name": {
                  "query": "believe",
                  "cutoff_frequency": 0.01
               }
            }
         }
      }
   }
Any ideas on the best strategy here? Is there a way to specify that a minimum 
percentage of words must match? 
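One thing that might work here (a sketch, reusing the same movie_name field) is minimum_should_match with a percentage, so that a document matching only a stop word like "the" out of a five-term query falls below the threshold, while a one-word title like "Her" still matches 100% of its single term:

```json
{
  "query": {
    "match": {
      "movie_name": {
        "query": "The Avengers: Age of Ultron",
        "minimum_should_match": "75%"
      }
    }
  }
}
```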

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/7dc1fd6c-9e97-49f4-aa7f-2e328649804f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: simple query string with flags returns no results

2015-04-28 Thread Roger de Cordova Farias
Are you actually using a comma after the "firstname^1.3"? It is invalid
JSON in both cases
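For comparison, a fields array without the trailing comma would be valid (note the mapping in the first request also uses first_name/last_name, while the query boosts firstname/lastname, so the field names may need aligning too):

```json
"fields": [
  "last_name^6.5",
  "first_name^1.3"
]
```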

2015-04-28 14:15 GMT-03:00 Daniel Nill :

> curl -XPUT "http://0.0.0.0:9200/users"; -d '{
>   "first_name": "daniel",
>   "last_name": "nill"
> }'
>
>
> curl -XGET 'http://0.0.0.0:9200/users/_search"; -d '{
>   "query": {
> "bool": {
>   "must": {
> "simple_query_string": {
>
>   "query":"daniel nill",
>   "fields":[
> "lastname^6.5",
> "firstname^1.3",
>   ],
>
>   "default_operator":"and",
>   "flags":"AND|OR|NOT|PHRASE|PRECEDENCE"
> }
>   }
> }
>   }
> }'
>
> this returns no results
>
> However,
>
> curl -XGET 'http://0.0.0.0:9200/users/_search"; -d '{
>   "query": {
> "bool": {
>   "must": {
> "simple_query_string": {
>
>   "query":"daniel nill",
>   "fields":[
> "lastname^6.5",
> "firstname^1.3",
>   ],
>
>   "default_operator":"and"
> }
>   }
> }
>   }
> }'
>
> This returns results.
>
> Any idea what I'm missing?
>
> This is on 1.5.1
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/83d32c16-80b7-4428-904b-4d5bc9055be0%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAJp2533kQ4uQR%2B9fGhwKqDi2P_cM-R6gNDcCheG098E2X_DoiQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: inner_hits and highlighting

2015-04-28 Thread Itamar Syn-Hershko
I think I've heard the ES team discourage the extensive use of this
aggregation type, mainly because it is highly expensive. Adding
highlighting support to it will more than double its cost, and I'd
personally vote against it.

--

Itamar Syn-Hershko
http://code972.com | @synhershko 
Freelance Developer & Consultant
Lucene.NET committer and PMC member

On Tue, Apr 28, 2015 at 8:17 PM, Nikolas Everett  wrote:

> If its not in the issues its unlikely that its planned. If it isn't
> planned I think filing an issue is a good thing - just be super clear what
> you want to do with examples in curl/gist form. If it is planned maybe add
> your proposed usage to the issue.
>
> Nik
>
> On Tue, Apr 28, 2015 at 11:26 AM, Ian Battersby 
> wrote:
>
>> Been playing with the new *experimental* inner_hits functionality
>> released in 1.5.0, mainly with child/parent related documents. It seems to
>> work really well but notice that highlighting doesn't seem supported on
>> content/fields within inner_hits; a quick scan of the code-base seems to
>> confirm this. Anyone know if this is already under consideration for a
>> future release?
>>
>> Thanks, Ian.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/6512722f-caa0-4f48-baf0-c255d8685cb0%40googlegroups.com
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAPmjWd2SdkCbZYdrjJE6PJ7TnF7Kce1ke0ZyuVpkVmVpgAW%3DUQ%40mail.gmail.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAHTr4ZtRmigxAk8O-ZKp-cfQ9W5GOQ05Tk58knjcObLtUDLi_A%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: inner_hits and highlighting

2015-04-28 Thread Nikolas Everett
If it's not in the issues it's unlikely that it's planned. If it isn't planned
I think filing an issue is a good thing - just be super clear what you want
to do with examples in curl/gist form. If it is planned maybe add your
proposed usage to the issue.

Nik

On Tue, Apr 28, 2015 at 11:26 AM, Ian Battersby 
wrote:

> Been playing with the new *experimental* inner_hits functionality
> released in 1.5.0, mainly with child/parent related documents. It seems to
> work really well but notice that highlighting doesn't seem supported on
> content/fields within inner_hits; a quick scan of the code-base seems to
> confirm this. Anyone know if this is already under consideration for a
> future release?
>
> Thanks, Ian.
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/6512722f-caa0-4f48-baf0-c255d8685cb0%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAPmjWd2SdkCbZYdrjJE6PJ7TnF7Kce1ke0ZyuVpkVmVpgAW%3DUQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


simple query string with flags returns no results

2015-04-28 Thread Daniel Nill
curl -XPUT "http://0.0.0.0:9200/users"; -d '{
  "first_name": "daniel",
  "last_name": "nill"
}'


curl -XGET 'http://0.0.0.0:9200/users/_search"; -d '{
  "query": {
"bool": {
  "must": {
"simple_query_string": {

  "query":"daniel nill",
  "fields":[
"lastname^6.5",
"firstname^1.3",
  ],

  "default_operator":"and",
  "flags":"AND|OR|NOT|PHRASE|PRECEDENCE"
}
  }
}
  }
}'

this returns no results

However,

curl -XGET 'http://0.0.0.0:9200/users/_search"; -d '{
  "query": {
"bool": {
  "must": {
"simple_query_string": {

  "query":"daniel nill",
  "fields":[
"lastname^6.5",
"firstname^1.3",
  ],

  "default_operator":"and"
}
  }
}
  }
}'

This returns results.

Any idea what I'm missing?

This is on 1.5.1

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/83d32c16-80b7-4428-904b-4d5bc9055be0%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: howto: food for dogs ==> dogfood

2015-04-28 Thread Maarten Roosendaal
Hi,

Thanks for the quick response.

Synonyms are not a scalable way of doing things, since our index is 
constantly changing and new products with new content are added or removed. 
Besides, you'll never cover everything users type. It's a good alternative, 
but I'm exploring alternatives with the mindset that you probably can't 
cover everything.

Some other examples
"food for horses"
"food for cats"
"cable for printer"
but also
"tears for fears" (we have loads of different producttypes from books and 
cd, to tv's to petfood)

This means we just can't use stopwords out of the box. In terms of what's 
in the index, "food", "cats" and "horse" are present. I use stemming 
to also get "cat" and "horses", although I have to really get into testing 
that.


Op dinsdag 28 april 2015 16:38:53 UTC+2 schreef Charlie Hull:
>
> On 28/04/2015 15:33, Maarten Roosendaal wrote: 
> > Hi, 
> > 
> > We have users typing stuff like "food for dogs" and we've indexed the 
> > data with "dogfood". What is the best strategy to get a match with 
> > elasticsearch's filters and or analyzers? 
>
> Very much depends on the relation between the terms entered and the 
> terms in your index. If they're simply synonyms, ES has facilities for 
> this 
> (
> http://www.elastic.co/guide/en/elasticsearch/guide/master/using-synonyms.html),
>  
>
> if not you need to look at how to add the missing terms to your index 
> (term expansion at index time, for example you could use WordNet or a 
> similar public resource to add related terms) or at query time. 
>
> Can you give a few more examples? 
>
> Charlie 
>
> > 
> > Thanks, 
> > Maarten 
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> > Groups "elasticsearch" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> > an email to elasticsearc...@googlegroups.com  
> > . 
> > To view this discussion on the web visit 
> > 
> https://groups.google.com/d/msgid/elasticsearch/c35ceba0-f5af-47f2-821f-384e4b3272bf%40googlegroups.com
>  
> > <
> https://groups.google.com/d/msgid/elasticsearch/c35ceba0-f5af-47f2-821f-384e4b3272bf%40googlegroups.com?utm_medium=email&utm_source=footer>.
>  
>
> > For more options, visit https://groups.google.com/d/optout. 
>
>
> -- 
> Charlie Hull 
> Flax - Open Source Enterprise Search 
>
> tel/fax: +44 (0)8700 118334 
> mobile:  +44 (0)7767 825828 
> web: www.flax.co.uk 
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/202f13ce-b9a8-466f-aa40-bb302d70b91d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Looking for consultant to create mapping and queries

2015-04-28 Thread Zelfapp
I've grabbed the book, and may very well contact you at some point for 
consulting if the book doesn't help to clear up the issues we're having 
with relevancy. Thank you for the info. 

On Monday, April 27, 2015 at 7:21:51 PM UTC-7, Doug Turnbull wrote:
>
> Hey Zelfapp(?) Nate(?),
>
> You are facing a challenge that many search developers have come across. 
> Sounds like a search relevancy problem. The search engine isn't Google out 
> of the box with all its psychic intuitiveness of what users want. It's a 
> tool to create a user experience for users for your content. It requires 
> very specific tuning and analysis.
>
> Anyway, my colleague John Berryman and I are writing a book 
>  on this subject that focuses on making 
> search results for your search app more intuitive. You might find that 
> useful: http://manning.com/turnbull
>
> Yes I'm a consultant too -- feel free to ping me directly over email if 
> you want help. My company's site:
> http://opensourceconnections.com
>
> Other consulting firms that come to mind include:
> Sematext http://sematext.com
> Search Technologies http://www.searchtechnologies.com/
>
> Cheers
> -Doug
>
> On Mon, Apr 27, 2015 at 1:58 PM, Zelfapp > 
> wrote:
>
>> I'm looking for a consultant to coach us on how to map our data and what 
>> queries we should use for our particular data. Our data set is very simple 
>> and consists of only 900 documents at this time. However, as we continue to 
>> tweak our map and analyzers and queries the search results we're getting 
>> are not what we want. There seems to be a million ways to skin a cat in ES, 
>> but clearly we're doing it wrong or partially wrong. Please reply if you're 
>> an ES guru and have some time today or tomorrow to work on this.
>>
>> If this is the wrong forum to inquire about paid consultants, my 
>> apologies, and please direct me to where I can find some ES gurus to work 
>> with. Thanks.
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/4f59043b-b1d4-4d6c-97ff-1084731cc814%40googlegroups.com
>>  
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> -- 
> *Doug Turnbull **| *Search Relevance Consultant | OpenSource Connections, 
> LLC | 240.476.9983 | http://www.opensourceconnections.com 
> Author: Taming Search  from Manning 
> Publications 
> This e-mail and all contents, including attachments, is considered to be 
> Company Confidential unless explicitly stated otherwise, regardless 
> of whether attachments are marked as such.
>  

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/b05c52e6-bf72-4433-869c-25d5838ed56b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Cluster with Different Node Sizes

2015-04-28 Thread Nikolas Everett
On Tue, Apr 28, 2015 at 12:43 PM, Ji ZHANG  wrote:

Hi,
>
> I'm deploying ElasticSearch on a cluster with different node sizes, some
> have 32GB memory, and some have 16GB. I hope more shards will be allocated
> on nodes with bigger memory.
>
> I googled a bit, there're some settings that can exclude some indices from
> some nodes. But it's not very convenient. So I'm wondering whether there's
> a 'weight' setting for individual node, or ES has already been allocating
> shards based on node memory size?
>
> Thanks.
>
>
Nope. I asked for it a few years ago but it's never been a high enough
priority. We don't have weights on the indexes either.

Your best bet is to pin any heavier shards to machines with more RAM via
a tag on those machines. The shards will still be able to move between
those nodes just fine.

Nik
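A sketch of that tagging approach (the attribute name "ram" is just an example): start the 32GB nodes with node.ram: high in elasticsearch.yml and the 16GB ones with node.ram: low, then pin a heavy index to the big nodes by PUTting an index-level allocation filter to /heavy-index/_settings:

```json
{
  "index.routing.allocation.include.ram": "high"
}
```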

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAPmjWd33Vmp_jvQVrzGNeqcqYJ%2BWaFeQEy%2Br5RVCxsRECZMwqQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Cluster with Different Node Sizes

2015-04-28 Thread Ji ZHANG
Hi,

I'm deploying ElasticSearch on a cluster with different node sizes, some 
have 32GB memory, and some have 16GB. I hope more shards will be allocated 
on nodes with bigger memory.

I googled a bit, there're some settings that can exclude some indices from 
some nodes. But it's not very convenient. So I'm wondering whether there's 
a 'weight' setting for individual node, or ES has already been allocating 
shards based on node memory size?

Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/3bdccd1c-b18d-41b2-bc70-78a6d9aed51d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: How to process "Lat" & "Long" fields using default Logstash config and mapping to use in Kibana 4 tile map

2015-04-28 Thread Rodger Moore
Hi David,

Thanks again for your answer. For some reason I am doing something wrong 
and it's driving me nuts. I've tried your method but the tile map is showing 
me no results whatsoever. How did you define your template in Elasticsearch 
for this "location" field? 

Thanks,

Rodger
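In case it helps anyone else, a minimal sketch of what such a template could contain (the index pattern and type name here follow the sncf/gare example quoted below; only the geo_point mapping for "location" is essential):

```json
{
  "template": "sncf*",
  "mappings": {
    "gare": {
      "properties": {
        "location": { "type": "geo_point" }
      }
    }
  }
}
```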

Op zondag 26 april 2015 18:34:01 UTC+2 schreef David Pilato:
>
> It's not an issue IMO but just a default configuration.
>
> FYI here is a sample config file I just used to parse some CSV data:
>
> input {
>   stdin {}
> }
>
>
> filter {
>   csv {
> separator => ";"
> columns => [
>   "id","name","slug","uic","uic8_sncf","longitude","latitude",
> "parent_station_id","is_city","country",
>   "is_main_station","time_zone","is_suggestable","sncf_id",
> "sncf_is_enabled","idtgv_id","idtgv_is_enabled",
>   "db_id","db_is_enabled","idbus_id","idbus_is_enabled","ouigo_id",
> "ouigo_is_enabled",
>   "trenitalia_id","trenitalia_is_enabled","ntv_id","ntv_is_enabled",
> "info_fr",
>   "info_en","info_de","info_it","same_as"
> ]
>   }
>
>
>   if [id] == "id" {
> drop { }
>   } else {
> mutate {
>   convert => { "longitude" => "float" }
>   convert => { "latitude" => "float" }
> }
>
>
> mutate {
>   rename => {
> "longitude" => "[location][lon]" 
> "latitude" => "[location][lat]" 
>   }
> }
>
>
> mutate {
>   remove_field => [ "message", "host", "@timestamp", "@version" ]
> }
>   }
> }
>
>
> output {
> #  stdout { codec => rubydebug }
>   stdout { codec => dots }
>   elasticsearch {
> protocol => "http"
> host => "localhost"
> index => "sncf"
> index_type => "gare"
> template => "sncf_template.json"
> template_name => "sncf"
> document_id => "%{id}"
>   }
> }
>
>
> Hope this helps
>
> Le dimanche 26 avril 2015 13:50:54 UTC+2, Rodger Moore a écrit :
>>
>> Hi there again!
>>
>> This problem is caused by, what I believe, a bug in Logstash or 
>> Elasticsearch. I used a very small test csv file with only 1 or 2 records 
>> per date. The default Logstash template creates 1 index per date. For some 
>> reason the creation of indices goes wrong when it comes to field types and 
>> very few records per index. After I changed the index creation template in 
>> the output config to:
>>
>> output {
>>
>>   elasticsearch {
>> protocol => "http"
>> index => "logstash-%{+YYYY.MM}"
>> }
>> }
>>
>> thus creating only 1 index per month the problem with wrong field types 
>> was gone. If the folks from Elastic want to reproduce this, I enclosed the 
>> config files and test file. Changed status to solved.
>>
>> Cheers,
>>
>> Rodger.
>>
>> Op zaterdag 25 april 2015 22:13:45 UTC+2 schreef Rodger Moore:
>>>
>>> Hi there!
>>>
>>> My question is fairly simple but I'm having trouble finding a solution. 
>>> I have a csv file containing Lat and Lon coordinates in separate fields 
>>> named "Latitude" and "Longitude". Most of the info I found on the net is 
>>> focussed on GeoIP (which is great functionality btw) but besides some 
>>> posts  
>>> in 
>>> Google Groups I failed finding a good tutorial for this use-case.
>>>
>>> What is the simplest way of getting separate Long / Lat fields into a 
>>> geo_point and putting these coordinates on a Tile Map in Kibana 4 using the 
>>> default Logstash (mapping) - ES - Kibana settings? I am using logstash 
>>> 1.4.2 | Elasticsearch 1.5.0. and Kibana 4.0.1. 
>>>
>>> Summary: --> csv containing Long / Lat in separate fields --> Logstash 
>>> --> ES --> Kibana4?
>>>
>>> Any help very much appreciated!
>>>
>>> Cheers,
>>>
>>> Rodger
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/61c9e345-c997-43ac-ab58-7c753fecf0f0%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


inner_hits and highlighting

2015-04-28 Thread Ian Battersby
Been playing with the new *experimental* inner_hits functionality released 
in 1.5.0, mainly with child/parent related documents. It seems to work 
really well but notice that highlighting doesn't seem supported on 
content/fields within inner_hits; a quick scan of the code-base seems to 
confirm this. Anyone know if this is already under consideration for a 
future release?

Thanks, Ian.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/6512722f-caa0-4f48-baf0-c255d8685cb0%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Need suggestion: How to boost specific documents for a given search term

2015-04-28 Thread Xudong You
Hi ES experts, 

I need your help on index design for a real scenario. It might be a long 
question, let me try explain it as concise as possible.

We are building a search engine to provide site search for our customers, 
the document in index could be something like this:

{ "Path":"http://www.foo.com/doc/abc/1", "Title":"Title 1", 
"Description":"The description of doc 1", ... }
{ "Path":"http://www.foo.com/doc/abc/2", "Title":"Title 2", 
"Description":"The description of doc 2", ... }
{ "Path":"http://www.foo.com/doc/abc/3", "Title":"Title 3", 
"Description":"The description of doc 3", ... }
...

For each query, the returned hit documents are by default sorted by 
relevance, but our customer also wants to *boost some specific documents 
for some keywords*.
They will give us a boosting configuration XML like the following:

<boosting>
  <keyword name="keyword1">
    <url>http://www.foo.com/doc/abc/1</url>
  </keyword>
  <keyword name="keyword2">
    <url>http://www.foo.com/doc/abc/2</url>
    <url>http://www.foo.com/doc/abc/1</url>
  </keyword>
  <keyword name="keyword3">
    <url>http://www.foo.com/doc/abc/3</url>
    <url>http://www.foo.com/doc/abc/2</url>
    <url>http://www.foo.com/doc/abc/1</url>
  </keyword>
</boosting>

That means: if a user searches "keyword1", the top 1 hit document should be the 
document whose Path field value is "http://www.foo.com/doc/abc/1", 
regardless of the relevance score of that document. Similarly, if searching 
"keyword3", the top 3 hit documents should be 
"http://www.foo.com/doc/abc/3", "http://www.foo.com/doc/abc/2" and 
"http://www.foo.com/doc/abc/1" respectively.

To satisfy this special requirement, my design is: first, invert the 
original boosting XML to the following format:

<url path="http://www.foo.com/doc/abc/1">
  <keyword name="keyword1" rank="1"/>
  <keyword name="keyword2" rank="9900"/>
  <keyword name="keyword3" rank="9800"/>
</url>

<url path="http://www.foo.com/doc/abc/2">
  <keyword name="keyword2" rank="1"/>
  <keyword name="keyword3" rank="9900"/>
</url>

<url path="http://www.foo.com/doc/abc/3">
  <keyword name="keyword3" rank="1"/>
</url>

Then add a nested field "Boost", which contains a list of keyword/rank 
objects, to each document as in the following example:
{
  "Boost": [ 
    { "keyword": "keyword1", "rank": 1 },
    { "keyword": "keyword2", "rank": 9900 },
    { "keyword": "keyword3", "rank": 9800 }
  ],
  "Path": "http://www.foo.com/doc/abc/1",
  "Title": "Title 1",
  "Description": "The description of doc 1",
  ...
}

{
  "Boost": [ 
    { "keyword": "keyword2", "rank": 1 },
    { "keyword": "keyword3", "rank": 9900 }
  ],
  "Path": "http://www.foo.com/doc/abc/2",
  "Title": "Title 2",
  "Description": "The description of doc 2",
  ...
}

{
  "Boost": [ 
    { "keyword": "keyword3", "rank": 1 }
  ],
  "Path": "http://www.foo.com/doc/abc/3",
  "Title": "Title 3",
  "Description": "The description of doc 3",
  ...
}

Then at query time, use a nested query to get the rank value of each matched 
document for a given search keyword, and use a score script to adjust the 
relevance score by the rank value. Since the rank values from the boosting XML 
are much larger than normal relevance scores (generally less than 5), the 
adjusted scores of the documents configured in the boosting XML for a given 
keyword should be the top scores.
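The query-time side might look roughly like this (a sketch only; it assumes the Boost field above is mapped as nested, and it simply adds the stored rank to the relevance score via a should clause, so unboosted documents are unaffected):

```json
{
  "query": {
    "bool": {
      "must": { "match": { "Description": "keyword1" } },
      "should": {
        "nested": {
          "path": "Boost",
          "score_mode": "max",
          "query": {
            "function_score": {
              "query": { "term": { "Boost.keyword": "keyword1" } },
              "script_score": { "script": "doc['Boost.rank'].value" }
            }
          }
        }
      }
    }
  }
}
```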

Does this design work well? Any suggestions for a better design?

Thanks in advance!

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/3eb89d0f-b9a4-4d84-bc04-e0c764b9e314%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: howto: food for dogs ==> dogfood

2015-04-28 Thread Charlie Hull

On 28/04/2015 15:33, Maarten Roosendaal wrote:

Hi,

We have users typing stuff like "food for dogs" and we've indexed the
data with "dogfood". What is the best strategy to get a match with
elasticsearch's filters and or analyzers?


Very much depends on the relation between the terms entered and the 
terms in your index. If they're simply synonyms, ES has facilities for 
this 
(http://www.elastic.co/guide/en/elasticsearch/guide/master/using-synonyms.html), 
if not you need to look at how to add the missing terms to your index 
(term expansion at index time, for example you could use WordNet or a 
similar public resource to add related terms) or at query time.
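For the synonym route, the settings are compact; a sketch with a single 
hand-written rule (the analyzer and filter names are arbitrary):

```json
{
  "settings": {
    "analysis": {
      "filter": {
        "product_synonyms": {
          "type": "synonym",
          "synonyms": [ "food for dogs, dogfood" ]
        }
      },
      "analyzer": {
        "product_analyzer": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "product_synonyms" ]
        }
      }
    }
  }
}
```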


Can you give a few more examples?

Charlie



Thanks,
Maarten

--
You received this message because you are subscribed to the Google
Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to elasticsearch+unsubscr...@googlegroups.com
.
To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/c35ceba0-f5af-47f2-821f-384e4b3272bf%40googlegroups.com
.
For more options, visit https://groups.google.com/d/optout.



--
Charlie Hull
Flax - Open Source Enterprise Search

tel/fax: +44 (0)8700 118334
mobile:  +44 (0)7767 825828
web: www.flax.co.uk



Re: howto: food for dogs ==> dogfood

2015-04-28 Thread Itamar Syn-Hershko
Synonyms

--

Itamar Syn-Hershko
http://code972.com | @synhershko 
Freelance Developer & Consultant
Lucene.NET committer and PMC member

On Tue, Apr 28, 2015 at 5:33 PM, Maarten Roosendaal  wrote:

> Hi,
>
> We have users typing stuff like "food for dogs" and we've indexed the data
> with "dogfood". What is the best strategy to get a match with
> elasticsearch's filters and or analyzers?
>
> Thanks,
> Maarten
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/c35ceba0-f5af-47f2-821f-384e4b3272bf%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>



howto: food for dogs ==> dogfood

2015-04-28 Thread Maarten Roosendaal
Hi,

We have users typing stuff like "food for dogs" and we've indexed the data 
with "dogfood". What is the best strategy to get a match with 
Elasticsearch's filters and/or analyzers?

Thanks,
Maarten



incorrect data from elasticsearch query?

2015-04-28 Thread trekr5


Hello,

I'm attempting to extract the number of status 500 errors from an 
Elasticsearch server over a range of time, and I believe the returned count 
is incorrect (far too high), so the query may be wrong. (I'm using a search 
query from Logstash and dumping it straight into a Ruby script.)

#elastic.rb

require 'elasticsearch'

client = Elasticsearch::Client.new hosts: [{host: '10.10.10.10', port: 
9200}]

value = client.search index: '2015.04.28',
body: {
  "facets"=> {
"0"=> {
  "date_histogram"=> {
"field"=> "@timestamp",
"interval"=> "15m"
  },
  "global"=> true,
  "facet_filter"=> {
"fquery"=> {
  "query"=> {
"filtered"=> {
  "query"=> {
"query_string"=> {
  "query"=> "type:iis6 AND status:500"
}
  },
  "filter"=> {
"bool"=> {
  "must"=> [
{
  "range"=> {
"@timestamp"=> {
  "from"=> "#{last_time}",
  "to"=>  "#{current_time}"
}
  }
}
  ]
}
  }
   }
  }
}
  }
}
  },
  "size"=> 0
}

values = value["hits"]["total"] # where current_time is current time in 
epoch and last_time is current_time-7200(2 hours)

I'm getting a very high value (over 340,000) when I should be getting a 
value of say 272 errors over a 2 hour period.

Can you please tell me what I'm doing wrong?
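One likely cause, sketched below: a facet_filter only constrains the facet buckets, while value["hits"]["total"] reflects the top-level query, which here is an implicit match_all over the whole index (hence the inflated number). Applying the same constraints at the top level would make hits.total meaningful. As a Python dict, with field names taken from the original post:

```python
# Sketch: move the query/filter out of the facet and into the main body
# so hits.total counts only the matching documents.
def build_count_body(last_time, current_time):
    return {
        "query": {
            "filtered": {
                "query": {
                    "query_string": {"query": "type:iis6 AND status:500"}
                },
                "filter": {
                    "range": {
                        "@timestamp": {"from": last_time, "to": current_time}
                    }
                },
            }
        },
        "size": 0,  # only the total is needed, not the hits themselves
    }

body = build_count_body(1430200000, 1430207200)
assert body["size"] == 0
```

This is a sketch under the assumption that only the error count is wanted; the date_histogram facet could be kept alongside it for the per-15m breakdown.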



Re: Enable/disable Shield in Node Client.

2015-04-28 Thread Tom
Thank you, Jay. It actually worked for me; I should have checked the 
solution from the SO thread despite the author's negative claims. :)

On Tuesday, 28 April 2015 15:29:14 UTC+2, Jay Modi wrote:
>
> Hi Tom,
>
> For the nodes that you don't want to use Shield with, you should be able 
> to add the following to your node creation line:
>
> .settings(ImmutableSettings.builder().put("shield.enabled", false))
>
> -Jay
>
> On Tuesday, April 28, 2015 at 7:27:51 AM UTC-4, Tom wrote:
>>
>> Hello all,
>>
>> I am coding a Java application that will connect to various Elasticseach 
>> instances, some of them are secured by Shield, some are unsecured. I need 
>> have a choice to connect via NodeClient (only not secured instances) or 
>> TransportClient (both secured and not secured instances). This requires me 
>> to have shield as Maven dependency in my project, but in a situation when I 
>> connect via NodeClient to a not secured instance, Shield comes in a way and 
>> complains about the lack of license plugin. I would like to 
>> disable/bypass/not use Shield plugin in such a scenario - is there a 
>> property I can set?
>>
>> Example code:
>>
>> final Node node = 
>> nodeBuilder().clusterName(clusterName).client(true).node();
>> client = node.client();
>>
>> I can connect via this code to a not secured instance, but in a moment I 
>> add Shield as a Maven dependency, it starts to complain about the license.
>>
>> I found a similar thread on StackOverflow, but there is no response 
>> there: 
>> http://stackoverflow.com/questions/29744120/disabling-shield-for-intergration-tests
>>
>> Thank you in advance for advice.
>>
>> Tom
>>
>>
>>



Re: Elasticsearch puppet module's problem

2015-04-28 Thread Sergey Zemlyanoy
And guys,

how can I increase the heap size using this module? I don't see how to 
control the ES_HEAP_SIZE parameter in the init script other than 
distributing the full init script via Puppet.



Re: Copying fields to a geopoint type ?

2015-04-28 Thread Dimitris Ganosis
I can use it in Kibana 3 but not in Kibana 4. Any idea why?

On Wednesday, 1 April 2015 20:31:05 UTC+1, Pascal VINCENT wrote:
>
> I finally come up with :
>
>  if [latitude] and [longitude] {
> mutate {
> add_field => [ "[location]", "%{longitude}" ]
> add_field => [ "[location]", "%{latitude}" ]
> }
> mutate {
> convert => [ "[location]", "float" ]
> }
>   } 
>
>
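One common reason a location field works in Kibana 3 but not in Kibana 4 is the mapping: Kibana 4's tile map expects the field to be mapped as geo_point, not as a float array. A sketch of such a mapping (the field name follows the Logstash config above; everything else is assumed):

```python
# Sketch: "location" mapped as geo_point so Kibana 4's map can use it.
mapping = {
    "properties": {
        "location": {"type": "geo_point"}
    }
}
assert mapping["properties"]["location"]["type"] == "geo_point"
```

Changing a field's type requires reindexing (or a new index created from a template), since an existing mapping cannot be changed in place.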




Re: updating the mappings and settings of an existing index.

2015-04-28 Thread Marcus Teixeira
Note: I'm using Ruby on Rails.

Em terça-feira, 28 de abril de 2015 10:10:49 UTC-3, Marcus Teixeira 
escreveu:
>
> Hi... I have a application, I´m using elasticsearch, mongodb, radis to go 
> on heroku to run this.
>
> I made some changes on this files "searchable" and "elasticsearch", 
> because I wanna remove ngran filter. by what I am seeing I will have to 
> create a new index, but how I can do this?
>
>
> what are the steps?
>
> $ curl -X POST 
> https://x3ercej5:y5divyk63msqj...@oak-1405661.us-east-1.bonsai.io/_close
>
> $ heroku run rake environment elasticsearch:import:all DIR=app/models 
> --trace
>
> $ curl -X POST 
> https://x3ercej5:y5divyk63msqj...@oak-1405661.us-east-1.bonsai.io/_open
>
>
>
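For context: closing and reopening an index does not change its analyzers, so the usual sequence is to create a fresh index with the new settings and reimport into it. A sketch of the steps, with placeholder index names:

```python
# Sketch of a reindex-by-recreation flow; index names are placeholders.
steps = [
    # 1. create a new index carrying the changed analysis settings
    ("PUT", "/myapp_v2", {"settings": {"analysis": {}}}),
    # 2. reimport the data into it, e.g.:
    #    heroku run rake environment elasticsearch:import:all DIR=app/models
    # 3. switch the application (or an alias) over to the new index
    ("POST", "/_aliases", {"actions": [
        {"remove": {"index": "myapp_v1", "alias": "myapp"}},
        {"add": {"index": "myapp_v2", "alias": "myapp"}},
    ]}),
]
assert steps[0][0] == "PUT" and steps[-1][1] == "/_aliases"
```

Using an alias from the start makes the final switch atomic; otherwise the application config has to be updated to point at the new index name.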



Re: Enable/disable Shield in Node Client.

2015-04-28 Thread Jay Modi
Hi Tom,

For the nodes that you don't want to use Shield with, you should be able to 
add the following to your node creation line:

.settings(ImmutableSettings.builder().put("shield.enabled", false))

-Jay

On Tuesday, April 28, 2015 at 7:27:51 AM UTC-4, Tom wrote:
>
> Hello all,
>
> I am coding a Java application that will connect to various Elasticseach 
> instances, some of them are secured by Shield, some are unsecured. I need 
> have a choice to connect via NodeClient (only not secured instances) or 
> TransportClient (both secured and not secured instances). This requires me 
> to have shield as Maven dependency in my project, but in a situation when I 
> connect via NodeClient to a not secured instance, Shield comes in a way and 
> complains about the lack of license plugin. I would like to 
> disable/bypass/not use Shield plugin in such a scenario - is there a 
> property I can set?
>
> Example code:
>
> final Node node = 
> nodeBuilder().clusterName(clusterName).client(true).node();
> client = node.client();
>
> I can connect via this code to a not secured instance, but in a moment I 
> add Shield as a Maven dependency, it starts to complain about the license.
>
> I found a similar thread on StackOverflow, but there is no response there: 
> http://stackoverflow.com/questions/29744120/disabling-shield-for-intergration-tests
>
> Thank you in advance for advice.
>
> Tom
>
>
>



Re: marvel agent issue with elasticsearch 1.4.5

2015-04-28 Thread Gurvinder Singh
It turns out the problem was caused by a mix of node versions during our
rolling upgrade: a cluster with nodes on both 1.3.6 and 1.4.5 will hit the
problem I mentioned below. Once all nodes were upgraded to 1.4.5, the Marvel
issue went away. Useful to know while upgrading :)

- Gurvinder
On 04/28/2015 02:48 PM, Gurvinder Singh wrote:
> Today we upgraded the cluster to recent Elasticsearch 1.4 branch, after
> upgrade we are seeing this message in the master logs.
> 
> [2015-04-28 14:40:47,942][ERROR][marvel.agent ] [els-master]
> Exporter [es_exporter] has thrown an exception:
> java.lang.NullPointerException
> at
> org.elasticsearch.index.cache.query.QueryCacheStats.add(QueryCacheStats.java:52)
> at
> org.elasticsearch.action.admin.indices.stats.CommonStats.add(CommonStats.java:389)
> at
> org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse.getTotal(IndicesStatsResponse.java:117)
> at
> org.elasticsearch.marvel.agent.exporter.ESExporter.exportIndicesStats(ESExporter.java:198)
> at
> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.exportIndicesStats(AgentService.java:301)
> at
> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:260)
> at java.lang.Thread.run(Thread.java:745)
> 
> 
> I tried to install the latest version of marvel, but still the same
> issue. I wonder if someone else is facing it or we have something
> special going on in our setup.
> 
> - Gurvinder
> 



Re: marvel agent issue with elasticsearch 1.4.5

2015-04-28 Thread David Pilato
I would open an issue in Elasticsearch repo.

David

> Le 28 avr. 2015 à 14:48, Gurvinder Singh  a 
> écrit :
> 
> Today we upgraded the cluster to recent Elasticsearch 1.4 branch, after
> upgrade we are seeing this message in the master logs.
> 
> [2015-04-28 14:40:47,942][ERROR][marvel.agent ] [els-master]
> Exporter [es_exporter] has thrown an exception:
> java.lang.NullPointerException
>at
> org.elasticsearch.index.cache.query.QueryCacheStats.add(QueryCacheStats.java:52)
>at
> org.elasticsearch.action.admin.indices.stats.CommonStats.add(CommonStats.java:389)
>at
> org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse.getTotal(IndicesStatsResponse.java:117)
>at
> org.elasticsearch.marvel.agent.exporter.ESExporter.exportIndicesStats(ESExporter.java:198)
>at
> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.exportIndicesStats(AgentService.java:301)
>at
> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:260)
>at java.lang.Thread.run(Thread.java:745)
> 
> 
> I tried to install the latest version of marvel, but still the same
> issue. I wonder if someone else is facing it or we have something
> special going on in our setup.
> 
> - Gurvinder
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/elasticsearch/553F81AC.9030201%40gmail.com.
> For more options, visit https://groups.google.com/d/optout.



updating the mappings and settings of an existing index.

2015-04-28 Thread Marcus Teixeira
Hi... I have an application using Elasticsearch, MongoDB, and Redis, and I 
run it on Heroku.

I made some changes to the "searchable" and "elasticsearch" files because I 
want to remove an ngram filter. From what I can see I will have to create a 
new index, but how can I do this?


what are the steps?

$ curl -X POST 
https://x3ercej5:y5divyk63msqj...@oak-1405661.us-east-1.bonsai.io/_close

$ heroku run rake environment elasticsearch:import:all DIR=app/models 
--trace

$ curl -X POST 
https://x3ercej5:y5divyk63msqj...@oak-1405661.us-east-1.bonsai.io/_open




searchable.rb
Description: Binary data


elasticsearch.yml
Description: Binary data


Re: What is the correct _primary_first syntax? What is the relevant debug logger ?

2015-04-28 Thread Itamar Syn-Hershko
?preference=_primary_first

see
http://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-preference.html

No verbose mode at the moment
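For completeness, preference is an ordinary query-string parameter, so it composes with other parameters; a small Python sketch (index name assumed):

```python
from urllib.parse import urlencode

# preference rides on the query string, not in the request body
params = {"preference": "_primary_first"}
url = "http://localhost:9200/myindex/_search/template?" + urlencode(params)
assert url.endswith("preference=_primary_first")
```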

--

Itamar Syn-Hershko
http://code972.com | @synhershko 
Freelance Developer & Consultant
Lucene.NET committer and PMC member

On Mon, Apr 27, 2015 at 8:53 AM, Itai Frenkel  wrote:

> Hello,
>
> What is the correct syntax of using _primary_first in search and search
> template queries?
>
> GET myindex/_search/template?preference=_primary_first
>
> or
>
> GET myindex/_search/template?routing=_primary_first
>
> Is there any verbose mode that can log the list of shards that were
> actually accessed?
>
> thanks,
> Itai
>
>  --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/6f6d44a0-f689-4168-85cf-574610f73155%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>



Re: Script not executing _update_by_query

2015-04-28 Thread Nikolas Everett
On Tue, Apr 28, 2015 at 8:49 AM, Zaid Amir  wrote:

NVM, found the culprit.
>>
>
> Was missing a '.':
>
> def str = ctx_source.path; -> def str = ctx._source.path;
>
> Weird how there was nothing in the logs about this
>
>
+1. If you can make it reproducible in a readable gist, I'd file it as an
issue. Groovy should be complaining about ctx_source not being in scope.

Nik



Re: Script not executing _update_by_query

2015-04-28 Thread Zaid Amir

>
> NVM, found the culprit.
>

Was missing a '.':

def str = ctx_source.path; -> def str = ctx._source.path;

Weird how there was nothing in the logs about this





marvel agent issue with elasticsearch 1.4.5

2015-04-28 Thread Gurvinder Singh
Today we upgraded the cluster to recent Elasticsearch 1.4 branch, after
upgrade we are seeing this message in the master logs.

[2015-04-28 14:40:47,942][ERROR][marvel.agent ] [els-master]
Exporter [es_exporter] has thrown an exception:
java.lang.NullPointerException
at
org.elasticsearch.index.cache.query.QueryCacheStats.add(QueryCacheStats.java:52)
at
org.elasticsearch.action.admin.indices.stats.CommonStats.add(CommonStats.java:389)
at
org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse.getTotal(IndicesStatsResponse.java:117)
at
org.elasticsearch.marvel.agent.exporter.ESExporter.exportIndicesStats(ESExporter.java:198)
at
org.elasticsearch.marvel.agent.AgentService$ExportingWorker.exportIndicesStats(AgentService.java:301)
at
org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:260)
at java.lang.Thread.run(Thread.java:745)


I tried to install the latest version of marvel, but still the same
issue. I wonder if someone else is facing it or we have something
special going on in our setup.

- Gurvinder



Script not executing _update_by_query

2015-04-28 Thread Zaid Amir
Hi,

I am trying to update several documents at the same time using the update 
by query plugin.

The problem seems to be with the script, as the query itself returns the 
correct results with no issues. ES also appears to execute the script 
without errors (there are no exceptions in the logs), yet nothing gets updated.

So here is the document I am indexing:

curl -XPUT 'localhost:9200/users/files/1' -d '
{
"path" : "path/to/file",
"size": 200
}'



Now here is my update-by-query request to change the path field from 
'path/to/file' to 'another/path/to/file':

curl -XPOST  'localhost:9200/users/files/_update_by_query' -d ' 
{
  "query": {
"bool": {
  "must": [
{
  "term": {
"path": "path/to/file"
  }
}]
}
  },
  "script": "def str = ctx_source.path;\ndef str2 = 
str.replaceAll(\"path/to/file\", 
\"another/path/to/file\");\nctx._source.path = str2;"
}'



And this is what I get:
{
  "ok":true,
  "took":516,
  "total":75,
  "updated":0,
  "indices":[
{
  "new_index":{}
}]
}

So the query matched 75 documents but did not update any.
Does anyone know how I can make this work?

**Here is the script in pretty form:

 def str = ctx_source.path;
 def str2 = str.replaceAll("path/to/file", "another/path/to/file");
 ctx._source.path = str2;
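As the follow-up in this thread notes, the script references ctx_source, which does not exist; the correct variable is ctx._source. A sketch of the corrected request body, expressed as a Python dict:

```python
# Sketch: the same update-by-query body with the script variable fixed
# (ctx_source -> ctx._source).
body = {
    "query": {"bool": {"must": [{"term": {"path": "path/to/file"}}]}},
    "script": (
        "def str = ctx._source.path;\n"
        "def str2 = str.replaceAll(\"path/to/file\", \"another/path/to/file\");\n"
        "ctx._source.path = str2;"
    ),
}
assert "ctx._source" in body["script"]
```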




field value is auto-resorted while being called from scriptFunction() ?

2015-04-28 Thread Yu Qian
 

Hi,

In my example, the value of my field "vectors" is an array. The mapping 
looks like this.

 "vectors": {

  "type": "float",

  "index": "not_analyzed"

}

For example, when I query a document by its id, part of the result might 
look like this:

"vectors": [3.2, 1.5, 2.2]

I want to use my own score function for a term query, implemented in a 
Groovy script.

However, in Groovy I get this:

  doc['vectors'].values[0] = 1.5, doc['vectors'].values[1] = 2.2, ...

How do I fix this to keep the original order, such that 
doc['vectors'].values[0] = 3.2?

I'm very desperate with this scripting now ... help needed !

Thanks.
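A likely explanation: doc['field'].values returns values in sorted order because doc values are stored sorted; reading the array from _source instead preserves the original order (at the cost of speed, and assuming _source is enabled). A sketch of a function_score script query using _source; the query part is a placeholder:

```python
# Sketch: access the array via _source to keep insertion order.
body = {
    "query": {
        "function_score": {
            "query": {"term": {"some_field": "some_term"}},  # placeholder
            "script_score": {
                # _source preserves order; doc values come back sorted
                "script": "_source['vectors'][0]"
            },
        }
    }
}
assert "_source" in body["query"]["function_score"]["script_score"]["script"]
```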




Re: Elasticsearch puppet module's problem

2015-04-28 Thread Sergey Zemlyanoy
Thanks for advice! It works

BTW, as far as I can see there is no purge handling for instances, right? So 
it's not enough to just remove the instance name from Hiera; I need to pass 
ensure => absent explicitly.
 
Thanks
Sergey



Re: Prefix "and" search

2015-04-28 Thread Kathy Cashel
This looks very promising. Thanks so much Deepak!

Kathy


On Monday, April 27, 2015 at 8:44:28 AM UTC-4, deepak.chauhan wrote:
>
> use slop with phrase_prefix
>
>
> On Mon, Apr 27, 2015 at 5:58 PM, Kathy Cashel  > wrote:
>
>> My client wants a search that returns both prefix and exact matches per 
>> token. Most (but not all) of the text being searched is human names. The 
>> idea is that "jane smith" and "smith, j" and "j smit" will all return the 
>> same document, so I'm using tokens.
>>
>> The trick is that it needs to be an "and" search: all tokens in the 
>> search string must be present in the results. And the prefix query does not 
>> seem to offer this option. Edgengram would be a great answer, but the 
>> client wants results returned alphabetically - using no relevance at all - 
>> and the long tail on ngram searches makes the alpha sort unfeasible.
>>
>> So for now I'm using a simple keyword tokenizer (plus some custom 
>> analyzers) to index, and then splitting the search string on spaces to 
>> build queries like the below. It seems very awkward / cumbersome / brittle.
>>
>> All suggestions appreciated.
>>
>> {
>> "query" : {
>> "bool" : {
>> "must" : [{
>> "bool" : {
>> "should" : [{
>> "match" : {
>> "text" : "j"
>> }
>> }, {
>> "prefix" : {
>> "text" : "j"
>> }
>> }
>> ]
>> }
>> }, {
>> "bool" : {
>> "should" : [{
>> "match" : {
>> "text" : "smith"
>> }
>> }, {
>> "prefix" : {
>> "text" : "smith"
>> }
>> }
>> ]
>> }
>> }
>> ]
>> }
>> }
>> }
>>
>>
>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/bf12f1dc-f8d5-4e05-92fa-ce5f36c07d1c%40googlegroups.com
>>  
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
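The phrase_prefix suggestion above can be sketched as a single query instead of one bool clause per token; slop tolerates reordered or intervening tokens. The field name "text" is taken from the original query; the slop and max_expansions values are illustrative:

```python
# Sketch: match_phrase_prefix with slop, replacing the per-token bool.
body = {
    "query": {
        "match_phrase_prefix": {
            "text": {
                "query": "j smith",
                "slop": 2,             # allow "smith, j" style reordering
                "max_expansions": 50,  # cap the cost of prefix expansion
            }
        }
    }
}
assert body["query"]["match_phrase_prefix"]["text"]["slop"] == 2
```

Note this still relies on relevance-free alphabetical sorting being applied separately, as the original post requires.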



Enable/disable Shield in Node Client.

2015-04-28 Thread Tom
Hello all,

I am coding a Java application that will connect to various Elasticsearch 
instances; some of them are secured by Shield, some are unsecured. I need 
to have a choice to connect via NodeClient (only unsecured instances) or 
TransportClient (both secured and unsecured instances). This requires me 
to have Shield as a Maven dependency in my project, but when I 
connect via NodeClient to an unsecured instance, Shield gets in the way and 
complains about the lack of the license plugin. I would like to 
disable/bypass/not use the Shield plugin in such a scenario - is there a 
property I can set?

Example code:

final Node node = 
nodeBuilder().clusterName(clusterName).client(true).node();
client = node.client();

I can connect via this code to an unsecured instance, but the moment I 
add Shield as a Maven dependency, it starts to complain about the license.

I found a similar thread on StackOverflow, but there is no response there: 
http://stackoverflow.com/questions/29744120/disabling-shield-for-intergration-tests

Thank you in advance for advice.

Tom




Re: upgrade java for elasticsearch node

2015-04-28 Thread Jason Wee
Hey David,

Did a few tests in a test cluster, and yes, it is just as you mentioned:
I stumbled on a TransportSerializationException during an index query.

org.elasticsearch.transport.TransportSerializationException: Failed to
deserialize exception response from stream\n\tat
org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:169)

It's a pity; I guess we have to upgrade the Java version of the
Elasticsearch clients as well.

On another note, upgrading Java on the Elasticsearch nodes requires a full
shutdown of all ES nodes, then upgrading Java and starting them one by one.
No such thing as a rolling upgrade.

Jason

On Wed, Apr 22, 2015 at 6:13 PM, Jason Wee  wrote:

> Thank Jörg, fully aware of Java 7 eol.
>
> Index will remain as is as in, after java upgraded to 7 for all nodes,
> client can query/index without any problem. If internally lucene index
> need to upgrade, so be it, everything just okay. That was what I mean.
>
> Well yea, backup is very important too, that's plan B.
>
> Jason
>
> On Wed, Apr 22, 2015 at 5:59 PM, joergpra...@gmail.com
>  wrote:
> > Please note, Java 7 has reached end of life, and will no longer receive
> > updates
> >
> > https://www.java.com/en/download/faq/java_7.xml
> >
> > I recommend Java 8.
> >
> > ES is sensitive to JVM changes (hash codes for hash maps are computed
> > differently in Java 8) but this exposes only in rare cases.
> >
> > I am not sure what you mean by "index will remain as is". In any case, I
> > would always backup the data before upgrade.
> >
> > Jörg
> >
> > On Wed, Apr 22, 2015 at 11:51 AM, Jason Wee  wrote:
> >>
> >> Thanks david, my follow up questions.
> >>
> >> > So, basically shutdown all nodes and clients. Then upgrade your JVM.
> >> That sounds to me no rolling upgrade :( the users will experience down
> >> time, but with your recommendation, when jvm is upgraded on all nodes
> >> and clients, the es instance in the es node just start one by one and
> >> index will remain as is?
> >>
> >> Jason
> >>
> >>
> >> On Wed, Apr 22, 2015 at 5:39 PM, David Pilato  wrote:
> >> > You need to upgrade both at the same time.
> >> > Otherwise, you might get non serializable exceptions and the cluster
> >> > might
> >> > not behave correctly.
> >> >
> >> > So, basically shutdown all nodes and clients. Then upgrade your JVM.
> >> >
> >> > That said, you might be able to upgrade clients after that but I’d be
> >> > super
> >> > conservative here and don’t try to mix things.
> >> >
> >> > My 2 cents
> >> >
> >> > --
> >> > David Pilato - Developer | Evangelist
> >> > elastic.co
> >> > @dadoonet | @elasticsearchfr | @scrutmydocs
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > On 22 Apr 2015, at 09:14, Jason Wee wrote:
> >> >
> >> > Hello,
> >> >
> >> > We are using java6 for our elasticsearch node of version 0.90 and
> >> > googled,
> >> > nothing discuss specifically on how to or procedure to upgrade java
> >> > deployed
> >> > in elasticsearch cluster.
> >> >
> >> > The one came close is this,
> >> >
> >> >
> https://svn.apache.org/repos/asf/lucene/dev/trunk/lucene/JRE_VERSION_MIGRATION.txt
> >> >
> >> > Can anybody with experience share how to best upgrade java 6 to java 7
> >> > update 72 for all the node in the elasticsearch cluster? It would be
> >> > best if
> >> > after java upgraded, the data is still persist, indexable and
> queryable.
> >> >
> >> > Please also comment about which should be upgrade first? The jdk used
> in
> >> > java transport client or the jvm in elasticsearch node.
> >> >
> >> > Thanks.
> >> >
> >> > Jason
> >> >
> >> > --
> >> > You received this message because you are subscribed to the Google
> >> > Groups
> >> > "elasticsearch" group.
> >> > To unsubscribe from this group and stop receiving emails from it, send
> >> > an
> >> > email to elasticsearch+unsubscr...@googlegroups.com.
> >> > To view this discussion on the web visit
> >> >
> >> >
> https://groups.google.com/d/msgid/elasticsearch/3b06f455-6aa8-4c2c-8670-e62183c7b253%40googlegroups.com
> .
> >> > For more options, visit https://groups.google.com/d/optout.
> >> >
> >> >
> >> > --
> >> > You received this message because you are subscribed to the Google
> >> > Groups
> >> > "elasticsearch" group.
> >> > To unsubscribe from this group and stop receiving emails from it, send
> >> > an
> >> > email to elasticsearch+unsubscr...@googlegroups.com.
> >> > To view this discussion on the web visit
> >> >
> >> >
> https://groups.google.com/d/msgid/elasticsearch/A367C8CE-47A4-4C0D-8A3F-4AEA4D2C2026%40pilato.fr
> .
> >> >
> >> > For more options, visit https://groups.google.com/d/optout.
> >>
> >> --
> >> You received this message because you are subscribed to the Google
> Groups
> >> "elasticsearch" group.
> >> To unsubscribe from this group and stop receiving emails from it, send
> an
> >> email to elasticsearch+unsubscr...@googlegroups.com.
> >> To view this discussion on the web visit
> >>
> https://groups.google.com/d/msgid/elasticsearch/CAHO4itwhtLuM3A

Re: Extracting fuzzy match terms

2015-04-28 Thread Graham Turner
Thanks Mark.

I did wonder about the highlighter, but using it would mean potentially 
retrieving every hit and parsing it, which feels pretty impractical for 
large searches.  

Presumably the fuzzy query has to identify a full list of matching terms 
internally - is there any way we could somehow hook into this, or retrieve 
the list separately to the query results?  A mechanism similar to the 
suggester, just accepting a single fuzzy term or a wildcard term would be 
perfect.  I appreciate this probably isn't a common request, but I'm sure 
it would have other use cases.  Something to consider for a future release 
perhaps?  :-)

Cheers

Graham
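
For concreteness, a minimal sketch of the highlighter approach from the quoted
reply below (the index name "people" and field name "name" here are assumptions,
not from the thread):

```shell
# Fuzzy query with highlighting: the <em> tags in each hit's highlight
# fragments expose the expanded term that actually matched, so client
# code can parse a (non-deduplicated) list of matched terms from them.
BODY='{
  "query":     { "fuzzy": { "name": { "value": "graham", "fuzziness": 2 } } },
  "highlight": { "fields": { "name": {} } }
}'
# Against a running node this would be sent as:
#   curl -XGET "http://localhost:9200/people/_search?pretty" -d "$BODY"
echo "$BODY"
```

A hit whose document contains "graeme" would then come back with a fragment
like <em>graeme</em>; deduplication across hits is still up to the client.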


On Monday, 27 April 2015 17:41:17 UTC+1, ma...@elastic.co wrote:
>
> Hi Graham,
> If you were to use the highlighter functionality you would essentially 
> "see what the search engine saw".
> With some client-side coding you could parse out the expanded search terms 
> because they would be surrounded by tags in matching docs.
> Of course this wouldn't provide a de-duped list of terms and would be 
> inefficient to return an exhaustive list of all expansions used but may be 
> an approach to investigate. 
>
> Cheers
> Mark
>
> On Monday, April 27, 2015 at 5:08:55 PM UTC+1, Graham Turner wrote:
>>
>> Hi,
>>
>> I'm working on a proof-of-concept for a client, replacing an existing 
>> legacy search system with an elastic based alternative.  One of the 
>> requirements that comes from the existing system is that, when performing a 
>> fuzzy or wildcard search, the user can view all the matching terms, and 
>> include/exclude them manually from the subsequent search.
>>
>> Thus, if a fuzzy search for 'graham' is submitted (or a wildcard like 
>> 'gr*m*'), it might match grayam, graeme, grahum, grahem, etc.  The users 
>> want to be able to see this list of matched terms, then, for instance, 
>> exclude 'grayam' from the expanded terms list, so that all the other 
>> expansions are used, but not the specifically excluded one. 
>>
>> I’m struggling to retrieve this list of terms in the first place.  
>> Ideally I’d like to submit a simple query for a fuzzy or wildcard term, and 
>> have it return just the possible matching terms (up to a given limit).
>>
>> I’ve had reasonable success using the term suggester for fuzzy-type 
>> responses, but can’t use this for wildcard expansions. 
>>
>> Is there a good way to do this using 'out-of-the-box' elastic 
>> functionality?  
>>
>> Any advice / hints gratefully accepted!
>>
>> Thanks
>>
>> Graham
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/bb544adc-bf72-4d9c-a000-2ce08604488c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Couchbase XDCR error

2015-04-28 Thread David Pilato
You should rather report that to the Couchbase mailing list or the transport
plugin repo.
The logs you showed are from their plugin, AFAIK.


-- 
David Pilato - Developer | Evangelist 
elastic.co
@dadoonet | @elasticsearchfr | @scrutmydocs

> On 28 Apr 2015, at 09:45, Naisarg Patel wrote:
> 
> Hello All,
> 
> I got these errors from XDCR replication:
> 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 970. Please see logs 
> for details.
> 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 914. Please see logs 
> for details.
> 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 880. Please see logs 
> for details.
> 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 814. Please see logs 
> for details.
> 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 745. Please see logs 
> for details.
> 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 660. Please see logs 
> for details.
> 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 532. Please see logs 
> for details.
> 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 473. Please see logs 
> for details.
> 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 466. Please see logs 
> for details.
> 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 399. Please see logs 
> for details.
> 
> When I looked into the XDCR errors I found the errors below.
> 
>   {error,{code,503}}}
> [xdcr:error,2015-04-28T0:41:37.617,ns_1@192.168.200.1:<0.11779.772>:xdc_vbucket_rep:terminate:489]Replication
>  (CAPI mode) 
> `f4268e62702130298bf87f17cc481219/tax_subscription/tax_subscription` 
> (`tax_subscription/32` -> 
> `http://*@192.168.200.2:9091/tax_subscription%2f32`) failed: 
> {http_request_failed,"POST",
>  
> "http://Administrator:*@192.168.200.2:9091/tax_subscription%2f32/_revs_diff";,
>  {error,{code,503}}}
> [xdcr:error,2015-04-28T0:41:37.617,ns_1@192.168.200.1:<0.30184.768>:xdc_vbucket_rep:terminate:489]Replication
>  (CAPI mode) 
> `f4268e62702130298bf87f17cc481219/tax_subscription/tax_subscription` 
> (`tax_subscription/166` -> 
> `http://*@192.168.200.2:9091/tax_subscription%2f166`) failed: 
> {http_request_failed,"POST",
>  
> "http://Administrator:*@192.168.200.2:9091/tax_subscription%2f166/_revs_diff";,
>  {error,{code,503}}}
> [xdcr:error,2015-04-28T0:41:37.617,ns_1@192.168.200.1:<0.9167.774>:xdc_vbucket_rep:terminate:489]Replication
>  (CAPI mode) 
> `f4268e62702130298bf87f17cc481219/tax_subscription/tax_subscription` 
> (`tax_subscription/83` -> 
> `http://*@192.168.200.2:9091/tax_subscription%2f83`) failed: 
> {http_request_failed,"POST",
>  
> "http://Administrator:*@192.168.200.2:9091/tax_subscription%2f83/_revs_diff";,
>  {error,{code,503}}}
> 
> 
> 
> Below are the configuration details.
> 
> Couchbase : 3.0.1 
> Elasticsearch : 1.5
> transport-couchbase : 2.0.0
> 
> elasticsearch.yml file
> 
> couchbase.username: Administrator
> couchbase.password: asdfasdf
> couchbase.maxConcurrentRequests: 1024
> couchbase.num_vbuckets: 1024
> couchbase.defaultDocumentType: cbdoc
> 
> Thanks,
> Naisarg
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to elasticsearch+unsubscr...@googlegroups.com 
> .
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/elasticsearch/cbb05ece-8912-4370-9a97-68f2cad36c22%40googlegroups.com
>  
> .
> For more options, visit https://groups.google.com/d/optout 
> .



Re: Safe to use prefix query on analysed field?

2015-04-28 Thread David Pilato
It means that the text you give to the prefix query is not analyzed.

So if you have in the inverted index 

* abc
* def

If you then search with the prefix AB, it won’t match, because the indexed term abc does not start with AB.

Make sense?
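
A minimal sketch of the workaround discussed here (index, type, and field names
are hypothetical): analyse the field with a keyword tokenizer plus lowercase
filter, and lower-case the prefix on the client side, since the prefix query
itself is never analysed:

```shell
# Mapping: keyword tokenizer + lowercase filter, so the whole field value
# is indexed as one lower-cased term.
MAPPING='{
  "settings": { "analysis": { "analyzer": {
    "kw_lower": { "tokenizer": "keyword", "filter": ["lowercase"] } } } },
  "mappings": { "doc": { "properties": {
    "name": { "type": "string", "analyzer": "kw_lower" } } } }
}'
INPUT='AB'                                              # raw user input
PREFIX=$(printf '%s' "$INPUT" | tr '[:upper:]' '[:lower:]')
QUERY="{ \"query\": { \"prefix\": { \"name\": \"$PREFIX\" } } }"
# curl -XPUT "http://localhost:9200/test" -d "$MAPPING"
# curl -XGET "http://localhost:9200/test/_search?pretty" -d "$QUERY"
echo "$QUERY"
```

The important part is the `tr` step: the query string bypasses analysis, so
case folding has to happen before the query is built.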

-- 
David Pilato - Developer | Evangelist 
elastic.co
@dadoonet | @elasticsearchfr | @scrutmydocs

> On 28 Apr 2015, at 09:59, David Kemp wrote:
> 
> The documentation on prefix queries states: "Matches documents that have 
> fields containing terms with a specified prefix (not analyzed)".
> 
> I took this to mean that the mapping for the field had to be not_analyzed. 
> However, after some experimentation, I found that it does sort of work on an 
> analysed field, but the query is not analysed. Eg. I found I could use a 
> custom analyser on the field that uses a keyword tokenizer and a lowercase 
> token filter and, so long as I lower case the query string myself, I can use 
> a prefix query to get a case insensitive prefix match.
> 
> So I am wondering if this is valid and, if so, am I alone in thinking the 
> documentation is misleading? Perhaps it should state that the query is not 
> analysed and to take care when matching on an analysed field?
> 
> - David
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/elasticsearch/5f278ca2-9c2c-417b-a375-13e56682131e%40googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: Looking for consultant to create mapping and queries

2015-04-28 Thread Charlie Hull

On 28/04/2015 03:21, Doug Turnbull wrote:

Hey Zelfapp(?) Nate(?),

You are facing a challenge that many search developers have come across.
Sounds like a search relevancy problem. The search engine isn't Google
out of the box with all its psychic intuitiveness of what users want.
It's a tool to create a user experience for users for your content. It
requires very specific tuning and analysis.

Anyway, my colleague John Berryman and I are writing a book on this
subject that focuses on making
search results for your search app more intuitive. You might find that
useful: http://manning.com/turnbull

Yes I'm a consultant too -- feel free to ping me directly over email if
you want help. My company's site:
http://opensourceconnections.com

Other consulting firms that come to mind include:
Sematext http://sematext.com
Search Technologies http://www.searchtechnologies.com/


Hi,

We're also ES (and other open source search engine) consultants and 
would be happy to help- see www.flax.co.uk. There doesn't seem to be a 
public list of these (so far) - it would be good to have something like 
the list of Solr consultants at https://wiki.apache.org/solr/Support, 
which is an alphabetically arranged list of people/companies across the 
world.


Doug and John's book is also highly recommended!

Cheers

Charlie


Cheers
-Doug

On Mon, Apr 27, 2015 at 1:58 PM, Zelfapp  wrote:

I'm looking for a consultant to coach us on how to map our data and
what queries we should use for our particular data. Our data set is
very simple and consists of only 900 documents at this time.
However, as we continue to tweak our map and analyzers and queries
the search results we're getting are not what we want. There seems
to be a million ways to skin a cat in ES, but clearly we're doing it
wrong or partially wrong. Please reply if you're an ES guru and have
some time today or tomorrow to work on this.

If this is the wrong forum to inquire about paid consultants, my
apologies, and please direct me to where I can find some ES gurus to
work with. Thanks.

--
You received this message because you are subscribed to the Google
Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it,
send an email to elasticsearch+unsubscr...@googlegroups.com
.
To view this discussion on the web visit

https://groups.google.com/d/msgid/elasticsearch/4f59043b-b1d4-4d6c-97ff-1084731cc814%40googlegroups.com

.
For more options, visit https://groups.google.com/d/optout.




--
Doug Turnbull | Search Relevance Consultant | OpenSource
Connections, LLC | 240.476.9983 | http://www.opensourceconnections.com

Author: Taming Search from Manning Publications
This e-mail and all contents, including attachments, is considered to be
Company Confidential unless explicitly stated otherwise, regardless
of whether attachments are marked as such.

--
You received this message because you are subscribed to the Google
Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to elasticsearch+unsubscr...@googlegroups.com
.
To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/CALG6HL8ri6Wdi-TLNB%2BmQDSJSNNJJrSgnWytQzUTK_5WR7Hkhg%40mail.gmail.com
.
For more options, visit https://groups.google.com/d/optout.



--
Charlie Hull
Flax - Open Source Enterprise Search

tel/fax: +44 (0)8700 118334
mobile:  +44 (0)7767 825828
web: www.flax.co.uk



Safe to use prefix query on analysed field?

2015-04-28 Thread David Kemp
The documentation on prefix queries states: "Matches documents that have fields 
containing terms with a specified prefix (not analyzed)".

I took this to mean that the mapping for the field had to be not_analyzed. 
However, after some experimentation, I found that it does sort of work on an 
analysed field, but the query is not analysed. Eg. I found I could use a custom 
analyser on the field that uses a keyword tokenizer and a lowercase token 
filter and, so long as I lower case the query string myself, I can use a prefix 
query to get a case insensitive prefix match.

So I am wondering if this is valid and, if so, am I alone in thinking the 
documentation is misleading? Perhaps it should state that the query is not 
analysed and to take care when matching on an analysed field?

- David



Couchbase XDCR error

2015-04-28 Thread Naisarg Patel
Hello All,

I got these errors from XDCR replication:
   - 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 970. Please see 
   logs for details.
   - 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 914. Please see 
   logs for details.
   - 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 880. Please see 
   logs for details.
   - 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 814. Please see 
   logs for details.
   - 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 745. Please see 
   logs for details.
   - 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 660. Please see 
   logs for details.
   - 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 532. Please see 
   logs for details.
   - 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 473. Please see 
   logs for details.
   - 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 466. Please see 
   logs for details.
   - 2015-04-28 00:36:13 [Vb Rep] Error replicating vbucket 399. Please see 
   logs for details.

   
When I looked into the XDCR errors I found the errors below.

  {error,{code,503}}}
[xdcr:error,2015-04-28T0:41:37.617,ns_1@192.168.200.1:<0.11779.772>:xdc_vbucket_rep:terminate:489]Replication
 
(CAPI mode) 
`f4268e62702130298bf87f17cc481219/tax_subscription/tax_subscription` 
(`tax_subscription/32` -> 
`http://*@192.168.200.2:9091/tax_subscription%2f32`) failed: 
{http_request_failed,"POST",
 
"http://Administrator:*@192.168.200.2:9091/tax_subscription%2f32/_revs_diff";,
 {error,{code,503}}}
[xdcr:error,2015-04-28T0:41:37.617,ns_1@192.168.200.1:<0.30184.768>:xdc_vbucket_rep:terminate:489]Replication
 
(CAPI mode) 
`f4268e62702130298bf87f17cc481219/tax_subscription/tax_subscription` 
(`tax_subscription/166` -> 
`http://*@192.168.200.2:9091/tax_subscription%2f166`) failed: 
{http_request_failed,"POST",
 
"http://Administrator:*@192.168.200.2:9091/tax_subscription%2f166/_revs_diff";,
 {error,{code,503}}}
[xdcr:error,2015-04-28T0:41:37.617,ns_1@192.168.200.1:<0.9167.774>:xdc_vbucket_rep:terminate:489]Replication
 
(CAPI mode) 
`f4268e62702130298bf87f17cc481219/tax_subscription/tax_subscription` 
(`tax_subscription/83` -> 
`http://*@192.168.200.2:9091/tax_subscription%2f83`) failed: 
{http_request_failed,"POST",
 
"http://Administrator:*@192.168.200.2:9091/tax_subscription%2f83/_revs_diff";,
 {error,{code,503}}}



Below are the configuration details.

Couchbase : 3.0.1 
Elasticsearch : 1.5
transport-couchbase : 2.0.0

elasticsearch.yml file

couchbase.username: Administrator
couchbase.password: asdfasdf
couchbase.maxConcurrentRequests: 1024
couchbase.num_vbuckets: 1024
couchbase.defaultDocumentType: cbdoc

Thanks,
Naisarg
