Improving Elasticsearch performance on a single node by increasing shards

2014-05-13 Thread Rujuta Deshpande
Hi, 

I have set up an elasticsearch cluster with 1 shard and 0 replicas. 
My system has 16 GB RAM and I have allocated 8 GB to the ES Max/Min Heap. 

We are indexing a large number of logs every day, and our daily 
index is approximately 3,500,000 documents. 

We are using Kibana to query ES and generate reports. Most of our reports 
are histograms and hence require heavy faceting. My observation was this: 
the dashboards took very long to load (depending on the time range 
selected), and the field cache, which is unbounded by default, kept 
growing until it eventually caused an Out of Memory error. 

I have now restricted the field cache size to 50% of the available heap 
memory. Although this reduces the frequency of the error, performance has 
not improved much and searches still take a long time. 
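
As a rough sketch of how the fielddata usage could be watched (assuming the 
elasticsearch-py client and a node reachable on localhost:9200; exact stats 
field names may differ slightly between ES versions):

from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

# Per-node fielddata statistics: current size and eviction count. With the
# 50% cap in place, rising evictions point at facet/fielddata pressure.
stats = es.nodes.stats(metric="indices", index_metric="fielddata")
for node in stats["nodes"].values():
    fd = node["indices"]["fielddata"]
    print(node.get("name"), fd.get("memory_size_in_bytes"), fd.get("evictions"))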

Another observation is that with 1 shard and 0 replicas, my ES node is not 
making use of the other CPU cores. I have 4 cores on my system and the CPU% 
shown by the top command for the elasticsearch process just barely exceeds 
100%. I believe this indicates that it uses one core fully but not all 
four. 
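
One way to see what the node is actually busy with is the hot threads and 
node stats APIs; a minimal sketch with elasticsearch-py (localhost assumed):

from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

# hot_threads returns a plain-text dump of the busiest threads (search,
# index, merge, ...). A search against a single shard runs on one search
# thread, which is consistent with top showing only ~100% CPU.
print(es.nodes.hot_threads())

# CPU usage as the Elasticsearch process itself reports it.
stats = es.nodes.stats(metric="process")
for node in stats["nodes"].values():
    print(node.get("name"), node["process"]["cpu"].get("percent"))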
 
Will increasing the number of shards make better use of the multi-core 
architecture and enable parallel search? If so, what is the best way to get 
this working? Should I make changes in the ES configuration file and then 
restart the cluster? How does this affect the currently existing indices? 
We create 2 indices per day. When I increase the number of shards for the 
cluster, how will the data in the previously created indices get 
distributed among the newly created shards? (See the sketch below.)
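
For context, shard count is fixed when an index is created, so existing 
indices keep their shards; only new daily indices would pick up a higher 
count, e.g. via an index template. A rough sketch with elasticsearch-py 
(the "logstash-*" pattern and the shard count are assumptions, not from 
this thread):

from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

# Every index created after this template is in place gets 4 primary shards;
# already-existing daily indices are untouched and keep their single shard.
es.indices.put_template(
    name="daily-logs",
    body={
        "template": "logstash-*",      # assumed naming pattern for the daily indices
        "settings": {
            "number_of_shards": 4,     # e.g. one primary shard per core
            "number_of_replicas": 0,
        },
    },
)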

Thanks, 
Rujuta



Re: Improving Elasticsearch performance on a single node by increasing shards

2014-05-13 Thread Rujuta Deshpande
We plan to store data for only about 3-6 months, so we thought this 
configuration might be okay. 

A couple of simultaneous Kibana dashboard queries (mainly to generate 
histograms) pushed the system load to 10. This was caused by a large 
amount of disk I/O.
 
We would appreciate any help in pinpointing exactly which queries reaching 
Elasticsearch are taking so long, and how we can go about optimizing 
them. 
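
One way to see exactly which queries are slow is the per-index search slow 
log; the thresholds are dynamic index settings, so they can be enabled 
without a restart. A minimal sketch (index name and thresholds are 
illustrative, assuming elasticsearch-py):

from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

# Queries slower than these thresholds are written to the search slow log
# together with their source, which shows what Kibana is actually sending.
es.indices.put_settings(
    index="logstash-2014.05.13",   # assumed daily index name
    body={
        "index.search.slowlog.threshold.query.warn": "5s",
        "index.search.slowlog.threshold.query.info": "1s",
        "index.search.slowlog.threshold.fetch.warn": "1s",
    },
)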

Thanks. 

On Tuesday, May 13, 2014 1:33:19 PM UTC+5:30, Jörg Prante wrote:

 Elasticsearch is using all cores by default. If you do not see 100% CPU 
 use, this is no reason to worry. 100% CPU would signal bad programming 
 style (that would be a bug). You should watch the system load. If system 
 load is low, you either do not have enough query load, or your configuration 
 prevents Elasticsearch from being fully utilized. Just by executing some 
 Kibana queries you will never be able to push Elasticsearch to excessively 
 high system load. Try 100 or 1000 or 10k queries a second, and you will see 
 increasing load.
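
As a rough illustration of generating enough query load to see the cores and 
the system load move (hypothetical index pattern and query, assuming 
elasticsearch-py):

import threading
from elasticsearch import Elasticsearch

def worker(queries):
    es = Elasticsearch(["localhost:9200"])
    body = {"query": {"match_all": {}}, "size": 0}
    for _ in range(queries):
        es.search(index="logstash-*", body=body)   # assumed index pattern

# 8 concurrent clients, 1000 queries each; watch top / system load meanwhile.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()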

 There are many reasons for out-of-memory errors. It depends on the kind of 
 query and the kind of data you use. In many cases you can save enormous 
 amounts of memory just by rewriting heavy queries or setting up a lean 
 field mapping. Additionally you may modify Elasticsearch default settings 
 to save heap usage. But all of this has limits.
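
As an example of what a lean mapping can look like in ES 1.x: not_analyzed 
fields with doc_values keep the facet data in the filesystem cache instead 
of on the heap, and disabling _all saves index size. The field names, type 
name, and template pattern below are hypothetical, a sketch of the idea 
rather than the poster's actual mapping:

from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

es.indices.put_template(
    name="lean-logs",
    body={
        "template": "logstash-*",
        "mappings": {
            "logs": {                       # assumed document type
                "_all": {"enabled": False},
                "properties": {
                    # not_analyzed + doc_values: facets read column data from
                    # disk/page cache instead of loading fielddata onto the heap
                    "status": {"type": "string", "index": "not_analyzed", "doc_values": True},
                    "bytes": {"type": "long", "doc_values": True},
                },
            }
        },
    },
)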

 You should set up a cluster that can scale. For example, if you have 2 
 indices per day and you want to use your node for a year, you'll end up 
 with about 750 indices, which with the default of 5 shards per index is 
 750*5 > 3500 shards - pretty heavy for a single node. Have you tested 3500 
 shards on a single node?

 If you are bound to your data structure in the index and the Kibana 
 queries and cannot change them, there is not much you can do except 
 upgrade ES and add more nodes, as Mark already noted. 

 Jörg


 On Tue, May 13, 2014 at 9:30 AM, Mark Walkom 
 ma...@campaignmonitor.com wrote:

 Given you're only on one server you are limited in what you can do.
 You'd be better off adding another node if you can; maybe someone else 
 can comment on the rest.
  
 Regards,
 Mark Walkom

 Infrastructure Engineer
 Campaign Monitor
 email: ma...@campaignmonitor.com
 web: www.campaignmonitor.com



Re: Elasticsearch configuration for uninterrupted indexing

2014-03-25 Thread Rujuta Deshpande
Well, it was for the entire machine. Now I have changed it to a 4 GB 
machine. Even 4 GB is not enough right now and I still face the same 
problem. I am trying to benchmark the max/min heap size I will have to 
allocate to an elasticsearch instance to achieve uninterrupted indexing 
without running into memory errors. So, are you saying that the only 
solution to this problem is an increase in memory? 

Thanks,
Rujuta

On Monday, March 24, 2014 9:56:25 PM UTC+5:30, Ivan Brusic wrote:

 I do not think splitting the application into 2 separate JVMs will solve 
 your issues. Is the 2 GB per JVM or for the whole machine? For analytics 
 applications with multiple facets, 2 GB might not be sufficient.

 -- 
 Ivan



Re: Elasticsearch configuration for uninterrupted indexing

2014-03-23 Thread Rujuta Deshpande
Hi, 

Thank you for the response. However, in our scenario, both nodes are on 
the same machine. Our setup doesn't allow us to have a separate machine for 
each node. Also, we're indexing logs using logstash. Sometimes we have to 
query log data spanning two or three months, and we then get an out of 
memory error. This affects the indexing that is going on at the same time 
and we lose events. 

I'm not sure what configuration of elasticsearch will help us achieve this.

Thanks,
Rujuta

On Friday, March 21, 2014 10:36:51 PM UTC+5:30, Ivan Brusic wrote:

 One of the main uses of a data-less node is to act as a coordinator 
 between the other nodes: it gathers the responses from the other 
 nodes/shards and reduces them into one.

 In your case, the data-less node is gathering all the data from just one 
 node. In other words, it is not doing much, since the reduce phase is 
 basically a pass-through operation. With a two-node cluster, I would say 
 you are better off having both machines act as full nodes.

 Cheers,

 Ivan





Elasticsearch configuration for uninterrupted indexing

2014-03-21 Thread Rujuta Deshpande
Hi, 

I am setting up a system consisting of elasticsearch-logstash-kibana for 
log analysis. I am using one machine (2 GB RAM, 2 CPUs) running logstash, 
kibana, and two instances of elasticsearch. Two other machines, each 
running logstash-forwarder, are pumping logs into the ELK system. 

The reasoning behind using two ES instances was this: I needed one 
instance to index the incoming logs without interruption, and I also needed 
to query the existing indices. However, I didn't want heavy querying to 
cause Out of Memory errors and, with them, loss of events. 

So, one elasticsearch node had master = true and data = true and did the 
indexing (the writer node), and the other node had master = false and 
data = false (the workhorse or reader node).
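
For reference, a sketch of how the two roles could be expressed and then 
verified (the yml lines in the comments mirror the setup described above; 
the check uses elasticsearch-py and assumes the nodes info API returns each 
node's settings in nested form):

# writer elasticsearch.yml:  node.master: true,  node.data: true,  http.port: 9201
# reader elasticsearch.yml:  node.master: false, node.data: false, http.port: 9200
from elasticsearch import Elasticsearch

reader = Elasticsearch(["localhost:9200"])

# Ask every node to echo back its settings and print the advertised roles,
# confirming which process is actually the writer and which the reader.
info = reader.nodes.info(metric="settings")
for node in info["nodes"].values():
    roles = node.get("settings", {}).get("node", {})
    print(node.get("name"), "master:", roles.get("master"), "data:", roles.get("data"))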

I assumed that, under heavy querying, although the data is stored on the 
writer node, the reader node would handle the queries and all the 
processing would take place on the reader, so issues like out of memory 
errors would be avoided and indexing would continue uninterrupted. 

However, while testing this, I realized that the reader hardly uses any 
heap memory (checked in Marvel), and when I fire a complex search query - a 
search request using the python API where the 'size' parameter was set to 
1 - the writer node throws an out of memory error, indicating that the 
processing takes place on the writer node rather than the reader. My min 
and max heap size was set to 256m for this test. I also ensured that I was 
firing the search query at the port on which the reader node was listening 
(port 9200). The writer node was running on port 9201.
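
For pulling large result sets without a single huge 'size', scan/scroll via 
the reader node is the usual alternative; a sketch using elasticsearch-py's 
helpers (index name and query are hypothetical):

from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

reader = Elasticsearch(["localhost:9200"])   # reader node

# scan() wraps the scroll API and streams hits in batches, so the full result
# set never has to fit in one response or on one node's heap at once.
for hit in scan(reader,
                index="logstash-*",                   # assumed index pattern
                query={"query": {"match_all": {}}},
                size=500,                             # hits per shard per batch
                scroll="2m"):
    print(hit["_source"])   # replace with real processing of each document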

Was my previous understanding of the problem incorrect, i.e. does having 
one reader and one writer node not help achieve uninterrupted indexing of 
documents? If so, what is the use of having a separate workhorse or reader 
node? 

My eventual aim is to be able to query elasticsearch and fetch large 
amounts of data at a time without interrupting/slowing down the indexing of 
documents. 

Thank you. 

Rujuta 
