Well, it was for the entire machine. I have now moved to a 4 GB
machine, but even 4 GB is not enough and I still face the same problem.
I am trying to benchmark the minimum and maximum heap sizes I will have
to allocate to an elasticsearch instance to achieve uninterrupted indexing.
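As a starting point for that benchmark, the common rule of thumb (general Elasticsearch guidance, not something stated in this thread) is to give the JVM heap at most half of the machine's RAM and stay under roughly 31 GB. A tiny sketch of that rule, with a hypothetical helper name:

```python
def recommended_heap_mb(total_ram_mb: int) -> int:
    # Rule of thumb: give the JVM heap at most half of RAM, leaving the
    # rest for the OS filesystem cache that Lucene relies on, and stay
    # below ~31 GB so the JVM keeps using compressed object pointers.
    return min(total_ram_mb // 2, 31 * 1024)

# On a 4 GB machine like the one in this thread, that suggests ~2 GB:
print(recommended_heap_mb(4096))  # -> 2048
```

This is only a ceiling to start from; the actual minimum heap for uninterrupted indexing still has to come from load testing.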
While it is possible to create an ES cluster with dedicated reader/writer
nodes, this is not the default, and in many cases dedicated nodes are
not required at all. ES has good heuristics built in to relieve the
admin from tedious jobs like setting up dedicated nodes.
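For reference, if you do decide to dedicate nodes, the role settings in an ES 1.x `elasticsearch.yml` look roughly like this (a sketch, not taken from this thread):

```yaml
# Dedicated master-eligible node: coordinates the cluster but holds no data.
node.master: true
node.data: false

# Dedicated data node: holds shards but is never elected master.
# node.master: false
# node.data: true
```

By default both settings are `true`, which is the "no dedicated nodes" behaviour described above.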
So I wonder how you
I do not think splitting the application into 2 separate JVMs will solve
your issues. Is the 2 GB per JVM or the total for the machine? For analytic
applications with multiple facets, 2 GB might not be sufficient.
--
Ivan
On Sun, Mar 23, 2014 at 10:04 PM, Rujuta Deshpande
Hi,
Thank you for the response. However, in our scenario, both nodes are on
the same machine. Our setup doesn't allow us to have a separate machine
for each node. Also, we're indexing logs using logstash. Sometimes, we have
to query data from the logs over a period of two or three
Hi,
I am setting up a system consisting of elasticsearch-logstash-kibana for
log analysis. I am using one machine (2 GB RAM, 2 CPUs) running logstash,
kibana and two instances of elasticsearch. Two other machines, each
running logstash-forwarder are pumping logs into the ELK system.
The
One of the main uses of a data-less node is that it acts as a
coordinator between the other nodes: it gathers all the responses from
the other nodes/shards and reduces them into one.
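In ES 1.x, such a data-less "client" node is configured by turning off both roles (a hypothetical `elasticsearch.yml` fragment, assuming that version's settings):

```yaml
# Data-less coordinator node: joins the cluster, routes requests and
# reduces search results, but stores no shards and is never master.
node.master: false
node.data: false
```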
In your case, the data-less node is gathering all the data from just one
node. In other words, it