Hi Jorg,
Thanks for the answer.
The idea behind the restriction to a single machine was, for instance, to
install ELK on one machine and perform fast indexing and review of a set of
logs. What I got wrong is that the log volume can be large (hundreds of
GB), so this architecture will not work.
With ES on a single machine, "tuning" does not cure the symptoms in the
long run. ES was designed to scale out over many nodes, so the simplest path
is to add nodes.
In a restricted environment, you could try to disable features that consume
a fair amount of resources: disable the _source and _all fields.
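For example, with an index template along these lines (a rough sketch
against the ES 1.x REST API; the template name and index pattern are just
placeholders):

  curl -XPUT 'http://localhost:9200/_template/slim_logs' -d '{
    "template": "logstash-*",
    "mappings": {
      "_default_": {
        "_all":    { "enabled": false },
        "_source": { "enabled": false }
      }
    }
  }'

Be aware that without _source you can no longer retrieve or reindex the
original documents, so weigh that trade-off first.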
Setting the JVM heap to 50% of RAM (12G) did not ease the problem, as I
noticed GC pauses of up to 3 minutes :)
Will really need to add a bunch of RAM to my machine..
My understanding was that ES 1.1 uses memory-mapped files, so the field
cache would not be part of the heap.
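From what I have read since, mmapfs only changes how the Lucene segment
files are read (through the OS page cache); field data is still built on
the JVM heap. In ES 1.x it can be kept off-heap per field via the
doc_values field data format, e.g. in the template mapping (the field name
is just an example):

  "host": {
    "type": "string",
    "index": "not_analyzed",
    "fielddata": { "format": "doc_values" }
  }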
Thanks both of you for the answer.
I have just decreased the JVM heap to 50% (12G); I will see if that helps.
@Jilles:
- I am using the default Logstash template, and I thought that the _all
field was disabled by default... Ah no, that's not the case :( I will
correct this setting, but why is it enabled by default?
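A quick way to check what the template actually sets (assuming the default
template name "logstash"):

  curl -XGET 'http://localhost:9200/_template/logstash?pretty'

Note that a template change only applies to indices created afterwards;
existing indices keep their mapping.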
You should tweak the cache sizes. At the least, the field data cache needs
to be restricted (it is unbounded by default). Also, ensuring the various
circuit breakers are turned on will help. Another tip is to disable the
_all field if you don't need it.
All this should reduce the amount of memory ES uses and keep the heap under
control.
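Concretely, something like this in elasticsearch.yml (the values are only a
starting point, and the breaker setting name here is the pre-1.4 one; check
the docs for your exact version):

  # Cap field data so old entries are evicted instead of growing unbounded.
  indices.fielddata.cache.size: 40%
  # Trip requests that would blow the heap while loading field data.
  indices.fielddata.breaker.limit: 60%

Both limits are relative to the JVM heap.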
When it comes to capacity, the answer is: it depends.
Given you're at around 430GB on a single node now, I'd add another node and
then see how things look at around the 800-900GB mark (spread across both).
Another clarification: the recommended operating procedure is to use half
your system RAM for the JVM heap, leaving the rest to the OS filesystem
cache.
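On a 24GB box that would look something like this at startup (ES 1.x; the
path depends on your install):

  # ES_HEAP_SIZE sets both -Xms and -Xmx, so the heap does not resize
  # at runtime.
  export ES_HEAP_SIZE=12g
  ./bin/elasticsearch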
Hi Mark,
Thanks for your quick answer.
I cannot increase the RAM for ES, as I am already using 75% of the RAM for
the JVM.
I will take a look at disabling the bloom filter cache to see if that
changes anything.
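In ES 1.x, bloom filter loading can, I believe, be switched off per index
with a dynamic setting, e.g. (the index pattern is just an example):

  curl -XPUT 'http://localhost:9200/logstash-*/_settings' -d '{
    "index.codec.bloom.load": false
  }'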
Regarding the option of adding more nodes:
- Do you have an idea of how many nodes a setup like this would require?
You are more than likely reaching the limits of the node.
Your options are to delete data, add more RAM (you should have 50% of
system RAM for the heap), close some old indexes, or add nodes. Adding more
nodes spreads the shards of your indexes across the cluster, which
essentially spreads the load.
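Closing a stale index frees its heap footprint while keeping the data on
disk, e.g. (the index name is just an example):

  curl -XPOST 'http://localhost:9200/logstash-2014.01.01/_close'

It can be reopened later with _open if you need to search it again.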