Hey guys,

I just want to ask whether anybody here has more experience with large
Java heap sizes than me.

Our problem is that we only have one big server with 256 GB RAM, 64 of it for
our ES system, which indexes about 4-6k events/s. Currently I have 3
instances running: 2 with a 32 GB Java heap and slow HDDs as the target for
older indices, and 1 with a 96 GB Java heap and fast SSD storage.

I'm thinking about splitting the 96 GB instance into 2 x 32 GB instances,
because our server sometimes runs out of CPU, and according to the ES
documentation, heaps larger than 32 GB lead to uncompressed pointers and
thus much higher CPU usage.
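As a quick sanity check before resizing, you can ask the JVM itself whether a given heap size still gets compressed oops. This is a generic HotSpot diagnostic (not ES-specific); the exact cutoff is slightly below 32 GB and depends on the JVM, so the sizes below are just illustrative:

```shell
# Prints the effective UseCompressedOops flag for a given -Xmx.
# A heap of 31g should show "= true"; 33g should show "= false"
# on a typical 64-bit HotSpot JVM (exact cutoff varies by version).
java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
java -Xmx33g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
```

If the 96 GB heap reports `false` and a 31 GB heap reports `true`, that would support splitting it.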

What worries me is that instances with that little RAM might not be able to
handle the indexing and search load (which is quite high).

Anyone with some experience to share?

Cheers

Stephen

