Hello,
Currently I have one index (per day) which contains logs from several
applications. The size is ~50-80 GB/day.
We often search/aggregate documents by application.
So would it be better to split this index into smaller indexes (from 1 to 10-15
indexes of about 2-10 GB each)?
Would the response time improve?
Well, thank you.
Based on the answers, I understand this: put everything in one big index with
one shard per server.
When the shards get too big, add another server.
Coming from the DBMS world, this is strange to me.
For example, in MySQL, we create one table for each application, and so the tables
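A minimal sketch of that advice, assuming a 7-node cluster (the index name and shard count below are made-up illustrations, not tested sizing advice):

```shell
# Hypothetical example: create a daily log index with one primary shard
# per data server, so each node ends up holding one primary shard.
# Adjust number_of_shards to your actual node count.
curl -XPUT 'http://localhost:9200/logs-2014.11.12' -d '{
  "settings": {
    "number_of_shards": 7,
    "number_of_replicas": 1
  }
}'
```

When the shards outgrow the servers, adding a node lets Elasticsearch move some of those shards onto the new machine without reindexing.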
that is neither a master nor a data node, though
once you set a node to master, you can leverage it as a client as well.
On 12 November 2014 05:02, lagarutte via elasticsearch
elasti...@googlegroups.com wrote:
OK, since the master doesn't contain any data and doesn't do a lot of I/O
Hello,
I'm currently thinking of creating VM nodes for the masters.
Today, several nodes have both master and data roles.
But I have OOM errors and so the masters crash frequently.
What would be the correct hardware sizing for a master-only node (like 2
CPUs, 4 GB RAM) for managing a
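For reference, a dedicated master is just a node with the data role turned off. A sketch of the relevant elasticsearch.yml settings for such a VM (ES 1.x syntax; the sizing comment is an assumption, not a benchmark):

```yaml
# Hypothetical elasticsearch.yml for a master-only node (ES 1.x)
node.master: true
node.data: false
# Masters hold cluster state, not shard data, so the heap can stay
# modest - e.g. ES_HEAP_SIZE=2g on a 4 GB RAM VM.
```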
Hello,
On one of my ELS clusters, I have nodes with different hardware capacities:
1 node: 8 GB RAM and 200 GB disk
1 node: 4 GB RAM and 20 GB disk
2 nodes: 64 GB RAM with 4 TB disk
I find that ELS tries to balance the same amount of data on each node.
The 2 smaller nodes are nearly full (disk and CPU).
allocation.
See
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-allocation.html
for some ideas.
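One relevant knob from that page is disk-based shard allocation (available from Elasticsearch 1.3 onwards), which stops routing shards to nodes whose disks are nearly full; a hedged sketch:

```shell
# Hypothetical example: enable disk watermarks cluster-wide so the two
# small nodes stop receiving new shards once their disk usage passes
# the threshold. Percentages here are illustrative defaults.
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": true,
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%"
  }
}'
```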
On 11 November 2014 19:43, lagarutte via elasticsearch
elasti...@googlegroups.com wrote:
+1, Mark Walkom wrote:
I'd suggest you go for 8 GB system RAM with a small disk and then also use
these nodes as clients - i.e. query management.
You may need more RAM, but that should be a good start.
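A client-only node of that kind is configured by disabling both roles; a sketch of the elasticsearch.yml settings (ES 1.x syntax):

```yaml
# Hypothetical config for a client ("query management") node: it joins
# the cluster and routes searches/aggregations, but holds no shard data
# and is never elected master.
node.master: false
node.data: false
```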
On 11 November 2014 19:35, lagarutte via elasticsearch
elasti...@googlegroups.com wrote:
Hi,
I have one ELS 1.1.2 cluster with 7 nodes and 800 GB of data.
When I shut down a node for various reasons, ELS automatically rebalances the
missing shards onto the other nodes.
To prevent this, I tried this (specified in the official docs):
{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}
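For what it's worth, the full round trip for that setting looks roughly like this (per the 1.x cluster settings API; note that "none" also blocks allocation for newly created indexes, so "new_primaries" is sometimes used during rolling restarts instead):

```shell
# Disable shard allocation before stopping the node...
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "none" }
}'
# ...do the maintenance, restart the node, then re-enable allocation:
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'
```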
, 2014 at 3:55 PM, lagarutte via elasticsearch
elasti...@googlegroups.com wrote: