Hello,
Currently I have one index per day which contains logs from several
applications. The size is ~50-80 GB/day.
We often search/aggregate documents by application.
So would it be better to split this index into smaller indices (from 1 up to
10-15 indices of about 2-10 GB each)?
Would the response time improve?
If you have 50-80 GB/day you need quite a number of machines. Smaller indices
and a higher shard count on the same number of machines do not help; the
search performance will be worse.
ES is fine with many indices. Take big indices that span many
machines, and once an index is complete, execute an optimize to merge its
segments down.
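As a sketch of that advice (ES 1.x-era API; the index name, shard count, and replica count here are illustrative assumptions, not from the thread):

```shell
# Create the daily index with one primary shard per server (assume 4 servers),
# plus one replica for resilience -- the numbers are made up for illustration.
curl -XPUT 'localhost:9200/logs-2014.12.27' -d '{
  "settings": {
    "number_of_shards": 4,
    "number_of_replicas": 1
  }
}'

# Once the day is over and the index is no longer written to,
# merge its segments down with the ES 1.x optimize API:
curl -XPOST 'localhost:9200/logs-2014.12.27/_optimize?max_num_segments=1'
```

Optimizing only read-only (completed) daily indices keeps the merge cost a one-time operation per index.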
Well, thank you.
Based on the answers, I understand this: put everything in one big index with
one shard per server.
When the shards get too big, add another server.
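That rule of thumb can be put into back-of-envelope numbers (the daily volume is from the thread; the server counts and the "comfortable" shard size target are hypothetical):

```python
import math

def shard_size_gb(daily_gb, num_servers):
    """With one shard per server, each day's index spreads its data
    evenly, so each shard holds roughly daily_gb / num_servers."""
    return daily_gb / num_servers

def servers_needed(daily_gb, target_shard_gb=30):
    """Servers required to keep each daily shard under a target size
    (the 30 GB target is an assumption, not from the thread)."""
    return math.ceil(daily_gb / target_shard_gb)

# 80 GB/day across 4 servers -> 20 GB per daily shard
print(shard_size_gb(80, 4))   # 20.0

# At 80 GB/day and a ~30 GB shard target, 3 servers suffice
print(servers_needed(80))     # 3
```

When the daily volume grows past the target times the server count, the same arithmetic says it is time to add a machine.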
Coming from the DBMS world, this is strange to me.
For example, in MySQL we would create one table per application, so the tables
stay small and separate.
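For what it's worth, ES can give a per-application "table"-like view over one big index with filtered aliases; a minimal sketch, assuming the log documents have an `app` field (that field name is an assumption about the mapping):

```shell
# Register an alias that only sees one application's documents.
# "logs-2014.12.27", "logs-appA" and the "app" field are illustrative names.
curl -XPOST 'localhost:9200/_aliases' -d '{
  "actions": [
    { "add": {
        "index": "logs-2014.12.27",
        "alias": "logs-appA",
        "filter": { "term": { "app": "appA" } }
    } }
  ]
}'

# Searching the alias behaves like querying a per-application table:
curl 'localhost:9200/logs-appA/_search?q=error'
```

This keeps the one-big-index layout the answers recommend while still letting each application be queried in isolation.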
Some answers inlined.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 27 Dec 2014 at 15:17, lagarutte via elasticsearch
<elasticsearch@googlegroups.com> wrote:
Well, thank you.
Based on the answers, I understand this: put everything in one big index
with one shard per server.