Splitting index into smaller ones

2014-12-27 Thread lagarutte via elasticsearch
Hello, Currently I have one index per day containing logs from several applications. The size is ~50-80 GB/day. We often search/aggregate documents by application. So would it be better to split this index into smaller indexes (from 1 to 10-15 indexes of about 2-10 GB each)? Would the response time
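For reference, searching one daily index by application looks something like the sketch below (the index name, the "application" field, and the hourly aggregation are illustrative assumptions, not details from the original post; ES 1.x syntax):

    # Term-filtered search plus an hourly histogram over one daily index
    curl -XGET 'localhost:9200/logs-2014.12.27/_search' -d '{
      "query": {
        "filtered": {
          "filter": { "term": { "application": "app1" } }
        }
      },
      "aggs": {
        "per_hour": {
          "date_histogram": { "field": "@timestamp", "interval": "hour" }
        }
      }
    }'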

Re: Splitting index into smaller ones

2014-12-27 Thread joergpra...@gmail.com
If you have 50-80 GB/day you need quite a number of machines. Smaller indexes and a higher shard count on the same number of machines do not help; the search performance will be worse. ES is fine with many indexes. Take big indexes that span many machines, and once the index is complete, execute
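As a sketch of that advice: create the daily index with one primary shard per data node, so the single big index spans the whole cluster (the shard count of 6 below assumes six data nodes; the index name is illustrative):

    # One primary shard per data node spreads the daily index across the cluster
    curl -XPUT 'localhost:9200/logs-2014.12.27' -d '{
      "settings": {
        "number_of_shards": 6,
        "number_of_replicas": 1
      }
    }'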

Re: Splitting index into smaller ones

2014-12-27 Thread lagarutte via elasticsearch
Well, thank you. Based on the answers, I understand this: put everything in one big index with one shard per server. When the shards become too big, then add another server. Coming from the DBMS world, this is strange to me. For example, in MySQL, we create one table for each application, and so the tables
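If the goal is MySQL-style per-application "tables", one option is a filtered alias per application on top of the single big index, sketched below (the alias, index, and field names are assumptions for illustration):

    # A filtered alias acts like a per-application view of the shared index
    curl -XPOST 'localhost:9200/_aliases' -d '{
      "actions": [
        { "add": {
            "index": "logs-2014.12.27",
            "alias": "app1-logs",
            "filter": { "term": { "application": "app1" } }
        } }
      ]
    }'

Searches against app1-logs then behave like a per-application table while still sharing the one index and its shards.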

Re: Splitting index into smaller ones

2014-12-27 Thread David Pilato
Some answers inlined. -- David ;-) Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs Le 27 déc. 2014 à 15:17, lagarutte via elasticsearch elasticsearch@googlegroups.com a écrit : Well thanks you. Based on the answers, i understand this : put everything in one big index with one