For Elasticsearch, try an m3.xlarge and set ES_HEAP_SIZE to 7 or 8GB — about 
half of the instance's 15GB of RAM, leaving the rest for the filesystem 
cache. You may also want to have more than one node in your cluster.
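As a rough sketch, assuming a Debian/Ubuntu package install (the config file 
path varies by install method):

```shell
# /etc/default/elasticsearch -- assumed path for a .deb package install
# m3.xlarge has 15GB RAM; keep the JVM heap at roughly half of that
# so the OS filesystem cache gets the remainder.
export ES_HEAP_SIZE=8g
```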

You might also want to split Logstash off onto a separate instance. It is 
CPU-intensive but not particularly RAM-intensive. Set the -w {n} flag in 
the startup script to let Logstash run multiple filter workers across 
multiple cores. You might start with an m3.large, use -w 2, and see how it 
goes.
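For example, a hypothetical invocation (the config path here is an 
assumption; adjust for your layout and Logstash version):

```shell
# Run Logstash with 2 filter workers (-w 2) -- a reasonable starting
# point on a 2-vCPU m3.large; raise it if the filter stage is the bottleneck.
bin/logstash -f /etc/logstash/conf.d/logstash.conf -w 2
```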

On Wednesday, August 13, 2014 9:38:10 AM UTC-6, AK wrote:
>
> Hi,
>
> I recently launched ELK and I'm receiving about 3,000,000 - 8,000,000 docs 
> per day (~5GB).
> I'm running on AWS on a small server, and after a week of data collection 
> the system becomes very slow, mainly when I'm looking for data older 
> than 2 days.
> Do you have any recommendations for the servers on points such as CPU, 
> memory, and IOPS, and Elastic settings such as shards?
>
> Thanks
> AK

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/90398be0-4804-44d7-9f8e-e033daa7050b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
