If you look at what the folks at Found (https://www.found.no/pricing/) publish, the rule of thumb is that one ES server can handle roughly 8 times its memory in data if it is to run smoothly, though I don't know how reliable that figure is. If you have a lot of ES nodes, also consider dedicated master nodes that hold no data; that is a best practice I have read about.
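
For reference, a dedicated master is just a node with data disabled. A minimal sketch of the relevant elasticsearch.yml settings (as they were in the ES 1.x line current at the time of this thread):

```yaml
# elasticsearch.yml on the dedicated master node
node.master: true   # eligible to be elected cluster master
node.data: false    # holds no shard data, so queries/indexing never land here
```

The data nodes would invert these (node.master: false, node.data: true) so that heavy indexing load cannot destabilize the elected master.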

By that rule, 16 GB of memory corresponds to roughly 128 GB of data per node.
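
To size a cluster with that rule of thumb, a quick back-of-the-envelope sketch (the 8x ratio and the 30.5 GB RAM of an i2.xlarge are the only inputs; these are rough assumptions, not benchmarks):

```python
import math

def nodes_needed(total_data_gb, ram_per_node_gb, data_to_ram_ratio=8):
    """Estimate node count: each node holds ~ratio * RAM worth of data."""
    data_per_node_gb = ram_per_node_gb * data_to_ram_ratio
    return math.ceil(total_data_gb / data_per_node_gb)

# 5 TB = 5120 GB of data; an i2.xlarge has 30.5 GB of RAM.
print(nodes_needed(5120, 30.5))  # -> 21
```

So under this rule of thumb the 5 TB workload in the question would want on the order of 20+ i2.xlarge nodes, not 5; take that only as a starting point for testing.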

On Friday, August 29, 2014 7:27:28 PM UTC+2, Rob Blackin wrote:
>
> We are trying to implement a 5 TB, 10 Billion item Elasticsearch cluster. 
> The key is an integer and the item data is fairly small.
>
> We seem to run into issues around loading: it slows down as the index 
> gets bigger.
>
> We are doing this on EC2 i2.xlarge nodes.
>
> How many documents/TB do you think we can load per node max?
>
> So if we can do 2 Billion each then we need 5 nodes. We are trying to size 
> it.  
>
> Any advice is welcome. Even if it is that this is not a good thing to do :)
>
> thanks
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/c3e4601d-8564-47f6-b3b3-0fdb91fac96e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.