At the moment, we're able to bulk index data at a rate faster than we 
actually need. Indexing speed is not as important to us as being able to 
search quickly. Once we reach ~30 million indexed documents, we start to 
see our search query performance degrade. What are the best techniques 
for sacrificing indexing time in order to improve search performance?


A bit more info:

- We have the resources to upgrade our hardware (memory, CPU, etc.), but 
we'd like to maximize the improvements that can be made programmatically 
or through configuration properties before resorting to hardware 
increases.

- Our searches make very heavy use of faceting and aggregations (see the 
first sketch after this list for a representative query).

- When we run the optimize API, we see *significant* improvements in our 
search times (between 50% and 80%), but as documented, this is usually a 
pretty expensive operation. Is there a way to sacrifice indexing time in 
order to have Elasticsearch index the data more efficiently, essentially 
mimicking the optimize behavior at index time? (The second sketch after 
this list shows the kind of settings we have in mind.)
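
For context, a representative faceted search looks roughly like the 
following. This is a minimal sketch using the Python client; the index 
name and field names ("docs", "body", "category", "author") are made up 
for illustration:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

# A typical search for us: a full-text query plus a couple of terms
# aggregations that drive the facets in our UI. All names here are
# placeholders, not our real schema.
resp = es.search(
    index="docs",
    body={
        "query": {"match": {"body": "example search terms"}},
        "aggs": {
            "by_category": {"terms": {"field": "category"}},
            "by_author": {"terms": {"field": "author"}},
        },
    },
)
print(resp["aggregations"]["by_category"]["buckets"])
```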
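
And this is roughly how we run the optimize today, along with the kind of 
merge-policy tweak we're wondering about. The index name is a 
placeholder, and the segments_per_tier value is a guess, not a tested 
setting:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

# What we run today: merge each shard down to a single segment. This is
# where we see the 50-80% search-time improvement, but it is expensive.
es.indices.optimize(index="docs", max_num_segments=1)

# The kind of thing we're asking about: making the merge policy more
# aggressive so the index stays closer to "optimized" as we write.
# Lowering segments_per_tier (default 10) means more merge work at index
# time and fewer segments to search; the value 5 is just a guess.
es.indices.put_settings(
    index="docs",
    body={"index.merge.policy.segments_per_tier": 5},
)
```

If there's a better-supported way to get the same effect, that's exactly 
what we're after.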
