When creating the index, specify a mapping and configure fielddata for the
field. Take the example from the documentation and embed it in a normal
mapping definition. However, I realised it only works with
string/numeric/geo_point fields, so you may need to store your time field
as a timestamp.
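Something like the following, roughly adapted from that documentation page (the index name "myindex", type "event", and field name "time" are just placeholders for your own; the fielddata format line is the part that matters):

```json
PUT /myindex
{
  "mappings": {
    "event": {
      "properties": {
        "time": {
          "type": "date",
          "fielddata": {
            "format": "doc_values"
          }
        }
      }
    }
  }
}
```

With the doc_values format the sort values are kept on disk rather than loaded into the fielddata cache on the heap, which should stop the circuit breaker from tripping on your sorted queries. It has to be set at index creation time, so existing data would need to be reindexed.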
Hope it helps.
On Thursday, October 2, 2014 at 03:29:30 UTC+2, Dave Galbraith wrote:
>
> Hi! So I have millions and millions of documents in my Elasticsearch, each
> one of which has a field called "time". I need the results of my queries to
> come back in chronological order. So I put a
> "sort":{"time":{"order":"asc"}} in all my queries. This was going great
> on smaller data sets but then Elasticsearch started sending me 500s and
> circuit breaker exceptions started showing up in the logs with "data for
> field time would be too large". So I checked out
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-fielddata.html
>
> and that looks a lot like what I've been seeing: seems like it's trying to
> pull all the millions of time values into memory even if they're not
> relevant to my query. What are my options for fixing this? I can't
> compromise chronological order, it's at the heart of my application. "More
> memory" would be a short-term fix but the idea is to scale this thing to
> trillions and trillions of points and that's a race I don't want to run.
> Can I make these exceptions go away without totally tanking performance?
> Thanks!
>
--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/8ee0351d-21a7-4fd8-9082-d9eb0b5238d0%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.