BTW. I should mention that I also filed a bug for this earlier today.
https://github.com/elasticsearch/elasticsearch/issues/8684
Clinton Gormley kindly replied to that and provided some additional insight.
It indeed seems our mapping is part of the problem, but there's also the ES
side of things.
If the field you suspect is causing this is a string field in the mapping,
then you can try to close and open the index. This will sync the in-memory
representation of the mapping with what is in the cluster state.
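For reference, the close/open cycle can be done via the indices API, along these lines (a sketch only; assumes a cluster on localhost:9200 and a placeholder index name `myindex`):

```shell
# Close the index (it becomes unavailable for read/write while closed):
curl -XPOST 'localhost:9200/myindex/_close'

# Reopen it; the mapping is re-read from the cluster state on open:
curl -XPOST 'localhost:9200/myindex/_open'
```

Note that a closed index serves no requests, so on a production cluster this means a brief window of unavailability for that index.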
On 27 November 2014 at 16:49, Jilles van Gurp wrote:
Thanks for the explanation. I suspect many logstash users might be running
into this one, since you typically use a dynamic mapping with that. We have
some idea of where this is happening, though, and we can probably fix it
properly. This happened during index rollover and we indeed are indexing a
l
This looks like a mapping issue to me (not 100% sure). A document that is
in the translog has a string field (with value: 'finished'), but it is
mapped as a number field (long, integer, double, etc.) in the mapping. This
causes the number format exception that you're seeing in your logs when
that d
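This kind of type conflict is easy to trigger with dynamic mapping. A hypothetical reproduction (index name `myindex` and field name `status` are made up; assumes a cluster on localhost:9200):

```shell
# The first document makes dynamic mapping infer 'status' as a number (long):
curl -XPUT 'localhost:9200/myindex/doc/1' -d '{"status": 42}'

# A later document sends a string in the same field; Elasticsearch tries to
# parse it with the existing number mapping and the index request fails with
# a MapperParsingException caused by a NumberFormatException:
curl -XPUT 'localhost:9200/myindex/doc/2' -d '{"status": "finished"}'
```

The same parse failure can then surface during translog replay if such a document made it into the translog before the conflict was detected.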