I'm still having this problem... has anybody got an idea what the cause / 
solution might be?

Thank you! :)
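For what it's worth, the two settings I've been experimenting with in the meantime are below (values are just what I'm trying, not recommendations, and I'm assuming it's the fielddata breaker that's tripping, since the errors name fields like [@timestamp]):

```yaml
# elasticsearch.yml (ES 1.3.x) -- illustrative values only
# Bound the fielddata cache so old entries get evicted instead of
# growing until the breaker trips (it is unbounded by default in 1.x):
indices.fielddata.cache.size: 40%

# The fielddata circuit breaker itself; defaults to 60% of heap,
# which seems to roughly match the 7.1gb limit in the log lines below
# given our ~12GB heaps:
indices.breaker.fielddata.limit: 60%
```

If anyone can confirm whether capping the cache below the breaker limit is the right approach here, that would be great.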

On Tuesday, 7 October 2014 14:29:22 UTC+2, Robin Clarke wrote:
>
> I'm getting a lot of these errors in my Elasticsearch logs, and am also 
> experiencing a lot of slowness on the cluster... 
>
> New used memory 7670582710 [7.1gb] from field [machineName.raw] would be 
> larger than configured breaker: 7666532352 [7.1gb], breaking
> ...
> New used memory 7674188379 [7.1gb] from field [@timestamp] would be larger 
> than configured breaker: 7666532352 [7.1gb], breaking
>
> I've looked at the documentation about memory limits 
> <http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html>, 
> but I don't really understand what is causing this, and more importantly, 
> how to avoid it...
>
> My cluster is 10 machines @ 32GB memory and 8 CPU cores each.  I have one 
> ES node on each machine with 12GB memory allocated.  On each machine there 
> is additionally one logstash agent (1GB) and one redis server (2GB).
> I have 10 indexes open with one replica per shard, so each node should 
> be holding only 22 shards (plus two for kibana-int).
>
> I'm using Elasticsearch 1.3.3 and Logstash 1.4.2.
>
> Thanks for your help!
>
> -Robin-
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/5935b1f4-809c-46ac-ba03-f1df33a8737e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.