Graphing MySQL status variables in Kibana
Hi All,

I've been trying to graph data obtained from a SHOW STATUS MySQL query. When attempting to graph Threads_connected using Kibana, I received the following error:

    Oops! ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexNumericFieldData]

I checked my mappings, and the Value field has been added to Elasticsearch as a string even though it only ever contains integer counts. I don't really want to re-dump my data and change the mappings, so I've managed to fix the query using dynamic scripting, like so:

    curl -XGET 'http://hostname:9200/_all/_search?pretty' -d '{
      "facets": {
        "0": {
          "date_histogram": {
            "key_field": "@timestamp",
            "value": "doc['Value'].Intvalue",
            "interval": "5m"
          },
          "global": true,
          "facet_filter": {
            "fquery": {
              "query": {
                "filtered": {
                  "query": {
                    "query_string": {
                      "query": "source:mysql_status AND Variable_name:Threads_connected"
                    }
                  },
                  "filter": {
                    "bool": {
                      "must": [
                        {
                          "range": {
                            "@timestamp": {
                              "from": 1411520355984,
                              "to": 1411563555984
                            }
                          }
                        }
                      ]
                    }
                  }
                }
              }
            }
          }
        }
      },
      "size": 0
    }'

I have simply changed:

    value: Value,

to:

    value: doc['Value'].Intvalue,

This query returns what I want, but I am unable to get it into Kibana without it throwing some kind of error. So my question would be: how do I get my histogram to use the data from the above query?

Rhys

--
You received this message because you are subscribed to the Google Groups elasticsearch group. To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/77748482-1a0c-4b92-89bf-368fd635c252%40googlegroups.com. For more options, visit https://groups.google.com/d/optout.
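For reference, an alternative to re-dumping everything would be to fix the mapping only going forward. This is an untested sketch, assuming Elasticsearch 1.x, daily logstash-* indices, and the placeholder host "hostname": an index template so that each newly created daily index maps Value as an integer (existing indices would keep their string mapping, so the script workaround would still be needed for old data):

```shell
# Untested sketch: index template so FUTURE logstash-* indices map
# "Value" as an integer. "hostname" and the template name
# "mysql_status_value" are placeholders, not from the original post.
curl -XPUT 'http://hostname:9200/_template/mysql_status_value' -d '{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "Value": { "type": "integer" }
      }
    }
  }
}'
```

Once all the indices in a query's time range carry the numeric mapping, Kibana's histogram panel should be able to use Value directly as its value field, without any script.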
Set active index
Hi All,

I have set up an EFK system, just for testing at the moment. It's running in a VM without much RAM, and I am having problems with the elasticsearch process because of this: its VIRT is 12GB, which is approximately the total size of the indices. My indices are split by date, like so:

    logstash-2014.06.01
    logstash-2014.06.02
    ... and so on.

I'm guessing elasticsearch is trying to hold all of this in RAM. Is there a way I can set up elasticsearch to only search a specific index (or number of indices)? Is it just a case of archiving the logs I don't want ES to deal with? Ideally I'd like to work with only the last day or two of indices, which will hopefully all fit into RAM.

Cheers,
Rhys
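For what it's worth, one commonly suggested approach (untested here, assuming Elasticsearch 1.x and the placeholder host "hostname") is to close the older daily indices: a closed index stays on disk but is no longer held open in memory, and can be reopened on demand when an old date range is needed again:

```shell
# Sketch: close an old daily index so it stops consuming heap.
# "hostname" and the index name are placeholders for this setup.
curl -XPOST 'http://hostname:9200/logstash-2014.06.01/_close'

# ...and reopen it later if that day's logs need searching again:
curl -XPOST 'http://hostname:9200/logstash-2014.06.01/_open'
```

Scripting this daily (close everything older than a day or two) would keep only the most recent indices resident, which matches the "last day or two in RAM" goal above.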
Data too large error
I occasionally get the following error in Kibana from Elasticsearch:

    Oops! ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException: Data too large, data for field [@timestamp] would be larger than limit of [639015321/609.4mb]]

I even get this when searching for 5 minutes of data through the Kibana logstash search. It disappears and reappears for no reason apparent to me. My total data size is not that big yet. Elasticsearch has been left on the default of splitting the indices by day, and the current size is 168M for today. Any hints?

Cheers,
Rhys
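That error comes from the fielddata circuit breaker: loading the @timestamp field data for the query would push the node past its limit (by default a fraction of the JVM heap, which explains the ~609mb figure on a small heap). A couple of things worth trying, sketched here untested and assuming Elasticsearch 1.x with the placeholder host "hostname":

```shell
# Sketch: free the field data currently held in memory (the cache the
# breaker is guarding); it will be rebuilt lazily on the next query.
curl -XPOST 'http://hostname:9200/_cache/clear?field_data=true'

# Check how much field data each node is holding, per field:
curl -XGET 'http://hostname:9200/_nodes/stats/indices/fielddata?fields=*&pretty'
```

Longer term, the usual advice is to give the JVM more heap (ES_HEAP_SIZE) and/or bound the cache with indices.fielddata.cache.size in elasticsearch.yml, rather than simply raising the breaker limit.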