Re: Running Elasticsearch on AWS EC2

2014-11-16 Thread moshe zada
You can try https://github.com/stec-inc/EnhanceIO, which creates a block 
device backed by magnetic disks with SSDs acting as a cache.
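
At 25GB a day with 30-day retention you need roughly 750GB before 
replicas, which is why the instance storage alone won't fit. As a rough 
sketch of the EnhanceIO setup (flags as I recall them from the project 
README; the device names and cache name are hypothetical, so verify 
against the repo before use):

# SSD-cache an EBS-backed device with the instance's ephemeral SSD
eio_cli create -d /dev/xvdf -s /dev/xvdb -m wt -c es_data_cache

The resulting cached device is then formatted and mounted as the 
Elasticsearch data path as usual.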

On Saturday, November 15, 2014 10:01:20 AM UTC+2, Tomer Levy wrote:
>
> Hello,
>
> I'm trying to run Elasticsearch on EC2. In the webinar on running 
> Elasticsearch on EC2, it was mentioned that the only working solution is 
> ephemeral storage, since EBS is not fast enough. However, the current EC2 
> instances (m3.large or even m3.xlarge) come with very small disks (40GB) 
> that won't be enough. I want to ship 25GB a day and retain data for 30 
> days. 
>
> How do you recommend that I go about setting it up? Has anyone 
> successfully used EBS? 
>
>
>



marvel in multicluster environment

2014-11-02 Thread moshe zada
Hi,
I am trying to set up a Marvel deployment that monitors two different clusters:

ES Blue Cluster ---\
                    +---> ES Monitoring Cluster
ES Red Cluster ----/
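
For reference, each monitored cluster ships its stats by pointing its 
Marvel agent at the monitoring cluster in elasticsearch.yml (a sketch 
assuming Marvel 1.x; the host name is hypothetical):

# on every node of the Blue and Red clusters
marvel.agent.exporter.es.hosts: ["monitoring-node:9200"]

Both clusters then write into the same .marvel-* indices on the 
monitoring cluster.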

The problem is that there is no cluster_name filter, and when I try to 
add one, some panels ignore it 
(e.g. the cluster summary panel uses the filter, but the nodes panel ignores it).

Is Marvel meant to be able to monitor more than one cluster?



Re: Json Data not getting parsed when sent to Elasticsearch

2014-08-24 Thread moshe zada
What is your Logstash configuration?
Did you try the json codec?
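
For example, with a TCP input like the one in your debug output (a 
minimal sketch; the port is hypothetical):

input {
  tcp {
    port  => 5000
    codec => json   # parse each incoming event as JSON at the input
  }
}
output {
  elasticsearch { host => "localhost" }
}

With codec => json on the input, nested objects such as uri_payload 
should arrive already parsed, with no separate json filter on the 
message field needed.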

On Sunday, August 24, 2014 4:54:08 PM UTC+3, Didjit wrote:
>
> Hi,
>
> The following is a debug from Logstash:
>
> {
> "message" => 
> "{\"EventTime\":\"2014-08-24T09:44:46-0400\",\"URI\":\"
> http://ME/rest/venue/ME/hours/2014-08-24\
> ",\"uri_payload\":{\"value\":[{\"open\":\"2014-08-24T13:00:00.000+\",\"close\":\"2014-08-24T23:00:00.000+\",\"isOpen\":true,\"date\":\"2014-08-24\"}],\"Count\":1}}\r",
>"@version" => "1",
>  "@timestamp" => "2014-08-24T13:44:48.036Z",
>"host" => "127.0.0.1:60778",
>"type" => "MY_Detail",
>   "EventTime" => "2014-08-24T09:44:46-0400",
> "URI" => "http://ME/rest/venue/ME//hours/2014-08-24";,
> "uri_payload" => {
> "value" => [
> [0] {
>   "open" => "2014-08-24T13:00:00.000+",
>  "close" => "2014-08-24T23:00:00.000+",
> "isOpen" => true,
>   "date" => "2014-08-24"
> }
> ],
> "Count" => 1,
> "0" => {}
> },
>  "MYId" => "ME"
> }
> ___
>
> When I look in Elasticsearch, the fields under uri_payload are not 
> parsed. It shows uri_payload.value as a single field containing:
>
> "{"open":"2014-08-21T13:00:00.000+","close":"2014-08-21T23:00:00.000+","isOpen":true,"date":"2014-08-21"}"
>
> How can I get all the parsed values as fields in Elasticsearch? In my 
> example, the fields open, close, and isOpen. Initially I thought Logstash 
> was not parsing all the JSON, but looking at the debug output, it is.
>
> Thank you,
>
> Chris
>
>
>
>



date_histogram facet float possible overflow

2014-08-24 Thread moshe zada
 

Hi all,

I am using the ELK stack to visualise our monitoring data. Yesterday I 
came across a weird problem: an Elasticsearch date_histogram facet 
returned floating-point results that look like an overflow ("min" : 
4.604480259023595E18).
Our data flow is: collectd (cpu/memory) -> riemann -> logstash -> 
elasticsearch.

At first the values were correct; after a few days the values became huge 
(see the attached snapshot of the Kibana graph).

*filtered query + Result:*

*query:*
curl -XGET 'http://localhost:9200/logstash-2014.08.24/_search?pretty' -d '{
  "query": {
"filtered": {
  "query": {
"bool": {
  "should": [
{
  "query_string": {
"query": 
"subservice.raw:\"processes-cpu_percent/gauge-collectd\" AND 
(plugin_instance:\"cpu_percent\")"
  }
}
  ]
}
  },
  "filter": {
"bool": {
  "must": [
{
  "range": {
"@timestamp": {
  "from": 1408884312966,
  "to": 1408884612966
}
  }
},
{
  "range": {
"@timestamp": {
  "from": 1408884311948,
  "to": 1408884327941
}
  }
},
{
  "fquery": {
"query": {
  "query_string": {
"query": 
"subservice:(\"processes-cpu_percent/gauge-collectd\")"
  }
},
"_cache": false
  }
}
  ]
}
  }
}
  },
  "size": 500,
  "sort": [
{
  "metric": {
"order": "desc",
"ignore_unmapped": false
  }
},
{
  "@timestamp": {
"order": "desc",
"ignore_unmapped": false
  }
}
  ]
}'




*result:*
{
  "took" : 47,
  "timed_out" : false,
  "_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
  },
  "hits" : {
"total" : 2,
"max_score" : null,
"hits" : [ {
  "_index" : "logstash-2014.08.24",
  "_type" : "gauge",
  "_id" : "SlzG8bGJQziU0LMoN7nrbQ",
  "_score" : null,
  "_source":{"host":"host1","service":
"instance-2014-08-24T1106/processes-cpu_percent/gauge-collectd","state":null
,"description":null,"metric":0.7,"tags":["collectd"],"time":
"2014-08-24T12:45:25.000Z","ttl":20.0,"type":"gauge","source":"host1",
"ds_type":"gauge","plugin_instance":"cpu_percent","ds_name":"value",
"type_instance":"collectd","plugin":"processes","ds_index":"0","@version":
"1","@timestamp":"2014-08-24T12:45:15.079Z"},
  "sort" : [ 4604480259023595110, 1408884325088 ]

}, {

  "_index" : "logstash-2014.08.24",
  "_type" : "gauge",
  "_id" : "8hxToMjpQ5WQIw15DQqIGA",
  "_score" : null,
  "_source":{"host":"host1","service":
"instance-2014-08-24T1106/processes-cpu_percent/gauge-collectd","state":null
,"description":null,"metric":0.5,"tags":["collectd"],"time":
"2014-08-24T12:45:15.000Z","ttl":20.0,"type":"gauge","source":"host1",
"ds_type":"gauge","plugin_instance":"cpu_percent","ds_name":"value",
"type_instance":"collectd","plugin":"processes","ds_index":"0","@version":
"1","@timestamp":"2014-08-24T12:45:15.079Z"},
  "sort" : [ 4602678819172646912, 1408884315079 ]
} ]
  }
}
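
For what it's worth, 4604480259023595110 is exactly the 64-bit IEEE-754 
bit pattern of the double 0.7, and 4602678819172646912 of 0.5 (the two 
metric values in the hits above), so the huge sort values look like the 
raw sortable-long encoding of the doubles rather than a true overflow. A 
quick check in Python:

python -c "import struct; print(struct.unpack('<q', struct.pack('<d', 0.7))[0])"
# 4604480259023595110
python -c "import struct; print(struct.unpack('<q', struct.pack('<d', 0.5))[0])"
# 4602678819172646912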




*date histogram facet + Result:*

*query:*
curl -XGET 'http://localhost:9200/logstash-2014.08.24/_search?pretty' -d '{
  "facets": {
"0": {
  "date_histogram": {
"key_field": "@timestamp",
"value_field": "metric",
"interval": "1s"
  },
  "global": true,
  "facet_filter": {
"fquery": {
  "query": {
"filtered": {
  "query": {
"query_string": {
  "query": 
"subservice.raw:\"processes-cpu_percent/gauge-collectd\" AND 
(plugin_instance:cpu_percent) AND *"
}
  },
  "filter": {
"bool": {
  "must": [
{
  "range": {
"@timestamp": {
  "from": 1408884199622,
  "to": 1408884499623
}
  }
},
{
  "range": {
"@timestamp": {
  "from": 1408884311948,
  "to": 1408884327941
}
  }
},
{
  "fquery": {
"query": {
  "query_string": {
"query": 
"subservice:(\"processes-cpu_percent/gauge-collectd\")"
  }
},
"_cache": true
  }
}
  ]
}