Kibana never fully loads, stays searching

2015-05-11 Thread Tony Chong
Hi, 

Strange issue. Kibana and indexing work correctly at first, but after 
several weeks the UI always shows that Kibana is searching. No matter what 
I query or which time interval I pick, it is always searching and never 
returns anything. I don't see anything funky in the logs, and if I delete 
all my indices, everything works fine again. The only thing I can think of 
is that every 6 days, I delete an index that is 6 days old, because I don't 
have enough machines and disk space.
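
For reference, the cleanup is nothing fancy; I just delete the old day's 
index over the REST API, roughly like this (the index name is only an 
example):

curl -XDELETE 'http://localhost:9200/logstash-2015.05.05'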

Cluster health seems okay:

{
   "cluster_name": "prod",
   "status": "green",
   "timed_out": false,
   "number_of_nodes": 12,
   "number_of_data_nodes": 11,
   "active_primary_shards": 154,
   "active_shards": 154,
   "relocating_shards": 0,
   "initializing_shards": 0,
   "unassigned_shards": 0,
   "number_of_pending_tasks": 0
}
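
(For completeness, that's the output of the cluster health API, i.e.:)

curl -XGET 'http://localhost:9200/_cluster/health?pretty'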


Running on EC2. Performance looks okay on each node. My indices total about 
8TB across 11 machines. I have one machine set up as a read-only ES node, 
and that's where Kibana lives as well. The instance types are m3.2xlarge.

Any suggestions would be appreciated. Thanks!

Tony



org.elasticsearch.index.mapper.MapperParsingException: failed to parse -- NEED HELP

2015-04-22 Thread Tony Chong
Hi,

I have read about similar problems online but haven't really figured out 
what the solution is, and was hoping somebody could point me in the right 
direction.

I'm using ELK.

ES 1.5.0
Logstash 1.5.0rc2
Kibana 4.0.1


I have all types of application logs that are written out as JSON, but the 
JSON can be nested, and there are values that can sometimes be null. If I 
delete all my indices and restart everything from scratch (restart 
Elasticsearch, Kibana, and Logstash), everything works fine. Every 2 or 3 
weeks, I start seeing errors such as these:

[2015-04-21 21:15:22,197][DEBUG][action.bulk  ] 
[elasticsearch09] [logstash-2015.04.21][20] failed to execute bulk item 
(index) index {[logstash-2015.04.21][exchange][AUzd1drI5RY1eWdGgIZm], 
source[{log_type:error_server_response,app_name:Adap.TV Advertising 
iOS,app_id:538e2980a1df5de779e7,pub_app_id:653967729,streaming_url:http://u-ads.adap.tv/a/h/eaLAQ7VgTRxUq8XV5RJZpiQfAqV5UMkXExLyYUmEPHM=?cb=2015-04-21T05%3A16%3A28%2B00%3A00pet=prerollpageUrl=apps%3A%2F%2FCharades!%20Guess%20Words%20and%20Draw%20or%20Doodle%20Something%20Taboo%20Tilt%20Your%20Heads%20Jump%20Up%20with%20Friends%20Freea.ip=76.165.97.2a.aid=15eaddd2-52fe-401f-b33c-333f5f78e01fa.idfa=15eaddd2-52fe-401f-b33c-333f5f78e01fa.ua=Mozilla%2F5.0%20(iPhone%3B%20CPU%20iPhone%20OS%207_1_1%20like%20Mac%20OS%20X)%20AppleWebKit%2F537.51.2%20(KHTML%2C%20like%20Gecko)%20Mobile%2F11D201eov=eov,error:Cannot
 
have empty 
VAST,host:ip-10-233-53-75,level:info,message:,timestamp:2015-04-21
 
05:16:29.021,@version:1,@timestamp:2015-04-21T21:15:22.087Z,type:exchange}]}
org.elasticsearch.index.mapper.MapperParsingException: object mapping for 
[exchange] tried to parse field [error] as object, but got EOF, has a 
concrete value been provided to it?
at 
org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:495)
at 
org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:544)
at 
org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:493)
at 
org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:438)
at 
org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:432)
at 
org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:149)
at 
org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:515)
at 
org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:422)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
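
My guess from that trace is a dynamic-mapping conflict: in this document 
[error] is a plain string, but an earlier document in the same day's index 
must have sent error as a nested object, so dynamic mapping locked the 
field in as an object. One thing I'm considering (untested sketch, the 
template name is mine) is an index template that pins error to a string 
for new logstash indices:

curl -XPUT 'http://localhost:9200/_template/logstash_error_as_string' -d '
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "error": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'

A template like that would only apply to indices created after it is put 
in place; existing indices keep whatever mapping they already have.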


AND

[2015-04-21 21:19:09,286][DEBUG][action.bulk  ] 
[elasticsearch03] [logstash-2015.04.21][19] failed to execute bulk item 
(index) index {[logstash-2015.04.21][requestAds][AUzd2VGHPBf0vCHmQV4j], 
source[{country:BR,region:18,city:Curitiba,latitude:null,longitude:null,device_language:pt,browser_user_agent:VungleDroid/3.3.0,is_sd_card_available:1,device_make:motorola,device_model:XT1033,device_height:1184,device_width:720,os_version:5.0.2,platform:android,sound_enabled:false,volume:0,device_id:5ec495a3-80cb-4682-a0a9-c66c2ca122ea,ifa:5ec495a3-80cb-4682-a0a9-c66c2ca122ea,isu:5ec495a3-80cb-4682-a0a9-c66c2ca122ea,user_age:null,user_gender:null,ip_address:189.123.219.242,connection:wifi,network_operator:TIM,pub_app_id:507686ae771615941001aca5,pub_app_bundle_id:com.kiloo.subwaysurf,ad_app_id:null,campaign_id:null,creative_id:null,event_id:null,sleep:-1,strategy:null,expiry:null,post_bundle:null,video_url:null,show_close:null,show_close_incentivized:null,video_height:null,video_width:null,call_to_action_url:null,call_to_action_destination:null,countdown:null,delay:null,error:Cached
 
ad is 
better,shouldStream:false,message:,is_test:false,host:ip-10-155-170-179,level:info,timestamp:2015-04-21
 
21:18:21.280,@version:1,@timestamp:2015-04-21T21:19:07.996Z,type:requestAds}]}
org.elasticsearch.index.mapper.MapperParsingException: failed to parse 
[error]
at 
org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:410)
at 
org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:706)
at 
org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:497)
at 
org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:544)
at 
org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:493)
at 
org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:438)
at 
org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:432)
at 

Re: Elasticsearch service often goes down or gets killed

2015-04-21 Thread Tony Chong
I have seen these types of issues when the heap size was not big enough. 
The process WILL just die, and you will not know what happened.
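
Two things worth checking: what the heap is actually set to (for ES 1.x it 
is just the ES_HEAP_SIZE environment variable), and whether the Linux OOM 
killer took the process out, which shows up in dmesg rather than in the ES 
logs. Roughly:

# set the heap before starting Elasticsearch (the value is only an example)
export ES_HEAP_SIZE=8g

# check whether the kernel OOM killer killed the JVM
dmesg | grep -i -E 'killed process|out of memory'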



org.elasticsearch.index.mapper.MapperParsingException: failed to parse - need guidance

2015-04-21 Thread Tony Chong

Hi,

Using ELK
ES 1.5.0
LS 1.5.0rc2
Kibana 4.0.1

I have read about similar issues but wasn't really sure what the proper 
way to fix this is. I basically have a bunch of log files written out as 
nested JSON. The logs can be a combination of various key:value pairs, 
sometimes with the value being null (literally), as you can see in my log 
excerpt below.


Two things of note: if I wipe out all my indices and restart the ELK stack, 
everything works again. This only seems to happen after about 2 to 3 weeks 
of being online. Second, this wasn't an issue with Logstash 1.4.2 and the 
corresponding ES version. Any assistance would be awesome.

Thanks!

Tony



[2015-04-21 21:19:09,286][DEBUG][action.bulk  ] 
[elasticsearch03] [logstash-2015.04.21][19] failed to execute bulk item 
(index) index {[logstash-2015.04.21][requestAds][AUzd2VGHPBf0vCHmQV4j], 
source[{country:BR,region:18,city:Curitiba,latitude:null,longitude:null,device_language:pt,browser_user_agent:VungleDroid/3.3.0,is_sd_card_available:1,device_make:motorola,device_model:XT1033,device_height:1184,device_width:720,os_version:5.0.2,platform:android,sound_enabled:false,volume:0,device_id:5ec495a3-80cb-4682-a0a9-c66c2ca122ea,ifa:5ec495a3-80cb-4682-a0a9-c66c2ca122ea,isu:5ec495a3-80cb-4682-a0a9-c66c2ca122ea,user_age:null,user_gender:null,ip_address:189.123.219.242,connection:wifi,network_operator:TIM,pub_app_id:507686ae771615941001aca5,pub_app_bundle_id:com.kiloo.subwaysurf,ad_app_id:null,campaign_id:null,creative_id:null,event_id:null,sleep:-1,strategy:null,expiry:null,post_bundle:null,video_url:null,show_close:null,show_close_incentivized:null,video_height:null,video_width:null,call_to_action_url:null,call_to_action_destination:null,countdown:null,delay:null,error:Cached
 
ad is 
better,shouldStream:false,message:,is_test:false,host:ip-10-155-170-179,level:info,timestamp:2015-04-21
 
21:18:21.280,@version:1,@timestamp:2015-04-21T21:19:07.996Z,type:requestAds}]}
org.elasticsearch.index.mapper.MapperParsingException: failed to parse 
[error]
at 
org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:410)
at 
org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:706)
at 
org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:497)
at 
org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:544)
at 
org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:493)
at 
org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:438)
at 
org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:432)
at 
org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:149)
at 
org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:515)
at 
org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:422)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NumberFormatException: For input string: Cached ad is 
better
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:441)
at java.lang.Long.parseLong(Long.java:483)
at 
org.elasticsearch.common.xcontent.support.AbstractXContentParser.longValue(AbstractXContentParser.java:145)
at 
org.elasticsearch.index.mapper.core.LongFieldMapper.innerParseCreateField(LongFieldMapper.java:300)
at 
org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:236)
at 
org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:400)
... 12 more
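
For what it's worth, the NumberFormatException looks like a dynamic-mapping 
conflict: an earlier document on this index must have sent a numeric error 
value, so the field got mapped as a long, and now the string "Cached ad is 
better" can't be parsed into it. The mapping that actually got applied can 
be checked with something like:

curl -XGET 'http://localhost:9200/logstash-2015.04.21/_mapping/requestAds?pretty'
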
[2015-04-21 21:19:09,286][DEBUG][action.bulk  ] 
[elasticsearch03] [logstash-2015.04.21][8] failed to execute bulk item 
(index) index {[logstash-2015.04.21][albatross][AUzd2VGOtXgWXEfZvOn3], 
source[{log_type:albatross_vast_error,error:Cannot have empty 

Re: Kibana response time is too slow, need help identifying why

2014-08-05 Thread Tony Chong
Well, my slow logs are 0 bytes. My logging.yml looks okay, but I don't 
think the slow log thresholds are configured anywhere. I looked at the ES 
docs and saw that I should have these set somewhere. I'm thinking the 
elasticsearch.yml configuration file?


#index.search.slowlog.threshold.query.warn: 10s
#index.search.slowlog.threshold.query.info: 5s
#index.search.slowlog.threshold.query.debug: 2s
#index.search.slowlog.threshold.query.trace: 500ms

#index.search.slowlog.threshold.fetch.warn: 1s
#index.search.slowlog.threshold.fetch.info: 800ms
#index.search.slowlog.threshold.fetch.debug: 500ms
#index.search.slowlog.threshold.fetch.trace: 200ms
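
I guess I can either uncomment those in elasticsearch.yml and restart, or, 
since they are dynamic index settings, set them per index over the API; 
something like this (untested, the index name is only an example):

curl -XPUT 'http://localhost:9200/logstash-2014.08.03/_settings' -d '
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.fetch.warn": "1s"
}'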


And querying for hot threads never returns a response. I have Marvel 
installed as well. Is there something else I can look at? Thanks,

Tony



Re: Kibana response time is too slow, need help identifying why

2014-08-05 Thread Tony Chong
Turns out you shouldn't use the head plugin when querying for hot threads. 
I was able to get them by querying the API directly. Thanks for the tip!
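
For anyone who hits the same thing, the direct call is just the nodes hot 
threads endpoint, e.g.:

curl -XGET 'http://localhost:9200/_nodes/hot_threads'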




Kibana response time is too slow, need help identifying why

2014-08-04 Thread Tony Chong
Hello, 

Like many others, I have the ELK stack. With very little data in 
Elasticsearch, Kibana 3 is super fast, but in my production environment, 
Kibana sometimes even fails to show any data.

Here are my hardware specs. 

Kibana + ES + nginx = m2.2xlarge + 20GB JVM heap + 1TB SSD EBS volume
5 other ES machines = m3.xlarge + 10GB JVM heap + 1TB EBS volumes

We are doing about 150GB per index, 600 million documents. 

6 shards per index, replication 1. 

I don't know if I'm severely under-provisioned in the number of machines I 
need, or if my Kibana is misconfigured. Using the ES head plugin, I can run 
a search against a Logstash index for a host and get a really fast response 
time, so my suspicion is with Kibana.

http://mylogstash_server.com:9200/logstash-2014.8.03/_search

{
  "query": {
    "term": { "host": "web03d" }
  }
}
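
(For reference, the equivalent from the shell is roughly:)

curl -XGET 'http://mylogstash_server.com:9200/logstash-2014.8.03/_search' -d '
{
  "query": {
    "term": { "host": "web03d" }
  }
}'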

Thanks to all who care to respond!
Tony



Kibana Elasticsearch Shards and replication

2014-07-07 Thread Tony Chong
Hi everyone,

Sorry if this has been covered, but a few pages of searching through the 
group hasn't turned up an answer for this.

If I decided to have 3 Elasticsearch nodes, with 3 shards and 0 replicas, 
would Kibana be able to retrieve all the data in my ES cluster, or just the 
data from the Elasticsearch node listed in its configuration, considering 
that not all the shards would live on that node?

Thanks,

Tony
