Hi Steve,
I suggest checking whether these indices still exist by querying ES directly:

curl -XGET localhost:9200/graylog_59/_stats?pretty

curl -XGET http://localhost:9200/_cat/shards
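
To see whether graylog_59 still shows up there (even with unassigned shards), you can filter that listing. This is just a sketch that assumes the default "graylog_" index prefix and an ES node listening locally on 9200:

curl -XGET http://localhost:9200/_cat/shards | grep graylog_59   # adjust host and index prefix to your setup; UNASSIGNED rows mean the shard exists but is not allocated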


curl -XGET http://localhost:9200/_cluster/health?pretty
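
If the health comes back yellow or red, the same endpoint can break the status down per index, which quickly shows whether graylog_59 is one of the problem indices (again assuming a single local node on 9200; adjust the URL to your setup):

curl -XGET 'http://localhost:9200/_cluster/health?level=indices&pretty'   # per-index status instead of the cluster-wide rollup

Once ES reports those indices as present and at least yellow, re-running Maintenance -> Recalculate index ranges should make them show up again under System -> Indices.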

HTH
Ciao
Alberto

On Tuesday, November 10, 2015 at 1:16:52 AM UTC+1, Steve Kirkpatrick wrote:
>
> Hello,
>
> Running Graylog V1.2.2 using the VM appliance from graylog.org.
>
> Been having performance issues.  When I first start Graylog, everything is 
> snappy.  By the next day, things have gotten more sluggish.  Sometimes it 
> takes 5-10 attempts to log in to the web interface.
>
> One problem I have is that two of the indices have dropped off the list on 
> the Systems->Indices page.
> After some googling, I decided to try Maintenance->Recalculate index 
> ranges.
> The job completes but neither of the two indices reappear in the list.
>
> I found these errors in /var/log/graylog/server/current:
>
> 2015-11-09_23:28:53.06554 INFO  [RebuildIndexRangesJob] Re-calculating 
> index ranges.
> 2015-11-09_23:28:53.06590 INFO  [SystemJobManager] Submitted SystemJob 
> <a3802c80-8739-11e5-8dd3-005056b859d5> 
> [org.graylog2.indexer.ranges.RebuildIndexRangesJob]
> 2015-11-09_23:28:53.12839 INFO  [MongoIndexRangeService] Calculated range 
> of [graylog_47] in [56ms].
> ...
> 2015-11-09_23:28:54.49844 INFO  [MongoIndexRangeService] Calculated range 
> of [graylog_55] in [101ms].
> 2015-11-09_23:28:54.81895 INFO  [MongoIndexRangeService] Calculated range 
> of [graylog_58] in [211ms].
> 2015-11-09_23:28:54.94361 INFO  [MongoIndexRangeService] Calculated range 
> of [graylog_57] in [123ms].
> 2015-11-09_23:28:55.04214 ERROR [Indices] Error while calculating 
> timestamp stats in index <graylog_59>
> 2015-11-09_23:28:55.04216
> org.elasticsearch.action.search.SearchPhaseExecutionException: Failed to
> execute phase [query], all shards failed; shardFailures
> {[XnEo6hwLTeyUZ4EluxaIEw][graylog_59][0]:
> RemoteTransportException[[X-Cutioner][inet[/172.20.39.61:9300]][indices:data/read/search[phase/query]]];
> nested: ClassCastException; }{[XnEo6hwLTeyUZ4EluxaIEw][graylog_59][1]:
> RemoteTransportException[[X-Cutioner][inet[/172.20.39.61:9300]][indices:data/read/search[phase/query]]];
> nested: ClassCastException; }{[XnEo6hwLTeyUZ4EluxaIEw][graylog_59][2]:
> RemoteTransportException[[X-Cutioner][inet[/172.20.39.61:9300]][indices:data/read/search[phase/query]]];
> nested: ClassCastException; }{[XnEo6hwLTeyUZ4EluxaIEw][graylog_59][3]:
> RemoteTransportException[[X-Cutioner][inet[/172.20.39.61:9300]][indices:data/read/search[phase/query]]];
> nested: ClassCastException; }
> 2015-11-09_23:28:55.04217       at 
> org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:237)
> 2015-11-09_23:28:55.04218       at 
> org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.onFailure(TransportSearchTypeAction.java:183)
> 2015-11-09_23:28:55.04218       at 
> org.elasticsearch.search.action.SearchServiceTransportAction$6.handleException(SearchServiceTransportAction.java:249)
> 2015-11-09_23:28:55.04219       at 
> org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:190)
> 2015-11-09_23:28:55.04219       at 
> org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:180)
> 2015-11-09_23:28:55.04220       at 
> org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:130)
> 2015-11-09_23:28:55.04220       at 
> org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> 2015-11-09_23:28:55.04220       at 
> org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> 2015-11-09_23:28:55.04221       at 
> org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> 2015-11-09_23:28:55.04221       at 
> org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
> 2015-11-09_23:28:55.04222       at 
> org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
> 2015-11-09_23:28:55.04222       at 
> org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
> 2015-11-09_23:28:55.04223       at 
> org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
> 2015-11-09_23:28:55.04223       at 
> org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> 2015-11-09_23:28:55.04224       at 
> org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> 2015-11-09_23:28:55.04225       at 
> org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
> 2015-11-09_23:28:55.04225       at 
> org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
> 2015-11-09_23:28:55.04226       at 
> org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
> 2015-11-09_23:28:55.04226       at 
> org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
> 2015-11-09_23:28:55.04226       at 
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
> 2015-11-09_23:28:55.04227       at 
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
> 2015-11-09_23:28:55.04228       at 
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
> 2015-11-09_23:28:55.04228       at 
> org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
> 2015-11-09_23:28:55.04228       at 
> org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
> 2015-11-09_23:28:55.04229       at 
> org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
> 2015-11-09_23:28:55.04229       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 2015-11-09_23:28:55.04230       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 2015-11-09_23:28:55.04230       at java.lang.Thread.run(Thread.java:745)
> 2015-11-09_23:28:55.04250 INFO  [RebuildIndexRangesJob] Could not 
> calculate range of index [graylog_59]. Skipping.
> 2015-11-09_23:28:55.04252 org.elasticsearch.indices.IndexMissingException: 
> [graylog_59] missing
> 2015-11-09_23:28:55.04252       at 
> org.graylog2.indexer.indices.Indices.timestampStatsOfIndex(Indices.java:482)
> 2015-11-09_23:28:55.04253       at 
> org.graylog2.indexer.ranges.MongoIndexRangeService.calculateRange(MongoIndexRangeService.java:118)
> 2015-11-09_23:28:55.04253       at 
> org.graylog2.indexer.ranges.RebuildIndexRangesJob.execute(RebuildIndexRangesJob.java:96)
> 2015-11-09_23:28:55.04253       at 
> org.graylog2.system.jobs.SystemJobManager$1.run(SystemJobManager.java:88)
> 2015-11-09_23:28:55.04254       at 
> com.codahale.metrics.InstrumentedScheduledExecutorService$InstrumentedRunnable.run(InstrumentedScheduledExecutorService.java:235)
> 2015-11-09_23:28:55.04254       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 2015-11-09_23:28:55.04254       at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2015-11-09_23:28:55.04255       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> 2015-11-09_23:28:55.04255       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> 2015-11-09_23:28:55.04256       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 2015-11-09_23:28:55.04256       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 2015-11-09_23:28:55.04257       at java.lang.Thread.run(Thread.java:745)
> 2015-11-09_23:28:55.20408 INFO  [MongoIndexRangeService] Calculated range 
> of [graylog_61] in [161ms].
> 2015-11-09_23:28:55.38073 INFO  [MongoIndexRangeService] Calculated range 
> of [graylog_60] in [175ms].
>
> graylog_59 is one of the two missing indices.
>
> Is it possible to "fix" these indices and gain access to the data 
> contained within them?
> I originally configured the system to keep 30 indices, each with 24 hours 
> of data.
> Today I reconfigured that to 60 indices at 12 hours each.  Not sure if 
> that will help with the performance issues.
> Is there a rule of thumb for index sizing?
>
> Anything else I should be looking at to figure out the performance issues?
> The performance graphs for the VM look OK in vSphere; no resources appear 
> to be overwhelmed.
>
> Thanks for any guidance.
>
> Steve.
>
>
