Re: [graylog2] Re: Elasticsearch cluster unhealthy (RED)

2016-07-11 Thread Arief Hydayat
Hi Marcus,

Thanks a lot. It took me a few days of trying, and it turned out to be my 
mistake: I was supposed to replace localhost with the specific IP address 
that I had set up. The curl commands you gave work now and I can get the 
return values from them.

From the curl http://localhost:9200/_cat/indices command I can see:

yellow open graylog_8 4 1 13715099 0 5.2gb 5.2gb
yellow open graylog_7 4 1 20001845 0 7.4gb 7.4gb
yellow open graylog_6 4 1 20003032 0 7.3gb 7.3gb
yellow open graylog_5 4 1 2307 0 6.9gb 6.9gb
yellow open graylog_4 4 1 20002381 0 7.4gb 7.4gb
yellow open graylog_3 4 1 20001081 0 7.2gb 7.2gb

I've tried to delete some older indices through the web interface as you 
mentioned, but the status is still yellow. If we delete all the older indices 
and keep only the current index that is being written to, what will happen?
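
For what it's worth, a hedged way to see which indices keep the cluster 
yellow (assuming direct access to Elasticsearch; unassigned replica shards on 
a single-node setup are a common cause):

curl 'http://localhost:9200/_cluster/health?level=indices&pretty=true'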


On Wednesday, June 29, 2016 at 3:21:41 PM UTC+8, Marcus Franke wrote:
>
> Hi,
>
> there are some REST API endpoints in elasticsearch you can check:
>
> General Overview:
> curl 'http://localhost:9200/_cluster/health?pretty=true'
>
> Overview over your indices:
> curl http://localhost:9200/_cat/indices
>
> This will list the index that is red; I guess there is not enough disk space
> and thus unallocated shards. I had the same problem.
>
>
> https://www.elastic.co/guide/en/elasticsearch/reference/2.3/cat-indices.html
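>
> As a side note, a hedged sketch (these _cat endpoints exist in ES 1.x/2.x) to
> see which shards are unallocated and how full each data node's disk is:
>
> curl 'http://localhost:9200/_cat/shards?v' | grep UNASSIGNED
> curl 'http://localhost:9200/_cat/allocation?v'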
>
> My problem was that the newly created deflector index could not be allocated. I
> deleted some older indices from the Graylog web interface and curl'ed the
> unallocated index away:
>
> curl -XDELETE http://localhost:9200/graylog2_1234/
>
> That particular index was created again, as my current deflector was _full_,
> and everything was fine again. Now I keep a closer eye on the disk space
> of my ES nodes.
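>
> For completeness, a hedged example (the setting name is a real ES setting, the
> value is only illustrative) of inspecting or temporarily relaxing the disk
> watermark that blocks shard allocation:
>
> curl 'http://localhost:9200/_cluster/settings?pretty'
> curl -XPUT 'http://localhost:9200/_cluster/settings' -d '
>   {"transient": {"cluster.routing.allocation.disk.watermark.low": "90%"}}'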
>
>
> Greetings,
> Marcus
>



[graylog2] Help for wildcards

2016-07-11 Thread Bruno Ribeiro
Hello,

I need some help with wildcards.

I want to find a modification on a file server, but I only know that the file 
name is anual_revenues.

If I use this query,

source: servername AND ObjectName:*revenues*  ->  I get several results that 
contain revenues in the ObjectName field.

But if I use this query,

source: servername AND ObjectName:*anual_revenues*  ->  I find nothing.
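
A hedged way to check how that value is actually tokenised (assuming direct 
access to Elasticsearch; the analyzer used for the message fields may differ 
from "standard"):

curl 'http://localhost:9200/_analyze?analyzer=standard&text=anual_revenues&pretty'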



[graylog2] Re: Graylog IO Exception Error

2016-07-11 Thread Ariel Godinez
Increasing the heap sizes on ES and Graylog fixed the issue. 
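
For anyone hitting the same thing, a rough sketch of where the heap sizes live 
(the file paths and values are assumptions and vary by package/distribution):

# Elasticsearch 1.x/2.x packages: /etc/default/elasticsearch or /etc/sysconfig/elasticsearch
ES_HEAP_SIZE=4g

# Graylog server package: /etc/default/graylog-server (keep the other default JVM flags)
GRAYLOG_SERVER_JAVA_OPTS="-Xms2g -Xmx2g"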

On Friday, July 8, 2016 at 11:07:46 AM UTC-5, Ariel Godinez wrote:
>
> After further investigation I think this was due to elasticsearch and 
> graylog being overloaded. I have increased their heap sizes accordingly and 
> will see how the system performs.
>
> Ariel
>
> On Wednesday, July 6, 2016 at 12:21:11 PM UTC-5, Ariel Godinez wrote:
>>
>> Hello,
>>
>> I've been using Graylog for a couple of weeks now and started to notice some 
>> unusual behavior today. I am currently running a single-node setup.
>>
>> The Issue:
>>
>> Every once in a while I notice that Graylog is dragging quite a bit (the 
>> loading spinner persists much longer than usual), so I go check the logs and 
>> find the following error message. 
>>
>> ERROR [ServerRuntime$Responder] An I/O error has occurred while writing a 
>> response message entity to the container output stream.
>> org.glassfish.jersey.server.internal.process.MappableException: 
>> java.io.IOException: Connection closed
>> at 
>> org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:92)
>>  
>> ~[graylog.jar:?]
>> at 
>> org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
>>  
>> ~[graylog.jar:?]
>> at 
>> org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1130)
>>  
>> ~[graylog.jar:?]
>> at 
>> org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse(ServerRuntime.java:711)
>>  
>> [graylog.jar:?]
>> at 
>> org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:444)
>>  
>> [graylog.jar:?]
>> at 
>> org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:434)
>>  
>> [graylog.jar:?]
>> at 
>> org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:329) 
>> [graylog.jar:?]
>> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) 
>> [graylog.jar:?]
>> at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) 
>> [graylog.jar:?]
>> at org.glassfish.jersey.internal.Errors.process(Errors.java:315) 
>> [graylog.jar:?]
>> at org.glassfish.jersey.internal.Errors.process(Errors.java:297) 
>> [graylog.jar:?]
>> at org.glassfish.jersey.internal.Errors.process(Errors.java:267) 
>> [graylog.jar:?]
>> at 
>> org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
>>  
>> [graylog.jar:?]
>> at 
>> org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305) 
>> [graylog.jar:?]
>> at 
>> org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
>>  
>> [graylog.jar:?]
>> at 
>> org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:384)
>>  
>> [graylog.jar:?]
>> at 
>> org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:224) 
>> [graylog.jar:?]
>> at 
>> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>>  
>> [graylog.jar:?]
>> at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>  
>> [?:1.8.0_91]
>> at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>  
>> [?:1.8.0_91]
>> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]
>> Caused by: java.io.IOException: Connection closed
>> at 
>> org.glassfish.grizzly.asyncqueue.TaskQueue.onClose(TaskQueue.java:317) 
>> ~[graylog.jar:?]
>> at 
>> org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.onClose(AbstractNIOAsyncQueueWriter.java:501)
>>  
>> ~[graylog.jar:?]
>> at 
>> org.glassfish.grizzly.nio.transport.TCPNIOTransport.closeConnection(TCPNIOTransport.java:412)
>>  
>> ~[graylog.jar:?]
>> at 
>> org.glassfish.grizzly.nio.NIOConnection.doClose(NIOConnection.java:604) 
>> ~[graylog.jar:?]
>> at 
>> org.glassfish.grizzly.nio.NIOConnection$5.run(NIOConnection.java:570) 
>> ~[graylog.jar:?]
>> at 
>> org.glassfish.grizzly.nio.DefaultSelectorHandler.execute(DefaultSelectorHandler.java:235)
>>  
>> ~[graylog.jar:?]
>> at 
>> org.glassfish.grizzly.nio.NIOConnection.terminate0(NIOConnection.java:564) 
>> ~[graylog.jar:?]
>> at 
>> org.glassfish.grizzly.nio.transport.TCPNIOConnection.terminate0(TCPNIOConnection.java:291)
>>  
>> ~[graylog.jar:?]
>> at 
>> org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.writeCompositeRecord(TCPNIOAsyncQueueWriter.java:197)
>>  
>> ~[graylog.jar:?]
>> at 
>> org.glassfish.grizzly.nio.transport.TCPNIOAsyncQueueWriter.write0(TCPNIOAsyncQueueWriter.java:92)
>>  
>> ~[graylog.jar:?]
>> at 
>> org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.p

[graylog2] Re: Graylog slow processing.

2016-07-11 Thread Hema Kumar
Hi Jan,
Upgrading to the 2.x version will take at least 6-7 months for us to 
migrate. About the heap: it is at 70% and there are no issues with it. No logs 
explain the slow rate, apart from what I have posted. There was an error on an 
index about not being able to calculate its range; I ran the "Recalculate 
index cycles" action, which fixed that, but the slow processing has 
accumulated about 100,000,000 messages in the journal on the master node. 
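
A hedged way to watch the journal drain, assuming the default REST API port 
12900 and placeholder host/credentials:

curl -u admin:password 'http://graylog-master:12900/system/journal'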

Still trying to figure out what is happening... :-( 

Thanks Hema. 

On Friday, July 8, 2016 at 5:40:47 PM UTC+5:30, Hema Kumar wrote:
>
> Hi,
>    I am using Graylog 1.3.3 with ES 1.7.5. Since yesterday we have been 
> seeing the process buffer fill up on the master node, and the outgoing rate 
> is much slower than normal. I have tried restarting GL and ES, but that did 
> not fix the issue. Below are the log warnings and errors that repeat continuously. 
>
> We have 4 Graylog servers and 7 Elasticsearch nodes. Only the master 
> Graylog node is processing slowly (and sometimes the 3rd node); the rest of 
> the nodes are working fine. 
>
> Could you please help me with this? I have been racking my brain since 
> yesterday. 
>
>
> 2016-07-08T01:53:21.355-06:00 WARN  [GelfChunkAggregator] Error while 
> expiring GELF chunk entries
> java.util.NoSuchElementException
> at 
> java.util.concurrent.ConcurrentSkipListMap.firstKey(ConcurrentSkipListMap.java:2036)
> at 
> java.util.concurrent.ConcurrentSkipListSet.first(ConcurrentSkipListSet.java:396)
> at 
> org.graylog2.inputs.codecs.GelfChunkAggregator$ChunkEvictionTask.run(GelfChunkAggregator.java:288)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> 2016-07-08T05:37:47.803-06:00 ERROR [AnyExceptionClassMapper] Unhandled 
> exception in REST resource
> org.elasticsearch.action.search.ReduceSearchPhaseException: Failed to 
> execute phase [fetch], [reduce]
> at 
> org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction$2.onFailure(TransportSearchQueryThenFetchAction.java:159)
> at 
> org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:41)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassCastException: 
> org.elasticsearch.search.aggregations.bucket.terms.LongTerms$Bucket cannot 
> be cast to 
> org.elasticsearch.search.aggregations.bucket.terms.StringTerms$Bucket
> at 
> org.elasticsearch.search.aggregations.bucket.terms.StringTerms$Bucket.compareTerm(StringTerms.java:85)
> at 
> org.elasticsearch.search.aggregations.bucket.terms.InternalOrder$4.compare(InternalOrder.java:87)
> at 
> org.elasticsearch.search.aggregations.bucket.terms.InternalOrder$4.compare(InternalOrder.java:83)
> at 
> org.elasticsearch.search.aggregations.bucket.terms.InternalOrder$CompoundOrder$CompoundOrderComparator.compare(InternalOrder.java:284)
> at 
> org.elasticsearch.search.aggregations.bucket.terms.InternalOrder$CompoundOrder$CompoundOrderComparator.compare(InternalOrder.java:270)
> at 
> org.elasticsearch.search.aggregations.bucket.terms.support.BucketPriorityQueue.lessThan(BucketPriorityQueue.java:37)
> at 
> org.elasticsearch.search.aggregations.bucket.terms.support.BucketPriorityQueue.lessThan(BucketPriorityQueue.java:26)
> at 
> org.apache.lucene.util.PriorityQueue.upHeap(PriorityQueue.java:225)
> at org.apache.lucene.util.PriorityQueue.add(PriorityQueue.java:133)
> at 
> org.apache.lucene.util.PriorityQueue.insertWithOverflow(PriorityQueue.java:149)
> at 
> org.elasticsearch.search.aggregations.bucket.terms.InternalTerms.reduce(InternalTerms.java:195)
> at 
> org.elasticsearch.search.aggregations.InternalAggregations.reduce(InternalAggregations.java:140)
>   at 
> org.elasticsearch.search.aggregations.bucket.InternalSingleBucketAggregation.reduce(InternalSingleBucketAggregation.java:79)
> at 
> org.elasticsearch.search.aggregations.InternalAggregations.reduce(InternalAggregations.java:140)
> at 
> org.elasticsearch.search.controller.SearchPhaseController.merge(SearchPhaseController.java:407)
> at 
> org.elasti

[graylog2] Re: rsyslog to graylog over tls

2016-07-11 Thread Jochen Schalanda
Hi John,

please refer to the rsyslog documentation for instructions about setting up 
TLS: http://www.rsyslog.com/doc/v8-stable/tutorials/tls_cert_client.html
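
Roughly what that tutorial boils down to on the client side, as a minimal 
sketch (CA file path, host name and port are assumptions; it also needs the 
rsyslog-gnutls package):

# /etc/rsyslog.d/graylog-tls.conf
$DefaultNetstreamDriver gtls
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/cert.pem
$ActionSendStreamDriverMode 1          # require TLS for this action
$ActionSendStreamDriverAuthMode anon   # no peer certificate validation
*.* @@graylog.example.org:1514         # @@ = TCP, point this at your TLS input port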

Cheers,
Jochen

On Monday, 11 July 2016 10:23:24 UTC+2, john wrote:
>
> Hi,
>
> I created a certificate and configured a TCP syslog input with TLS.
>
> openssl req -x509 -days 365 -nodes -newkey rsa:2048 -keyout pkcs5-plain.pem 
> -out cert.pem 
> openssl pkcs8 -in pkcs5-plain.pem -topk8 -nocrypt -out pkcs8-plain.pem
>
> How do I need to configure rsyslog so that it can log to my input over TLS?
>
>
>
>
>



[graylog2] rsyslog to graylog over tls

2016-07-11 Thread 'john' via Graylog Users


Hi,

I created a certificate and configured a TCP syslog input with TLS.

openssl req -x509 -days 365 -nodes -newkey rsa:2048 -keyout pkcs5-plain.pem 
-out cert.pem 
openssl pkcs8 -in pkcs5-plain.pem -topk8 -nocrypt -out pkcs8-plain.pem

How do I need to configure rsyslog so that it can log to my input over TLS?






[graylog2] Can't create extractors for inputs on a 2nd graylog node

2016-07-11 Thread Jan
Hi Group,

I've run into an issue when I try to create an extractor for an input that is 
configured on a remote Graylog cluster node.

My setup has 4x Graylog nodes. Two of them are used exclusively for 
UI access (*graylog-ui0* and *graylog-ui1*). The other two are used to 
receive log messages only (*graylog-log0* and *graylog-log1*, which have 
web_enable=false).
I've configured two non-global inputs called Raw_UDP6000_log0 on host 
*graylog-log0* and Raw_UDP6000_log1 on host *graylog-log1*. Next I tried to 
set up an extractor for input Raw_UDP6000_log0 using the UI on *graylog-ui0*. 
The process fails with a 404 "No such input on this node." error.

To narrow down the root cause of this behaviour I created an input 
Raw_UDP6000_ui0 on host *graylog-ui0*. Creating an extractor with the same 
parameters for this input worked without any trouble.

Is this FAD (functions-as-designed) or a bug? I would expect that all nodes 
in a cluster are aware of all inputs and know how to create an extractor 
for them.
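
For reference, the per-node REST API does expose an input's extractors 
directly; a hedged sketch (input ID, port and credentials are placeholders):

curl -u admin:password 'http://graylog-log0:12900/system/inputs/<input-id>/extractors'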

Regards,
Jan



Re: [graylog2] Graylog slow processing.

2016-07-11 Thread Jan Doberstein
Hey Hema,


On 8 July 2016 at 14:10:50, Hema Kumar (vhs...@gmail.com) wrote:
> I am using Graylog 1.3.3 with ES 1.7.5. Since yesterday we have been seeing the
> process buffer fill up on the master node, and the outgoing rate is much slower
> than normal. I have tried restarting GL and ES, but that did not fix the
> issue. Below are the log warnings and errors that repeat continuously.

Just to have it said: have you considered updating to the 2.x version in the
near future?


> Could you please help me with this? I have been racking my brain since
> yesterday.

Did you check the heap usage of the nodes? Maybe this could be a
bottleneck. You can find it in the web interface under the node
overview.
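
For the ES side, a hedged one-liner (column names from the _cat API; adjust if 
your ES version differs):

curl 'http://localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max'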

regards
Jan
