[graylog2] Re: [ANNOUNCE] Graylog v2.0-beta.1 has been released

2016-03-24 Thread Arie
Super,

Are there some guidelines on upgrading from 1.3.4 > 2.0?

Thanks,
Arie.


On Thursday, March 24, 2016 at 18:12:45 UTC+1, lennart wrote:
>
> Hi everyone, 
>
> we just released the first beta of Graylog v2.0. This release is 
> feature complete. 
>
> Announcement here: 
> https://www.graylog.org/blog/50-announcing-graylog-v2-0-beta-1 
>
> Thanks, 
> Lennart 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/9ee39e54-3dcd-461a-95ce-e256f0421ae7%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[graylog2] Re: [ANNOUNCE] Graylog v1.3.4 has been released and contains an important security fix

2016-03-19 Thread Arie
Did an upgrade without any problems.

Thank you for the work.

On Wednesday, March 16, 2016 at 20:24:17 UTC+1, lennart wrote:
>
> Hi everyone, 
>
> we just released Graylog v1.3.4, which contains an important security 
> fix. Read more in the release notes and upgrade: 
>
> * https://www.graylog.org/blog/49-graylog-1-3-4-is-now-available 
>
> Thanks, 
> Lennart 
>



[graylog2] Re: Whts the best way or Tool for monitoring apache logs by using Graylog

2016-02-11 Thread Arie
Ranjith,

I would propose an important setting on Apache, if it is possible.
By default it does not write the processing time to the log file, but this 
is one of the most useful parameters for measuring and maintaining your 
server.

It is about %T:

http://httpd.apache.org/docs/current/mod/mod_log_config.html
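A minimal sketch of wiring %T into a log format (the format name and log path here are my own choices, not Apache defaults):

```
# httpd.conf sketch: %T = request time in seconds, %D = in microseconds
LogFormat "%h %l %u %t \"%r\" %>s %b %T %D" timed_common
CustomLog "logs/access_log" timed_common
```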



On Friday, January 29, 2016 at 8:14:05 AM UTC+1, Ranjith Vadakkeel wrote:
>
> Thanks Jochen.
>
> On Thursday, January 21, 2016 at 8:57:23 PM UTC+5:30, Jochen Schalanda 
> wrote:
>>
>> Hi Ranjith,
>>
>> you can use the typical log file collectors like nxlog (
>> https://nxlog.co/products/nxlog-community-edition), logstash (
>> https://www.elastic.co/products/logstash), Graylog Collector (
>> http://docs.graylog.org/en/1.3/pages/collector.html), or even good old 
>> rsyslog with the imfile input (http://www.rsyslog.com/) to send your 
>> Apache httpd access and error logs to Graylog. All of the mentioned 
>> applications support either Graylog's native GELF protocol or the syslog 
>> protocol.
>>
>> If you are brave enough (and can modify the Apache httpd configuration), 
>> you can give the Apache httpd module mod_log_gelf (
>> https://github.com/Graylog2/apache-mod_log_gelf) a try.
>>
>>
>> Cheers,
>> Jochen
>>
>> On Thursday, 21 January 2016 15:37:10 UTC+1, Ranjith Vadakkeel wrote:
>>>
>>> Hi experts,
>>>
>>> newb here, imported ubuntu ova and trying to monitor some apache logs 
>>> from test server. I have default file logging location for apache. I cant 
>>> make any changes on apache settings for this requirement. Can any one 
>>> suggest best way or tool for forwarding this file to graylog setup..? 
>>> Please suggest.
>>>
>>> Apache server OS version : Rhel 6.5
>>>
>>> Please do let us know if you required any further info.
>>>
>>>



[graylog2] Re: Journal filling in a short time

2016-02-11 Thread Arie
Hi,

You could reconfigure Elasticsearch for a start.

Try changing this:

index.refresh_interval: 5s

or even use a value of 30s; this improves the throughput of Elasticsearch.

On CentOS 6, in /etc/sysconfig/elasticsearch (read by 
/etc/init.d/elasticsearch):

  ES_HEAP_SIZE=8g  <- set it to 50% of your memory.
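Put together, the two tweaks look like this (the values are examples, not recommendations for every setup):

```
# /etc/elasticsearch/elasticsearch.yml
index.refresh_interval: 30s

# /etc/sysconfig/elasticsearch (CentOS 6)
ES_HEAP_SIZE=8g   # about 50% of physical memory
```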


Good luck.


On Wednesday, January 13, 2016 at 2:11:04 PM UTC+1, roberto...@gmail.com 
wrote:
>
> Dear, Ia have Graylog 1.2 with just one Elasticsearch node. I receive lots 
> of logs from different devices. After a pair of hours, I often notice that 
> incoming messages are higher than outgoing messages, and so the journal is 
> fullfilled and the message processing mechanism stops, and I have to delete 
> messages from journal manually.
>
> This is a sample verbose message from the Nodes of Graylog:
>
> Processing *1,126* incoming and *500* outgoing msg/s. *130,739 unprocessed 
> messages* are currently in the journal, in 1 segments. *857 messages* have 
> been appended to, and *857 messages* have been read from the journal in 
> the last second.
>
> Is there any way to process more messages and have higher outgoing 
> messages? Or any other way to avoid the fullfilling of the journal ?
>
> Thanks a lot,
>
> Roberto
>



[graylog2] Histogram error graylog 1.3

2015-12-16 Thread Arie
Hi all,

Is there a possible histogram error in Graylog 1.3?

When we select "search in all messages" and hit the search button, we get a 
strange timeline. We have data for just over a year, but the histogram axis 
runs from roughly 1990 to 2010, with some data only at the very end.

Doing an actual search request on the data, the timeline is correct.

We had this in previous versions, but not in version 1.2.2-1.


Running on Centos 6.7
openjdk version "1.8.0_65"
OpenJDK Runtime Environment (build 1.8.0_65-b17)
OpenJDK 64-Bit Server VM (build 25.65-b01, mixed mode)




Re: [graylog2] Re: GeoIp lookup plugin

2015-12-16 Thread Arie
Hi Jason,

In our test setup we connect to Elasticsearch with Kibana (4.1.2). You 
could try that for the time being, until it is possible in Graylog.

Arie.

On Sunday, December 13, 2015 at 1:05:56 AM UTC+1, Jason Haar wrote:
>
> On 13/12/15 09:22, Arie wrote:
>
> What would you like to store in elastic, 
> I see that you work at trimble, as far as I know the is navigation 
> equipment.
>
>
> Nope - nothing so obvious. I'm the security manager, I have security logs 
> containing IP addresses, and I'd like the option of "showing stuff" with 
> maps. Adding lat/long to GELF input data was easy - actually getting 
> graylog to do something with it is what this is all about. ie Kibana and 
> Splunk can make pretty maps - graylog can't
>
> Frankly maps aren't that interesting to me - but they are interesting to 
> normal people - visualization makes things pop :-)
>
>
> -- 
> Cheers
>
> Jason Haar
> Corporate Information Security Manager, Trimble Navigation Ltd.
> Phone: +1 408 481 8171
> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
>
>



Re: [graylog2] Re: GeoIp lookup plugin

2015-12-12 Thread Arie
What would you like to store in Elasticsearch?
I see that you work at Trimble; as far as I know, that is navigation 
equipment.


On Friday, December 11, 2015 at 01:25:41 UTC+1, Jason Haar wrote:
>
> On 10/12/15 23:03, Arie wrote: 
> > As far as I know it is not there yet, but kind of a work in 
> > progress: https://graylog.ideas.aha.io/ideas/GL2E-I-364 
> That's not the case: the graylog staff have said there are no plans to 
> implement this - and that ticket should actually be closed :-( 
>
> (see "what can I do to prepare for geoip support?") 
>
> -- 
> Cheers 
>
> Jason Haar 
> Corporate Information Security Manager, Trimble Navigation Ltd. 
> Phone: +1 408 481 8171 
> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1 
>
>



[graylog2] Re: GeoIp lookup plugin

2015-12-10 Thread Arie
Hi,

As far as I know it is not there yet, but it is kind of a work in 
progress: https://graylog.ideas.aha.io/ideas/GL2E-I-364
Makes me think of the fact that Elasticsearch itself has spatial geo 
fields built in.
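For reference, an Elasticsearch 1.x-era mapping sketch for such a field (the index, type, and field names here are made up):

```
PUT /my_index/_mapping/message
{
  "properties": {
    "location": { "type": "geo_point" }
  }
}
```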


On Monday, December 7, 2015 at 10:51:09 PM UTC+1, Raj Tanneru wrote:
>
> Hi,
>
> I am new to graylog and trying to explore the dashboard functionality and 
> plugins. I have below two questions.
>
>1. Besides statistics, quick values and generate charts, are there 
>plugins for any other widgets? Also is there a way to get all statistics 
>for a field as table on to the dashboard instead of just one statistical 
>function?
>2. Does graylog have plugin to convert ip address to geographic 
>location(city/country/region)? If so can you please point me to the 
>location. 
>
>
> Thanks,
> Raj Tanneru
>
> Information contained in this e-mail message is confidential. This e-mail 
> message is intended only for the personal use of the recipient(s) named 
> above. If you are not an intended recipient, do not read, distribute or 
> reproduce this transmission (including any attachments). If you have 
> received this email in error, please immediately notify the sender by email 
> reply and delete the original message. 
>



[graylog2] Re: Shards and diskspace

2015-12-10 Thread Arie
You might have to give the complete path to the plugin command.

/usr/share/elasticsearch/bin/plugin .

On Thursday, December 10, 2015 at 9:52:15 AM UTC+1, Per Erik Nordlien wrote:
>
>
>  
>
>> You could use a plugin like ELASTICHQ for elastic to have a better look 
>> at what your ES servers are doing.
>>
>>
> Sounds great, but when I run:
>sudo bin/plugin --install http://github.com/royrusso/elasticsearch-HQ
>sudo bin/plugin --install royrusso/elasticsearch-HQ
>sudo bin/plugin -i https://github.com/royrusso/elasticsearch-HQ
>
> They all produce the same result:
>bin/plugin: 1: eval: -Xmx64m: not found
>  
>
>
>  
>
>> On Wednesday, December 9, 2015 at 10:48:14 AM UTC+1, Per Erik Nordlien 
>> wrote:
>>>
>>> Dear all.
>>>
>>> I have 2 separate Graylog installations going based on the downloadable 
>>> OVA. I have successfully upgraded them to the latest version. They run fine 
>>> for some time but, then unassigned shards starts to pop up and they run out 
>>> of disk. I have a hard time getting any retention scheme to work. I assume 
>>> that any "sudo graylog-ctl set-retention" command would help me prevent 
>>> that Graylog run out of disk space. I have tried many commands to resolve 
>>> the unassigned shards issue but, when it comes to curl commands I have no 
>>> idea what I'm doing, what to check for or even if I'm using them correct.
>>>
>>> First question: How do I resolve unassigned shards? Is there some dummy 
>>> guide out there somewhere that could help me through it?
>>>
>>> Second question: How do I get a retention scheme working? Is it correct 
>>> that it would prevent me from running out of disk?
>>>
>>> Regards
>>>
>>



[graylog2] Re: Shards and diskspace

2015-12-09 Thread Arie
It will only prevent that when configured correctly; you have to look at 
the size of your indices and define your strategy based on the usage.

You could use a plugin like ELASTICHQ for elastic to have a better look at 
what your ES servers are doing.

On Wednesday, December 9, 2015 at 10:48:14 AM UTC+1, Per Erik Nordlien 
wrote:
>
> Dear all.
>
> I have 2 separate Graylog installations going based on the downloadable 
> OVA. I have successfully upgraded them to the latest version. They run fine 
> for some time but, then unassigned shards starts to pop up and they run out 
> of disk. I have a hard time getting any retention scheme to work. I assume 
> that any "sudo graylog-ctl set-retention" command would help me prevent 
> that Graylog run out of disk space. I have tried many commands to resolve 
> the unassigned shards issue but, when it comes to curl commands I have no 
> idea what I'm doing, what to check for or even if I'm using them correct.
>
> First question: How do I resolve unassigned shards? Is there some dummy 
> guide out there somewhere that could help me through it?
>
> Second question: How do I get a retention scheme working? Is it correct 
> that it would prevent me from running out of disk?
>
> Regards
>



[graylog2] Re: Graylog cant handle large amounts of incoming logs

2015-12-08 Thread Arie
As the config file says, you could increase processbuffer_processors 
and/or outputbuffer_processors if your buffers are filling up. Keep an eye 
on CPU resources.

# The number of parallel running processors.
# Raise this number if your buffers are filling up.
processbuffer_processors = 5
outputbuffer_processors = 3
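As a rough sketch of how those two settings relate to the available cores (the 2/3 vs. 1/3 split here is my own assumption, not an official Graylog formula):

```shell
# Suggest processor counts from the CPU core count. The split between
# process and output buffers is an assumption, not a Graylog rule.
suggest_processors() {
  cores=$1
  pb=$(( cores * 2 / 3 ))   # processbuffer_processors
  ob=$(( cores - pb ))      # outputbuffer_processors
  echo "processbuffer_processors = $pb"
  echo "outputbuffer_processors = $ob"
}

suggest_processors 8   # with 8 cores this reproduces the defaults: 5 and 3
```

Keeping the sum at or below the core count leaves headroom for input threads and the JVM itself.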

On Monday, November 23, 2015 at 10:55:48 AM UTC+1, Matthew Simon wrote:
>
> Hi Guys 
>
>
> I have a problem!
>
>
> I receive large amounts of logs to my Graylog2 server and i feel that the 
> server cant keep up with the incoming logs, Is there a way that I can 
> optimize my configuration to handle large amounts of LOGS. 
>
>
> Please see the image below.
>
>
> Thanks in advance.
>
>
>
>
> 
>
>
>
>



[graylog2] Re: Graylog Collector Configuration Settings

2015-12-08 Thread Arie
Hi,

I don't know this for sure, but I'd guess you need one configuration item 
per file type.

HTH,

Arie

On Tuesday, December 8, 2015 at 1:29:11 PM UTC+1, Sean McGurk wrote:
>
> Hi all,
>
> I have configured a graylog collector with the following settings:
>
> server-url = "http://xxx.xxx.xxx.xxx:12900/"
>
> collector-id = /etc/graylog/collector/collector-id
>
> inputs {
>
>   syslog {
>
> type = "file"
>
> path = "/var/log/syslog"
>
>   }
>
>   apache-logs {
>
>   type = "file"
>
>   // /var/log/apache2/**/*.{access,error}.log
>
>   path-glob-root = "/var/log/apache2"
>
>   path-glob-pattern = "**/*.{access,error}.log"
>
>   }
>
> }
>
> outputs {
>
>   graylog-server {
>
> type = "gelf"
>
> host = "xxx.xxx.xxx.xxx"
>
> port = 12201
>
>   }
>
> }
>
>
>
> And have created an input on the server have configured a graylog collector 
> with the following settings:
>
>
>- recv_buffer_size: 1048576
>- port: 12201
>- tls_key_file: graylog-user
>- tls_key_password: ***
>- use_null_delimiter: true
>- tls_client_auth_cert_file:
>- max_message_size: 2097152
>- tls_client_auth: disabled
>- override_source:
>- bind_address: xxx.xxx.xxx.xxx
>- tls_cert_file:
>
> And while I am able to see syslog messages sent in by the collector, I am 
> unable to see apache log messages.
>
> Does anyone know where I am going wrong?
>
> Thanks,
>
> Seán
>
>
>



[graylog2] Re: [ANNOUNCE] Graylog v1.3 beta has been released

2015-12-03 Thread Arie
Upgraded on test node to 1.3 beta2
No problems.

Running on centos-6.7

Need to mention that the new init script is different from the previous 
version; replacing it gave no errors.

Nice welcome screen :-)



On Thursday, December 3, 2015 at 14:08:52 UTC+1, lennart wrote:
>
> Hey everyone, 
>
> we just released a beta version of Graylog v1.3. You can find the 
> announcement blog post here: 
>
> * https://www.graylog.org/graylog-v1-3-beta-is-out/ 
>
> Please help us with testing and report any issues you find or problems you 
> have. 
>
> Thank you, 
> Lennart 
>



[graylog2] Re: Matched groups regex on extractors

2015-11-09 Thread Arie
Hi,

In streams we use a regex like this for that ("message must match regular 
expression"):

(?=.*NAME1).*NAME2.*NAME3

HTH,

Arie
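The lookahead trick above can be tried locally with GNU grep's PCRE mode (-P); the sample strings here are made up:

```shell
# Does the line contain NAME1 somewhere, plus NAME2 followed by NAME3?
# Requires GNU grep built with PCRE support (-P).
matches() {
  printf '%s\n' "$1" | grep -qP '(?=.*NAME1).*NAME2.*NAME3' \
    && echo match || echo no
}

matches "foo NAME1 bar NAME2 baz NAME3"   # match
matches "bar NAME2 baz NAME3"             # no: NAME1 is missing
```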


On Friday, November 6, 2015 at 09:11:29 UTC+1, Josep Maria Comas Serrano wrote:
>
> Any ideas? Any experience using matched groups?
>
> Best,
>
> JM
>
> On Tuesday, November 3, 2015 at 13:06:18 (UTC+1), Josep Maria Comas 
> Serrano wrote:
>>
>> There's an issue when configuring an extractor using regex + matched 
>> groups
>>
>> After some tests, we've made it simply to show you the final error:
>>
>> The regex pattern is (Name1)|(Name2)
>>
>> So if Name1 exists, the result is Name1. And if Name2 exists, and there's 
>> no Name1, the result is Name2.
>>
>> Tests:
>>
>> - Message includes Name1 and not Name2: The result is Name1, correct.
>> - Message includes Name1 and Name2: the result is Name1, correct.
>> - Message includes not Name1 and not Name2: warning "Regular Expression 
>> did not match". We think is correct for now.
>> - Message includes not Name1 and Name2: error "Could not try regular 
>> expression,Make sure that is valid". The result should have been Name2, 
>> instead we have an error.
>>
>> Maybe it's some bug? Or must be configured some other way? The pattern 
>> works if we test it, for example, on the regex-testing web regex101.com.
>>
>> Best,
>>
>> JM
>>
>>



[graylog2] Re: Fast ES query, slow Graylog page load

2015-11-01 Thread Arie
I sometimes notice the same behavior with much lesser data in ES.


On Saturday, October 24, 2015 at 19:33:01 UTC+2, Jesse Skrivseth wrote:
>
> In one instance running 1.2.1 we have 3.8TB of data, which holds roughly 
> 30 days of data. When I do a simple "*" query across the last 14 days, the 
> ES query finishes in about 6 seconds. Notice these 14 day queries returned:
> Found *1,111,506,619 messages* in 5,869 ms, searched in 987 indices.
> But the page took 52.01s to load.
>
> Found *1,111,516,915 messages* in 6,650 ms, searched in 987 indices.
> But the page took 46.12s to load.
>
> When I try to do a query for the last 30 days, I end up with timeouts 
> (HTTP 504). We're running 3 ES nodes - r3.2xlarge (8 core, 64gb RAM, SSD 
> EBS volumes) in AWS. I think the cluster is up to the task of doing such 
> queries, but it seems that maybe Graylog is doing some processing of the 
> result set that might be slow. 
>
> Any pointers here? Thanks!
>



[graylog2] Re: [Suggestion] inputs to support optional ES prefix

2015-11-01 Thread Arie
Hi,

Good idea, although easier said than done. I guess a lot in Graylog has to 
be changed, because managing indices is done by Graylog too, and each index 
may need a different maintenance strategy.

Another approach could be setting an optional lifetime field on each 
document at the input, and regularly deleting documents that pass their due 
date. Graylog could even benefit from this approach, because clearing out 
some data sooner than when your indices are full could speed up searches on 
what is left.

Arie.
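At the time, Elasticsearch 1.x had a per-document _ttl field that could sketch this idea (it was deprecated in 2.x and later removed; the index and type names here are made up):

```
PUT /my_index/_mapping/message
{
  "_ttl": { "enabled": true, "default": "30d" }
}
```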

On Sunday, November 1, 2015 at 03:32:03 UTC+1, Jesse Skrivseth wrote:
>
> If each Input could support an optional ES prefix, it would make 
> multitenancy on a single graylog host much easier. This would require each 
> to support their own retention strategy and max size. Stream matching could 
> still be used to grant granular access to users. Having a prefix allows 
> easier index management - snapshots, archival, and data purging, rather 
> than an all or nothing approach as required otherwise.
>



[graylog2] Re: [ANN] Graylog 1.2.2 has been released

2015-11-01 Thread Arie
Hi,

Up and running in test and production without problems for a few days now.

Arie.

On Friday, October 30, 2015 at 13:21:14 UTC+1, Bernd Ahlers wrote:
>
> Moin! 
>
> three days ago we released Graylog 1.2.2, which is a bugfix release for 
> the Graylog 1.2 series. 
>
> Please find the full release notes for 1.2.2 at 
> https://www.graylog.org/graylog-1-2-2-is-now-available/. 
>
> Regards, 
> Bernd 
>
> -- 
> Developer 
>
> Tel.: +49 (0)40 609 452 077 
> Fax.: +49 (0)40 609 452 078 
>
> TORCH GmbH - A Graylog company 
> Steckelhörn 11 
> 20457 Hamburg 
> Germany 
>
> Commercial Reg. (Registergericht): Amtsgericht Hamburg, HRB 125175 
> Geschäftsführer: Lennart Koopmann (CEO) 
>



[graylog2] Re: query about performance of time retention_policy settings

2015-11-01 Thread Arie
I asked this at the Elastic{ON} Tour in Amsterdam.

They told me that the size of an index could be up to 50GB (the sum of its 
shards).

Although, I think there is one thing in relation to Graylog and search 
speed: Graylog appears to know what is stored where in relation to time, 
to improve search speed, I think.


On Tuesday, August 4, 2015 at 15:18:34 UTC+2, Jochen Schalanda wrote:
>
> Hi Jason,
>
> the answer to your question depends on multiple factors, like the 
> structure of your log messages, their average size, the available hardware 
> resources for Graylog and Elasticsearch, and the kind of queries you've 
> been running.
>
> In short, modern hardware with decent amounts of memory should easily be 
> able to handle Elasticsearch indices with lots (i. e. millions) of indexed 
> documents. For your specific case, you should simply test if switching to a 
> time-based rotation strategy helps your use cases or not.
>
>
> Cheers,
> Jochen
>
> On Tuesday, 4 August 2015 00:00:25 UTC+2, Jason Haar wrote:
>>
>> Hi there
>>
>> I currently have the default "rotation_strategy = count" with 20M 
>> documents with 20 indices, and have an incoming syslog feed into it. So 
>> that roughly means when the 400,000,001 syslog record enters the system, 
>> the first index is deleted (as the 21st index is created)
>>
>> I want to move to the "time" strategy: basically I want to keep 30 days 
>> of logs around. So I could do "rotation_strategy = time" and 
>> "elasticsearch_max_time_per_index = 1d" and 
>> "elasticsearch_max_number_of_indices = 30"
>>
>> All well and good. However, how does the performance vary with index 
>> size? As the default is 20M records, that implies to me that was chosen by 
>> the developers for good reason - so should I try to approximately match 
>> that with "time" too? eg should I let it run a few days, get a feel for 
>> G/hour growth rate and then choose a "elasticsearch_max_time_per_index" 
>> value that would create ~20M record indices and then change 
>> "elasticsearch_max_number_of_indices" to multiply out to 30? Or should I 
>> instead increase elasticsearch's sharding: eg if the indices are 10x the 
>> "count" model, should I have 10x more shards per indice to keep about the 
>> same performance?
>>
>> eg I just did a search over the past 24 hours and it had to go through 8 
>> indices - would it have performed as well if there was only one (bigger) 
>> index?
>>
>> Sorry these questions are so dumb - hopefully I'm learning fast :-)
>>
>> Thanks
>>
>



[graylog2] Re: Best practice for extractors/inputs.

2015-10-22 Thread Arie
I got this answer from Kay:

Hi! 

Generally speaking: 

If your log senders need special treatment (i.e. if you need to set up 
different extractors), then use different inputs. 
If you send gelf directly, you are generally ok with one input. 
Syslog-like inputs often need special extractors, so in those cases 
you have special "plain text" inputs with extractors, Cisco "syslog" 
is like that in many cases. Or ESXi. 

Another case is if you want to tag sources in a special way, using a 
"static field". Those are per input, so you would configure different 
inputs. Use-cases could be different applications deployed across many 
servers, where you don't really care about which server actually 
handled the request, or rather at some level you don't care. This 
often includes GELF sent from applications, where you cannot 
differentiate between messages because they all look similar. By using 
different target addresses, you can tag them. 

Other than that standard network considerations apply, e.g. load 
balancing or firewalls. 

HTH, 
Kay



On Wednesday, October 21, 2015 at 09:27:54 UTC+2, Patrick Brennan wrote:
>
> Hi all,
>
> We have just stood up a Proof-of-Concept Graylog cluster and we are 
> ingesting log data from around 50 nodes.  The Graylog cluster itself is 
> working fine and is stable ingesting at something like 8000 msgs/sec.  Now 
> it's time to try to do something useful with that data.  And herein lies 
> the crux of my question.
>
> At the moment I have a single input configured of type "GELF TCP" 
> listening on port TCP/12201.  This is behind a load balancer and all logs 
> are being forwarded with graylog collector.  A subset of the collector 
> configuration is below:
>
> inputs {
>   syslog {
> type = "file"
> path-glob-root = "/var/log"
> path-glob-pattern = "{syslog,auth.log,dpkg.log,kern.log}"
>   }
>   nginx-logs {
> type = "file"
> path-glob-root = "/var/log/nginx"
> path-glob-pattern = "*log"
>   }
>   app-logs {
> type = "file"
> path = "/var/log/application.json"
>   }
> }
>
> outputs {
>   gelf-tcp {
> type = "gelf"
> host = "server"
> port = 12201
> [ ... SNIP ... ]
>   }
> }
>
>
> Obviously we are sending logs of many different formats to the same 
> graylog input.  Some are JSON, some are syslog, some are http combined, and 
> there are many others as well).
>
> I am curious what others do in this situation.  I imported the nginx 
> content pack and it created an input (on a different port) for nginx access 
> logs and another (again on a different port) for nginx error logs.  Is this 
> best practice?  It doesn't seem overly desirable to me as it pushes the 
> classification of logs into the collector which I was trying to avoid.  The 
> alternative would seem to be to have all extractors running on my single 
> input, but I can't see any easy way to keep this under control.  Both from 
> a number of extractors perspective, but also to constrain particular 
> extractors to particular message types (for example based on a regex 
> against source_file).
>
> I would appreciate anyone else's thoughts or experiences.
>
> Thanks!
> Patrick
>
> BTW, having used an ELK based stack previously, I am really like Graylog 
> thus far.  Kudos to the developers for actually starting out by designing 
> an architecture.  :)
>



Re: [graylog2] SNMP Plugin - BIND Error

2015-10-02 Thread Arie
Hi Marius,

The higher ports are working, but within our network landscape a lot of 
SNMP services are configured
with the default port.

Thanks,
Arie

On Friday, October 2, 2015 at 10:38:20 AM UTC+2, Marius Sturm wrote:
>
> Hi Arie,
> only the root user can bind ports below 1024. Could you try a higher port 
> number, like 1162?
>
> On 2 October 2015 at 10:33, Arie <satya...@gmail.com > wrote:
>
>> Hi all,
>>
>> Very happy to know there is an SNMP plugin.
>>
>> We are trying to run it on port 162, which is the default, but this gives 
>> us a bind error.
>>
>> Message in Server.log:
>> "org.jboss.netty.channel.ChannelException: Failed to bind to: /
>> 0.0.0.0:162"
>>
>> There is nothing listening on this port in our CentOS 6 environment.
>>
>>
>> Anyone having an idea or solution here that helps?
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Graylog Users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to graylog2+u...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/graylog2/bbcf243f-04b7-4bf2-bb48-eb97b3b78d75%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/graylog2/bbcf243f-04b7-4bf2-bb48-eb97b3b78d75%40googlegroups.com?utm_medium=email_source=footer>
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> -- 
> Developer
>
> Tel.: +49 (0)40 609 452 077
> Fax.: +49 (0)40 609 452 078
>
> TORCH GmbH - A Graylog Company
> Steckelhörn 11
> 20457 Hamburg
> Germany
>
> https://www.graylog.com <https://www.torch.sh/>
>
> Commercial Reg. (Registergericht): Amtsgericht Hamburg, HRB 125175
> Geschäftsführer: Lennart Koopmann (CEO)
>



[graylog2] SNMP Plugin - BIND Error

2015-10-02 Thread Arie
Hi all,

Very happy to know there is an SNMP plugin.

We are trying to run it on port 162, which is the default, but this gives us 
a bind error.

Message in Server.log:
"org.jboss.netty.channel.ChannelException: Failed to bind to: /0.0.0.0:162"

There is nothing listening on this port in our CentOS 6 environment.


Anyone having an idea or solution here that helps?
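On Linux, only root (or a process with the CAP_NET_BIND_SERVICE capability) may bind ports below 1024, which is the usual cause of this bind error. A minimal sketch (generic Python, not Graylog-specific) to check bind permission:

```python
import socket


def can_bind_udp(port):
    """Try to bind a UDP socket (SNMP traps use UDP) and report whether
    the OS allows it; on Linux, ports below 1024 need root or the
    CAP_NET_BIND_SERVICE capability."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind(("0.0.0.0", port))
        return True
    except PermissionError:
        return False
    finally:
        sock.close()


# Port 0 asks the OS for any free unprivileged port, so it always succeeds;
# 162 succeeds only when the process has sufficient privileges.
print(can_bind_udp(0), can_bind_udp(162))
```

Common workarounds are running the input on a high port (e.g. 1162) and redirecting 162 to it with an iptables NAT rule, or granting the JVM binary the `cap_net_bind_service` capability with setcap.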



Re: [graylog2] Problem with stream alerts after updating to graylog 1.2

2015-09-24 Thread Arie
Hi,

Problem solved after updating to 1.2.1.
Great work, many thanks,



On Thursday, September 17, 2015 at 3:06:00 PM UTC+2, Edmundo Alvarez wrote:
>
> After looking at the documents provided by Arie and Ubay, I can confirm 
> the issue, and we already identified the cause. We are working to provide a 
> proper solution for this issue, but if you really can't wait, there is a 
> temporary solution and more information in here: 
> https://github.com/Graylog2/graylog2-server/issues/1428 
>
> Thank you for your patience, and sorry for any inconveniences this may 
> have caused. 
>
> Regards, 
>
> Edmundo 
>
> > On 17 Sep 2015, at 14:55, Arie <satya...@gmail.com > 
> wrote: 
> > 
> > I have that error too, but the clone appeared. 
> > We put "Clone" in front of the name of the clone; you have to do that 
> :-) 
> > 
> > 
> > On Thursday, September 17, 2015 at 2:02:42 PM UTC+2, Ubay wrote: 
> > Thank you but it didn't work for me. I got the error message: 
> > 
> > Could not clone Stream 
> > Cloning Stream failed with status: Internal Server error. 
> > 
> > 
> > In the server.log file the error "Read operation to server 
> localhost:27017 failed on database graylog2" is present again. 
> > 
> > Regards. 
> > 
> > On Thursday, September 17, 2015 at 12:33:54 (UTC+1), Arie wrote: 
> > Hi, 
> > 
> > My workaround is to clone it, and create the callback again if needed. 
> > 
> > Arie. 
> > 
> > On Thursday, September 17, 2015 at 1:09:38 PM UTC+2, Ubay wrote: 
> > Hello, 
> > 
> >   We have the same problem after upgrading to 1.2.0. The callbacks 
> created before version 1.1.6 are not displayed. We also get the error 
> message log: ERROR [AnyExceptionClassMapper] Unhandled exception in REST 
> resource 
> > com.mongodb.MongoException$Network: Read operation to server 
> localhost:27017 failed on database graylog2 
> > 
> >   Regards. 
> > 
> > On Thursday, September 17, 2015 at 7:55:27 (UTC+1), Arie wrote: 
> > Send it by mail 
> > 
> > hth, 
> > 
> > Arie 
> > 
> > On Wednesday, September 16, 2015 at 3:15:12 PM UTC+2, Edmundo Alvarez 
> wrote: 
> > We saw a similar problem with an alert callback that was created in 1.0, 
> it could be the same problem that you are experiencing. Could you share 
> with us your "alarmcallbackconfigurations" MongoDB collection in order to 
> further investigate the issue? Please send it to edm...@graylog.com if it 
> contains any sensitive information. 
> > 
> > In case you don't know how to get the collection, you can get the 
> MongoDB collection by executing the following command in a terminal: 
> > mongo <hostname>:<port>/<database> <<< 
> 'db.alarmcallbackconfigurations.find()' 
> > 
> > Please remember to replace <hostname>, <port>, and 
> <database> with the actual values for your environment. You 
> may also need to add a username and password if your setup requires 
> authentication. 
> > 
> > Edmundo 
> > 
> > > On 16 Sep 2015, at 13:09, Arie <satya...@gmail.com> wrote: 
> > > 
> > > That is very well possible, 1.0 or 1.0.1, but not totally sure of it. 
> > > 
> > > In one of the upgrades I had a problem with some data that was the 
> result of a Yum update from repository 
> > > where old data was deleted and a wrong/missing node-id file. 
> > > 
> > > We use the content-pack function for backup of a lot of settings, and 
> what I see now is that the callback 
> > > function is not present there, and may be missing in my present stream 
> configs. 
> > > 
> > > Arie 
> > > 
> > > On Wednesday, September 16, 2015 at 11:45:55 UTC+2, Edmundo Alvarez wrote: 
> > > That previous 1.1 version, was it an upgrade from 1.0 by any chance? 
> > > 
> > > Edmundo 
> > > 
> > > > On 16 Sep 2015, at 11:39, Arie <satya...@gmail.com> wrote: 
> > > > 
> > > > And second: 
> > > > 
> > > > In the alert the "callbacks" part in the GUI keeps "loading" going 
> on endlessly. 
> > > > I remember editing the callback email condition, so we are closer 
> to the problem I guess 
> > > > 
> > > > Arie 
> > > > 
> > > > On Wednesday, September 16, 2015 at 11:22:52 UTC+2, Edmundo 
> Alvarez wrote: 
> > > > Hi Arie, 
> > > > 
> > > > From which version did you upgrade to 1.2? It would also be helpful 
> to know if that was a clean installation or an upgrade from

Re: [graylog2] Problem with stream alerts after updating to graylog 1.2

2015-09-17 Thread Arie
Send it by mail

hth,

Arie

On Wednesday, September 16, 2015 at 3:15:12 PM UTC+2, Edmundo Alvarez wrote:
>
> We saw a similar problem with an alert callback that was created in 1.0, 
> it could be the same problem that you are experiencing. Could you share 
> with us your "alarmcallbackconfigurations" MongoDB collection in order to 
> further investigate the issue? Please send it to edm...@graylog.com 
>  if it contains any sensitive information. 
>
> In case you don't know how to get the collection, you can get the MongoDB 
> collection by executing the following command in a terminal: 
> mongo <hostname>:<port>/<database> <<< 
> 'db.alarmcallbackconfigurations.find()' 
>
> Please remember to replace <hostname>, <port>, and 
> <database> with the actual values for your environment. You 
> may also need to add a username and password if your setup requires 
> authentication. 
>
> Edmundo 
>
> > On 16 Sep 2015, at 13:09, Arie <satya...@gmail.com > 
> wrote: 
> > 
> > That is very well possible, 1.0 or 1.0.1, but not totally sure of it. 
> > 
> > In one of the upgrades I had a problem with some data that was the 
> result of a Yum update from repository 
> > where old data was deleted and a wrong/missing node-id file. 
> > 
> > We use the content-pack function for backup of a lot of settings, and 
> what I see now is that the callback 
> > function is not present there, and may be missing in my present stream 
> configs. 
> > 
> > Arie 
> > 
> > On Wednesday, September 16, 2015 at 11:45:55 UTC+2, Edmundo Alvarez wrote: 
> > That previous 1.1 version, was it an upgrade from 1.0 by any chance? 
> > 
> > Edmundo 
> > 
> > > On 16 Sep 2015, at 11:39, Arie <satya...@gmail.com> wrote: 
> > > 
> > > And second: 
> > > 
> > > In the alert the "callbacks" part in the GUI keeps "loading" going on 
> endlessly. 
> > > I remember editing the callback email condition, so we are closer to 
> the problem I guess 
> > > 
> > > Arie 
> > > 
> > > On Wednesday, September 16, 2015 at 11:22:52 UTC+2, Edmundo Alvarez wrote: 
> > > Hi Arie, 
> > > 
> > > From which version did you upgrade to 1.2? It would also be helpful to 
> know if that was a clean installation or an upgrade from an even earlier 
> version. 
> > > 
> > > Regards, 
> > > 
> > > Edmundo 
> > > 
> > > > On 16 Sep 2015, at 11:10, Arie <satya...@gmail.com> wrote: 
> > > > 
> > > > I did have an error producing the clone, but it appeared to be 
> there. After putting the receivers in it, 
> > > > it looks like it is working. So what is wrong with the original 
> alerts? 
> > > > 
> > > > 
> > > > On Wednesday, September 16, 2015 at 10:55:54 UTC+2, Arie wrote: 
> > > > Cloning the stream is not possible either 
> > > > 
> > > > On Wednesday, September 16, 2015 at 10:53:47 UTC+2, Arie wrote: 
> > > > Hi, 
> > > > 
> > > > we are encountering problems with stream alerts after the update. 
> > > > When editing/testing the alert condition we get this message in the 
> GUI. 
> > > > 
> > > > Could not retrieve AlarmCallbacks 
> > > > Fetching AlarmCallbacks failed with status: Internal Server Error 
> > > > 
> > > > 
> > > > server logfile (partial): 
> > > > 
> > > >  ERROR [AnyExceptionClassMapper] Unhandled exception in REST 
> resource 
> > > > com.mongodb.MongoException$Network: Read operation to server 
> localhost:27017 failed on database graylog2 
> > > > at 
> com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:298) 
> > > > at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:269) 
> > > > at 
> com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:84) 
> > > > at 
> com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:66) 
> > > > at com.mongodb.DBCursor._check(DBCursor.java:498) 
> > > > at com.mongodb.DBCursor._hasNext(DBCursor.java:621) 
> > > > at com.mongodb.DBCursor._fill(DBCursor.java:726) 
> > > > at com.mongodb.DBCursor.toArray(DBCursor.java:763) 
> > > > at org.mongojack.DBCursor.toArray(DBCursor.java:426) 
> > > > at org.mongojack.DBCursor.toArray(DBCursor.java:411) 
> > > > 
> > > > 
> > > > Caused by: com.fasterxml.jackson.databind.JsonMappingException: Can 

Re: [graylog2] Problem with stream alerts after updating to graylog 1.2

2015-09-17 Thread Arie
Hi,

My workaround is to clone it, and create the callback again if needed.

Arie.

On Thursday, September 17, 2015 at 1:09:38 PM UTC+2, Ubay wrote:
>
> Hello,
>
>   We have the same problem after upgrading to 1.2.0. The callbacks created 
> before version 1.1.6 are not displayed. We also get the error message log: 
> ERROR [AnyExceptionClassMapper] Unhandled exception in REST resource
> com.mongodb.MongoException$Network: Read operation to server 
> localhost:27017 failed on database graylog2
>
>   Regards.
>
> On Thursday, September 17, 2015 at 7:55:27 (UTC+1), Arie wrote:
>>
>> Send it by mail
>>
>> hth,
>>
>> Arie
>>
>> On Wednesday, September 16, 2015 at 3:15:12 PM UTC+2, Edmundo Alvarez 
>> wrote:
>>>
>>> We saw a similar problem with an alert callback that was created in 1.0, 
>>> it could be the same problem that you are experiencing. Could you share 
>>> with us your "alarmcallbackconfigurations" MongoDB collection in order to 
>>> further investigate the issue? Please send it to edm...@graylog.com if 
>>> it contains any sensitive information. 
>>>
>>> In case you don't know how to get the collection, you can get the 
>>> MongoDB collection by executing the following command in a terminal: 
>>> mongo <hostname>:<port>/<database> <<< 
>>> 'db.alarmcallbackconfigurations.find()' 
>>>
>>> Please remember to replace <hostname>, <port>, and 
>>> <database> with the actual values for your environment. You 
>>> may also need to add a username and password if your setup requires 
>>> authentication. 
>>>
>>> Edmundo 
>>>
>>> > On 16 Sep 2015, at 13:09, Arie <satya...@gmail.com> wrote: 
>>> > 
>>> > That is very well possible, 1.0 or 1.0.1, but not totally sure of it. 
>>> > 
>>> > In one of the upgrades I had a problem with some data that was the 
>>> result of a Yum update from repository 
>>> > where old data was deleted and a wrong/missing node-id file. 
>>> > 
>>> > We use the content-pack function for backup of a lot of settings, and 
>>> what I see now is that the callback 
>>> > function is not present there, and may be missing in my present stream 
>>> configs. 
>>> > 
>>> > Arie 
>>> > 
>>> > On Wednesday, September 16, 2015 at 11:45:55 UTC+2, Edmundo Alvarez wrote: 
>>> > That previous 1.1 version, was it an upgrade from 1.0 by any chance? 
>>> > 
>>> > Edmundo 
>>> > 
>>> > > On 16 Sep 2015, at 11:39, Arie <satya...@gmail.com> wrote: 
>>> > > 
>>> > > And second: 
>>> > > 
>>> > > In the alert the "callbacks" part in the GUI keeps "loading" going 
>>> on endlessly. 
>>> > > I remember editing the callback email condition, so we are closer 
>>> to the problem I guess 
>>> > > 
>>> > > Arie 
>>> > > 
>>> > > On Wednesday, September 16, 2015 at 11:22:52 UTC+2, Edmundo 
>>> Alvarez wrote: 
>>> > > Hi Arie, 
>>> > > 
>>> > > From which version did you upgrade to 1.2? It would also be helpful 
>>> to know if that was a clean installation or an upgrade from an even earlier 
>>> version. 
>>> > > 
>>> > > Regards, 
>>> > > 
>>> > > Edmundo 
>>> > > 
>>> > > > On 16 Sep 2015, at 11:10, Arie <satya...@gmail.com> wrote: 
>>> > > > 
>>> > > > I did have an error producing the clone, but it appeared to be 
>>> there. After putting the receivers in it, 
>>> > > > it looks like it is working. So what is wrong with the original 
>>> alerts? 
>>> > > > 
>>> > > > 
>>> > > > On Wednesday, September 16, 2015 at 10:55:54 UTC+2, Arie wrote: 
>>> > > > Cloning the stream is not possible either 
>>> > > > 
>>> > > > On Wednesday, September 16, 2015 at 10:53:47 UTC+2, Arie wrote: 
>>> > > > Hi, 
>>> > > > 
>>> > > > we are encountering problems with stream alerts after the update. 
>>> > > > When editing/testing the alert condition we get this message in 
>>> the GUI. 
>>> > > > 
>>> > > > Could not retrieve AlarmCallbacks 
>>> > > > Fetching AlarmCallbacks failed with status: Internal Server Error 
>&

Re: [graylog2] Problem with stream alerts after updating to graylog 1.2

2015-09-17 Thread Arie
I have that error too, but the clone appeared.
We put "Clone" in front of the name of the clone; you have to do that :-)


On Thursday, September 17, 2015 at 2:02:42 PM UTC+2, Ubay wrote:
>
> Thank you but it didn't work for me. I got the error message:
>
> Could not clone Stream
> Cloning Stream failed with status: Internal Server error.
>
>
> In the server.log file the error "Read operation to server localhost:27017 
> failed on database graylog2" is present again.
>
> Regards.
>
> On Thursday, September 17, 2015 at 12:33:54 (UTC+1), Arie wrote:
>>
>> Hi,
>>
>> My workaround is to clone it, and create the callback again if needed.
>>
>> Arie.
>>
>> On Thursday, September 17, 2015 at 1:09:38 PM UTC+2, Ubay wrote:
>>>
>>> Hello,
>>>
>>>   We have the same problem after upgrading to 1.2.0. The callbacks 
>>> created before version 1.1.6 are not displayed. We also get the error 
>>> message log: ERROR [AnyExceptionClassMapper] Unhandled exception in REST 
>>> resource
>>> com.mongodb.MongoException$Network: Read operation to server 
>>> localhost:27017 failed on database graylog2
>>>
>>>   Regards.
>>>
>>> On Thursday, September 17, 2015 at 7:55:27 (UTC+1), Arie wrote:
>>>>
>>>> Send it by mail
>>>>
>>>> hth,
>>>>
>>>> Arie
>>>>
>>>> On Wednesday, September 16, 2015 at 3:15:12 PM UTC+2, Edmundo Alvarez 
>>>> wrote:
>>>>>
>>>>> We saw a similar problem with an alert callback that was created in 
>>>>> 1.0, it could be the same problem that you are experiencing. Could you 
>>>>> share with us your "alarmcallbackconfigurations" MongoDB collection in 
>>>>> order to further investigate the issue? Please send it to 
>>>>> edm...@graylog.com if it contains any sensitive information. 
>>>>>
>>>>> In case you don't know how to get the collection, you can get the 
>>>>> MongoDB collection by executing the following command in a terminal: 
>>>>> mongo <hostname>:<port>/<database> <<< 
>>>>> 'db.alarmcallbackconfigurations.find()' 
>>>>>
>>>>> Please remember to replace <hostname>, <port>, and 
>>>>> <database> with the actual values for your environment. You 
>>>>> may also need to add a username and password if your setup requires 
>>>>> authentication. 
>>>>>
>>>>> Edmundo 
>>>>>
>>>>> > On 16 Sep 2015, at 13:09, Arie <satya...@gmail.com> wrote: 
>>>>> > 
>>>>> > That is very well possible, 1.0 or 1.0.1, but not totally sure of it. 
>>>>> > 
>>>>> > In one of the upgrades I had a problem with some data that was the 
>>>>> result of a Yum update from repository 
>>>>> > where old data was deleted and a wrong/missing node-id file. 
>>>>> > 
>>>>> > We use the content-pack function for backup of a lot of settings, and 
>>>>> what I see now is that the callback 
>>>>> > function is not present there, and may be missing in my present 
>>>>> stream configs. 
>>>>> > 
>>>>> > Arie 
>>>>> > 
>>>>> > On Wednesday, September 16, 2015 at 11:45:55 UTC+2, Edmundo 
>>>>> Alvarez wrote: 
>>>>> > That previous 1.1 version, was it an upgrade from 1.0 by any chance? 
>>>>> > 
>>>>> > Edmundo 
>>>>> > 
>>>>> > > On 16 Sep 2015, at 11:39, Arie <satya...@gmail.com> wrote: 
>>>>> > > 
>>>>> > > And second: 
>>>>> > > 
>>>>> > > In the alert the "callbacks" part in the GUI keeps "loading" going 
>>>>> on endlessly. 
>>>>> > > I remember editing the callback email condition, so we are closer 
>>>>> to the problem I guess 
>>>>> > > 
>>>>> > > Arie 
>>>>> > > 
>>>>> > > On Wednesday, September 16, 2015 at 11:22:52 UTC+2, Edmundo 
>>>>> Alvarez wrote: 
>>>>> > > Hi Arie, 
>>>>> > > 
>>>>> > > From which version did you upgrade to 1.2? It would also be 
>>>>> helpful to know if that was a clean installation or an upgrade from an 
>>>>> e

Re: [graylog2] Re: Problem with stream alerts after updating to graylog 1.2

2015-09-16 Thread Arie
Hi Edmundo,

It was an RPM upgrade from the latest stable version: 1.1.6-1.noarch.rpm.
No modifications to the config file.

Arie.

On Wednesday, September 16, 2015 at 11:22:52 UTC+2, Edmundo Alvarez wrote:
>
> Hi Arie, 
>
> From which version did you upgrade to 1.2? It would also be helpful to 
> know if that was a clean installation or an upgrade from an even earlier 
> version. 
>
> Regards, 
>
> Edmundo 
>
> > On 16 Sep 2015, at 11:10, Arie <satya...@gmail.com > 
> wrote: 
> > 
> > I did have an error producing the clone, but it appeared to be there. 
> After putting the receivers in it, 
> > it looks like it is working. So what is wrong with the original 
> alerts? 
> > 
> > 
> > On Wednesday, September 16, 2015 at 10:55:54 UTC+2, Arie wrote: 
> > Cloning the stream is not possible either 
> > 
> > On Wednesday, September 16, 2015 at 10:53:47 UTC+2, Arie wrote: 
> > Hi, 
> > 
> > we are encountering problems with stream alerts after the update. 
> > When editing/testing the alert condition we get this message in the GUI. 
> > 
> > Could not retrieve AlarmCallbacks 
> > Fetching AlarmCallbacks failed with status: Internal Server Error 
> > 
> > 
> > server logfile (partial): 
> > 
> >  ERROR [AnyExceptionClassMapper] Unhandled exception in REST resource 
> > com.mongodb.MongoException$Network: Read operation to server 
> localhost:27017 failed on database graylog2 
> > at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:298) 
> > at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:269) 
> > at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:84) 
> > at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:66) 
> > at com.mongodb.DBCursor._check(DBCursor.java:498) 
> > at com.mongodb.DBCursor._hasNext(DBCursor.java:621) 
> > at com.mongodb.DBCursor._fill(DBCursor.java:726) 
> > at com.mongodb.DBCursor.toArray(DBCursor.java:763) 
> > at org.mongojack.DBCursor.toArray(DBCursor.java:426) 
> > at org.mongojack.DBCursor.toArray(DBCursor.java:411) 
> > 
> > 
> > Caused by: com.fasterxml.jackson.databind.JsonMappingException: Can not 
> construct instance of java.lang.String, problem: Expected an ObjectId to 
> deserialise to string, but found class java.lang.String 
> >  at [Source: 
> de.undercouch.bson4jackson.io.LittleEndianInputStream@2909ef06; pos: 21] 
> (through reference chain: 
> org.graylog2.alarmcallbacks.AlarmCallbackConfigurationAVImpl["id"]) 
> > at 
> com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148)
>  
>
> > at 
> com.fasterxml.jackson.databind.DeserializationContext.instantiationException(DeserializationContext.java:889)
>  
>
> > at 
> org.mongojack.internal.ObjectIdDeserializers$ToStringDeserializer.deserialize(ObjectIdDeserializers.java:55)
>  
>
> > at 
> org.mongojack.internal.ObjectIdDeserializers$ToStringDeserializer.deserialize(ObjectIdDeserializers.java:37)
>  
>
> > at 
> com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:520)
>  
>
> > at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:461)
>  
>
> > at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:377)
>  
>
> > at 
> com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1100)
>  
>
> > at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:294)
>  
>
> > at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:131)
>  
>
> > at 
> com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:3674)
>  
>
> > at 
> com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2062) 
>
> > at 
> org.mongojack.internal.stream.JacksonDBDecoder.decode(JacksonDBDecoder.java:77)
>  
>
> > at com.mongodb.Response.<init>(Response.java:85) 
> > at com.mongodb.DBPort$1.execute(DBPort.java:164) 
> > at com.mongodb.DBPort$1.execute(DBPort.java:158) 
> > at com.mongodb.DBPort.doOperation(DBPort.java:187) 
> > at com.mongodb.DBPort.call(DBPort.java:158) 
> > 
> > 2015-09-16T10:43:35.254+02:00 ERROR [AnyExcepti

[graylog2] Re: Problem with stream alerts after updating to graylog 1.2

2015-09-16 Thread Arie
Cloning the stream is not possible either

On Wednesday, September 16, 2015 at 10:53:47 UTC+2, Arie wrote:
>
> Hi,
>
> we are encountering problems with stream alerts after the update.
> When editing/testing the alert condition we get this message in the GUI.
>
> *Could not retrieve AlarmCallbacks*
> *Fetching AlarmCallbacks failed with status: Internal Server Error*
>
>
> server logfile (partial):
>
>  ERROR [AnyExceptionClassMapper] Unhandled exception in REST resource
> com.mongodb.MongoException$Network: Read operation to server 
> localhost:27017 failed on database graylog2
> at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:298)
> at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:269)
> at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:84)
> at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:66)
> at com.mongodb.DBCursor._check(DBCursor.java:498)
> at com.mongodb.DBCursor._hasNext(DBCursor.java:621)
> at com.mongodb.DBCursor._fill(DBCursor.java:726)
> at com.mongodb.DBCursor.toArray(DBCursor.java:763)
> at org.mongojack.DBCursor.toArray(DBCursor.java:426)
> at org.mongojack.DBCursor.toArray(DBCursor.java:411)
>
>
> Caused by: com.fasterxml.jackson.databind.JsonMappingException: Can not 
> construct instance of java.lang.String, problem: Expected an ObjectId to 
> deserialise to string, but found class java.lang.String
>  at [Source: 
> de.undercouch.bson4jackson.io.LittleEndianInputStream@2909ef06; pos: 21] 
> (through reference chain: 
> org.graylog2.alarmcallbacks.AlarmCallbackConfigurationAVImpl["id"])
> at 
> com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148)
> at 
> com.fasterxml.jackson.databind.DeserializationContext.instantiationException(DeserializationContext.java:889)
> at 
> org.mongojack.internal.ObjectIdDeserializers$ToStringDeserializer.deserialize(ObjectIdDeserializers.java:55)
> at 
> org.mongojack.internal.ObjectIdDeserializers$ToStringDeserializer.deserialize(ObjectIdDeserializers.java:37)
> at 
> com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:520)
> at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:461)
> at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:377)
> at 
> com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1100)
> at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:294)
> at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:131)
> at 
> com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:3674)
> at 
> com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2062)
> at 
> org.mongojack.internal.stream.JacksonDBDecoder.decode(JacksonDBDecoder.java:77)
> at com.mongodb.Response.<init>(Response.java:85)
> at com.mongodb.DBPort$1.execute(DBPort.java:164)
> at com.mongodb.DBPort$1.execute(DBPort.java:158)
> at com.mongodb.DBPort.doOperation(DBPort.java:187)
> at com.mongodb.DBPort.call(DBPort.java:158)
>
> 2015-09-16T10:43:35.254+02:00 ERROR [AnyExceptionClassMapper] Unhandled 
> exception in REST resource
> com.mongodb.MongoException$Network: Read operation to server 
> localhost:27017 failed on database graylog2
> at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:298)
> at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:269)
> at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:84)
> at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:66)
> at com.mongodb.DBCursor._check(DBCursor.java:498)
> at com.mongodb.DBCursor._hasNext(DBCursor.java:621)
> at com.mongodb.DBCursor._fill(DBCursor.java:726)
> at com.mongodb.DBCursor.toArray(DBCursor.java:763)
> at org.mongojack.DBCursor.toArray(DBCursor.java:426)
> at org.mongojack.DBCursor.toArray(DBCursor.java:411)
> at 
> org.graylog2.alarmcallbacks.AlarmCallbackConfigurationServiceMJImpl.getForStreamId(AlarmCallbackConfigurationServiceMJImpl.java:48)
> at 
> org.graylog2.alarmcallbacks.AlarmCallbackConfigurationServiceMJImpl.getForStream(AlarmCallbackConfigurationServiceMJImpl.java:53)
> at 
> org.graylog2.rest.resources.streams.alerts.StreamAlertReceiverResource.sendDummyAlert(

[graylog2] Re: Problem with stream alerts after updating to graylog 1.2

2015-09-16 Thread Arie
I did have an error producing the clone, but it appeared to be there. After 
putting the receivers in it,
it looks like it is working. So what is wrong with the original alerts?


On Wednesday, September 16, 2015 at 10:55:54 UTC+2, Arie wrote:
>
> Cloning the stream is not possible either
>
> On Wednesday, September 16, 2015 at 10:53:47 UTC+2, Arie wrote:
>>
>> Hi,
>>
>> we are encountering problems with stream alerts after the update.
>> When editing/testing the alert condition we get this message in the GUI.
>>
>> *Could not retrieve AlarmCallbacks*
>> *Fetching AlarmCallbacks failed with status: Internal Server Error*
>>
>>
>> server logfile (partial):
>>
>>  ERROR [AnyExceptionClassMapper] Unhandled exception in REST resource
>> com.mongodb.MongoException$Network: Read operation to server 
>> localhost:27017 failed on database graylog2
>> at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:298)
>> at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:269)
>> at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:84)
>> at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:66)
>> at com.mongodb.DBCursor._check(DBCursor.java:498)
>> at com.mongodb.DBCursor._hasNext(DBCursor.java:621)
>> at com.mongodb.DBCursor._fill(DBCursor.java:726)
>> at com.mongodb.DBCursor.toArray(DBCursor.java:763)
>> at org.mongojack.DBCursor.toArray(DBCursor.java:426)
>> at org.mongojack.DBCursor.toArray(DBCursor.java:411)
>>
>>
>> Caused by: com.fasterxml.jackson.databind.JsonMappingException: Can not 
>> construct instance of java.lang.String, problem: Expected an ObjectId to 
>> deserialise to string, but found class java.lang.String
>>  at [Source: 
>> de.undercouch.bson4jackson.io.LittleEndianInputStream@2909ef06; pos: 21] 
>> (through reference chain: 
>> org.graylog2.alarmcallbacks.AlarmCallbackConfigurationAVImpl["id"])
>> at 
>> com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148)
>> at 
>> com.fasterxml.jackson.databind.DeserializationContext.instantiationException(DeserializationContext.java:889)
>> at 
>> org.mongojack.internal.ObjectIdDeserializers$ToStringDeserializer.deserialize(ObjectIdDeserializers.java:55)
>> at 
>> org.mongojack.internal.ObjectIdDeserializers$ToStringDeserializer.deserialize(ObjectIdDeserializers.java:37)
>> at 
>> com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:520)
>> at 
>> com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:461)
>> at 
>> com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:377)
>> at 
>> com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1100)
>> at 
>> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:294)
>> at 
>> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:131)
>> at 
>> com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:3674)
>> at 
>> com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2062)
>> at 
>> org.mongojack.internal.stream.JacksonDBDecoder.decode(JacksonDBDecoder.java:77)
>> at com.mongodb.Response.<init>(Response.java:85)
>> at com.mongodb.DBPort$1.execute(DBPort.java:164)
>> at com.mongodb.DBPort$1.execute(DBPort.java:158)
>> at com.mongodb.DBPort.doOperation(DBPort.java:187)
>> at com.mongodb.DBPort.call(DBPort.java:158)
>>
>> 2015-09-16T10:43:35.254+02:00 ERROR [AnyExceptionClassMapper] Unhandled 
>> exception in REST resource
>> com.mongodb.MongoException$Network: Read operation to server 
>> localhost:27017 failed on database graylog2
>> at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:298)
>> at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:269)
>> at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:84)
>> at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:66)
>> at com.mongodb.DBCursor._check(DBCursor.java:498)
>> at com.mongodb.DBCursor._hasNext(DBCursor.java:621)
>> at com.mongodb.DBCursor._fill(DBCursor.java:726)
>> at com.mongodb.DB

[graylog2] Problem with streeam alerts after updating to graylog1.2

2015-09-16 Thread Arie
Hi,

we are encountering problems with stream alerts after the update.
When editing/testing the alert condition we get this message in the GUI.

*Could not retrieve AlarmCallbacks*
*Fetching AlarmCallbacks failed with status: Internal Server Error*


server logfile (partial):

 ERROR [AnyExceptionClassMapper] Unhandled exception in REST resource
com.mongodb.MongoException$Network: Read operation to server 
localhost:27017 failed on database graylog2
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:298)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:269)
at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:84)
at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:66)
at com.mongodb.DBCursor._check(DBCursor.java:498)
at com.mongodb.DBCursor._hasNext(DBCursor.java:621)
at com.mongodb.DBCursor._fill(DBCursor.java:726)
at com.mongodb.DBCursor.toArray(DBCursor.java:763)
at org.mongojack.DBCursor.toArray(DBCursor.java:426)
at org.mongojack.DBCursor.toArray(DBCursor.java:411)


Caused by: com.fasterxml.jackson.databind.JsonMappingException: Can not 
construct instance of java.lang.String, problem: Expected an ObjectId to 
deserialise to string, but found class java.lang.String
 at [Source: 
de.undercouch.bson4jackson.io.LittleEndianInputStream@2909ef06; pos: 21] 
(through reference chain: 
org.graylog2.alarmcallbacks.AlarmCallbackConfigurationAVImpl["id"])
at 
com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148)
at 
com.fasterxml.jackson.databind.DeserializationContext.instantiationException(DeserializationContext.java:889)
at 
org.mongojack.internal.ObjectIdDeserializers$ToStringDeserializer.deserialize(ObjectIdDeserializers.java:55)
at 
org.mongojack.internal.ObjectIdDeserializers$ToStringDeserializer.deserialize(ObjectIdDeserializers.java:37)
at 
com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:520)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:461)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:377)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1100)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:294)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:131)
at 
com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:3674)
at 
com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2062)
at 
org.mongojack.internal.stream.JacksonDBDecoder.decode(JacksonDBDecoder.java:77)
at com.mongodb.Response.(Response.java:85)
at com.mongodb.DBPort$1.execute(DBPort.java:164)
at com.mongodb.DBPort$1.execute(DBPort.java:158)
at com.mongodb.DBPort.doOperation(DBPort.java:187)
at com.mongodb.DBPort.call(DBPort.java:158)

2015-09-16T10:43:35.254+02:00 ERROR [AnyExceptionClassMapper] Unhandled 
exception in REST resource
com.mongodb.MongoException$Network: Read operation to server 
localhost:27017 failed on database graylog2
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:298)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:269)
at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:84)
at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:66)
at com.mongodb.DBCursor._check(DBCursor.java:498)
at com.mongodb.DBCursor._hasNext(DBCursor.java:621)
at com.mongodb.DBCursor._fill(DBCursor.java:726)
at com.mongodb.DBCursor.toArray(DBCursor.java:763)
at org.mongojack.DBCursor.toArray(DBCursor.java:426)
at org.mongojack.DBCursor.toArray(DBCursor.java:411)
at 
org.graylog2.alarmcallbacks.AlarmCallbackConfigurationServiceMJImpl.getForStreamId(AlarmCallbackConfigurationServiceMJImpl.java:48)
at 
org.graylog2.alarmcallbacks.AlarmCallbackConfigurationServiceMJImpl.getForStream(AlarmCallbackConfigurationServiceMJImpl.java:53)
at 
org.graylog2.rest.resources.streams.alerts.StreamAlertReceiverResource.sendDummyAlert(StreamAlertReceiverResource.java:164)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)

Caused by: com.fasterxml.jackson.databind.JsonMappingException: Can not 
construct instance of java.lang.String, problem: Expected an ObjectId to 
deserialise to string, but found class 

Re: [graylog2] Re: Problem with streeam alerts after updating to graylog1.2

2015-09-16 Thread Arie
And second:

In the alert, the "callbacks" part in the GUI keeps "loading" endlessly.
I remember editing the callback email condition, so we are closer to the
problem, I guess.

Arie 

On Wednesday, September 16, 2015 at 11:22:52 UTC+2, Edmundo Alvarez wrote:
>
> Hi Arie, 
>
> From which version did you upgrade to 1.2? It would also be helpful to 
> know if that was a clean installation or an upgrade from an even earlier 
> version. 
>
> Regards, 
>
> Edmundo 
>
> > On 16 Sep 2015, at 11:10, Arie <satya...@gmail.com > 
> wrote: 
> > 
> > I'd had an error producing the clone, but it appeared to be there. 
> After putting the receivers in it, 
> > it looks like it is working. So what is wrong with the original 
> alerts? 
> > 
> > 
> > On Wednesday, September 16, 2015 at 10:55:54 UTC+2, Arie wrote: 
> > Cloning the stream is not possible either 
> > 
> > On Wednesday, September 16, 2015 at 10:53:47 UTC+2, Arie wrote: 
> > Hi, 
> > 
> > we are encountering problems with stream alerts after the update. 
> > When editing/testing the alert condition we get this message in the GUI. 
> > 
> > Could not retrieve AlarmCallbacks 
> > Fetching AlarmCallbacks failed with status: Internal Server Error 
> > 
> > 

Re: [graylog2] Re: Problem with streeam alerts after updating to graylog1.2

2015-09-16 Thread Arie
That is very well possible, 1.0 or 1.0.1, but I'm not totally sure of it.

In one of the upgrades I had a problem with some data that was the result
of a Yum update from the repository,
where old data was deleted and a wrong/missing node-id file was left behind.

We use the content-pack function as a backup for a lot of settings, and what I
see now is that the callback
function is not present there, and may be missing in my present stream
configs.

Arie

On Wednesday, September 16, 2015 at 11:45:55 UTC+2, Edmundo Alvarez wrote:
>
> That previous 1.1 version, was it an upgrade from 1.0 by any chance? 
>
> Edmundo 
>
> > On 16 Sep 2015, at 11:39, Arie <satya...@gmail.com > 
> wrote: 
> > 
> > And second: 
> > 
> > In the alert, the "callbacks" part in the GUI keeps "loading" 
> endlessly. 
> > I remember editing the callback email condition, so we are closer to 
> the problem, I guess 
> > 
> > Arie 
> > 
> > On Wednesday, September 16, 2015 at 11:22:52 UTC+2, Edmundo Alvarez wrote: 
> > Hi Arie, 
> > 
> > From which version did you upgrade to 1.2? It would also be helpful to 
> know if that was a clean installation or an upgrade from an even earlier 
> version. 
> > 
> > Regards, 
> > 
> > Edmundo 
> > 
> > > On 16 Sep 2015, at 11:10, Arie <satya...@gmail.com> wrote: 
> > > 
> > > I'd had an error producing the clone, but it appeared to be there. 
> After putting the receivers in it, 
> > > it looks like it is working. So what is wrong with the original 
> alerts? 
> > > 
> > > 
> > > On Wednesday, September 16, 2015 at 10:55:54 UTC+2, Arie wrote: 
> > > Cloning the stream is not possible either 
> > > 
> > > On Wednesday, September 16, 2015 at 10:53:47 UTC+2, Arie wrote: 
> > > Hi, 
> > > 
> > > we are encountering problems with stream alerts after the update. 
> > > When editing/testing the alert condition we get this message in the 
> GUI. 
> > > 
> > > Could not retrieve AlarmCallbacks 
> > > Fetching AlarmCallbacks failed with status: Internal Server Error 
> > > 
> > > 

Re: [graylog2] Re: Problem with streeam alerts after updating to graylog1.2

2015-09-16 Thread Arie
Hi Edmundo,

The (email) callback alerts that fail were made in a version prior to 1.1.

A callback alert recently made in the latest version (1.1.6) was fine after 
the update.

hth,,

Arie


On Wednesday, September 16, 2015 at 11:22:52 UTC+2, Edmundo Alvarez wrote:
>
> Hi Arie, 
>
> From which version did you upgrade to 1.2? It would also be helpful to 
> know if that was a clean installation or an upgrade from an even earlier 
> version. 
>
> Regards, 
>
> Edmundo 
>
> > On 16 Sep 2015, at 11:10, Arie <satya...@gmail.com > 
> wrote: 
> > 
> > I'd had an error producing the clone, but it appeared to be there. 
> After putting the receivers in it, 
> > it looks like it is working. So what is wrong with the original 
> alerts? 
> > 
> > 
> > On Wednesday, September 16, 2015 at 10:55:54 UTC+2, Arie wrote: 
> > Cloning the stream is not possible either 
> > 
> > On Wednesday, September 16, 2015 at 10:53:47 UTC+2, Arie wrote: 
> > Hi, 
> > 
> > we are encountering problems with stream alerts after the update. 
> > When editing/testing the alert condition we get this message in the GUI. 
> > 
> > Could not retrieve AlarmCallbacks 
> > Fetching AlarmCallbacks failed with status: Internal Server Error 
> > 
> > 

[graylog2] Re: Tuning for optimized performance and growth

2015-08-19 Thread Arie
Hi BKeep

Just for performance, some tips:

Consider using 3 replicas if you can handle the space. Each node will be
capable of handling a search
request, so you will have a big speed improvement on search. That is what
replicas are meant for,
with backup as a second advantage:
https://www.elastic.co/guide/en/elasticsearch/guide/current/replica-shards.html.

Consider using a master-only node with no data for Elasticsearch. This node
will be responsible
will be responsible
for managing the cluster.

You are using CentOS; the following applies to it.

/etc/grub.conf:
Add cgroup_disable=memory to the kernel line for better and full memory
performance.
Because you are running ES in a virtual environment, you could also change
the disk scheduler by adding elevator=noop, because your external storage
is managing this.

/etc/fstab:
Disable tmpfs (ramdisk): just put a # in front of the line.
At least change the mount options for the volume holding /var into something
like:
  /dev/mapper/vg_nagios-lv_root /   ext4
defaults,noatime,nodiratime,nobarrier,data=writeback,journal_ioprio=4
1 1

Take a look at this best practice for your virtual environment:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_UScmd=displayKCexternalId=2053145

Disable swapping by putting vm.swappiness = 0 at the end of /etc/sysctl.conf.

Did you implement this?
http://ithubinfo.blogspot.nl/2013/07/how-to-increase-ulimit-open-file-and.html

You could also gain some performance with this, but you should consider 
the amount of free memory you have:
https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/
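Tweaks like these are easy to get subtly wrong, so it can help to check what a sysctl.conf fragment actually sets before rebooting. A minimal sketch (the sample content and helper name below are illustrative, not from this thread):

```python
# Parse "key = value" lines from a sysctl.conf-style fragment and
# collect the tuning keys into a dict. Blanks and comments are skipped.
def parse_sysctl(text):
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

sample = """
# appended for Elasticsearch tuning
vm.swappiness = 0
vm.dirty_ratio = 10
"""

conf = parse_sysctl(sample)
print(conf)  # {'vm.swappiness': '0', 'vm.dirty_ratio': '10'}
```

The same check works on a real /etc/sysctl.conf by passing in the file contents.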


I like using Elastic HQ for a nice look on my cluster. 
http://www.elastichq.org/

Like you, I would want to know whether increasing the record count or the
number of indices is better.
This might depend on how far back you generally search (wild guess).
I would increase the number of indices.


A.

On Friday, August 14, 2015 at 4:52:37 AM UTC+2, BKeep wrote:

 This question may be better answered on the Elasticsearch forum but I 
 thought I would give the GL list a try first. I recently added two 
 additional nodes to a working cluster and would like some help/ideas on 
 tuning for optimized performance and growth. My environment has 4 data 
 nodes each spec'd out with 4 vCPU's, 12GB of Ram (ES HEAP is at 6GB), 250GB 
 of storage (207GB on /var) running CentOS v6.7. Graylog is at v1.1.6, ES at 
 v1.6.2 and openjdk 1.8. I am also using the stock settings for 20 indices 
 with 20 Million records each. I have set 4 shards with one replica. The 
 master node runs ES, GL, and GL web using the same specs, except instead of 
 250GB of storage, it only has 120GB. All nodes are thick provisioned VMDK's 
 on a VMware cluster. Right now with our current sending rate, I see indices 
 rotate about every 4-12 hours and generally shards have a size between 
 1.5GB's to 2GB's. The total used storage on the data nodes is ~73GB used 
 with ~124GB available. 

 Okay, so finally to my question. I would like to increase either the 
 number of indices or increase the number of records per index. Is one 
 method preferred over the other? If the records count increases from 20 
 Million to 30 Million, would that increase/decrease index/search 
 performance or should the index limit be set to 30 indices. Basically, 
 which method would allow for increased historical data retention with the 
 least overhead if that makes sense.

 Regards,
 Brandon


-- 
You received this message because you are subscribed to the Google Groups 
Graylog Users group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/a9471dbb-fae3-498f-bc73-9eb0c28445eb%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[graylog2] Re: About shards

2015-07-22 Thread Arie
Juan,

The part in the ES config is handled by Graylog, IMHO, and maybe even
overridden by it.

You can put a comment (#) in front of that.

And in elasticsearch conf I have this:
# index.number_of_shards: 1
# index.number_of_replicas: 0

Arie.

On Wednesday, July 22, 2015 at 15:03:45 UTC+2, Juan Andres Ramirez wrote:

 Hello Arie,
In my graylog conf I have this:
 elasticsearch_shards = 1
 elasticsearch_replicas = 0

 And in elasticsearch conf I have this:
 index.number_of_shards: 1
 index.number_of_replicas: 0

 So why I have 16 shards in my cluster Health?, That is my question.

 Thank you.


 On Tuesday, July 21, 2015 at 6:25:54 PM UTC-3, Arie wrote:

 Hi Juan,

 IMHO, for production, having 4 shards on 4 ES nodes can be fine. The data will
 be shared across the 4 nodes, leaving you
 with 4 shards (one on each node). Turning replicas to 1 will create 1
 replicated shard for each one there is. This gives you a backup
 and improves search speed.

 This only applies to new indices created after the change if your setup
 is already running, but there are some commands
 in ES that can apply it to the current index.
 see: 
 https://www.elastic.co/guide/en/elasticsearch/reference/1.6/indices-update-settings.html

 Choosing the number of replicas:

 https://www.elastic.co/guide/en/elasticsearch/guide/current/replica-shards.html

 So for backup, take one replica; for speed improvement, choose 3 when
 having 4 nodes.
 Every node is then capable of serving a search request.

 A.


 On Tuesday, July 21, 2015 at 15:19:21 UTC+2, Juan Andres Ramirez wrote:

 The cluster health:


 {
   "cluster_name" : "elasticsearch",
   "status" : "yellow",
   "timed_out" : false,
   "number_of_nodes" : 2,
   "number_of_data_nodes" : 1,
   "active_primary_shards" : 16,
   "active_shards" : 16,
   "relocating_shards" : 0,
   "initializing_shards" : 0,
   "unassigned_shards" : 1,
   "delayed_unassigned_shards" : 0,
   "number_of_pending_tasks" : 0,
   "number_of_in_flight_fetch" : 0
 }





 On Tuesday, July 21, 2015 at 9:54:23 AM UTC-3, Juan Andres Ramirez wrote:

 Hello guys,
 I was searching the answer in this group and in the web, but I 
 can't found the answer.

 1- Graylog create 1 shard per indice?, so in this moment I have 17 
 shards and in my config I have :
 elasticsearch_shards = 1
 elasticsearch_replicas = 0

 So I'm in development phase, I don't need replicas.

 2- If I will change in config elasticsearch_shards = 2 , then I'm going 
 to have 34 shards 2 per index?.

 My last question, If I'm going to create an Elasticsearch a cluster 
 with 4 nodes and change the setup elasticsearch_replicas = 1 , I'm going 
 to 
 have 17 shard in every node automatically? 

 I have problem to know how to work the elasticsearch cluster and the 
 configuration to failover.

 Thank you.



-- 
You received this message because you are subscribed to the Google Groups 
graylog2 group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[graylog2] Re: About shards

2015-07-21 Thread Arie
Hi Juan,

IMHO, for production, having 4 shards on 4 ES nodes can be fine. The data
will be shared across the 4 nodes, leaving you
with 4 shards (one on each node). Turning replicas to 1 will create 1
replicated shard for each one there is. This gives you a backup
and improves search speed.

This only applies to new indices created after the change if your setup
is already running, but there are some commands
in ES that can apply it to the current index.
see: 
https://www.elastic.co/guide/en/elasticsearch/reference/1.6/indices-update-settings.html

Choosing the number of replicas:
https://www.elastic.co/guide/en/elasticsearch/guide/current/replica-shards.html

So for backup, take one replica; for speed improvement, choose 3 when having
4 nodes.
Every node is then capable of serving a search request.
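The arithmetic behind these numbers can be sketched in a couple of lines (a sketch of the standard Elasticsearch shard count, not Graylog code; the example counts match the 4-node setup discussed above):

```python
# Total shards per index = primary shards * (1 + replicas).
# With 4 primaries on 4 nodes, replicas=3 means every node can hold a
# full copy of the index, so any node can answer a search locally.
def shards_per_index(primaries, replicas):
    return primaries * (1 + replicas)

print(shards_per_index(4, 0))  # 4: no redundancy, one shard per node
print(shards_per_index(4, 1))  # 8: one backup copy of each shard
print(shards_per_index(4, 3))  # 16: a full copy on each of 4 nodes
```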

A.


On Tuesday, July 21, 2015 at 15:19:21 UTC+2, Juan Andres Ramirez wrote:

 The cluster health:


 {
   "cluster_name" : "elasticsearch",
   "status" : "yellow",
   "timed_out" : false,
   "number_of_nodes" : 2,
   "number_of_data_nodes" : 1,
   "active_primary_shards" : 16,
   "active_shards" : 16,
   "relocating_shards" : 0,
   "initializing_shards" : 0,
   "unassigned_shards" : 1,
   "delayed_unassigned_shards" : 0,
   "number_of_pending_tasks" : 0,
   "number_of_in_flight_fetch" : 0
 }





 On Tuesday, July 21, 2015 at 9:54:23 AM UTC-3, Juan Andres Ramirez wrote:

 Hello guys,
 I was searching the answer in this group and in the web, but I 
 can't found the answer.

 1- Graylog create 1 shard per indice?, so in this moment I have 17 shards 
 and in my config I have :
 elasticsearch_shards = 1
 elasticsearch_replicas = 0

 So I'm in development phase, I don't need replicas.

 2- If I will change in config elasticsearch_shards = 2 , then I'm going 
 to have 34 shards 2 per index?.

 My last question, If I'm going to create an Elasticsearch a cluster with 
 4 nodes and change the setup elasticsearch_replicas = 1 , I'm going to have 
 17 shard in every node automatically? 

 I have problem to know how to work the elasticsearch cluster and the 
 configuration to failover.

 Thank you.





[graylog2] Re: Alert when there are Indexer Failures

2015-07-11 Thread Arie
David,

You could feed Graylog's own log data into Graylog with the Graylog
collector service or with
your local syslog tool.

Then you could configure a stream alert on the events coming in, and get
these alerts by mail or SMS.
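One way to ship a Graylog server log line back into Graylog is as a GELF message to a GELF UDP input. A minimal sketch (the host name, port, and custom field are assumptions; in GELF 1.1 only `version`, `host`, and `short_message` are mandatory):

```python
import json
import time

def gelf_payload(host, short_message, level=3):
    """Build a GELF 1.1 payload; custom fields are prefixed with '_'."""
    return {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "timestamp": time.time(),
        "level": level,                 # syslog severity: 3 = error
        "_facility": "graylog-server",  # illustrative custom field
    }

msg = gelf_payload("graylog01", "ERROR [AnyExceptionClassMapper] Unhandled exception")
data = json.dumps(msg).encode("utf-8")
# To actually send (uncompressed JSON is accepted by a GELF UDP input):
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(data, ("graylog01", 12201))
print(msg["version"], msg["level"])
```

A stream matching `level <= 3` from that source can then drive the mail alert.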

hth,,
  Arie

On Friday, July 10, 2015 at 14:20:59 UTC+2, David Gerdeman wrote:

 Is it possible to set up an alert or notification of some kind that will 
 trigger when there are indexer failures?  I seem to randomly have indexing 
 issues and I would like to be able to catch them faster.




[graylog2] Re: HTTP Monitor plugin for graylog

2015-07-08 Thread Arie
Sivasamy,

I'm testing with a 2sec interval, and let it run for the night.

Thank you for working on it so fast.

Arie.

On Wednesday, July 8, 2015 at 16:58:29 UTC+2, Sivasamy Kaliappan wrote:

 Aire,

 I have fixed the hang and the too-many-threads issue. I have tested with a 50 ms
 interval and a timeout of 20 seconds (the higher timeout is due to some issue
 in the 3rd-party library) and it works fine.

 Get the latest jar 
 https://github.com/sivasamyk/graylog2-plugin-input-httpmonitor/raw/master/graylog2-plugin-input-httpmonitor-1.0.0.jar
  
 and let me know your feedback.

 Thanks for your feedback.

 Regards,
 Siva.

 On Wednesday, July 8, 2015 at 5:27:45 PM UTC+5:30, Arie wrote:

 Sivasamy.

 Thank you for your reaction.

 Another thing is that with both the old and the newer version my threads are
 increasing and going sky-high.
 Normally this is 700 - 1000, but I have numbers from 4000 up to 12360.

 hth,,

 Arie

 On Wednesday, July 8, 2015 at 8:33:24 AM UTC+2, Sivasamy Kaliappan wrote:

 Arie,

 The configuration page is in a different structure every time i select 
 it.

 Siva I assume you are talking about the order of the fields listed in 
 input configuration window. This is fixed as part of #1282 
 https://github.com/Graylog2/graylog2-server/issues/1282 in 
 graylog-server

 Graylog is having trouble when I configure 20 ms, for example. It gives me this
 error:

 Siva I agree 20 ms is too small an interval. I will check my local 
 setup with this config.

 On Wednesday, July 8, 2015 at 1:27:53 AM UTC+5:30, Arie wrote:

 Hi Sivasamy,

 In testing it I experience the following. The configuration page is in a
 different structure
 every time I select it. The items are not listed in the same order each
 time. Maybe due to the following:

 Graylog is having trouble when I configure 20 ms, for example. It gives me
 this error:

 You are creating too many HashedWheelTimer instances. 
  HashedWheelTimer is a shared resource that must be reused across the 
 application, so that only a few instances are created. 

 Looks like an interval in milliseconds is too intensive for Graylog; it is
 giving the SharedResourceMisuseDetector error,
 and my memory is filled up like crazy.

 Setting the interval in seconds (even one second) is fine here.

 I don't see it as a replacement for the logs, but rather for 
 testing/monitoring (functionality) purposes. Thank you :-)

 hth,,

 Arie


 On Tuesday, July 7, 2015 at 09:29:01 UTC+2, Sivasamy Kaliappan wrote:

 Oops.. Here is the right link: 
 https://github.com/sivasamyk/graylog2-plugin-input-httpmonitor

 On Tuesday, July 7, 2015 at 1:14:41 AM UTC+5:30, Arie wrote:



 beware of the dot in the url :-)

 On Monday, July 6, 2015 at 18:49:57 UTC+2, Sivasamy Kaliappan wrote:

 Hello All,

 I have developed a HTTP monitor plugin for graylog for monitoring 
 HTTP URLs.
 Try the plugin and  post your feedback here

 https://github.com/sivasamyk/graylog2-plugin-input-httpmonitor.

 If you find any issues or have new features, open a new github issue.

 Regards,
 Siva.





[graylog2] Re: HTTP Monitor plugin for graylog

2015-07-07 Thread Arie
Hi, I switched back to the older version due to hangups on my test 
environment.

WARN  [NodePingThread] Did not find meta info of this node. 
Re-registering. 

On Tuesday, July 7, 2015 at 21:57:53 UTC+2, Arie wrote:

 Hi Sivasamy,

 In testing I experienced the following: the configuration page has a 
 different structure every time I select it, and the items are not listed 
 in the same order each time. Maybe due to the following:

 Graylog has trouble when I configure 20 ms, for example. It gives me this 
 error:

 You are creating too many HashedWheelTimer instances.  HashedWheelTimer 
 is a shared resource that must be reused across the application, so that 
 only a few instances are created. 

 Looks like an interval in milliseconds is too intensive for Graylog; it 
 gives the SharedResourceMisuseDetector error, and my memory fills up 
 like crazy.

 Setting the interval in seconds (even one second) is fine here.

 I don't see it as a replacement for the logs, but rather for 
 testing/monitoring (functionality) purposes. Thank you :-)

 hth,,

 Arie


 On Tuesday, July 7, 2015 at 09:29:01 UTC+2, Sivasamy Kaliappan wrote:

 Oops.. Here is the right link: 
 https://github.com/sivasamyk/graylog2-plugin-input-httpmonitor

 On Tuesday, July 7, 2015 at 1:14:41 AM UTC+5:30, Arie wrote:



 beware of the dot in the url :-)

 On Monday, July 6, 2015 at 18:49:57 UTC+2, Sivasamy Kaliappan wrote:

 Hello All,

 I have developed a HTTP monitor plugin for graylog for monitoring HTTP 
 URLs.
 Try the plugin and  post your feedback here

 https://github.com/sivasamyk/graylog2-plugin-input-httpmonitor.

 If you find any issues or have new features, open a new github issue.

 Regards,
 Siva.





[graylog2] Re: HTTP Monitor plugin for graylog

2015-07-06 Thread Arie
Hi Siva,,

Already testing it. It rocks, you are awesome.
This is useful if you can't get to the server logs themselves.

Looks like the smallest interval is 1 minute. Could this be shorter, like 20 
ms?

Thanks,

Arie.
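
For a quick ad-hoc check at sub-second intervals, a probe can also be scripted outside the plugin. This is only a sketch: the "monitor" host value, the GELF field names, and the port in the usage line are assumptions, not anything the plugin defines.

```shell
#!/bin/sh
# Sketch: probe a URL with curl and print the result as a minimal GELF 1.1
# payload for a Graylog GELF UDP input. Field names are illustrative.

gelf_msg() {
  # $1 = url, $2 = HTTP status code, $3 = total request time in seconds
  printf '{"version":"1.1","host":"monitor","short_message":"probe %s","_status":%s,"_time_total":%s}' "$1" "$2" "$3"
}

probe() {
  url="$1"
  # curl -w exposes the status code and timing; the response body is discarded
  set -- $(curl -s -o /dev/null -w '%{http_code} %{time_total}' "$url")
  gelf_msg "$url" "$1" "$2"
}

# Usage (not run here): probe http://example.org | nc -u -w1 graylog-host 12201
```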


On Monday, July 6, 2015 at 18:49:57 UTC+2, Sivasamy Kaliappan wrote:

 Hello All,

 I have developed a HTTP monitor plugin for graylog for monitoring HTTP 
 URLs.
 Try the plugin and  post your feedback here

 https://github.com/sivasamyk/graylog2-plugin-input-httpmonitor.

 If you find any issues or have new features, open a new github issue.

 Regards,
 Siva.




[graylog2] Re: best practice in logging a DB table

2015-07-01 Thread Arie
Could R be a solution to your problem? Some vendors are implementing it on top 
of their RDBMS products.

Example:
http://www.ibm.com/developerworks/data/library/techarticle/dm-1402db2andr/



On Tuesday, June 30, 2015 at 17:25:42 UTC+2, kuku blof wrote:

 hi,
 is there any already-built plugin for logging table contents from an RDBMS 
 (e.g. SQL Server)? If not, what would be the best practice? Should I write a 
 service that periodically polls a table and sends the new records over the 
 network to a Graylog server?

 thanks for any help,
 guy.




[graylog2] Re: Newbie Questin (Web Interface)

2015-06-26 Thread Arie
Hi,,

SOMETIMES, the fields don't show up on the right (even when I select 
'all')

I see this here too: not all the fields show up, for example in a search over 
data of all time. I just push the button below.
The fields that do appear are the ones present in most of the data.




On Friday, June 26, 2015 at 6:15:13 AM UTC+2, slhac tivist wrote:

 No one else had a problem with this?

 On Tuesday, June 23, 2015 at 4:04:05 PM UTC-5, slhac tivist wrote:

 Hello All,

 Just started using graylog. Love it. Read the docs, but still having this 
 problem:

 1) Using the web interface I made a TEST input, and setup some 
 extractors.

 2) From System|Inputs I select Messages from this input for TEST. Great.

 Here's the problem:

 1) SOMETIMES, the fields don't show up on the right (even when I select 
 'all')

 2) SOMETIMES, the regex will work fine in the extractor menu, but won't 
 work when viewing the messages.

 Probably an easy fix, but I can't figure this out.

 So if anyone has any idea or suggestions, I'm all ears! :p

 Thanks in advance!





[graylog2] Re: Best way to detect anomalies with Graylog?

2015-06-24 Thread Arie
Hi,

If you have less than 500 MB of data daily, you could look at Prelert. It can 
read your data in ES directly.



Arie


On Wednesday, June 24, 2015 at 14:36:11 UTC+2, Niklas 'ThYpHoOn' Grebe wrote:

 Hi, 

 I’m interested in detecting anomalies. At the moment we discover them 
 manually by looking at our dashboards with long time range graphs and with 
 stream filters adding alerts on them. But it’s very tough to detect 
 anomalies with them automatically. 
 I’m just wondering how do you detect anomalies usually via Graylog or with 
 another system besides of Graylog? Or do i miss a feature in Graylog? 


 Greetings, 
 Nik 


 PS: Thumbs up for the latest releases, Graylog is great! :) 




[graylog2] Re: migrating to a new graylog master

2015-06-13 Thread Arie
Hi Chuck,

Did you copy the node-id file to the new server? Some things in the 
configuration are
related to that. I am curious what the effect of that would be.

Arie.

PS, to the guys at Graylog:

Maybe an idea to extend the export function so that it can be used as a 
backup/restore system for complete configurations.



On Saturday, June 13, 2015 at 20:29:05 UTC+2, Chuck Musser wrote:

 I use the import and export of content packs for this kind of functionality.
 I use that for making backups of my configs too.

 Arie,

 That might get us part of the way there, but I was looking for a solution 
 that completely transferred information from an existing Graylog server to 
 a new one. My first attempt was a low-level copy of the MongoDB (in the 
 hopes that would get everything) but that hasn't worked yet. Maybe another 
 approach would be to set up a new Graylog server as a non-master in the 
 same cluster and then somehow promote it to master. Not sure if the 
 system works that way, though. So, still looking for tips on how others 
 have migrated from a test all-in-one setup to a production one, on 
 different (and separate hardware).

 Chuck




[graylog2] Re: Auto start problem

2015-06-12 Thread Arie
Hi,

you could do this in /etc/init.d/graylog-server.

Add a sleep of 10-15 seconds right after start:

  start() {
  /bin/sleep 10
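
Instead of a fixed sleep, the script could poll Elasticsearch's cluster-health endpoint until it reports green. This is a sketch; the URL, retry count, and 2-second interval are assumptions, not Graylog defaults.

```shell
#!/bin/sh
# Sketch: block until the local Elasticsearch cluster reports "green",
# for use near the top of start() in /etc/init.d/graylog-server.

es_status() {
  # Pull the value of the "status" field out of the cluster-health JSON.
  grep -o '"status" *: *"[a-z]*"' | grep -o '[a-z]*"$' | tr -d '"'
}

wait_for_es() {
  url="${1:-http://localhost:9200/_cluster/health}"
  tries=30
  while [ "$tries" -gt 0 ]; do
    state=$(curl -s "$url" | es_status)
    [ "$state" = "green" ] && return 0
    tries=$((tries - 1))
    sleep 2
  done
  echo "Elasticsearch did not become green in time" >&2
  return 1
}
```

start() would then call `wait_for_es || exit 1` before launching the server.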


On Friday, June 12, 2015 at 9:50:04 AM UTC+2, Ubay wrote:

 Hello,

   Is there any way to delay the startup of Graylog until the Elasticsearch 
state is green?

Thank you .

Regards.

 On Wednesday, April 23, 2014 at 21:44:13 (UTC+1), Charles Farinella 
 wrote:

 Elasticsearch apparently takes a little while to come up on boot, the 
 cluster state is indeed red when the machine first boots, shortly after 
 goes to yellow.  I'll have to spend some time learning a little about 
 elasticsearch, in the meantime it's not all that difficult to start graylog 
 manually.  Thanks for your help.

 On Thursday, April 17, 2014 4:33:11 PM UTC-4, Charles Farinella wrote:

 CentOS 6.5, Graylog2 0.20.1, Elasticsearch-0.90.10-1

 I have an init.d script that starts the graylog server fine if I run it 
 manually.  I have it in chkconfig hoping to start it at boot but it fails. 
  All related services including the graylog2-web-interface start 
 successfully.  Below find the error and the start script.  Why does it 
 start fine manually and not at boot?


 


 2014-04-17 15:04:56,720 INFO : org.graylog2.Main - Graylog2 0.20.1 
 starting up. (JRE: Oracle Corporation 1.7.0_51 on Linux 
 2.6.32-431.5.1.el6.x86_64)
 2014-04-17 15:04:57,786 INFO : org.graylog2.plugin.system.NodeId - Node 
 ID: 46cb0d6a-9e2f-43d8-92d1-dda391daa727
 2014-04-17 15:04:57,827 INFO : org.graylog2.Core - No rest_transport_uri 
 set. Falling back to [http://192.168.24.94:12900].
 2014-04-17 15:04:59,348 INFO : org.graylog2.buffers.ProcessBuffer - 
 Initialized ProcessBuffer with ring size 1024 and wait strategy 
 BlockingWaitStrategy.
 2014-04-17 15:04:59,413 INFO : org.graylog2.buffers.OutputBuffer - 
 Initialized OutputBuffer with ring size 1024 and wait strategy 
 BlockingWaitStrategy.
 2014-04-17 15:05:01,700 INFO : org.elasticsearch.node - 
 [graylog2-server] version[0.90.10], pid[1893], 
 build[0a5781f/2014-01-10T10:18:37Z]
 2014-04-17 15:05:01,700 INFO : org.elasticsearch.node - 
 [graylog2-server] initializing ...
 2014-04-17 15:05:01,833 INFO : org.elasticsearch.plugins - 
 [graylog2-server] loaded [], sites []
 2014-04-17 15:05:12,156 INFO : org.elasticsearch.node - 
 [graylog2-server] initialized
 2014-04-17 15:05:12,156 INFO : org.elasticsearch.node - 
 [graylog2-server] starting ...
 2014-04-17 15:05:12,295 INFO : org.elasticsearch.transport - 
 [graylog2-server] bound_address {inet[/0:0:0:0:0:0:0:0:9350]}, 
 publish_address {inet[/192.168.24.94:9350]}
 2014-04-17 15:05:15,317 WARN : org.elasticsearch.discovery - 
 [graylog2-server] waited for 3s and no initial state was set by the 
 discovery
 2014-04-17 15:05:15,318 INFO : org.elasticsearch.discovery - 
 [graylog2-server] graylog2/OFJUviCYTXmxhtOhJqeuLw
 2014-04-17 15:05:15,319 INFO : org.elasticsearch.node - 
 [graylog2-server] started
 2014-04-17 15:05:15,452 INFO : org.elasticsearch.cluster.service - 
 [graylog2-server] detected_master 
 [Yellowjacket][6iyatNtCTrKZBpH-_urraQ][inet[/192.168.24.94:9300]], added 
 {[Yellowjacket][6iyatNtCTrKZBpH-_urraQ][inet[/192.168.24.94:9300]],}, 
 reason: zen-disco-receive(from master 
 [[Yellowjacket][6iyatNtCTrKZBpH-_urraQ][inet[/192.168.24.94:9300]]])
 2014-04-17 15:05:20,352 ERROR: org.graylog2.Main - 


 

 ERROR: No ElasticSearch master was found.


 
 /etc/init.d/graylog2-server:
 
 #!/bin/sh
 #
 # graylog2-server:   graylog2 message collector
 #
 # chkconfig: - 96 02
 # description:  This daemon listens for syslog and GELF messages and 
 stores them in mongodb
 #
 CMD=$1
 NOHUP=`which nohup`
 JAVA_HOME=/usr/java/latest
 JAVA_CMD=/usr/bin/java
 GRAYLOG2_SERVER_HOME=/opt/graylog2-server
  
 start() {
 echo Starting graylog2-server ...
 $NOHUP $JAVA_CMD -jar $GRAYLOG2_SERVER_HOME/graylog2-server.jar > /var/log/graylog2.log 2>&1 &
 }
  
 stop() {
 PID=`cat /tmp/graylog2.pid`
 echo "Stopping graylog2-server ($PID) ..."
 kill $PID
 }
  
 restart() {
 echo Restarting graylog2-server ...
 stop
 start
 }
  
 case $CMD in
 start)
 start
 ;;
 stop)
 stop
 ;;
 restart)
 restart
 ;;
 *)
 echo "Usage $0 {start|stop|restart}"
 RETVAL=1
 esac





[graylog2] Re: How to extract number from a message ?

2015-06-12 Thread Arie
You could try this using a regex extractor.

On Friday, June 12, 2015 at 10:23:36 AM UTC+2, Sreenath V wrote:

 EXEC dbo.usp_AddUpdate_Basket 'mall', '21448C12Bsss962D117A842', 
 '8f2279sebe62568_default', '8f2279sebe62568_default', 'null', 1, 
 '1|2|', '0|0|0', '8768851|9007061|9190991', '2|1|1', 
 '799.9900|799.9900|1994.0', '3.4500|6.1000|41.1069', '||', '0|0|0'


 I want to extract '8768851|9007061|9190991' and store it in a field as 
 array of values. Is it possible to do this ?
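
Before wiring this into a Graylog regex extractor, the pattern can be prototyped in a shell. The 6-digit minimum below is an assumption, used only to skip the other quoted pipe-delimited lists in that statement:

```shell
#!/bin/sh
# Sketch: pull the quoted group of 6-or-more-digit ids out of the message
# and split it into one id per line. The length threshold is an assumption
# that only the product-id list contains numbers this long.

extract_ids() {
  grep -oE "'[0-9]{6,}([|][0-9]{6,})+'" | tr -d "'"
}

# Splitting into separate values (Graylog stores the extracted field as one
# string; a true array would need further processing on the Graylog side):
split_ids() {
  extract_ids | tr '|' '\n'
}
```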




[graylog2] Re: migrating to a new graylog master

2015-06-12 Thread Arie
Hi Chuck,

I use the import and export of content packs for this kind of functionality.
I use that for making backups of my configs too.

hth,,
Arie.
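
If the low-level MongoDB copy is attempted again, the database-name mismatch from this thread (graylog2 on the old server vs graylog on the new one) is the thing to bridge. A sketch that only prints the dump/restore commands for review; the host names are placeholders:

```shell
#!/bin/sh
# Sketch: print a mongodump/mongorestore pair that copies the Graylog
# configuration database, mapping the old database name onto the new one.
# Host names are placeholders; review the output before running it.

migrate_cmds() {
  src_host="$1"; dst_host="$2"; src_db="$3"; dst_db="$4"
  echo "mongodump --host $src_host --db $src_db --out /tmp/gl-dump"
  echo "mongorestore --host $dst_host --db $dst_db /tmp/gl-dump/$src_db"
}

# migrate_cmds old-graylog new-graylog graylog2 graylog
```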

On Friday, June 12, 2015 at 22:57:11 UTC+2, Chuck Musser wrote:

 Hi,

 We've tested an all-in-one Graylog server (server, Elasticsearch and web 
 interface) and have decided to create a production-grade setup with the 
 various nodes in clusters. As an experiment, I tried migrating the graylog 
 master server to a new machine and got stuck.

 What I tried was:

 - downloading the OVA image, and running three instances configured to be 
 server, elasticsearch and web, using the graylog-ctl tool
 - exporting the mongodb database from the existing test server and 
 importing it into the new server instance
 - reboot the new graylog server
 - login with one of the admin users I know exist in the test database

 I was expecting all the stuff from the test machine--users, grok patterns, 
 inputs, streams, etc.--would be present in the new master, but they 
 weren't. I was able to log in with admin/admin (the default credentials 
 for these OVA images) and get in to the largely unconfigured default setup. 
 The import had no effect. I belatedly noticed that the test server's 
 mongodb_database was named graylog2, but the new server specified graylog 
 so I edited that in /opt/graylog/conf/graylog.conf and restarted again. 
 That didn't fix things.

 Is there an established procedure for doing this kind of migration?

 Thanks,

 Chuck




[graylog2] Re: Changing extractors and updating existing messages

2015-06-12 Thread Arie
Hi Trey,

One way you can do this is with search-and-replace queries, either through 
Graylog or directly against Elasticsearch.
I have little experience with it, but it can be done, and there is the 
possibility of batch processing.

https://www.elastic.co/guide/en/elasticsearch/reference/current/_introducing_the_query_language.html
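
For example, the messages indexed before an extractor existed can be selected by querying for the missing field. The sketch below only builds the request body; the field name is hypothetical, and whether those documents can then be rewritten in place depends on your Elasticsearch version.

```shell
#!/bin/sh
# Sketch: build an Elasticsearch query body matching documents that lack a
# given field, i.e. messages indexed before the extractor was added.
# The field name passed in is illustrative.

missing_field_query() {
  printf '{"query":{"bool":{"must_not":{"exists":{"field":"%s"}}}}}' "$1"
}

# Usage (not run here), against Graylog's indices:
#   curl -s 'http://localhost:9200/graylog2_*/_search?size=10' \
#        -d "$(missing_field_query http_response_code)"
```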



On Friday, June 12, 2015 at 22:57:11 UTC+2, Trey Dockendorf wrote:

 I am very new to Graylog, and am really enjoying the experience so far. 
  Currently I'm on 1.0.2 since the Puppet module doesn't seem to yet support 
 1.1.x.  In testing the extractors I've found that messages received after 
 an extractor is added do not get the fields created.  Is there a way to go 
 back and have graylog re-index or apply extractor updates?  My goal is to 
 get all the raw data into Graylog now and adapt the extractors and streams 
 as I gain more experience.  However if I can't update existing messages 
 then the old log data becomes harder to use.  Is there an existing way to 
 update old messages?

 Thanks,
 - Trey




Re: [graylog2] Graylog 1.1 Beat 3 startu issue

2015-06-02 Thread Arie
Solved in 1.1 RC3

On Friday, May 29, 2015 at 18:56:32 UTC+2, Bernd Ahlers wrote:

 Arie, 

 thank you for the report! I created an issue in GitHub for this: 
 https://github.com/Graylog2/graylog2-server/issues/1194 

 It will be fixed in 1.1.0-rc.2 or later. 

 Thanks, 
 Bernd 

  On 29 May 2015 at 16:27, Arie satya...@gmail.com wrote: 
  Hi, 
  
  When starting graylog with the following function enabled it fails on 
  bootup. 
  
  collector_expiration_threshold = 14d (or 20d or 30d) 
  
  Fail message: 
  
  Exception in thread "main" java.lang.ClassCastException: com.github.joschi.jadconfig.util.Duration cannot be cast to java.lang.Integer 
  at com.github.joschi.jadconfig.validators.PositiveIntegerValidator.validate(PositiveIntegerValidator.java:11) 
  at com.github.joschi.jadconfig.JadConfig.validateParameter(JadConfig.java:207) 
  at com.github.joschi.jadconfig.JadConfig.processClassFields(JadConfig.java:141) 
  at com.github.joschi.jadconfig.JadConfig.process(JadConfig.java:99) 
  at org.graylog2.bootstrap.CmdLineTool.readConfiguration(CmdLineTool.java:316) 
  at org.graylog2.bootstrap.CmdLineTool.run(CmdLineTool.java:161) 
  at org.graylog2.bootstrap.Main.main(Main.java:58) 
  
  Centos 6.6 and either with java 1.7 or 1.8 
  
  
  The other collector functions are fine. 
  
  Arie. 
  



 -- 
 Developer 

 Tel.: +49 (0)40 609 452 077 
 Fax.: +49 (0)40 609 452 078 

 TORCH GmbH - A Graylog company 
 Steckelhörn 11 
 20457 Hamburg 
 Germany 

 Commercial Reg. (Registergericht): Amtsgericht Hamburg, HRB 125175 
 Geschäftsführer: Lennart Koopmann (CEO) 




[graylog2] Re: how to keep the log message in one field?

2015-06-02 Thread Arie
Mark,

Thank you for mentioning it in case I want to do the same thing.

Event logs on Server 2008 and later appear to be different from earlier 
versions; they need a different config file.


Arie.


On Tuesday, June 2, 2015 at 01:04:12 UTC+2, graylog...@gmail.com wrote:

 Hello

 Thanks for the info, but my case is different (I think!).
 If I'm not wrong, your NXLog configuration fetches live event logs; 

 in my case I have a huge archive (5 TB) of Windows logs that have already 
 been exported as text files, so I'm not accessing the live event logs on a 
 Windows system.


 Best regards
 Mark



 On Sunday, May 31, 2015 at 1:49:06 AM UTC+10, graylog...@gmail.com wrote:

 Hello

 I'm having a problem with graylog and nxlog feed 

 I have a huge archive of windows event logs, I have been trying to import 
 these logs into graylog using nxlog and gelf

 It all works well; NXLog picks up the logs and imports them, but the 
 messages are being split into several records rather than a single one. 


 For example, if the event log contains the following:


 *{1331892664000, 4624, Success, Security, 
 Microsoft-Windows-Security-Auditing, An account was successfully logged 
 on.*

 *Subject:*
 * Security ID: S-1-0-0*
 * Account Name: -*
 * Account Domain: -*
 * Logon ID: 0x0*

 *Logon Type: 3*


 *This event is generated when a logon session is created. It is generated 
 on the computer that was accessed.*

 *Key length indicates the length of the generated session key. This will 
 be 0 if no session key was requested. }  *


 It gets loaded into graylog as:

 Record 1: *{1331892664000, 4624, Success, Security, 
 Microsoft-Windows-Security-Auditing, An account was successfully logged 
 on.*
 Record 2: *Subject*
 Record 3*: **Security ID: S-1-0-0*

 etc.
 etc


 I just would like to have all the message stored in one record

 Do you have any idea how this could be achieved?

 Thanks!
 Mark









Re: [graylog2] Graylog 1.1.0-beta.2 collector issue in webinterface

2015-06-01 Thread Arie
Bernd,

looks like it is solved in 1.1.0-rc.1. Thank you.



On Thursday, May 28, 2015 at 5:54:33 PM UTC+2, Bernd Ahlers wrote:

 Arie, 

 thanks for he report. There is an issue and a pull request to fix the 
 issue on GitHub. 

 https://github.com/Graylog2/graylog2-web-interface/issues/1334 
 https://github.com/Graylog2/graylog2-server/pull/1190 

 This will be fixed in the next beta or rc. 

 Regards, 
 Bernd 

 Arie [Thu, May 28, 2015 at 07:12:30AM -0700] wrote: 
 Hi Bernd, 
  
 Just installed and tried it, the error is still there. 
  
 Tested it with a windows and linux collector, and in both cases, no 
 results. 
  
 Arie. 
  
 On Thursday, May 28, 2015 at 3:58:56 PM UTC+2, Bernd Ahlers wrote: 
  
  Arie, 
  
  thanks for the report. Do you still have that problem with beta.3? 
  
  Bernd 
  
  Arie [Thu, May 28, 2015 at 06:22:49AM -0700] wrote: 
  Hi All, 
   
  When we look @ System  Collectors and select show messages, 
  no messages are show in the UI. 
   
  Messages are visible with a normal search. 
   
   
  Running on centos-6.6 / elastic 1.5.2 / JRE 1.8 
   
  hth,, 
   
  Arie 
   
  
  




[graylog2] Re: how to keep the log message in one field?

2015-06-01 Thread Arie
That is one way to do it; it works up to Server 2003. Server 2008 and later 
are a little different, and this way there is better handling of the logs:

define ROOT C:\Program Files\nxlog
#define ROOT C:\Program Files (x86)\nxlog

Moduledir %ROOT%\modules
CacheDir  %ROOT%\data
Pidfile   %ROOT%\data\nxlog.pid
SpoolDir  %ROOT%\data
LogFile   %ROOT%\data\nxlog.log

<Extension gelf>
    Module xm_gelf
</Extension>

<Input in>
    Module  im_mseventlog
    Sources Application,System
</Input>

<Output out>
    Module     om_udp
    Host       10.64.91.18
    Port       8000
    OutputType GELF
</Output>

<Route 1>
    Path in => out
</Route>

On Monday, June 1, 2015 at 09:04:28 UTC+2, graylog...@gmail.com wrote:

 Hello

 Found the issue: it was the NXLog configuration. I had to tell NXLog that 
 the input was multiline and that the header/end lines were '{' and '}', so 
 I changed nxlog.conf as below:

 <Extension gelf>
     Module xm_gelf
 </Extension>

 <Extension multiline>
     Module     xm_multiline
     HeaderLine /^{/
     EndLine    /^}/
 </Extension>

 <Input in>
     Module    im_file
     File      /media/winlogs/*
     SavePos   TRUE
     Recursive TRUE
     InputType multiline
 </Input>

 <Output out>
     Module     om_udp
     Host       127.0.0.1
     Port       12201
     OutputType GELF
 </Output>

 #<Output out>
 #    Module om_file
 #    File   /tmp/output
 #</Output>




 On Sunday, May 31, 2015 at 1:49:06 AM UTC+10, graylog...@gmail.com wrote:

 Hello

 I'm having a problem with graylog and nxlog feed 

 I have a huge archive of windows event logs, I have been trying to import 
 these logs into graylog using nxlog and gelf

 It all works well; NXLog picks up the logs and imports them, but the 
 messages are being split into several records rather than a single one. 


 For example, if the event log contains the following:


 *{1331892664000, 4624, Success, Security, 
 Microsoft-Windows-Security-Auditing, An account was successfully logged 
 on.*

 *Subject:*
 * Security ID: S-1-0-0*
 * Account Name: -*
 * Account Domain: -*
 * Logon ID: 0x0*

 *Logon Type: 3*


 *This event is generated when a logon session is created. It is generated 
 on the computer that was accessed.*

 *Key length indicates the length of the generated session key. This will 
 be 0 if no session key was requested. }  *


 It gets loaded into graylog as:

 Record 1: *{1331892664000, 4624, Success, Security, 
 Microsoft-Windows-Security-Auditing, An account was successfully logged 
 on.*
 Record 2: *Subject*
 Record 3*: **Security ID: S-1-0-0*

 etc.
 etc


 I just would like to have all the message stored in one record

 Do you have any idea how this could be achieved?

 Thanks!
 Mark









[graylog2] Re: how to keep the log message in one field?

2015-05-31 Thread Arie
Hi Mark,

Not experiencing this behavior here.

What is your nxlog config, and are you using a GELF TCP/UDP input?
Is NXLog the latest version? There was a problem with GELF in an earlier 
version.



On Saturday, May 30, 2015 at 17:49:06 UTC+2, graylog...@gmail.com wrote:

 Hello

 I'm having a problem with graylog and nxlog feed 

 I have a huge archive of windows event logs, I have been trying to import 
 these logs into graylog using nxlog and gelf

 It all works well; NXLog picks up the logs and imports them, but the 
 messages are being split into several records rather than a single one. 


 For example, if the event log contains the following:


 *{1331892664000, 4624, Success, Security, 
 Microsoft-Windows-Security-Auditing, An account was successfully logged 
 on.*

 *Subject:*
 * Security ID: S-1-0-0*
 * Account Name: -*
 * Account Domain: -*
 * Logon ID: 0x0*

 *Logon Type: 3*


 *This event is generated when a logon session is created. It is generated 
 on the computer that was accessed.*

 *Key length indicates the length of the generated session key. This will 
 be 0 if no session key was requested. }  *


 It gets loaded into graylog as:

 Record 1: *{1331892664000, 4624, Success, Security, 
 Microsoft-Windows-Security-Auditing, An account was successfully logged 
 on.*
 Record 2: *Subject*
 Record 3*: **Security ID: S-1-0-0*

 etc.
 etc


 I just would like to have all the message stored in one record

 Do you have any idea how this could be achieved?

 Thanks!
 Mark









Re: [graylog2] Re: grok extractors not working

2015-05-30 Thread Arie
Hi,

Are you using the latest version of NXLog? There was a problem in an older 
version
concerning Graylog/GELF.

Arie.

On Friday, May 29, 2015 at 20:41:52 UTC+2, Jesse Skrivseth wrote:

 I'm not sure why, but suddenly the extractors are working today without 
 any further action on my part. There seems to be a very long delay between 
 when an extractor is configured and when it is in effect, at least in this 
 environment. 

 Another thing to note is that the data on this input is TLS encrypted GELF 
 via TCP, and the data is coming in from NXLog using GELF_TCP.

 On Thursday, May 28, 2015 at 3:25:05 PM UTC-6, Kay Röpke wrote:

 I'm not an expert on the OVAs so I would recommend simply setting up a 
 test instance to check this. Or you can wait until I get to it in the (my) 
 morning ;)


 



Re: [graylog2] Re: collector questions

2015-05-29 Thread Arie
Bernd

Working on it, no promises. All new to me here :-)

Looking at the file, it needs at least an ENDLOCAL at the end, but this is 
minor.

Startup should be manual instead of auto, IMHO.

Maybe the PROCRUN needs a --StopClass %COLLECTOR_CLASS%, not sure here.

Looking at http://commons.apache.org/proper/commons-daemon/procrun.html, the 
current line

SET COLLECTOR_JVM_OPTIONS=-Djava.library.path=%COLLECTOR_ROOT%\lib\sigar;-Dfile.encoding=UTF-8 -Xms12m -Xmx64m

should have a ';' or '#' between all the options:

SET COLLECTOR_JVM_OPTIONS=-Djava.library.path=%COLLECTOR_ROOT%\lib\sigar;-Dfile.encoding=UTF-8;-Xms12m;-Xmx64m

Can you send me the output of your Win7 installer that works?




On Thursday, May 28, 2015 at 4:43:25 PM UTC+2, Bernd Ahlers wrote:

 Arie, 

 can you put an ECHO in front of the %PROCRUN //IS//%SERVICE_NAME% 
 line. That should print the command as it would be executed. Then try to 
 fiddle with it until it works. That would be awesome! 

 Bernd 

 Arie [Wed, May 27, 2015 at 02:14:32PM -0700] wrote: 
 It appears to go wrong at this line: 
  
 %PROCRUN% //IS//%SERVICE_NAME% .. etc. 
  
 No errors before. 
  
  
  
  On Wednesday, May 27, 2015 at 22:25:02 UTC+2, Bernd Ahlers wrote: 
  
  Arie, 
  
  can you please check if this script works for you? 
  
  https://gist.github.com/bernd/d26366422d42154534db 
  
  Thanks! 
  
  Bernd 
  
  Arie [Wed, May 27, 2015 at 07:02:20AM -0700] wrote: 
  Okay it is running and sending data to graylog from windows, 
  Now it is only not installing as a service, having the following 
 error. 
   
  C:\collector\bingraylog-collector-service.bat install GA 
  Installing service for Graylog Collector 
   
  Service name: GA 
  JAVA_HOME:C:\Program Files\Java\jre7\ 
  ARCH: x86 
   
  WARNING: JAVA_HOME points to a JRE and not JDK installation; a client 
  (not 
  a server) JVM will be used... 
  [2015-05-27 16:00:35] [error] [ 2796] Unrecognized cmd option 
  C:\collector\bin\\windows\graylog-collector-service-x86.exe 
  [2015-05-27 16:00:35] [error] [ 2796] Invalid command line arguments 
  [2015-05-27 16:00:35] [error] [ 2796] Commons Daemon procrun failed 
 with 
  exit value: 1 (Failed to parse command line arguments) 
  ERROR: Failed to install service: GA 
   
  C:\collector\bin 
   
   
  On Wednesday, May 27, 2015 at 3:30:20 PM UTC+2, Arie wrote: 
   
   Sorry, I see the typo in my config :-( It is running now 
   
   
   On Wednesday, May 27, 2015 at 2:29:31 PM UTC+2, Arie wrote: 
   
   I am playing around with the collector. 
   
    From a linux machine we are getting data into our test machine, although 
    the data is flat / one-line messages. 
   
   Within windows(2003) we have the following error: 
   
    C:\collector\bin>graylog-collector.bat run -f 
   c:\collector\config\collector.conf 
   2015-05-27T13:20:06.846+0200 INFO  [main] cli.commands.Run - 
 Starting 
   Collector v0.2.1 (commit 93f4b8e) 
   2015-05-27T13:20:08.940+0200 INFO  [main] 
 collector.utils.CollectorId 
  - 
   Collector ID: dd6c1e19-19b5-422e-b06f-14799d5f7b14 
   2015-05-27T13:20:08.987+0200 ERROR [main] cli.commands.Run - 
   Configuration Error: [local-syslog] No configuration setting found 
 for 
  key 
   'type' 
   2015-05-27T13:20:09.002+0200 INFO  [main] cli.commands.Run - Exit 
   2015-05-27T13:20:09.002+0200 INFO  [Thread-1] cli.commands.Run - 
   Stopping... 
   
   in the config on windows (2003): 
   
   inputs { 
 local-syslog { 
 win-application { 
   type = windows-eventlog 
   source-name = Application 
   poll-interval = 1s 
   } 
 } 
   } 
   
   
   
   Is this just where we are now, and are there improvements coming 
 up/ 
   
   
   nice work. 
   
   
   
   
   
   
   
   
   
   
  -- 
  You received this message because you are subscribed to the Google 
 Groups 
  graylog2 group. 
  To unsubscribe from this group and stop receiving emails from it, send 
 an 
  email to graylog2+u...@googlegroups.com. 
  For more options, visit https://groups.google.com/d/optout. 
  
  
  -- 
  Developer 
  
  Tel.: +49 (0)40 609 452 077 
  Fax.: +49 (0)40 609 452 078 
  
  TORCH GmbH - A Graylog company 
  Steckelhörn 11 
  20457 Hamburg 
  Germany 
  
  Commercial Reg. (Registergericht): Amtsgericht Hamburg, HRB 125175 
  Geschäftsführer: Lennart Koopmann (CEO) 
  
  
 -- 
 You received this message because you are subscribed to the Google Groups 
 graylog2 group. 
 To unsubscribe from this group and stop receiving emails from it, send an 
  email to graylog2+u...@googlegroups.com. 
 For more options, visit https://groups.google.com/d/optout. 


 -- 
 Developer 

 Tel.: +49 (0)40 609 452 077 
 Fax.: +49 (0)40 609 452 078 

 TORCH GmbH - A Graylog company 
 Steckelhörn 11 
 20457 Hamburg 
 Germany 

 Commercial Reg. (Registergericht): Amtsgericht Hamburg, HRB 125175 
 Geschäftsführer: Lennart Koopmann (CEO) 


-- 
You received this message because you are subscribed to the Google Groups 
graylog2 group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Re: [graylog2] Re: collector questions

2015-05-29 Thread Arie
Bernd,

Tested on installing, removing and managing the service from the script and 
console,
everything without problems on server 2003. We are running/testing it on 3 
servers
succesfully collecting Application and system logs form windows.

Thank you. Good weekend to you all.

Arie.

Op vrijdag 29 mei 2015 20:00:06 UTC+2 schreef Bernd Ahlers:

 Arie, 

 wow, thank you. Can you verify that this one works now? 


 https://gist.githubusercontent.com/bernd/57fa40c557ffbb303801/raw/01ec841120ef202a6eab159f741755f06c93fa34/graylog-collector-service.bat
  

 Thank you very much! 

 Bernd 

 Arie [Fri, May 29, 2015 at 05:25:49AM -0700] wrote: 
 Bernd, 
  
 Other than the path, I have exactly the same output. But I have nailed the 
 bastard :-) 
  
 It is in this line: 
  
 SET 
 PROCRUN=%COLLECTOR_BIN_DIR%windows\graylog-collector-service-%ARCH%.exe 
  
 should be 
  
 SET 
 PROCRUN=%COLLECTOR_BIN_DIR%windows\graylog-collector-service-%ARCH%.exe 
  
 Up and running here 
  
  
 Arie 
  
 === 
 C:\collector\bin>my-collector-service.bat install GC 
 Installing service for Graylog Collector 
  
 Service name: GC 
 JAVA_HOME:C:\Program Files\Java\jre7\ 
 ARCH: x86 
  
 WARNING: JAVA_HOME points to a JRE and not JDK installation; a client 
 (not 
 a server) JVM will be used... 
  
 C:\collector\bin\\windows\graylog-collector-service-x86.exe //IS//GC 
 --Classpath C:\collector\graylog-collector.jar --Jvm 
 C:\Program Files\Java\jre7\\bin\client\jvm.dll --JvmMs 12m --JvmMx 64m 
 --JvmOptions -Djava.library.path=C:\collector\lib\sigar#- 
 Dfile.encoding=UTF-8#-Xms12m#-Xmx64m --StartPath C:\collector --Startup 
 auto --StartMode jvm --StartClass org.graylog.collector. 
 cli.Main --StartMethod main --StartParams 
 run;-f;C:\collector\config\collector.conf --StopMode jvm --StopClass 
 org.graylog.colle 
 ctor.cli.Main --StopMethod stop --StopTimeout 0 --PidFile GC.pid 
 --DisplayName Graylog Collector (GC) --Description Graylog 
  Collector 0.2.1 service. See http://www.graylog.org/ for details. 
 --LogPath C:\collector\logs --LogPrefix graylog-collector 
  --StdError auto --StdOutput auto 
 ERROR: Failed to install service: GC 
  
 === 
  
 Annoying. Trying to find a system requirement to install this on 2003 TS 
  
 Arie 
  
  
  
 On Friday, May 29, 2015 at 1:10:56 PM UTC+2, Bernd Ahlers wrote: 
  
  Arie, 
  
  the following command works for me on Windows 7. 
  
  ## 
  
 C:\Users\IEUser\Desktop\graylog-collector-0.2.1-20150529103656bin\graylog-collector-service.bat
  

  install GC 
  Installing service for Graylog Collector 
  
  Service name: GC 
  JAVA_HOME:C:\Program Files\Java\jre7\ 
  ARCH: x86 
  
  WARNING: JAVA_HOME points to a JRE and not JDK installation; a client 
 (not 
  a server) JVM will be used... 
  
  
 C:\Users\IEUser\Desktop\graylog-collector-0.2.1-20150529103656\bin\\windows\graylog-collector-service-x86.exe
  

  //IS//GC --Classpath 
  
 C:\Users\IEUser\Desktop\graylog-collector-0.2.1-20150529103656\graylog-collector.jar
  

  --Jvm C:\Program Files\Java\jre7\\bin\client\jvm.dll --JvmMs 12m 
 --JvmMx 
  64m --JvmOptions 
  
 -Djava.library.path=C:\Users\IEUser\Desktop\graylog-collector-0.2.1-20150529103656\lib\sigar#-Dfile.encoding=UTF-8#-Xms12m#-Xmx64m
  

  --StartPath 
  C:\Users\IEUser\Desktop\graylog-collector-0.2.1-20150529103656 
 --Startup 
  auto --StartMode jvm --StartClass org.graylog.collector.cli.Main 
  --StartMethod main --StartParams 
  
 run;-f;C:\Users\IEUser\Desktop\graylog-collector-0.2.1-20150529103656\config\collector.conf
  

  --StopMode jvm --StopClass org.graylog.collector.cli.Main --StopMethod 
 stop 
  --StopTimeout 0 --PidFile GC.pid --DisplayName Graylog Collector 
 (GC) 
  --Description Graylog Collector 0.2.1 service. See 
  http://www.graylog.org/ for details. --LogPath 
  C:\Users\IEUser\Desktop\graylog-collector-0.2.1-20150529103656\logs 
  --LogPrefix graylog-collector --StdError auto --StdOutput auto 
  
  Service 'GC' has been installed 
  ## 
  
  The script replaces the whitespace with # characters already. 
  
  Bernd 
  
  Arie [Fri, May 29, 2015 at 03:20:21AM -0700] wrote: 
  Bernd 
   
  Working on it, no promises. All new to me here :-) 
   
  Looking at the file it needs at least a ENDLOCAL at the end, but this 
 is 
  minor 
   
  Startup should be manual instead of auto ihmo. 
   
  Maybe the PROCRUN needs a --StopClass %COLLECTOR_CLASS% not sure here. 
   
  Looking @ http://commons.apache.org/proper/commons-daemon/procrun.html 
  SET 
  
 COLLECTOR_JVM_OPTIONS=-Djava.library.path=%COLLECTOR_ROOT%\lib\sigar;-Dfile.encoding=UTF-8
  

  
  -Xms12m -Xmx64m 
  there should be ; or # between the options 
  SET 
  
 COLLECTOR_JVM_OPTIONS=-Djava.library.path=%COLLECTOR_ROOT%\lib\sigar;-Dfile.encoding=UTF-8;-Xms12m;-Xmx64m
  

  
   
  Can you hand me the output of your Win7 installer that works?

[graylog2] Graylog 1.1 Beta 3 startup issue

2015-05-29 Thread Arie
Hi,

When starting Graylog with the following option enabled, it fails on 
boot-up.

collector_expiration_threshold = 14d (or 20d or 30d)

Fail message:

Exception in thread "main" java.lang.ClassCastException: 
com.github.joschi.jadconfig.util.Duration cannot be cast to 
java.lang.Integer
at 
com.github.joschi.jadconfig.validators.PositiveIntegerValidator.validate(PositiveIntegerValidator.java:11)
at 
com.github.joschi.jadconfig.JadConfig.validateParameter(JadConfig.java:207)
at 
com.github.joschi.jadconfig.JadConfig.processClassFields(JadConfig.java:141)
at com.github.joschi.jadconfig.JadConfig.process(JadConfig.java:99)
at 
org.graylog2.bootstrap.CmdLineTool.readConfiguration(CmdLineTool.java:316)
at org.graylog2.bootstrap.CmdLineTool.run(CmdLineTool.java:161)
at org.graylog2.bootstrap.Main.main(Main.java:58)

CentOS 6.6, with either Java 1.7 or 1.8.


The other collector functions are fine.

Arie.

-- 
You received this message because you are subscribed to the Google Groups 
graylog2 group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[graylog2] Graylog 1.1.0-beta.2 collector issue in webinterface

2015-05-28 Thread Arie
Hi All,

When we look at System > Collectors and select show messages,
no messages are shown in the UI.

Messages are visible with a normal search.


Running on centos-6.6 / elastic 1.5.2 / JRE 1.8

hth,,

Arie

-- 
You received this message because you are subscribed to the Google Groups 
graylog2 group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[graylog2] Re: collector questions

2015-05-27 Thread Arie
Sorry, I see the typo in my config :-( It is running now


On Wednesday, May 27, 2015 at 2:29:31 PM UTC+2, Arie wrote:

 I am playing around with the collector.

  From a linux machine we are getting data into our test machine, although 
  the data is flat / one-line messages.

 Within windows(2003) we have the following error:

  C:\collector\bin>graylog-collector.bat run -f 
 c:\collector\config\collector.conf
 2015-05-27T13:20:06.846+0200 INFO  [main] cli.commands.Run - Starting 
 Collector v0.2.1 (commit 93f4b8e)
 2015-05-27T13:20:08.940+0200 INFO  [main] collector.utils.CollectorId - 
 Collector ID: dd6c1e19-19b5-422e-b06f-14799d5f7b14
 2015-05-27T13:20:08.987+0200 ERROR [main] cli.commands.Run - Configuration 
 Error: [local-syslog] No configuration setting found for key 'type'
 2015-05-27T13:20:09.002+0200 INFO  [main] cli.commands.Run - Exit
 2015-05-27T13:20:09.002+0200 INFO  [Thread-1] cli.commands.Run - 
 Stopping...

 in the config on windows (2003):

 inputs {
   local-syslog {
   win-application {
 type = windows-eventlog
 source-name = Application
 poll-interval = 1s
 }
   }
 }
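
The "No configuration setting found for key 'type'" error quoted above points at the extra `local-syslog` wrapper: the collector expects each entry directly under `inputs` to carry its own `type` key. A sketch of the presumably intended layout (keeping the names from the message; verify against the collector documentation):

```
inputs {
  win-application {
    type = "windows-eventlog"
    source-name = "Application"
    poll-interval = "1s"
  }
}
```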



 Is this just where we are now, and are there improvements coming up/


 nice work.










-- 
You received this message because you are subscribed to the Google Groups 
graylog2 group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[graylog2] Re: collector questions

2015-05-27 Thread Arie
Okay it is running and sending data to graylog from windows,
Now it is only not installing as a service, having the following error.

C:\collector\bin>graylog-collector-service.bat install GA
Installing service for Graylog Collector

Service name: GA
JAVA_HOME:C:\Program Files\Java\jre7\
ARCH: x86

WARNING: JAVA_HOME points to a JRE and not JDK installation; a client (not 
a server) JVM will be used...
[2015-05-27 16:00:35] [error] [ 2796] Unrecognized cmd option 
C:\collector\bin\\windows\graylog-collector-service-x86.exe
[2015-05-27 16:00:35] [error] [ 2796] Invalid command line arguments
[2015-05-27 16:00:35] [error] [ 2796] Commons Daemon procrun failed with 
exit value: 1 (Failed to parse command line arguments)
ERROR: Failed to install service: GA

C:\collector\bin


On Wednesday, May 27, 2015 at 3:30:20 PM UTC+2, Arie wrote:

 Sorry, I see the typo in my config :-( It is running now


 On Wednesday, May 27, 2015 at 2:29:31 PM UTC+2, Arie wrote:

 I am playing around with the collector.

  From a linux machine we are getting data into our test machine, although 
  the data is flat / one-line messages.

 Within windows(2003) we have the following error:

  C:\collector\bin>graylog-collector.bat run -f 
 c:\collector\config\collector.conf
 2015-05-27T13:20:06.846+0200 INFO  [main] cli.commands.Run - Starting 
 Collector v0.2.1 (commit 93f4b8e)
 2015-05-27T13:20:08.940+0200 INFO  [main] collector.utils.CollectorId - 
 Collector ID: dd6c1e19-19b5-422e-b06f-14799d5f7b14
 2015-05-27T13:20:08.987+0200 ERROR [main] cli.commands.Run - 
 Configuration Error: [local-syslog] No configuration setting found for key 
 'type'
 2015-05-27T13:20:09.002+0200 INFO  [main] cli.commands.Run - Exit
 2015-05-27T13:20:09.002+0200 INFO  [Thread-1] cli.commands.Run - 
 Stopping...

 in the config on windows (2003):

 inputs {
   local-syslog {
   win-application {
 type = windows-eventlog
 source-name = Application
 poll-interval = 1s
 }
   }
 }



 Is this just where we are now, and are there improvements coming up/


 nice work.










-- 
You received this message because you are subscribed to the Google Groups 
graylog2 group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [graylog2] High CPU and did not find meta info issues since adding new Graylog servers and increased input messages/second

2015-05-22 Thread Arie
Same problem here with too many CPUs (not on the Graylog application).

What happens is that code swaps continuously between cores; in our case it 
helps to bind the application to a core, but managing that is a chore. The 
virtual layer loses a lot of resources by constantly shuffling work across 
the cores. With 2 cores the overhead can already be up to 10%!

We are running a lot of real-time applications and the customer wants 
everything in the cloud. In our experience, 'cloud' is the source of most of 
our problems/glitches. We love having some older iron to run Graylog and 
Elasticsearch on.



Op vrijdag 22 mei 2015 06:08:44 UTC+2 schreef Pete GS:

 Ok, here's where I'm at with this...

 I tried implementing the kernel options on one of the Graylog servers as a 
 test but it made no appreciable difference. In fact shortly after the first 
 reboot the VM froze with a locked CPU error. It hasn't done that since a 
 subsequent reboot though. We're not running the PVSCSI adapter either.

 After observing this, I revisited Mathieu's comment regarding too many 
 CPU's.

 While I still see no contention issues for CPU resources, I started 
 wondering if there was some SMP related issue with CentOS where the extra 
 vCPU's just weren't providing enough extra to cater for the workload.

 I scaled all the nodes back to 8 vCPU's and added another four Graylog 
 servers, so I now have 8 servers receiving the inputs.

 So far this is running a lot better than the four servers with 16 and 20 
 vCPU's. They still peak at 100% but this is not sustained, even after 
 having an ElasticSearch issue (filling the disks again) that caused a 
 backlog in the message journal overnight.

 Almost all the message backlog in the journals have been processed again 
 and it's still working well so far, this is after 24 hours or so.

 I'll see how it runs over the weekend.

 Incidentally it seems I have inadvertently stumbled across a good number 
 for the process buffer processors... it seems to work well at 2 less than 
 the number of CPU's available to the server. Running with a buffer number 
 of 6 with 8 vCPU's seems to work well. Of course I'm not sure if this is 
 just in my particular environment or if it's a general thing.

 Cheers, Pete

 On Thursday, 14 May 2015 19:13:24 UTC+10, Pete GS wrote:

 Thanks very much Arie, I will check these tomorrow and report back.

 One thing I can confirm is the heap size is configured correctly.

 Cheers, Pete

  On 14 May 2015, at 05:35, Arie satya...@gmail.com wrote: 

 Lets try some more options.

 I see you are running your stuf virtual. Then you can consider the 
 following for centos6

 In your startup kernel config you can add the following options 
 (/etc/grub.conf)

   nohz=off (for high cpu intensive systems)
   elevator=noop (disc scheduling is done by the virtual layer, so disable 
 that)
   cgroup_disable=memory (possibly not used, it fees up some memory and 
 allocation)
   
 if you use the pvscsi device, add the following:
   vmw_pvscsi.cmd_per_lun=254 
  vmw_pvscsi.ring_pages=32

  Check disk buffers on the virtual layer too. vmware kb 2053145
   see 
 http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKCdocType=kcexternalId=2053145sliceId=1docTypeID=DT_KB_1_1dialogID=621755330stateId=1%200%20593866502

  Optimize your disk for performance (up to 30%!!! yes):

  for the filesystems were graylog and or elastic is located add the 
 following to /etc/fstab

 example:
 /dev/mapper/vg_nagios-lv_root /  ext4 
 defaults,noatime,nobarrier,data=writeback 1 1
 and if you want to be more safe:
 /dev/mapper/vg_nagios-lv_root /  ext4 defaults,noatime,nobarrier 1 1

 is ES_HEAP_SIZE configured @ the correct place (I did that wrong at first)
 it is in /etc/systconfig/elasticsearch


 All these options together can improve system performance huge specially 
 when they are virtial.

 ps did you correctly cha

 ...



-- 
You received this message because you are subscribed to the Google Groups 
graylog2 group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [graylog2] Re: [ANNOUNCE] Graylog v1.1.0-beta.2 is out

2015-05-22 Thread Arie
Thank you Kay, will do this in our test environment.

Op vrijdag 22 mei 2015 23:36:32 UTC+2 schreef Kay Röpke:

 It is a 1.1 feature.

 The collector sends GELF/TCP but registration only works with 1.1.
  On May 22, 2015 11:15 PM, Arie satya...@gmail.com wrote:

 Hi Kay,

 Will the collector work with version 1.01 or is 1.1-beta needed?

 thanks,

 Arie.

 Op donderdag 21 mei 2015 08:58:55 UTC+2 schreef Kay Roepke:

 Hi Ankit,

 The blog post contains links to the log shipper at the very end of the 
 post.
 You can also access it via its Github repository: 
 https://github.com/Graylog2/collector and 
 https://github.com/Graylog2/collector/releases

 cheers,
 Kay

 On Thursday, 21 May 2015 08:55:53 UTC+2, Ankit Mittal wrote:

 Hi Lennart,


 Where i can find the light-weight log collector agent and its setup 
 steps.

 Regards,
 Ankit

  -- 
 You received this message because you are subscribed to the Google Groups 
 graylog2 group.
 To unsubscribe from this group and stop receiving emails from it, send an 
  email to graylog2+u...@googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.



-- 
You received this message because you are subscribed to the Google Groups 
graylog2 group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[graylog2] Re: [ANNOUNCE] Graylog v1.1.0-beta.2 is out

2015-05-22 Thread Arie
Hi Kay,

Will the collector work with version 1.01 or is 1.1-beta needed?

thanks,

Arie.

Op donderdag 21 mei 2015 08:58:55 UTC+2 schreef Kay Roepke:

 Hi Ankit,

 The blog post contains links to the log shipper at the very end of the 
 post.
 You can also access it via its Github repository: 
 https://github.com/Graylog2/collector and 
 https://github.com/Graylog2/collector/releases

 cheers,
 Kay

 On Thursday, 21 May 2015 08:55:53 UTC+2, Ankit Mittal wrote:

 Hi Lennart,


 Where i can find the light-weight log collector agent and its setup 
 steps.

 Regards,
 Ankit



-- 
You received this message because you are subscribed to the Google Groups 
graylog2 group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[graylog2] Re: access denied when running graylog-ctl status

2015-05-16 Thread Arie
Looks like graylog-server is not running; try to start it manually.

Op donderdag 14 mei 2015 19:16:00 UTC+2 schreef Steve Di Bias:

 Greetings,

 Just ran into an issue with graylog after physical host crashed (no 
 graceful shutdown of the virtual appliance)...We are running Graylog OVA 
 appliance 1.0.2 and after bringing the VM backup we are seeing the 
 following error from web int:

 No Graylog servers available. Cannot log in.

 And from CLI I see the following:

 steve@graylog:~$  graylog-ctl status
 warning: elasticsearch: unable to open supervise/ok: access denied
 warning: etcd: unable to open supervise/ok: access denied
 warning: graylog-server: unable to open supervise/ok: access denied
 warning: graylog-web: unable to open supervise/ok: access denied
 warning: mongodb: unable to open supervise/ok: access denied
 warning: nginx: unable to open supervise/ok: access denied

 Any ideas what went wrong and how to resolve?

 Thanks!
 -Steve


-- 
You received this message because you are subscribed to the Google Groups 
graylog2 group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[graylog2] Re: High CPU and did not find meta info issues since adding new Graylog servers and increased input messages/second

2015-05-05 Thread Arie
What happens when you raise outputbuffer_processors = 5 to 
outputbuffer_processors = 10 ?

Op dinsdag 5 mei 2015 02:23:37 UTC+2 schreef Pete GS:

 Yesterday I did a yum update on all Graylog and MongoDB nodes and since 
 doing that and rebooting them all (there was a kernel update) it seems that 
 there are no longer issues connecting to the Mongo database.

 However, I'm still seeing excessively high CPU usage on the Graylog nodes 
 where all vCPU's are regularly exceeding 95%.

 What can contribute to this? I'm a little stumped at present.

 I would say our average messages/second is around 5,000 to 6,000 with 
 peaks up to about 12,000.

 Cheers, Pete

 On Friday, 1 May 2015 08:20:35 UTC+10, Pete GS wrote:

 Does anyone have any thoughts on this?

 Even if someone could identify some scenarios that would cause high CPU 
 on Graylog servers and in what circumstances Graylog would have trouble 
 contacting the MongoDB servers.

 Cheers, Pete

 On Wednesday, 29 April 2015 10:34:28 UTC+10, Pete GS wrote:

 Hi all,

 We acquired a company a while ago and last week we added all of their 
 logs to our Graylog environment which all come in from their Syslog server 
 via UDP.

 After this, I noticed that the Graylog servers were maxing CPU so to 
 alleviate this I increased CPU resources to the existing servers and added 
 two new servers.

 I'm still seeing generally high CPU usage with peaks of 100% on all four 
 of the Graylog servers but I now have issues where they also seem to have 
 issues connecting to MongoDB.

 I see lots of [NodePingThread] Did not find meta info of this node. 
 Re-registering. streaming through the log files but it only seems to 
 happen when I have more than two Graylog servers running.

 I have verified NTP is installed and configured and all servers 
 including the MongoDB and ElasticSearch servers are sync'ing with the same 
 NTP servers.

 We're doing less than 10,000 messages per second so with the resources 
 I've allocated I would have expected no issues whatsoever.

 I have seen this link: 
 https://groups.google.com/forum/?hl=en#!topic/graylog2/bW2glCdBIUI but 
 I don't believe it is our issue.

 If it truly is being caused by doing lots of reverse DNS lookups, I 
 would expect tcpdump to show me that traffic to our DNS servers, but I see 
 almost no DNS lookups at all.

 We have 6 inputs in total but only one receives the bulk of the Syslog 
 UDP messages. Most of the other inputs are GELF UDP inputs.

 We also have 11 streams, however pausing these streams seems to have 
 little to no impact on the CPU usage.

 All the Graylog servers are virtualised on top of vSphere 5.5 Update 2 
 with plenty of physical hardware available to service the workload (little 
 to no contention).

 The original two have 20 vCPU's and 32GB RAM, the additional two have 16 
 vCPU's and 32GB RAM.

 Java heap on all is set to 16GB.

 This is all running on CentOS 6.

 Any input would be greatly appreciated as I'm a bit stumped on how to 
 get this resolved at present.

 Here is the config file I'm using (censored where appropriate):

 is_master = false
 node_id_file = /etc/graylog2/server/node-id
 password_secret = Censored
 root_username = Censored
 root_password_sha2 = Censored
 plugin_dir = /usr/share/graylog2-server/plugin
 rest_listen_uri = http://172.22.20.66:12900/

 elasticsearch_max_docs_per_index = 2000
 elasticsearch_max_number_of_indices = 999
 retention_strategy = close
 elasticsearch_shards = 4
 elasticsearch_replicas = 1
 elasticsearch_index_prefix = graylog2
 allow_leading_wildcard_searches = true
 allow_highlighting = true
 elasticsearch_cluster_name = graylog2
 elasticsearch_node_name = bne3-0002las
 elasticsearch_node_master = false
 elasticsearch_node_data = false
 elasticsearch_discovery_zen_ping_multicast_enabled = false
 elasticsearch_discovery_zen_ping_unicast_hosts = 
 bne3-0001lai.server-web.com:9300,bne3-0002lai.server-web.com:9300,
 bne3-0003lai.server-web.com:9300,bne3-0004lai.server-web.com:9300,
 bne3-0005lai.server-web.com:9300,bne3-0006lai.server-web.com:9300,
 bne3-0007lai.server-web.com:9300,bne3-0008lai.server-web.com:9300,
 bne3-0009lai.server-web.com:9300
 elasticsearch_cluster_discovery_timeout = 5000
 elasticsearch_discovery_initial_state_timeout = 3s
 elasticsearch_analyzer = standard

 output_batch_size = 5000
 output_flush_interval = 1
 processbuffer_processors = 20
 outputbuffer_processors = 5
 #outputbuffer_processor_keep_alive_time = 5000
 #outputbuffer_processor_threads_core_pool_size = 3
 #outputbuffer_processor_threads_max_pool_size = 30
 #udp_recvbuffer_sizes = 1048576
 processor_wait_strategy = blocking
 ring_size = 65536

 inputbuffer_ring_size = 65536
 inputbuffer_processors = 2
 inputbuffer_wait_strategy = blocking

 message_journal_enabled = true
 message_journal_dir = /var/lib/graylog-server/journal
 message_journal_max_age = 24h
 message_journal_max_size = 150gb
 message_journal_flush_age = 1m
 message_journal_flush_interval = 100

[graylog2] Re: graylog-server startup failing on boot

2015-05-02 Thread Arie
It is my problem too.

You could try adding a wait of 10 seconds in /etc/init.d/graylog-server.

Someone else in this group had it too and uses this solution.

Op donderdag 30 april 2015 23:04:00 UTC+2 schreef Mark Moorcroft:


 I have graylog/mongo/elastic installed via repo (RPM) on CentOS6. What I'm 
 seeing is any time I reboot the VM graylog-server fails to start. It seems 
 it tries to start up before elasticsearch has a chance to stabilize, 
 because if I service graylog-server restart later it will work. The problem 
 is this is a protected VM that I don't have root on, so I have to get the 
 system owner to restart the service for me. I'm not sure if elasticsearch 
 is taking too long to start, or if graylog-server needs a test so it waits 
 for the elasticsearch service to be running. As it is there seems to be no 
 wait loop, so graylog-server just dies, but it appears to leave the lock 
 file behind. None of this is good.


-- 
You received this message because you are subscribed to the Google Groups 
graylog2 group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[graylog2] Re: graylog-server startup failing on boot

2015-05-02 Thread Arie
To be more correct:
in /etc/init.d/graylog-server add:

/bin/sleep 20

and the graylog-server service starts perfectly.
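
A fixed sleep works, but polling Elasticsearch for readiness is more robust. A minimal sketch, assuming Elasticsearch answers HTTP on localhost:9200 and curl is installed; the function name and defaults are illustrative, not part of the stock init script:

```shell
# Sketch of a readiness wait for /etc/init.d/graylog-server. Instead of a
# fixed sleep, poll Elasticsearch until it answers (or give up).
wait_for_elasticsearch() {
  url="${1:-http://localhost:9200/_cluster/health}"
  retries="${2:-30}"
  delay="${3:-2}"
  while [ "$retries" -gt 0 ]; do
    # -s: quiet, -f: treat HTTP errors as failure
    if curl -sf -o /dev/null "$url"; then
      return 0   # Elasticsearch is up; safe to start graylog-server
    fi
    retries=$((retries - 1))
    sleep "$delay"
  done
  return 1       # still unreachable; let the init script fail visibly
}
```

Calling `wait_for_elasticsearch || exit 1` before the daemon launch replaces the blind `/bin/sleep 20` with an explicit check.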

Op donderdag 30 april 2015 23:04:00 UTC+2 schreef Mark Moorcroft:


 I have graylog/mongo/elastic installed via repo (RPM) on CentOS6. What I'm 
 seeing is any time I reboot the VM graylog-server fails to start. It seems 
 it tries to start up before elasticsearch has a chance to stabilize, 
 because if I service graylog-server restart later it will work. The problem 
 is this is a protected VM that I don't have root on, so I have to get the 
 system owner to restart the service for me. I'm not sure if elasticsearch 
 is taking too long to start, or if graylog-server needs a test so it waits 
 for the elasticsearch service to be running. As it is there seems to be no 
 wait loop, so graylog-server just dies, but it appears to leave the lock 
 file behind. None of this is good.


-- 
You received this message because you are subscribed to the Google Groups 
graylog2 group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[graylog2] Re: Open file limit irritations

2015-04-29 Thread Arie
Hi,

When I installed this in a secure environment, I disabled SELinux completely.

you should also edit: /etc/security/limits.d/90-nproc.conf and add:

*  soft nproc   65535
*  hard nproc   65535
*  soft nofile  65535
*  hard nofile  65535
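
A login-shell `ulimit -a` can differ from what the daemon actually gets, so it is worth reading the limits of a real process from /proc. A Linux-only sketch; the pgrep line for the Elasticsearch daemon is a hypothetical example:

```shell
# Read the effective soft "open files" limit of a process from /proc,
# rather than trusting ulimit in an interactive shell. Linux-only sketch.
check_nofile() {
  pid="${1:-self}"   # "self" = the calling process
  awk '/Max open files/ {print $4}' "/proc/$pid/limits"
}
check_nofile
# check_nofile "$(pgrep -f elasticsearch | head -n1)"  # hypothetical: the ES daemon
```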

For speed on ES, you can optimize your filesystem(s):

   /dev/mapper/vg_nagios-lv_root /  ext4 
defaults,noatime,nobarrier,data=writeback,journal_ioprio=4 1 1
Edit in /etc/grub.conf and add

   cgroup_disable=memory to the kernel-boot line, this frees up memory.

Hope this all helps to improve your system, and helps on your problem

A.

On Tuesday, April 28, 2015 at 10:14:16 PM UTC+2, Alex Stuart wrote:

 Started to build the Graylog server with Elasticsearch on a RHEL 6.6 vm 
 machine with 2 cores and 4GB of RAM.

  Installation went fine and after a period I could access the web 
  interface. Right when I log in I get a flickering number 3 above the 
  panel. One of those errors is:

 Elasticsearch nodes with too low open file limit  
 There are Elasticsearch nodes in the cluster that have a too low open file 
 limit (current limit: *4096* on *jean.domain.com http://jean.domain.com* 
 should be at least 64000) This will be causing problems that can be hard to 
 diagnose. Read how to raise the maximum number of open files in the 
 Elasticsearch setup documentation. 
 http://www.graylog2.org/resources/documentation/setup/elasticsearch

  So of course I started to read how I could fix this error. So I added

 fs.file-max = 65536

 to /etc/sysctl.conf, but also

 *   softnproc   65535
 *   hardnproc   65535
 *   softnofile  65535
 *   hardnofile  65535

  to /etc/security/limits.conf and rebooted the system. Once it had been 
  booted I checked that everything was set with ulimit -a:

 core file size  (blocks, -c) 0
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 30490
 max locked memory   (kbytes, -l) 64
 max memory size (kbytes, -m) unlimited
 open files  (-n) 65535
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 65535
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited

  And it looks like it has been set. I can reach the web panel, but after 
  login, the message is still there and although I can click it away, it 
  comes straight back again. So I tried a few other things, like adding 
  ULIMIT_N=65535 and a bunch of others, but it still gives me the error.
  What am I doing wrong? Does anyone know how I can remove the error? I want 
  to show it to my boss, but it looks silly with that error on it :)
  


-- 
You received this message because you are subscribed to the Google Groups 
graylog2 group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[graylog2] Re: Oracle java updates?

2015-04-24 Thread Arie
Just for your info,

Elastic always advises using the latest version, to get full and 
error-free usage of their product.


On Thursday, April 23, 2015 at 11:43:18 PM UTC+2, Mark Moorcroft wrote:

 The elasticsearch wisdom seems to be to use the Oracle JRE. But has anyone 
 figured out how to keep the Oracle JRE updated on a standalone elastic 
 server that never runs a browser. I can't seem to find any documentation 
 about this. And I can't find any reference to a java command that checks 
 for pending updates on the command line. I don't see any sign that the 
 linux JRE has a control panel, and according to the documentation I found 
 Windows is the only platform the supports auto-update. Obviously if you use 
 the CentOS yum installed java then yum update handles the updates.




[graylog2] Re: how to recover from elasticsearch running out of space

2015-04-20 Thread Arie
Hi,

Are these retention strategies applied only to newly created indices?

As far as I understand it is only for new indices.

You could install the ElasticHQ plugin, and delete one or more indices with 
it safely.
Do not forget to recalculate the index ranges from the Graylog interface.

hth,

Arie
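The advice above, sketched as commands. This is a dry run: the curl invocations are echoed instead of executed (drop the leading echo to run them), and the address and index name are assumptions for a default single-node setup:

```shell
ES=http://localhost:9200   # assumed Elasticsearch address
IDX=graylog2_0             # hypothetical oldest index; pick one from the listing
echo curl -s "$ES/_cat/indices?v"     # inspect indices and their disk usage first
echo curl -s -XDELETE "$ES/$IDX"      # then delete the chosen index
```

After deleting, recalculate the index ranges from the Graylog web interface so searches stay consistent.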

Op maandag 20 april 2015 18:34:22 UTC+2 schreef Mustafa Khafateh:

 Hello,

 It looks like I allocated more space than available on disk. The 
 elasticsearch status is red, and graylog-server is not starting.

 This graylog instance is installed from the 1.0.1 OVA. The total root 
 partition size: ~17GB. The retention strategy was size, 1GB per index and 
 10 indices. (I'm guessing Ubuntu also takes up ~5GB.)

 I did sudo /usr/bin/graylog-ctl restart after both changing the 
 retention strategy decreasing the number/size of indices. graylog-server 
 and elasticsearch aren't starting up and giving the same error messages 
 below.

 Are these retention strategies applied only to newly created indices?

 Is there a way to delete old indices and start elasticsearch again?


 in /var/log/graylog/server/current:
 2015-04-16_21:23:24.37414 2015-04-16 17:23:24,374 WARN : 
 org.graylog2.indexer.esplugin.ClusterStateMonitor - No Elasticsearch data 
 nodes in cluster, cluster is completely offline.

 2015-04-16_21:23:27.58418 2015-04-16 17:23:27,584 WARN : 
 org.graylog2.initializers.BufferSynchronizerService - Elasticsearch is 
 unavailable. Not waiting to clear buffers and caches, as we have no healthy 
 cluster.

 2015-04-16_21:23:27.54738 2015-04-16 17:23:27,547 ERROR: org.graylog2.UI - 
 2015-04-16_21:23:27.54739 
 2015-04-16_21:23:27.54740 
 
 2015-04-16_21:23:27.54740 
 2015-04-16_21:23:27.54740 ERROR: The Elasticsearch cluster state is RED 
 which means shards are unassigned. This usually indicates a crashed and 
 corrupt cluster and needs to be investigated. Graylog will shut down.
 2015-04-16_21:23:27.54741 


 in /var/log/graylog/elasticsearch/current:
 2015-04-16_21:10:04.22641 [2015-04-16 17:10:04,226][WARN 
 ][cluster.routing.allocation.decider] [Berzerker] high disk watermark [10%] 
 exceeded on [cz0c8J6PRdG4Kuwnbz6rMQ][Berzerker] free: 718.3mb[4.7%], shards 
 will be relocated away from this node
 2015-04-16_21:10:04.22659 [2015-04-16 17:10:04,226][INFO 
 ][cluster.routing.allocation.decider] [Berzerker] high disk watermark 
 exceeded on one or more nodes, rerouting shards

 Thanks,
 Mustafa




[graylog2] Re: graylog-server doesn't start automatically

2015-04-16 Thread Arie
Going to try your solution here; it occurs for me too on a CentOS installation.

Reboot: graylog-web up, and no graylog-server running.
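The sleep-20 workaround from this thread suggests graylog-server races its dependencies (Elasticsearch, MongoDB) at boot. A retry loop is a more robust sketch; the readiness command passed in is a stand-in (in practice something like `curl -fs http://localhost:9200/`):

```shell
# Retry a readiness check up to 5 times, one second apart,
# instead of a fixed sleep before starting graylog-server.
wait_for() {
  for try in 1 2 3 4 5; do
    "$@" && return 0
    sleep 1
  done
  return 1
}

wait_for true && echo "dependency up"   # 'true' stands in for the real check
```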



Op donderdag 16 april 2015 16:49:28 UTC+2 schreef roberto...@gmail.com:

 In the /etc/init.d/graylog-server file I add the line:

 /bin/sleep 20

 and the graylog-server service starts perfectly.

 Maybe graylog-server has to wait more time for any condition I don't 
 know???

 Regards,

 Roberto

 El jueves, 16 de abril de 2015, 10:46:06 (UTC-3), roberto...@gmail.com 
 escribió:

 Dear, I've installed Graylog 1.0.1. Elasticsearch and graylog-web start 
 automatically but graylog-server doesn't.

 I edit /etc/rc.local with:

 /etc/init.d/graylog-server start 

 but after reboot the graylog-server is stopped.

 The only way to start the service is executing manually from terminal:

 # service graylog-server start

 How can I do in order to start graylog-server automatically on boot???

 Thanks a lot,

 Roberto





[graylog2] Re: 1.0.1 Spontaneous restart followed by memory shortage

2015-04-13 Thread Arie
4 GB isn't a lot if you have it all on one machine.

 - You could start by giving ES at most 1 GB of memory (ES_HEAP_SIZE=1g); on 
CentOS this is set in /etc/sysconfig/elasticsearch.
 - Second is to lower the field data cache in elasticsearch.yml with:

   indices.fielddata.cache.size: 40% (or even lower)

 - Check if there is some swapping; this stuff can't stand it.
 - Is there a lot of data stored and kept over time? ES tries to keep as 
much of the field data in memory as possible. If you do not need a lot of 
history, consider keeping data for a shorter period, and configure Graylog 
accordingly.

Install a plugin to check how your instance of ES is running:

   /usr/share/elasticsearch/bin/plugin -i royrusso/elasticsearch-HQ

and check it at:

  http://hostaddress:9200/_plugin/HQ/


Good luck ;-)
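The settings above as a config sketch (paths are CentOS defaults; the values are starting points to tune, and bootstrap.mlockall is an addition that addresses the swapping point):

```
# /etc/sysconfig/elasticsearch (CentOS):
ES_HEAP_SIZE=1g

# elasticsearch.yml:
indices.fielddata.cache.size: 40%
bootstrap.mlockall: true        # keep the ES heap out of swap
```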


Op maandag 13 april 2015 08:13:59 UTC+2 schreef adrian...@solaforce.com:


 OK, no takers?  I'm the only one this has happened to, or I'm the only one 
 running an all-in-one-node config from one of the AWS images on a medium 
 machine?

 Could anyone at least recommend a good way to configure the memory usage 
 so it can reliably fit in 4GB memory?

 Thanks in advance.




[graylog2] Re: email alert receiver issue

2015-04-13 Thread Arie
If you configure your email alert, and you test it (there is a button for 
it),
does the test arrive in your mailbox?

Op maandag 13 april 2015 17:55:09 UTC+2 schreef Илья И.:

 What is the difference between alert callbacks and alert receivers?
 There are two type of actions to be triggered when an alert is fired: 
 Alert callbacks or an email to a list of alert
 receivers.
 Alert callbacks are single actions that are just called once. For 
 example: The
 Email Alert Callback
 is triggering an
 email to exactly one receiver and the
 HTTP Alert Callback
 is calling a HTTP endpoint once.
 The alert receivers in difference will all receive an email about the 
 same alert.


 Actually, my problem is that email receiving does not work. Messages are 
 routed into the stream. A test email sent by clicking the button comes in 
 fine, but no email arrives when the stream gets a message. Do I understand 
 correctly from the quote above that I need to configure either an Email 
 Alert Callback OR an alert receiver? Under the stream's manage alerts I 
 only entered an email address in the Alert receivers block. Am I wrong? 
 Why do the email notifications not come?




Re: [graylog2] Maximum Widget columns in Dashboard

2015-03-24 Thread Arie
This is my experience.

Watch out for scaling your browser (Ctrl + or -); Graylog isn't
'fooled' by this, so it can appear there is space for a widget
while there is not.

Op dinsdag 24 maart 2015 19:14:47 UTC+1 schreef Phil Cohen:

 Somehow, I fixed it for myself... I deleted one of the widgets from the page 
 and then was able to move to the top right for a 3rd column.  Prior to this 
 the same exact drag attempt was failing.  

 On Tuesday, March 24, 2015 at 2:11:50 PM UTC-4, Phil Cohen wrote:

 Unfortunately, I am not observing the same behaviour - when I attempt to 
 drag anything to the upper right I am locked to just two total columns. 
  Please see attached screenshot, where I am attempting to drag the text 
 widget up to the 'unlock' icon (as far to the top right as possible) with 
 no possibility for more than two columns offered

 On Friday, March 20, 2015 at 6:48:26 AM UTC-4, Edmundo Alvarez wrote:

 Hi Phil, 

 There is no limitation on the number of columns, only the screen 
 resolution and the widget size. There was an issue reported for that a few 
 days ago, maybe it can help you: 
 https://github.com/Graylog2/graylog2-web-interface/issues/1139 

 Regards, 

 Edmundo 

  On 19 Mar 2015, at 21:12, Phil Cohen pcm...@gmail.com wrote: 
  
  Hello, I am using Graylog 1.0.0 and have run into an odd limitation 
  
  I cannot seem to make my dashboard more than two 'columns' wide - is 
 this a known limitation in this version?  screenshots from the 
 documentation make it seem like it supports three.  I have attached a 
 screenshot that hopefully illustrates what I mean - I'd like to be using 
 the grey space to the right.  If I attempt to drag a widget in that 
 direction, I do not get any representation of that as usable space. 
  
  Thanks 
  
  
  
  -- 
  You received this message because you are subscribed to the Google 
 Groups graylog2 group. 
  To unsubscribe from this group and stop receiving emails from it, send 
 an email to graylog2+u...@googlegroups.com. 
  For more options, visit https://groups.google.com/d/optout. 
  Screen Shot 2015-03-19 at 4.10.35 PM.png 





[graylog2] Re: Stream or Search for Excessive Windows Events from the Same Source

2015-03-18 Thread Arie
Are you sending them with GELF? All to the same input?

If you do, then you could possibly configure a stream alert on that input,
making a trigger on your event, and in the alert condition you can configure
the number of alerts in a time-based manner.

On Monday, March 16, 2015 at 11:38:38 PM UTC+1, Pete GS wrote:

 NXLog is how we send them also and we get source/system names, the problem 
 is alerting or searching based on the number of events from the same source 
 without having to specify a particular source.

 I haven't looked at Kibana at present so maybe that's also worth a shot.

 Cheers, Pete

 On Tuesday, 17 March 2015 03:31:32 UTC+10, Arie wrote:

 We send windows events with nxlog (type: gelf), and the system names are 
 automatically included.

  We look at ES with kibana and have created a view to see what is going on.

 Op maandag 16 maart 2015 05:48:12 UTC+1 schreef Pete GS:

 Hi all,

 We've been continuing to discuss various other use cases for Graylog 
 here and there is one scenario that I can't figure out a solution for.

 Essentially, if an unknown Windows issue occurs, it will generally 
 result in the Windows Event Logs being spammed with hundreds or thousands 
 of events within a very short time frame (usually seconds).

 Flagging a lot of Windows events in a short time frame is pretty simple, 
 but what is not simple is that this count of events needs to be on a unique 
 source.

 As we are currently sending hundreds of Windows server Event Logs to 
 Graylog, we can't set a stream up for each individual server.

 Is there any way anyone can think of solving this? We're currently 
 running 0.92.3 but I will soon be looking to upgrade to 1.0.

 The only way I can think to do this right now is to perform some 
 scheduled scripted searches via the REST API.

 Any help or thoughts would be greatly appreciated.

 Cheers, Pete





[graylog2] Inputs gone after updating to 1.0.1 from the latest 0.9x

2015-03-18 Thread Arie
Hi all,

some help needed. After updating to 1.0.1 all my inputs (2) and extractors 
are gone.
Before the update I created a content pack; can anyone help me rewrite it 
to get my inputs back?

Below here is the pack.

{
  "id" : null,
  "name" : "Nagios bundle",
  "description" : "Backup",
  "category" : "Monitoring",
  "inputs" : [ {
    "title" : "nagiosserver",
    "configuration" : {
      "port" : 8100,
      "allow_override_date" : true,
      "bind_address" : "0.0.0.0",
      "recv_buffer_size" : 1048576
    },
    "type" : "org.graylog2.inputs.syslog.tcp.SyslogTCPInput",
    "global" : false,
    "extractors" : [ {
      "title" : "extracthostname",
      "type" : "REGEX",
      "configuration" : {
        "regex_value" : "([a-zA-Z0-9\\-.]+)([a-z\\.]?)*;"
      },
      "converters" : [ ],
      "order" : 0,
      "cursor_strategy" : "COPY",
      "target_field" : "hostname",
      "source_field" : "message",
      "condition_type" : "NONE",
      "condition_value" : ""
    }, {
      "title" : "service_message",
      "type" : "SPLIT_AND_INDEX",
      "configuration" : {
        "index" : 2,
        "split_by" : ";"
      },
      "converters" : [ ],
      "order" : 0,
      "cursor_strategy" : "COPY",
      "target_field" : "service_message",
      "source_field" : "message",
      "condition_type" : "NONE",
      "condition_value" : ""
    }, {
      "title" : "alert_status",
      "type" : "SPLIT_AND_INDEX",
      "configuration" : {
        "index" : 3,
        "split_by" : ";"
      },
      "converters" : [ ],
      "order" : 0,
      "cursor_strategy" : "COPY",
      "target_field" : "alert_status",
      "source_field" : "message",
      "condition_type" : "NONE",
      "condition_value" : ""
    }, {
      "title" : "error_message",
      "type" : "SPLIT_AND_INDEX",
      "configuration" : {
        "index" : 6,
        "split_by" : ";"
      },
      "converters" : [ ],
      "order" : 0,
      "cursor_strategy" : "COPY",
      "target_field" : "error_message",
      "source_field" : "message",
      "condition_type" : "NONE",
      "condition_value" : ""
    }, {
      "title" : "CBS_partner",
      "type" : "REGEX",
      "configuration" : {
        "regex_value" : "\\s([1-9][0-9]0_[A-Z][A-Z]*.\\b)"
      },
      "converters" : [ ],
      "order" : 0,
      "cursor_strategy" : "COPY",
      "target_field" : "CBS_Partner",
      "source_field" : "message",
      "condition_type" : "NONE",
      "condition_value" : ""
    }, {
      "title" : "RIS_Partner",
      "type" : "REGEX",
      "configuration" : {
        "regex_value" : "\\s(SRK_[A-Z][A-Z]*.\\b)"
      },
      "converters" : [ ],
      "order" : 0,
      "cursor_strategy" : "COPY",
      "target_field" : "RIS_Partner",
      "source_field" : "message",
      "condition_type" : "NONE",
      "condition_value" : ""
    } ],
    "static_fields" : { }
  }, {
    "title" : "ohdnetwerk",
    "configuration" : {
      "port" : 8000,
      "bind_address" : "0.0.0.0",
      "recv_buffer_size" : 1048576
    },
    "type" : "org.graylog2.inputs.gelf.udp.GELFUDPInput",
    "global" : false,
    "extractors" : [ ],
    "static_fields" : { }
  } ],
  "streams" : [ {
    "id" : "54ae9b0724ac1c3ac18cf641",
    "title" : "Java Service Error",
    "description" : "Java Service Error Meldingen",
    "disabled" : false,
    "outputs" : [ ],
    "stream_rules" : [ {
      "type" : "EXACT",
      "field" : "service_message",
      "value" : "proc_JAVA",
      "inverted" : false
    } ]
  }, {
    "id" : "54b7b9cd24acf433218a83d7",
    "title" : "CBS Parter 900_SRK heeft status Fail",
    "description" : "Partner 900_SRK probleem op het CBS systeem",
    "disabled" : false,
    "outputs" : [ ],
    "stream_rules" : [ {
      "type" : "REGEX",
      "field" : "message",
      "value" : "(?=.*cbs-prod).*ALERT.*False.*900_SRK",
      "inverted" : false
    } ]
  }, {
    "id" : "54b7b3f524acf433218a7d80",
    "title" : "RIS Parter SRK_CBS heeft status Fail",
    "description" : "Parter SRK_CBS probleem op het RIS systeem",
    "disabled" : false,
    "outputs" : [ ],
    "stream_rules" : [ {
      "type" : "REGEX",
      "field" : "message",
      "value" : "(?=.*ris-prod).*ALERT.*False.*SRK_CBS",
      "inverted" : false
    } ]
  }, {
    "id" : "549b001f24ac266f4e59c913",
    "title" : "Hosts Down",
    "description" : "Hosts die down gemeld worden in Nagios",
    "disabled" : false,
    "outputs" : [ ],
    "stream_rules" : [ {
      "type" : "EXACT",
      "field" : "service_message",
      "value" : "DOWN",
      "inverted" : false
    } ]
  }, {
    "id" : "5464ab0124acdd8389e0f0f3",
    "title" : "Hosts Unreachable",
    "description" : "Message about Unreachable hosts",
    "disabled" : false,
    "outputs" : [ ],
    "stream_rules" : [ {
      "type" : "EXACT",
      "field" : "service_message",
      "value" : "UNREACHABLE",
      "inverted" : false
    } ]
  } ],
  "outputs" : [ ],
  "dashboards" : [ {
    "title" : "nagios",
    "description" : "nagios",
    "dashboard_widgets" : [ {
      "description" : "Logged Events Total 8h",
      "type" : "SEARCH_RESULT_COUNT",
      "configuration" : {
        "query" : "",
        "timerange" : {
          "range" : 28800,
          "type" : "relative"
        }
      },
      "col" : 1,
      "row" : 1,
      "cache_time" : 60
    }, {
      "description" : "Server/Host events last 8h",
      "type" : "SEARCH_RESULT_COUNT",
      "configuration" : {
        "query" : "(\"SERVICE ALERT\" OR 

[graylog2] Re: Stream or Search for Excessive Windows Events from the Same Source

2015-03-16 Thread Arie
We send windows events with nxlog (type: gelf), and the system names are 
automatically included.

We look at ES with kibana and have created a view to see what is going on.
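For reference, a minimal NXLog output sketch for the GELF setup described above (module and directive names assume NXLog CE with the xm_gelf extension; the host, port, and the "eventlog" input name are placeholders):

```
<Extension gelf>
    Module  xm_gelf
</Extension>

<Output graylog>
    Module      om_udp
    Host        graylog.example.com
    Port        12201
    OutputType  GELF
</Output>

<Route eventlog_to_graylog>
    Path  eventlog => graylog
</Route>
```

Other threads here report problems with Windows event logs over UDP that disappeared after switching to TCP (om_tcp); newer NXLog versions provide OutputType GELF_TCP for that.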

Op maandag 16 maart 2015 05:48:12 UTC+1 schreef Pete GS:

 Hi all,

 We've been continuing to discuss various other use cases for Graylog here 
 and there is one scenario that I can't figure out a solution for.

 Essentially, if an unknown Windows issue occurs, it will generally result 
 in the Windows Event Logs being spammed with hundreds or thousands of 
 events within a very short time frame (usually seconds).

 Flagging a lot of Windows events in a short time frame is pretty simple, 
 but what is not simple is that this count of events needs to be on a unique 
 source.

 As we are currently sending hundreds of Windows server Event Logs to 
 Graylog, we can't set a stream up for each individual server.

 Is there any way anyone can think of solving this? We're currently running 
 0.92.3 but I will soon be looking to upgrade to 1.0.

 The only way I can think to do this right now is to perform some scheduled 
 scripted searches via the REST API.

 Any help or thoughts would be greatly appreciated.

 Cheers, Pete




Re: [graylog2] Re: Stream matcher code

2015-02-23 Thread Arie
 Maciej, thank you.

Now I see. I was searching for these solutions too, but now we store the 
incoming messages fully and do the regex on all the data in the complete 
messages.

It is quite a hassle to get it done, but it works here.

A.
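A stand-in illustrating the approach (a single regex against the full message instead of per-field rules, which are AND-ed together); grep plays the role of the stream rule's regex engine here:

```shell
# One regex over the whole message catches "cvp" wherever it occurs,
# whether it originally came from the url field or the message field.
msg='level=INFO url=/cvp/login message=ok'
echo "$msg" | grep -Eq 'cvp' && echo 'routes to stream'
```

This prints "routes to stream" for any message containing the string, regardless of which field carried it.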

On Monday, February 23, 2015 at 9:07:47 AM UTC+1, Maciej Strömich wrote:

 Arie,

 I need to place several field entries in Field and not in Value. Something 
 like:


 https://lh5.googleusercontent.com/-euR-_NE7cD4/VOrfZM9aJOI/AxA/Vp5LzcP0X2M/s1600/Screen%2BShot%2B2015-02-23%2Bat%2B09.05.26.png


  As I said we have several places where this string can be found 
 (depending if it's a trace or standard log) and want to catch all of them. 
 But as far as I see the fastest way is to change logging in our app.

 On Sunday, February 22, 2015 at 1:37:15 PM UTC+1, Arie wrote:

  Am I seeing this wrong, and is this not what he is asking for?

 https://lh3.googleusercontent.com/-NXlbaRKtFns/VOnNPLIxc2I/AAM/UjRfElbg814/s1600/stream-field.png

 A.


 Op vrijdag 20 februari 2015 16:30:37 UTC+1 schreef Bernd Ahlers:

 Maciek, 

 a regex match for a field value is not possible at the moment, sorry. 

 Bernd 

 On 20 February 2015 at 16:13, Arie satya...@gmail.com wrote: 
   At least in 0.9.3 you can 
  
  Create a stream rule, and the first possibility on top is the field, 
  after that select regex 
  and in that you put (cvp) 
  
  
  
  
  On Friday, February 20, 2015 at 3:51:51 PM UTC+1, Maciej Strömich 
 wrote: 
  
  Hi, 
  
  
  I'm trying to create a stream which will catch log entries 
 containging 
  cvp string. It's not a problem if there's only one field which 
 needs to be 
  checked. I've several places where this value can be found and I'm 
 wondering 
  is it possible to use a regex inside a Field input. AFAIK seperate 
 stream 
  rules will use AND and not OR to match messages. 
  
  I've tried so far: 
  
  (?:url|message) 
  [url|message] 
  url||message 
  
  Is this even possible? 
  
  br, 
  Maciek 
  
  -- 
  You received this message because you are subscribed to the Google 
 Groups 
  graylog2 group. 
  To unsubscribe from this group and stop receiving emails from it, send 
 an 
  email to graylog2+u...@googlegroups.com. 
  For more options, visit https://groups.google.com/d/optout. 



 -- 
 Developer 

 Tel.: +49 (0)40 609 452 077 
 Fax.: +49 (0)40 609 452 078 

 TORCH GmbH - A Graylog company 
 Steckelhörn 11 
 20457 Hamburg 
 Germany 

 Commercial Reg. (Registergericht): Amtsgericht Hamburg, HRB 125175 
 Geschäftsführer: Lennart Koopmann (CEO) 





Re: [graylog2] Re: [ANN] Graylog v1.0 has been released

2015-02-20 Thread Arie
Bernd,

The files were deleted in the yum update process.
The system marked the old packages as obsolete, and removed the config 
files.
The new files are the default ones.

I have already recovered from the problem, but I am opening a new issue.

thanks.

On Friday, February 20, 2015 at 12:52:03 PM UTC+1, Bernd Ahlers wrote:

 Arie, 

 you mean it actually deleted the old files (/etc/graylog2.con and files 
 in /etc/graylog2/server) even though you modified them? 

 Bernd 

 Arie [Thu, Feb 19, 2015 at 11:39:48PM -0800] wrote: 
Congrats, happy too, 
  
 but updating my RPMs threw my old Graylog configs away. 
 On CentOS the old versions are considered obsolete. 
  
 Arie 
  
  
  
 On Thursday, February 19, 2015 at 8:38:23 PM UTC+1, lennart wrote: 
  
  We are very happy to announce that we released Graylog v1.0 today: 
  
https://www.graylog.org/announcing-graylog-v1-0-ga/ 
  
  We'd like you all for the immense support we got over the last 5 1/2 
  years and look forward to build on top of this foundation now. 
  
  Cheers, 
  Lennart (In behalf of the whole Graylog, Inc team) 
  
  
 -- 
 You received this message because you are subscribed to the Google Groups 
 graylog2 group. 
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to graylog2+u...@googlegroups.com javascript:. 
 For more options, visit https://groups.google.com/d/optout. 


 -- 
 Developer 

 Tel.: +49 (0)40 609 452 077 
 Fax.: +49 (0)40 609 452 078 

 TORCH GmbH - A Graylog company 
 Steckelhörn 11 
 20457 Hamburg 
 Germany 

 Commercial Reg. (Registergericht): Amtsgericht Hamburg, HRB 125175 
 Geschäftsführer: Lennart Koopmann (CEO) 




[graylog2] Re: Graylog 1.0 startup error

2015-02-20 Thread Arie
Solved the problem partially by creating

/etc/graylog2/server/ and copying the node-id into there.

Now only the Kafka exception remains.
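Before removing a stale journal .lock, a sketch to confirm no live process still holds the directory (the path is taken from the error message; fuser availability is an assumption):

```shell
J=/var/lib/graylog-server/journal
if [ -d "$J" ] && command -v fuser >/dev/null 2>&1 && fuser "$J" >/dev/null 2>&1; then
  echo "journal still in use; stop that graylog-server process first"
else
  echo "no live process holds $J; a stale $J/.lock can be removed"
fi
```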

On Friday, February 20, 2015 at 1:36:33 PM UTC+1, Arie wrote:

 Hi All

 After successfully updating to 1.0 from the latest 0.9 and starting up 
 after a reboot, all was fine in our test environment.

 Now after a restart of the graylog-server service we have the following 
 error:

 2015-02-20T13:26:25.572+01:00 ERROR [CmdLineTool] Guice error (more detail 
 on log level debug): Error injecting constructor, 
 java.lang.RuntimeException: kafka.common.KafkaException: Failed to acquire 
 lock on file .lock in /var/lib/graylog-server/journal. A Kafka instance in 
 another process or thread is using this directory.

 Cleaning up the directory does not solve this startup error.

 What can be wrong/

 running on centos 6.6
 latest java
 es 1.4.3






[graylog2] Re: Graylog 1.0 startup error

2015-02-20 Thread Arie
And found this too in the output:

Caused by: java.io.IOException: Directory '/etc/graylog2/server' could not 
be created

I deleted the 'old' dirs on the server in /etc/


On Friday, February 20, 2015 at 1:36:33 PM UTC+1, Arie wrote:

 Hi All

 After successfully updating to 1.0 from the latest 0.9 and starting up 
 after a reboot, all was fine in our test environment.

 Now after a restart of the graylog-server service we have the following 
 error:

 2015-02-20T13:26:25.572+01:00 ERROR [CmdLineTool] Guice error (more detail 
 on log level debug): Error injecting constructor, 
 java.lang.RuntimeException: kafka.common.KafkaException: Failed to acquire 
 lock on file .lock in /var/lib/graylog-server/journal. A Kafka instance in 
 another process or thread is using this directory.

 Cleaning up the directory does not solve this startup error.

 What can be wrong/

 running on centos 6.6
 latest java
 es 1.4.3






[graylog2] Graylog 1.0 startup error

2015-02-20 Thread Arie
Hi All

After successfully updating to 1.0 from the latest 0.9 and starting up after 
a reboot, all was fine in our test environment.

Now after a restart of the graylog-server service we have the following 
error:

2015-02-20T13:26:25.572+01:00 ERROR [CmdLineTool] Guice error (more detail 
on log level debug): Error injecting constructor, 
java.lang.RuntimeException: kafka.common.KafkaException: Failed to acquire 
lock on file .lock in /var/lib/graylog-server/journal. A Kafka instance in 
another process or thread is using this directory.

Cleaning up the directory does not solve this startup error.

What can be wrong?

running on CentOS 6.6
latest Java
ES 1.4.3




Re: [graylog2] Re: Graylog 1.0 startup error

2015-02-20 Thread Arie
You are absolutely right about that, missed that in the diff

thank you.

On Friday, February 20, 2015 at 1:56:20 PM UTC+1, Bernd Ahlers wrote:

 I think you have to adjust the node-id setting in your 
 /etc/graylog/server/server.conf to point to the new directory. 
 (/etc/graylog/server/) 

 Bernd 

 On 20 February 2015 at 13:51, Arie satya...@gmail.com javascript: 
 wrote: 
  Problem solved partially. 
  
  graylog seems to rely on an old directory as mentioned earlier. 
  (/etc/graylog2/server/ and copying the node-id into there.) 
  
  Removed everything in the journal directory and I am running fine again. 
  
  hth 
  
  
  
  
  On Friday, February 20, 2015 at 1:44:42 PM UTC+1, Arie wrote: 
  
  Solved the problem partially by creating 
  
  /etc/graylog2/server/ and copying the node-id into there. 
  
  Now only the Kafka exception remains 
  
  On Friday, February 20, 2015 at 1:36:33 PM UTC+1, Arie wrote: 
  
  Hi All 
  
  After succesfully updating to 1.0 from the latest 0.9 and starting up 
  after a reboot all was fine in our test environment. 
  
  Now after a resatrt of the graylog-server service we have the 
 following 
  error: 
  
  2015-02-20T13:26:25.572+01:00 ERROR [CmdLineTool] Guice error (more 
  detail on log level debug): Error injecting constructor, 
  java.lang.RuntimeException: kafka.common.KafkaException: Failed to 
 acquire 
  lock on file .lock in /var/lib/graylog-server/journal. A Kafka 
 instance in 
  another process or thread is using this directory. 
  
  Cleaning up the directory does not solve this startup error. 
  
  What can be wrong/ 
  
  running on centos 6.6 
  latest java 
  es 1.4.3 
  
  
  -- 
  You received this message because you are subscribed to the Google 
 Groups 
  graylog2 group. 
  To unsubscribe from this group and stop receiving emails from it, send 
 an 
  email to graylog2+u...@googlegroups.com javascript:. 
  For more options, visit https://groups.google.com/d/optout. 



 -- 
 Developer 

 Tel.: +49 (0)40 609 452 077 
 Fax.: +49 (0)40 609 452 078 

 TORCH GmbH - A Graylog company 
 Steckelhörn 11 
 20457 Hamburg 
 Germany 

 Commercial Reg. (Registergericht): Amtsgericht Hamburg, HRB 125175 
 Geschäftsführer: Lennart Koopmann (CEO) 




[graylog2] Re: Stream matcher code

2015-02-20 Thread Arie
At least in 0.9.3 you can.

Create a stream rule; the first possibility on top is the field.
After that select regex,
and in that you put (cvp)



On Friday, February 20, 2015 at 3:51:51 PM UTC+1, Maciej Strömich wrote:

 Hi,


 I'm trying to create a stream which will catch log entries containing 
 the cvp string. It's not a problem if there's only one field which needs 
 to be checked. I've several places where this value can be found and I'm 
 wondering: is it possible to use a regex inside a Field input? AFAIK 
 separate stream rules will use AND and not OR to match messages. 

 I've tried so far:

 (?:url|message)
 [url|message]
 url||message

 Is this even possible?

 br,
 Maciek




Re: [graylog2] Re: graylog2-server receiveBufferSize

2015-02-07 Thread Arie
Hi Petar,

Nice, you probably found your answer. I'll keep in mind that we are using 
regexes as well on inputs and streams for alerting.

Did you raise the processors in graylog2.conf? Maybe you are running out of 
processbuffer_processors. You could double that to catch up with your peak 
loads:

processbuffer_processors = 8


The second thing to look at is the ring_size.
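A graylog2.conf fragment pulling these together (the numbers are examples to size against your CPU count, not recommendations; ring sizes must be a power of two):

```
# graylog2.conf sketch:
processbuffer_processors = 8
outputbuffer_processors = 4
ring_size = 65536
```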



Op woensdag 4 februari 2015 16:04:34 UTC+1 schreef Petar Koraca:

 Hello Arie,

 We use Microsoft Hyper-V as hypervisor.

 I believe our problem could be decompressGzip that decreases Graylog 
 throughput. I will let you know about our findings when we disable 
 compression on client side.

 Another thing that caused a lot of performance problems 
 was RegexExtractor. Throughput went down to a few hundred per second.

 Btw I have tested Raw/Plaintext Input on another machine with 'pv 
 haproxy.log -L 50m -i 1 | nc 127.0.0.1 ' and got 30-35k per second on 
 single virtualized machine :)

 Cheers


 On Wednesday, January 28, 2015 at 9:29:14 PM UTC+1, Arie wrote:

 O I forgot (:-

 Are vmware-tools installed? We recently found some systems that where
 forgotten, and that has more impact than foreseen.

 On Wednesday, January 28, 2015 at 9:25:11 PM UTC+1, Arie wrote:

 Petar,

 we are running on bare metal, with a low load. Tested to 10k messages 
 with the http test input,
 with everything on one (test)server and running well.

 I can tell you that in our production systems in our private/local cloud 
 we are encountering severe
 network/disk related problems with our systems. All of your network is 
 CPU bound. Sometimes there
 are delays that we can count in seconds. All VM Hosts is running @75% 
 CPU.

 Must say that we had problems with sending windows eventlogs thru 
 UDP/GELF with nxlog, those were
 gone when switching tot TCP.

 Have you already done some graylog2 performance tweaks already?



 Op woensdag 28 januari 2015 14:41:43 UTC+1 schreef Petar Koraca:

 Thanks Arie. I have already tried that yesterday and did not help.

 I have removed unnecessary TCP input, and I don't have any 
 NettyTransport exceptions now.

 I still have problem with RecvQ in peaks (as seen in netstat) which 
 should be related to slow processing.

 Do you have any benchmark data with bare-metal vs VM, and different 
 processors numbers / ring_size in graylog2.conf ?


 On Wed, Jan 28, 2015 at 11:49 AM, Arie satya...@gmail.com wrote:

 Maybe this can be helpfull to you:


 https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/jgroups-perf-udpbuffer.html

 or this for more advanced network tuning:
 https://wwwx.cs.unc.edu/~sparkst/howto/network_tuning.php

 hth,,

 Arie



 On Tuesday, January 27, 2015 at 6:27:39 PM UTC+1, Petar Koraca wrote:

 Hello,

 I have some performance issues with graylog2-server 0.92.4 (cannot 
 process more than 7-8k per second), and I think it may be related to UDP 
 buffers. This is CentOS 6 virtual machine with 16 vCPU.

 $ netstat -ulptn|grep 12201
 tcp0  0 :::12201:::* 
LISTEN  2311/java   
 udp75960  0 :::12201:::* 
2311/java 

 I've noticed this in my logs:

 16:31:06,719 WARN [NettyTransport] receiveBufferSize (SO_RCVBUF) for 
 [id: 0x537c78d9, /0:0:0:0:0:0:0:0:12201] should be 1048576 but is 43690.

 I've set udp_recvbuffer_sizes=1048576 but no luck.

 Also, I've set net.core.rmem_max from 124928 to 26214400.

 Any idea where did this 43690 come from?

 if you need additional information I am at your disposal.

 Kind regards,

 Petar Koraca







[graylog2] Re: [ANN] Graylog v1.0-rc.3 released

2015-02-06 Thread Arie
Congrats on the new layout of the websites and your logo. I like it.

Arie.



Re: [graylog2] search wildcard in quotes

2015-01-28 Thread Arie
Maciej,


This is exactly what I told you.

For this type of query you have to specify a default_field AND your 
content* search query.

The default field could be the input of your messages, for example, or any 
other field that is related to your search.


On Wednesday, January 28, 2015 at 1:24:36 PM UTC+1, Maciej Strömich wrote:

 This is not exactly true, or I'm misreading something in the elasticsearch 
 docs.


 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html

 analyze_wildcard - By default, wildcards terms in a query string are not 
 analyzed. By setting this value to true, a best effort will be made to 
 analyze those as well.

 So it looks like the query is just incomplete or maybe there are other 
 unknown to me reasons behind this behaviour.
  

 On Wednesday, January 28, 2015 at 12:59:31 PM UTC+1, Edmundo Alvarez wrote:

 Hello, 

 As far as I know, it is not possible to use an exact phrase (a search 
 term enclosed in quotation marks) with wildcards inside in Elasticsearch. 
 The wildcard will be simply ignored. If you only want to check that your 
 query matches both Missing assetId and Missing assetIds, this is what I 
 would do: 

 message:Missing assetId OR message:Missing assetIds 

 I hope that helps. 

 Regards, 
 Edmundo 

 -- 
 Developer 

 Tel.: +49 (0)40 609 452 077 
 Mobile: +49 (0)171 27 22 181 
 Mobile (US): +1 (713) 321 8126 
 Fax.: +49 (0)40 609 452 078 

 TORCH GmbH 
 Steckelhörn 11 
 20457 Hamburg 
 Germany 
 https://www.torch.sh/ 

 Commercial Reg. (Registergericht): Amtsgericht Hamburg, HRB 125175 
 Geschäftsführer: Lennart Koopmann (CEO) 

  On 28 Jan 2015, at 11:41, Arie satya...@gmail.com wrote: 
  
  An the second option I gave, does that work? 
  
  We experience exactly the same thing. 
  
  
  
  On Tuesday, January 27, 2015 at 2:37:50 PM UTC+1, Maciej Strömich 
 wrote: 
  
  Hi, 
  
  I know that  allow_leading_wildcard_searches and it's used to search 
 for terms like *something, and I know that it can cause increased memory 
 consumption. 
  
  My question is strictly connected to the query language. 
  
  when we query for 
  
  Missing assetIds 
  Misssing assetIds* 
  
  the results are found 
  
  but when we do a search for 
  
  Missing assetId* 
  
  there are no results found which is kind of strange because following 
 the docs you could assume that this should search for all occurrences of 
 Missing assetIds. 
  
  Maybe we're missing something that's why I've asked about the options 
 part :) 
  
  
  On Monday, January 26, 2015 at 10:55:47 PM UTC+1, Arie wrote: 
  Hi, 
  
  such a parameter exist in graylog2.conf, but don't know if it is wise 
 to use. 
  
 allow_leading_wildcard_searches = false 
  
  If we are using such searches and it is within an know source or other 
 qualified field 
  we use # source:hostname last acc* 
  
  hth,, 
  Arie. 
  
  On Monday, January 26, 2015 at 5:28:02 PM UTC+1, Maciej Strömich wrote: 
  Hi, 
  
  can someone elaborate a bit on using wildcard searches inside double 
 quotes in GL? 
  
  We're running 0.92 and have a case where we need to search for an exact 
 phrase with wildcard in the end and it doesn't work for us. 
  
  e.g. something like message:Missing assetId* 
  
  Maybe there's an option in graylog2-server conf which needs to be 
 turned on like allow_leading_wildcard_searches? 
  
  Digging a bit through a group I found only 
 https://groups.google.com/forum/#!searchin/graylog2/wildcard/graylog2/4IQubA243-A/BCnBpW78wQkJ
  
 which can be somehow connected with our issue 
  
  thanks. 
  
  -- 
  You received this message because you are subscribed to the Google 
 Groups graylog2 group. 
  To unsubscribe from this group and stop receiving emails from it, send 
 an email to graylog2+u...@googlegroups.com. 
  For more options, visit https://groups.google.com/d/optout. 





[graylog2] Re: search wildcard in quotes

2015-01-28 Thread Arie
And the second option I gave, does that work?

We experience exactly the same thing.



On Tuesday, January 27, 2015 at 2:37:50 PM UTC+1, Maciej Strömich wrote:


 Hi,

 I know that  allow_leading_wildcard_searches and it's used to search for 
 terms like *something, and I know that it can cause increased memory 
 consumption.

 My question is strictly connected to the query language.

 when we query for

 Missing assetIds
 Misssing assetIds*

 the results are found

 but when we do a search for 

 Missing assetId* 

 there are no results found which is kind of strange because following the 
 docs you could assume that this should search for all occurrences of 
 Missing assetIds.

 Maybe we're missing something that's why I've asked about the options part 
 :)


 On Monday, January 26, 2015 at 10:55:47 PM UTC+1, Arie wrote:

 Hi,

 such a parameter exist in graylog2.conf, but don't know if it is wise to 
 use.

allow_leading_wildcard_searches = false

 If we are using such searches and it is within an know source or other 
 qualified field
 we use # source:hostname last acc*

 hth,,
 Arie.

 On Monday, January 26, 2015 at 5:28:02 PM UTC+1, Maciej Strömich wrote:

 Hi,

 can someone elaborate a bit on using wildcard searches inside double 
 quotes in GL? 

 We're running 0.92 and have a case where we need to search for an exact 
 phrase with wildcard in the end and it doesn't work for us. 

 e.g. something like message:Missing assetId* 

 Maybe there's an option in graylog2-server conf which needs to be turned 
 on like allow_leading_wildcard_searches?

 Digging a bit through a group I found only 
 https://groups.google.com/forum/#!searchin/graylog2/wildcard/graylog2/4IQubA243-A/BCnBpW78wQkJ
  
 which can be somehow connected with our issue

 thanks.





[graylog2] Re: graylog2-server receiveBufferSize

2015-01-28 Thread Arie
Maybe this can be helpful to you:

https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/jgroups-perf-udpbuffer.html

or this for more advanced network tuning:
https://wwwx.cs.unc.edu/~sparkst/howto/network_tuning.php

hth,,

Arie
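
The 43690 the original poster sees is typically the kernel capping the requested buffer. A small sketch (assuming Linux, where net.core.rmem_max limits what setsockopt can grant, and the value read back is doubled by the kernel for bookkeeping overhead) shows how to check the effective size:

```python
import socket

# Sketch: request a large UDP receive buffer and read back what the
# kernel actually granted. On Linux the grant is capped by
# net.core.rmem_max, which is why a requested 1048576 can show up as
# a much smaller effective number such as 43690.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
requested = 1048576
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("requested:", requested, "effective:", effective)
sock.close()
```

If the effective value stays small after raising net.core.rmem_max, the sysctl change likely did not take effect before the process opened its sockets.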


On Tuesday, January 27, 2015 at 6:27:39 PM UTC+1, Petar Koraca wrote:

 Hello,

 I have some performance issues with graylog2-server 0.92.4 (cannot process 
 more than 7-8k per second), and I think it may be related to UDP buffers. 
 This is CentOS 6 virtual machine with 16 vCPU.

 $ netstat -ulptn|grep 12201
 tcp0  0 :::12201:::*   
  LISTEN  2311/java   
 udp75960  0 :::12201:::*   
  2311/java 

 I've noticed this in my logs:

 16:31:06,719 WARN [NettyTransport] receiveBufferSize (SO_RCVBUF) for [id: 
 0x537c78d9, /0:0:0:0:0:0:0:0:12201] should be 1048576 but is 43690.

 I've set udp_recvbuffer_sizes=1048576 but no luck.

 Also, I've set net.core.rmem_max from 124928 to 26214400.

 Any idea where did this 43690 come from?

 if you need additional information I am at your disposal.

 Kind regards,

 Petar Koraca





Re: [graylog2] Re: graylog2-server receiveBufferSize

2015-01-28 Thread Arie
Oh, I forgot :-)

Are vmware-tools installed? We recently found some systems that were
forgotten, and that has more impact than foreseen.

On Wednesday, January 28, 2015 at 9:25:11 PM UTC+1, Arie wrote:

 Petar,

 we are running on bare metal, with a low load. Tested to 10k messages with 
 the http test input,
 with everything on one (test)server and running well.

 I can tell you that in our production systems in our private/local cloud 
 we are encountering severe
 network/disk related problems with our systems. All of your network is CPU 
 bound. Sometimes there
 are delays that we can count in seconds. All VM Hosts is running @75% CPU.

 Must say that we had problems with sending windows eventlogs thru UDP/GELF 
 with nxlog, those were
 gone when switching tot TCP.

 Have you already done some graylog2 performance tweaks already?



 On Wednesday, January 28, 2015 at 2:41:43 PM UTC+1, Petar Koraca wrote:

 Thanks Arie. I have already tried that yesterday and did not help.

 I have removed unnecessary TCP input, and I don't have any NettyTransport 
 exceptions now.

 I still have problem with RecvQ in peaks (as seen in netstat) which 
 should be related to slow processing.

 Do you have any benchmark data with bare-metal vs VM, and different 
 processors numbers / ring_size in graylog2.conf ?


 On Wed, Jan 28, 2015 at 11:49 AM, Arie satya...@gmail.com wrote:

 Maybe this can be helpfull to you:


 https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/jgroups-perf-udpbuffer.html

 or this for more advanced network tuning:
 https://wwwx.cs.unc.edu/~sparkst/howto/network_tuning.php

 hth,,

 Arie



 On Tuesday, January 27, 2015 at 6:27:39 PM UTC+1, Petar Koraca wrote:

 Hello,

 I have some performance issues with graylog2-server 0.92.4 (cannot 
 process more than 7-8k per second), and I think it may be related to UDP 
 buffers. This is CentOS 6 virtual machine with 16 vCPU.

 $ netstat -ulptn|grep 12201
 tcp0  0 :::12201:::*   
  LISTEN  2311/java   
 udp75960  0 :::12201:::*   
  2311/java 

 I've noticed this in my logs:

 16:31:06,719 WARN [NettyTransport] receiveBufferSize (SO_RCVBUF) for 
 [id: 0x537c78d9, /0:0:0:0:0:0:0:0:12201] should be 1048576 but is 43690.

 I've set udp_recvbuffer_sizes=1048576 but no luck.

 Also, I've set net.core.rmem_max from 124928 to 26214400.

 Any idea where did this 43690 come from?

 if you need additional information I am at your disposal.

 Kind regards,

 Petar Koraca







Re: [graylog2] Re: graylog2-server receiveBufferSize

2015-01-28 Thread Arie
Petar,

we are running on bare metal, with a low load. Tested up to 10k messages with 
the HTTP test input,
with everything on one (test) server, and it ran well.

I can tell you that in our production systems in our private/local cloud we 
are encountering severe
network/disk related problems with our systems. All of our network is CPU 
bound. Sometimes there
are delays that we can count in seconds. All VM hosts are running at 75% CPU.

I must say that we had problems with sending Windows event logs through 
UDP/GELF with nxlog; those were
gone when switching to TCP.

Have you done some graylog2 performance tweaks already?
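
Switching from UDP to TCP also changes the GELF framing: over TCP a GELF message is a single JSON document terminated by a null byte, rather than (optionally chunked) datagrams. A minimal sketch of building such a frame; the host name and message text are made up:

```python
import json

def gelf_tcp_frame(short_message: str, host: str = "example-host") -> bytes:
    """Build a GELF 1.1 message framed for TCP transport.

    Over TCP each GELF message is a JSON document followed by a null
    byte, which is how the receiver finds message boundaries.
    """
    record = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "level": 6,  # informational
    }
    return json.dumps(record).encode("utf-8") + b"\x00"

frame = gelf_tcp_frame("user login ok")
# The frame could then be sent with socket.sendall() to GELF TCP port 12201.
print(frame.endswith(b"\x00"))  # True
```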



On Wednesday, January 28, 2015 at 2:41:43 PM UTC+1, Petar Koraca wrote:

 Thanks Arie. I have already tried that yesterday and did not help.

 I have removed unnecessary TCP input, and I don't have any NettyTransport 
 exceptions now.

 I still have problem with RecvQ in peaks (as seen in netstat) which should 
 be related to slow processing.

 Do you have any benchmark data with bare-metal vs VM, and different 
 processors numbers / ring_size in graylog2.conf ?


 On Wed, Jan 28, 2015 at 11:49 AM, Arie satya...@gmail.com javascript: 
 wrote:

 Maybe this can be helpfull to you:


 https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/jgroups-perf-udpbuffer.html

 or this for more advanced network tuning:
 https://wwwx.cs.unc.edu/~sparkst/howto/network_tuning.php

 hth,,

 Arie



 On Tuesday, January 27, 2015 at 6:27:39 PM UTC+1, Petar Koraca wrote:

 Hello,

 I have some performance issues with graylog2-server 0.92.4 (cannot 
 process more than 7-8k per second), and I think it may be related to UDP 
 buffers. This is CentOS 6 virtual machine with 16 vCPU.

 $ netstat -ulptn|grep 12201
 tcp0  0 :::12201:::* 
LISTEN  2311/java   
 udp75960  0 :::12201:::* 
2311/java 

 I've noticed this in my logs:

 16:31:06,719 WARN [NettyTransport] receiveBufferSize (SO_RCVBUF) for 
 [id: 0x537c78d9, /0:0:0:0:0:0:0:0:12201] should be 1048576 but is 43690.

 I've set udp_recvbuffer_sizes=1048576 but no luck.

 Also, I've set net.core.rmem_max from 124928 to 26214400.

 Any idea where did this 43690 come from?

 if you need additional information I am at your disposal.

 Kind regards,

 Petar Koraca







[graylog2] Re: search wildcard in quotes

2015-01-26 Thread Arie
Hi,

Such a parameter exists in graylog2.conf, but I don't know if it is wise to 
use:

   allow_leading_wildcard_searches = false

If we are using such searches and it is within a known source or other 
qualified field,
we use, e.g., source:hostname last acc*

hth,,
Arie.

On Monday, January 26, 2015 at 5:28:02 PM UTC+1, Maciej Strömich wrote:

 Hi,

 can someone elaborate a bit on using wildcard searches inside double 
 quotes in GL? 

 We're running 0.92 and have a case where we need to search for an exact 
 phrase with wildcard in the end and it doesn't work for us. 

 e.g. something like message:Missing assetId* 

 Maybe there's an option in graylog2-server conf which needs to be turned 
 on like allow_leading_wildcard_searches?

 Digging a bit through a group I found only 
 https://groups.google.com/forum/#!searchin/graylog2/wildcard/graylog2/4IQubA243-A/BCnBpW78wQkJ
  
 which can be somehow connected with our issue

 thanks.




Re: [graylog2] Filter order and priorities

2015-01-26 Thread Arie
Hi,,

* more flexible alert conditions (per stream)
  here we would like to check for field contents (and not just a single 
field)

At first we wanted this too, and it still would be of great value. In our 
situation we want to correlate on certain hostnames (field) AND a certain 
message (field).
We finally got around this with a regex filter, which in our situation is 
just as flexible (or even more so :-)).

Another thing that would create great value is a way of doing statistical 
analysis over all the messages/docs, to get a self-learning
application that knows which messages can be considered normal and which can 
be seen as an exception or anomaly.
This way graylog2 could become the new system monitoring application on the 
near horizon.
(analogous to this story: 
http://info.prelert.com/blog/the-secrets-to-successful-data-mining)


Ever thought of storing the configuration in ES instead of MongoDB? Just a 
thought here.

grtz,, Arie



On Sunday, January 25, 2015 at 12:56:34 PM UTC+1, Ronald Rink (d-fens GmbH) 
wrote:

  Hi Kai and thanks for the quick reply!

  The main reason I ask is that we need some features currently not 
 available in Graylog2, such as:

  * more flexible alert conditions (per stream)
   here we would like to check for field contents (and not just a single 
 field)
 * enhanced drools support
   currently drools rule matching is only performed at the very beginning, 
 we would like to have drools per stream or other criteria

  Our use case:
 We have a general drools rule, where we do some enrichment (joining 
 several drools streams). The rule consequence will then add fields to the 
 messages for post processing.
 Once the stream is assigned we have stream specific drools rules (and 
 static threshold matchers) that will pick up on the added fields of the 
 previous match and will perform final processing such as custom output 
 routing etc.

  This is the reason why need to know the priorities so we can inject our 
 filters accordingly.

  Another good feature would be to have some kind of *context* so that 
 filters/plugins can communicate with each other (and for example rely on 
 the same fact database for rule matching).

  If you need further information, please let me know and I can do some 
 kind of more structured write-up for this.

 Regards,
 Ronald

  On 25 Jan 2015, at 12:40, Kay Röpke kro...@gmail.com javascript: 
 wrote:

   Hi!

 Right now you can rely on this, and you are free to use any number you 
 want. In case of collisions the filter's names are being use to tie break 
 in a deterministic order.

 In the future we are very likely to revamp this part of the code, but that 
 will either still support (then) legacy MessageFilter plugins or support a 
 better way of dealing with the extensibility.

 We would appreciate feedback on what's missing so we can plan accordingly. 
 For the next few months however this will probably not change and certainly 
 not within the 1.x series.

 Best,
 Kay
 On Jan 25, 2015 10:20 AM, Ronald Rink (d-fens GmbH) ronal...@d-fens.net 
 javascript: wrote:

 Hi, currently the filters ExtractorFilter, StaticFieldFilter and 
 StreamMatcherFilter have defined priority hard coded in their source code. 
 Is this something that is fix and can be relied upon or is this *opaque* 
 and subject to change?
 The reason I ask for is that we implement filters that depend upon the 
 StreamMatcherFilter has already run and thus a message is already assigned 
 to a stream.

 So is it safe to specify a priority over *40* ? Are there any plans to 
 have an option in graylog2.conf to specify the priority of these filters?

 Thanks for your reply!
 Regards,
 Ronald




[graylog2] Problem with search results

2015-01-13 Thread Arie
Hi all,

Maybe I am totally dull, but I am having a problem with search results.

search:  cbs\-prod
or
search: cbs-prod

shows even cbs-test-app1 in the results. The highlighted part is okay, but 
for statistics
we want an exact match. The search is on a hostname that is in the message 
and
extracted into a separate field with an extractor.

What are we doing wrong?

thanks,,

A.
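
The behaviour described above is usually the standard analyzer at work: it splits terms on hyphens, so cbs-prod and cbs-test-app1 share the token cbs and both match. A crude emulation of that tokenization (the split rule is simplified, not Elasticsearch's exact grammar):

```python
import re

def analyze(term: str) -> list:
    """Rough stand-in for Elasticsearch's standard analyzer:
    lowercase and split on non-alphanumeric characters,
    hyphens included."""
    return [t for t in re.split(r"[^a-z0-9]+", term.lower()) if t]

print(analyze("cbs-prod"))       # ['cbs', 'prod']
print(analyze("cbs-test-app1"))  # ['cbs', 'test', 'app1']

# Both hostnames produce the token 'cbs', so an analyzed search for
# cbs-prod also hits cbs-test-app1. Mapping the hostname field as
# not_analyzed (keyword-style) gives the exact match wanted here.
print(set(analyze("cbs-prod")) & set(analyze("cbs-test-app1")))  # {'cbs'}
```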



[graylog2] Re: Kibana 4.0 Problem

2015-01-05 Thread Arie
I have tried this, and Kibana can connect to ES, but the ES cluster state 
turns
orange, and the Kibana setup fails on a timeout.

I have also tried it with ES version 1.4.1 in the kibana.jar, but that gives 
me the old error.



On Friday, January 2, 2015 10:42:07 PM UTC+1, Alexander Reynolds wrote:

 You can cheat Kibana's version check to get around this issue. Edit the 
 index.js file with the kibana.jar file (kibana/public) and change 
 'minimumElasticsearchVersion' from '1.4.0' to '1.3.4'. I haven't done much 
 testing with this yet, but it does allow you to search through your indices 
 at least.

 Cheers,
 Al

 On Monday, December 8, 2014 10:17:20 AM UTC-5, Arie wrote:

 Hi Group,

 I am trying to look at the ES data with Kibana 4.0, but there is an
 incompatibility with graylog2.

 It appears that Kibana joins ES as a node and conflicts with graylog-2.
 When I shutdown graylog2-server it is possible to connect to ES with
 kibana 4.

 Anyone an idea to work around this situation?

 tia.
 Arie


 Notice of Confidentiality: ***This E-mail and any of its attachments may 
 contain Lincoln National Corporation proprietary information, which is 
 privileged, confidential, or subject to copyright belonging to the Lincoln 
 National Corporation family of companies. This E-mail is intended solely 
 for the use of the individual or entity to which it is addressed. If you 
 are not the intended recipient of this E-mail, you are hereby notified that 
 any dissemination, distribution, copying, or action taken in relation to 
 the contents of and attachments to this E-mail is strictly prohibited and 
 may be unlawful. If you have received this E-mail in error, please notify 
 the sender immediately and permanently delete the original and any copy of 
 this E-mail and any printout. Thank You. ***



[graylog2] Re: Kibana 4.0 Problem

2015-01-05 Thread Arie
This is getting even stranger.
Now on Kibana 4 beta 3, and this tells a little bit more:

Kibana: This version of Kibana requires Elasticsearch 1.4.0 or higher on 
all nodes. I found the following incompatible nodes in your cluster: 
Elasticsearch 1.3.4 @ inet[/10.64.91.14:9350] (10.64.91.14)


Strange, as I am using ES 1.4.1.

On Monday, January 5, 2015 1:24:26 PM UTC+1, Arie wrote:

 I have tried this, and kibana can connect to ES, but the state of ES turns 
 to
 orange, and in the config of kibana it defects on a timeout.

 I have tried it with ES version 1.4.1 ub the kibana.jar, but that gives me 
 the old error.



 On Friday, January 2, 2015 10:42:07 PM UTC+1, Alexander Reynolds wrote:

 You can cheat Kibana's version check to get around this issue. Edit the 
 index.js file with the kibana.jar file (kibana/public) and change 
 'minimumElasticsearchVersion' from '1.4.0' to '1.3.4'. I haven't done much 
 testing with this yet, but it does allow you to search through your indices 
 at least.

 Cheers,
 Al

 On Monday, December 8, 2014 10:17:20 AM UTC-5, Arie wrote:

 Hi Group,

 I am trying to look at the ES data with Kibana 4.0, but there is an
 incompatibility with graylog2.

 It appears that Kibana joins ES as a node and conflicts with graylog-2.
 When I shutdown graylog2-server it is possible to connect to ES with
 kibana 4.

 Anyone an idea to work around this situation?

 tia.
 Arie







[graylog2] Re: How to fix Nodes with too long GC pauses issues in my cluster.

2015-01-04 Thread Arie
Hi Joseph,

If you are going to hit 2000 msg/sec, take a look at
this config setting for ES:

index.refresh_interval: 5s

Just read up on it to get familiar with what it is about.

I think output_flush_interval = 1 in the graylog.conf file is meant to
get the same result as the ES configuration.

On Sunday, January 4, 2015 at 10:40:39 AM UTC+1, Joseph DJOMEDA wrote:

 Hello Arie,

 Thank you for your pointers. I am currently seeing something like 
 600msg/s. But that's just 3 boxes, I am expecting at the end of the 
 migration something like 2000msg/s

 On Saturday, January 3, 2015 10:32:11 PM UTC, Arie wrote:

 check on memory usage by ES, in my case the ES_HEAP_SIZE needed som 
 manual fiddling.

 output_batch_size = 25 this can be something like 500


 check on max_file_descriptors from your os. don't know

 how this is with ubuntu, we use centos6.


 Try the HQ (ElastiHQ) plugin for ES, in can tell you some extra

 things of how it is functioning.

  How many messages/sec are you working on?




 On Saturday, January 3, 2015 2:43:18 PM UTC+1, Joseph DJOMEDA wrote:

 Hello guys and happy new year.

 I am really sorry for this very noob question. I have a cluster (I am 
 not even sure I set up correctly) of 3 nodes. 

 1st node: central logstash/graylogs UI/ graylogs Server  all installed 
 using their ubuntu repositories.
  second and third nodes are solely elasticsearch instances both 
 installed with ubuntu repository

 kindly find on pastie the configuration http://pastie.org/9810898 I 
 used. 

 Each box is a VM of :


- *2 dedicated cores*
- *6 virtual cores*
- *12 GB RAM guaranteed*



-  I thought I would get away with this but I get :

 Nodes with too long GC pauses 15 hours ago
 There are Graylog2 nodes on which the garbage collector runs too long. 
 Garbage collection runs should be as short as possible. Please check 
 whether those nodes are healthy. (Node: *4108686b-xb6400*, GC 
 duration: *1022 ms*, GC threshold: *1000 ms*)

 While memory stabilized at 4GB the CPUs are almost 100% constantly for 
 graylogs UI/Server node. From all I read this type of issue is due to lack 
 of memory. I had to resume some of the streams because it was automatically 
 pause I presume due to the same error.

 Please let me know whether my setup is ok (not sure about the cluster 
 bit on elasticsearch)
 Please help me fix the GC issue as well.

 Thanks 





[graylog2] Re: How to fix Nodes with too long GC pauses issues in my cluster.

2015-01-03 Thread Arie
Check on memory usage by ES; in my case the ES_HEAP_SIZE needed some manual 
fiddling.

output_batch_size = 25: this can be something like 500.


Check on max_file_descriptors in your OS. I don't know

how this is with Ubuntu; we use CentOS 6.


Try the HQ (ElasticHQ) plugin for ES; it can tell you some extra

things about how it is functioning.

 How many messages/sec are you working with?
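A rough sketch of how the tuning points above could be checked and applied on a CentOS 6-style install; the exact file paths, the `6g` heap value, and the `elasticsearch` service user are assumptions for illustration, not details from this thread:

```shell
# Open-file limit as seen by the Elasticsearch process user;
# ES guides of this era recommend raising it well above the default 1024.
su -s /bin/sh elasticsearch -c 'ulimit -n'

# Pin the Elasticsearch heap (assumed sysconfig path on CentOS 6);
# the usual rule of thumb is about half of RAM, staying below 32 GB.
echo 'ES_HEAP_SIZE=6g' >> /etc/sysconfig/elasticsearch

# Raise the Graylog2 output batch size as suggested above
# (assumed config location).
sed -i 's/^output_batch_size = 25/output_batch_size = 500/' /etc/graylog2.conf
```

After changing either file, the corresponding service needs a restart before the new values take effect.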




On Saturday, January 3, 2015 2:43:18 PM UTC+1, Joseph DJOMEDA wrote:

 Hello guys and happy new year.

 I am really sorry for this very noob question. I have a cluster (I am not 
 even sure I set it up correctly) of 3 nodes. 

 1st node: central Logstash / Graylog2 web UI / Graylog2 server, all installed 
 using their Ubuntu repositories.
 Second and third nodes are solely Elasticsearch instances, both installed 
 from the Ubuntu repository.

 Kindly find the configuration I used on pastie: http://pastie.org/9810898 

 Each box is a VM with:

- *2 dedicated cores*
- *6 virtual cores*
- *12 GB RAM guaranteed*

 I thought I would get away with this, but I get:

 Nodes with too long GC pauses 15 hours ago
 There are Graylog2 nodes on which the garbage collector runs too long. 
 Garbage collection runs should be as short as possible. Please check 
 whether those nodes are healthy. (Node: *4108686b-xb6400*, GC 
 duration: *1022 ms*, GC threshold: *1000 ms*)

 While memory stabilized at 4 GB, the CPUs are almost constantly at 100% on the 
 Graylog2 UI/server node. From all I read, this type of issue is due to lack 
 of memory. I had to resume some of the streams because they were automatically 
 paused, I presume due to the same error.

 Please let me know whether my setup is OK (not sure about the clustering bit 
 on Elasticsearch).
 Please help me fix the GC issue as well.

 Thanks 


-- 
You received this message because you are subscribed to the Google Groups 
graylog2 group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to graylog2+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[graylog2] Re: Kibana 4.0 Problem

2014-12-13 Thread Arie
One answer I did get is the following:

Kibana doesn't join as a node; you will probably find it is Graylog2 that 
does.
The same thing happens when you run Logstash with the node protocol.

With the current release, the only way to get around this is to not use the 
node protocol but 
use HTTP or transport instead, though I'm unsure whether Graylog2 can do that.

I guess it is not possible now.
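One way to confirm which process is actually joining the cluster as a node is to ask Elasticsearch itself over its HTTP API; the host and port below are assumptions for a default install:

```shell
# One line per cluster member; a Graylog2 server using the node
# protocol shows up here as an extra (client) node.
curl -s 'http://localhost:9200/_cat/nodes?v'

# Full per-node detail, including transport addresses and settings.
curl -s 'http://localhost:9200/_nodes?pretty'
```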

 



[graylog2] Kibana 4.0 Problem

2014-12-08 Thread Arie
Hi Group,

I am trying to look at the ES data with Kibana 4.0, but there is an
incompatibility with Graylog2.

It appears that Kibana joins ES as a node and conflicts with Graylog2.
When I shut down graylog2-server, it is possible to connect to ES with
Kibana 4.

Anyone have an idea for working around this situation?

tia.
Arie



[graylog2] Re: [ANN] Graylog2 0.92.0 released

2014-12-03 Thread Arie
The upgrade went very well, no problems.

Has anyone upgraded Elasticsearch to version 1.4 (coming from 1.3.4)?



On Tuesday, December 2, 2014 1:17:30 AM UTC+1, Cayuga wrote:

 All in all, a great release!!  We really appreciate all of your hard work.

 I believe that I've found a bug.

 I have a non-conforming syslog input on port 513, and it now shows "unknown" 
 as the source for all incoming messages.


 On Monday, December 1, 2014 4:58:12 AM UTC-5, Jochen Schalanda wrote:

 Hi everyone,

 after an extended beta and release candidate phase we just released 
 Graylog2 0.92.0.

 We'd like to thank everyone in the community who made it possible to 
 produce this release by thoroughly testing the beta and release candidate 
 versions!

 There are lots of new features in Graylog2 0.92.0 like:

- Shareable content packs (i.e. import/export your dashboards, 
streams, inputs, outputs, and extractors)
- Support for pluggable retention strategies (e.g. the much 
requested time-based retention strategy)
- Support for Elasticsearch 1.4.x
- Support for SSL/TLS for the Graylog2 REST API
- Support for the Syslog Octet Counting framing method (used by syslog-ng)
- A more detailed Sources page in the web interface
- Many stability and performance improvements 

 Please refer to our release post at 
 http://www.graylog2.org/news/post/0010-graylog2-v0-92 for more details 
 about Graylog2 0.92.0 and upgrade information (especially if you're still 
 running Graylog2 0.90.x or earlier).

 As always: If you find any bugs in Graylog2 or miss that one important 
 feature, please talk to us by either posting to this mailing list or by 
 creating an issue on GitHub at 
 https://github.com/Graylog2/graylog2-server/issues.


 Cheers, 
 Jochen (on behalf of the whole team)




