Re: [graylog2] Graylog stopped working

2017-01-06 Thread cypherbit
Hi,

thanks for everything, all clear.

Cheers!

On Friday, January 6, 2017 at 1:50:54 PM UTC+1, Jochen Schalanda wrote:

> Hi,
>
> On Friday, 6 January 2017 05:00:52 UTC+1, cyph...@gmail.com wrote:
>>
>> One last question: how can I prevent running out of space?
>>
>
> The simple (and correct) answer is: Monitor your disk space usage and send 
> a notification if you start running out of disk space.
>
> Also see 
> http://docs.graylog.org/en/2.1/pages/configuration/graylog_ctl.html#extend-disk-space
>  
> for instructions about how to extend the disk space in the OVA.
>
>
> As I understand it, the server should automatically delete older data, 
>> when the disk starts filling up?
>>
>
> Yes, depending on your configuration. You can configure the data rotation 
> and retention policies in the web interface on the System / Indices page.
>
>  
> Cheers,
> Jochen
>
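
A minimal cron-friendly check along those lines could look like this (a 
sketch: the 90% threshold and the ops@example.com address are placeholders, 
and /var/opt/graylog/data is the OVA data mount shown further down this 
thread):

usage=$(df -P /var/opt/graylog/data | awk 'NR==2 {print $5+0}')
[ "$usage" -ge 90 ] && echo "Graylog data disk at ${usage}%" | mail -s "graylog disk warning" ops@example.com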



Re: [graylog2] Graylog stopped working

2017-01-05 Thread cypherbit
Jochen, thanks again.

I did as suggested, then checked the status and etcd was down. I deleted 
/var/opt/graylog/data/etcd/* and executed graylog-ctl reconfigure and etcd 
status is just fine now.
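
For anyone hitting the same etcd failure, the recovery described above boils 
down to the following (a sketch, using the OVA paths mentioned in this 
thread):

sudo graylog-ctl status                    # etcd reported as down
sudo rm -rf /var/opt/graylog/data/etcd/*
sudo graylog-ctl reconfigure
sudo graylog-ctl status                    # etcd back up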

I still however see:
*Elasticsearch cluster is yellow.* Shards: 4 active, 0 initializing, 0 
relocating, 4 unassigned.

Is that something I should worry about? It appears to be normal behaviour 
based on the documentation 
(http://docs.graylog.org/en/2.1/pages/configuration/elasticsearch.html), 
which states:
*With only one Elasticsearch node, the cluster state cannot become green 
because shard replicas cannot be assigned.*
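
If the yellow state itself bothers you, the usual check and the single-node 
workaround look like this (a sketch; 127.0.0.1:9200 assumes you run it on the 
OVA itself, and dropping replicas only makes sense if you accept having no 
copies of your data):

curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'
# drop the replica copies of the existing index so a single node can report green
curl -XPUT 'http://127.0.0.1:9200/graylog_0/_settings' -d '{"index": {"number_of_replicas": 0}}'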

One last question: how can I prevent running out of space? As I noticed on 
the blog, 2.2 should come with some improvements in this area, but what is 
the proper procedure for now? As I understand it, the server should 
automatically delete older data when the disk starts filling up?

On Thursday, January 5, 2017 at 3:35:26 PM UTC+1, Jochen Schalanda wrote:

> Hi,
>
> On Thursday, 5 January 2017 13:10:57 UTC+1, cyph...@gmail.com wrote:
>>
>> May I delete the disk journal now and how?
>>
>
> You can simply empty the journal directory while Graylog is not running, 
> see http://docs.graylog.org/en/2.1/pages/configuration/file_location.html 
> for the specific path for your installation.
>
> Cheers,
> Jochen
>
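
For reference, emptying the journal on the OVA amounts to something like the 
following (a sketch; /var/opt/graylog/data/journal is assumed from the OVA 
layout, so check the file-location page linked above for your installation):

sudo graylog-ctl stop
sudo rm -rf /var/opt/graylog/data/journal/*
sudo graylog-ctl start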



Re: [graylog2] Graylog stopped working

2017-01-05 Thread cypherbit
Hello,

after deleting the notification for "*Elasticsearch cluster unhealthy (RED) 
(triggered 6 days ago)"* and rebooting the server I didn't get notified of 
this problem again.

I still see:

*Elasticsearch cluster:* The possible Elasticsearch cluster states and more 
related information is available in the Graylog documentation.
*Elasticsearch cluster is yellow. Shards: 4 active, 0 initializing, 0 
relocating, 4 unassigned.* What does this mean?

May I delete the disk journal now and how?

On Tuesday, January 3, 2017 at 8:57:27 AM UTC+1, cyph...@gmail.com wrote:

> Jochen,
>
> thank you, I looked at the following logs:
>
> root@graylog:/var/log/graylog/elasticsearch# nano current
>
> 2017-01-02_09:16:55.57535 [2017-01-02 10:16:55,574][INFO 
> ][node ] [Molecule Man] version[2.3.1], pid[924], 
> build[bd98092/2016-04-04T12:25:05Z]
> 2017-01-02_09:16:55.57604 [2017-01-02 10:16:55,576][INFO 
> ][node ] [Molecule Man] initializing ...
> 2017-01-02_09:16:56.80747 [2017-01-02 10:16:56,807][INFO 
> ][plugins  ] [Molecule Man] modules [reindex, 
> lang-expression, lang-groovy], plugins [kopf], sites [kopf]
> 2017-01-02_09:16:56.84193 [2017-01-02 10:16:56,841][INFO 
> ][env  ] [Molecule Man] using [1] data paths, mounts 
> [[/var/opt/graylog/data (/dev/sdb1)]], net usable_space [85.1gb], net 
> total_space [98.3gb], spins? [possib$
> 2017-01-02_09:16:56.84211 [2017-01-02 10:16:56,842][INFO 
> ][env  ] [Molecule Man] heap size [1.7gb], compressed 
> ordinary object pointers [true]
> 2017-01-02_09:16:56.84234 [2017-01-02 10:16:56,842][WARN 
> ][env  ] [Molecule Man] max file descriptors [64000] 
> for elasticsearch process likely too low, consider increasing to at least 
> [65536]
> 2017-01-02_09:17:02.18937 [2017-01-02 10:17:02,189][INFO 
> ][node ] [Molecule Man] initialized
> 2017-01-02_09:17:02.19168 [2017-01-02 10:17:02,191][INFO 
> ][node ] [Molecule Man] starting ...
> 2017-01-02_09:17:02.56976 [2017-01-02 10:17:02,569][INFO 
> ][transport] [Molecule Man] publish_address {
> 192.168.1.22:9300}, bound_addresses {192.168.1.22:9300}
> 2017-01-02_09:17:02.57613 [2017-01-02 10:17:02,576][INFO 
> ][discovery] [Molecule Man] graylog/62ruQcNHSOahWbBEe71egw
> 2017-01-02_09:17:12.66122 [2017-01-02 10:17:12,661][INFO 
> ][cluster.service  ] [Molecule Man] new_master {Molecule 
> Man}{62ruQcNHSOahWbBEe71egw}{192.168.1.22}{192.168.1.22:9300}, reason: 
> zen-disco-join(elected_as_master, [0] joins rec$
> 2017-01-02_09:17:12.73775 [2017-01-02 10:17:12,737][INFO 
> ][http ] [Molecule Man] publish_address {
> 192.168.1.22:9200}, bound_addresses {192.168.1.22:9200}
> 2017-01-02_09:17:12.73913 [2017-01-02 10:17:12,739][INFO 
> ][node ] [Molecule Man] started
> 2017-01-02_09:17:12.98417 [2017-01-02 10:17:12,984][INFO 
> ][gateway  ] [Molecule Man] recovered [1] indices into 
> cluster_state
> 2017-01-02_09:17:15.92973 [2017-01-02 10:17:15,929][INFO 
> ][cluster.service  ] [Molecule Man] added 
> {{graylog-52498cb4-349d-494a-8c6b-692fd78e3c6c}{56bjekcxQl6kwDCKKmeGuw}{192.168.1.22}{192.168.1.22:9350}{client=true,
>  
> data=false, mas$
> 2017-01-02_09:17:17.20882 [2017-01-02 10:17:17,208][INFO 
> ][cluster.routing.allocation] [Molecule Man] Cluster health status changed 
> from [RED] to [YELLOW] (reason: [shards started [[graylog_0][0], 
> [graylog_0][2], [graylog_0][2], [graylo$
>
>
> root@graylog:/var/log/graylog/elasticsearch# nano graylog.log
> [2016-12-30 07:41:38,399][WARN ][index.translog   ] [Slick] 
> [graylog_0][0] failed to delete unreferenced translog files
> java.nio.file.NoSuchFileException: 
> /var/opt/graylog/data/elasticsearch/graylog/nodes/0/indices/graylog_0/0/translog
> at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
> at 
> sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
> at java.nio.file.Files.newDirectoryStream(Files.java:457)
> at 
> org.elasticsearch.index.translog.Translog$OnCloseRunnable.handle(Translog.java:726)
> at 
> org.elasticsearch.index.translog.Translog$OnCloseRunnable.handle(Translog.java:714)
> at 
> org.elasticsearch.index.translog.ChannelReference.closeInternal(ChannelReference.java:67)
> at 
> org.elasticsearch.common.util.concurrent.AbstractRefCounted.decRef(AbstractRefCounted.java:64)
> at 
> org.elasticsearch.index.translog.TranslogReader.close(TranslogReader.java:143)
> at 
> 

Re: [graylog2] Graylog stopped working

2017-01-02 Thread cypherbit
Jochen,

thank you, I looked at the following logs:

root@graylog:/var/log/graylog/elasticsearch# nano current

2017-01-02_09:16:55.57535 [2017-01-02 10:16:55,574][INFO 
][node ] [Molecule Man] version[2.3.1], pid[924], 
build[bd98092/2016-04-04T12:25:05Z]
2017-01-02_09:16:55.57604 [2017-01-02 10:16:55,576][INFO 
][node ] [Molecule Man] initializing ...
2017-01-02_09:16:56.80747 [2017-01-02 10:16:56,807][INFO 
][plugins  ] [Molecule Man] modules [reindex, 
lang-expression, lang-groovy], plugins [kopf], sites [kopf]
2017-01-02_09:16:56.84193 [2017-01-02 10:16:56,841][INFO 
][env  ] [Molecule Man] using [1] data paths, mounts 
[[/var/opt/graylog/data (/dev/sdb1)]], net usable_space [85.1gb], net 
total_space [98.3gb], spins? [possib$
2017-01-02_09:16:56.84211 [2017-01-02 10:16:56,842][INFO 
][env  ] [Molecule Man] heap size [1.7gb], compressed 
ordinary object pointers [true]
2017-01-02_09:16:56.84234 [2017-01-02 10:16:56,842][WARN 
][env  ] [Molecule Man] max file descriptors [64000] 
for elasticsearch process likely too low, consider increasing to at least 
[65536]
2017-01-02_09:17:02.18937 [2017-01-02 10:17:02,189][INFO 
][node ] [Molecule Man] initialized
2017-01-02_09:17:02.19168 [2017-01-02 10:17:02,191][INFO 
][node ] [Molecule Man] starting ...
2017-01-02_09:17:02.56976 [2017-01-02 10:17:02,569][INFO 
][transport] [Molecule Man] publish_address 
{192.168.1.22:9300}, bound_addresses {192.168.1.22:9300}
2017-01-02_09:17:02.57613 [2017-01-02 10:17:02,576][INFO 
][discovery] [Molecule Man] graylog/62ruQcNHSOahWbBEe71egw
2017-01-02_09:17:12.66122 [2017-01-02 10:17:12,661][INFO 
][cluster.service  ] [Molecule Man] new_master {Molecule 
Man}{62ruQcNHSOahWbBEe71egw}{192.168.1.22}{192.168.1.22:9300}, reason: 
zen-disco-join(elected_as_master, [0] joins rec$
2017-01-02_09:17:12.73775 [2017-01-02 10:17:12,737][INFO 
][http ] [Molecule Man] publish_address 
{192.168.1.22:9200}, bound_addresses {192.168.1.22:9200}
2017-01-02_09:17:12.73913 [2017-01-02 10:17:12,739][INFO 
][node ] [Molecule Man] started
2017-01-02_09:17:12.98417 [2017-01-02 10:17:12,984][INFO 
][gateway  ] [Molecule Man] recovered [1] indices into 
cluster_state
2017-01-02_09:17:15.92973 [2017-01-02 10:17:15,929][INFO 
][cluster.service  ] [Molecule Man] added 
{{graylog-52498cb4-349d-494a-8c6b-692fd78e3c6c}{56bjekcxQl6kwDCKKmeGuw}{192.168.1.22}{192.168.1.22:9350}{client=true,
 
data=false, mas$
2017-01-02_09:17:17.20882 [2017-01-02 10:17:17,208][INFO 
][cluster.routing.allocation] [Molecule Man] Cluster health status changed 
from [RED] to [YELLOW] (reason: [shards started [[graylog_0][0], 
[graylog_0][2], [graylog_0][2], [graylo$


root@graylog:/var/log/graylog/elasticsearch# nano graylog.log
[2016-12-30 07:41:38,399][WARN ][index.translog   ] [Slick] 
[graylog_0][0] failed to delete unreferenced translog files
java.nio.file.NoSuchFileException: 
/var/opt/graylog/data/elasticsearch/graylog/nodes/0/indices/graylog_0/0/translog
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
at java.nio.file.Files.newDirectoryStream(Files.java:457)
at 
org.elasticsearch.index.translog.Translog$OnCloseRunnable.handle(Translog.java:726)
at 
org.elasticsearch.index.translog.Translog$OnCloseRunnable.handle(Translog.java:714)
at 
org.elasticsearch.index.translog.ChannelReference.closeInternal(ChannelReference.java:67)
at 
org.elasticsearch.common.util.concurrent.AbstractRefCounted.decRef(AbstractRefCounted.java:64)
at 
org.elasticsearch.index.translog.TranslogReader.close(TranslogReader.java:143)
at 
org.apache.lucene.util.IOUtils.closeWhileHandlingException(IOUtils.java:129)
at 
org.elasticsearch.index.translog.Translog.recoverFromFiles(Translog.java:354)
at 
org.elasticsearch.index.translog.Translog.<init>(Translog.java:179)
at 
org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:208)
at 
org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:151)
at 
org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
at 
org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1515)
at 
org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1499)
at 

Re: [graylog2] Graylog stopped working

2016-12-29 Thread cypherbit
Thank you again, we're almost there:

df -m
Filesystem 1M-blocks  Used Available Use% Mounted on
udev1495 1  1495   1% /dev
tmpfs300 1   300   1% /run
/dev/dm-0  15282  4902  9582  34% /
none   1 0 1   0% /sys/fs/cgroup
none   5 0 5   0% /run/lock
none1500 0  1500   0% /run/shm
none 100 0   100   0% /run/user
/dev/sda1236   121   103  55% /boot
/dev/sdb1 100664  8181 87347   9% /var/opt/graylog/data


As you predicted we're still getting errors:

Elasticsearch cluster unhealthy (RED)
The Elasticsearch cluster state is RED which means shards are unassigned. 
This usually indicates a crashed and corrupt cluster and needs to be 
investigated. Graylog will write into the local disk journal. Read how to 
fix this in the Elasticsearch setup documentation. 


I looked at the link provided above, but I don't know how to delete the 
journal; any help with this last step would be appreciated.


On Wednesday, December 28, 2016 at 4:59:35 PM UTC+1, Edmundo Alvarez wrote:

> This documentation page covers how to extend the disk space in the OVA: 
> http://docs.graylog.org/en/2.1/pages/configuration/graylog_ctl.html#extend-disk-space
>  
>
> Please note that Graylog's journal sometimes gets corrupted when the disk 
> runs out of space. In that case you may need to delete the journal folder. 
>
> Regards, 
> Edmundo 
>
> > On 28 Dec 2016, at 16:04, cyph...@gmail.com  wrote: 
> > 
> > Thank you Edmundo. 
> > 
> > It appears we ran out of space. 
> > 
> > df -h 
> > Filesystem  Size  Used Avail Use% Mounted on 
> > udev1.5G  4.0K  1.5G   1% /dev 
> > tmpfs   300M  388K  300M   1% /run 
> > /dev/dm-015G   15G 0 100% / 
> > none4.0K 0  4.0K   0% /sys/fs/cgroup 
> > none5.0M 0  5.0M   0% /run/lock 
> > none1.5G 0  1.5G   0% /run/shm 
> > none100M 0  100M   0% /run/user 
> > /dev/sda1   236M  121M  103M  55% /boot 
> > 
> > We don't mind losing all the history, we just want the server up and 
> running. If the space available can be extended even better (keep in mind 
> this is OVA). Any suggestions? 
> > 
> > On Wednesday, December 28, 2016 at 9:18:24 AM UTC+1, Edmundo Alvarez 
> wrote: 
> > Hello, 
> > 
> > I would start by looking into your logs in /var/log/graylog, especially 
> those in the "server" folder, which may give you some errors to start 
> debugging the issue. 
> > 
> > Hope that helps. 
> > 
> > Regards, 
> > Edmundo 
> > 
> > > On 27 Dec 2016, at 20:55, cyph...@gmail.com wrote: 
> > > 
> > > We've been using Graylog OVA 2.1 for a while now, but it stopped 
> working all of a sudden. 
> > > 
> > > We're getting: 
> > > 
> > >  Server currently unavailable 
> > > We are experiencing problems connecting to the Graylog server running 
> on https://graylog:443/api. Please verify that the server is healthy and 
> working correctly. 
> > > You will be automatically redirected to the previous page once we can 
> connect to the server. 
> > > Do you need a hand? We can help you. 
> > > Less details 
> > > This is the last response we received from the server: 
> > > Error message 
> > > cannot GET https://graylog:443/api/system/cluster/node (500) 
> > > 
> > > 
> > > ubuntu@graylog:~$ sudo graylog-ctl status 
> > > run: elasticsearch: (pid 32780) 74s; run: log: (pid 951) 10764s 
> > > down: etcd: 0s, normally up, want up; run: log: (pid 934) 10764s 
> > > run: graylog-server: (pid 33146) 35s; run: log: (pid 916) 10764s 
> > > down: mongodb: 0s, normally up, want up; run: log: (pid 924) 10764s 
> > > run: nginx: (pid 32974) 57s; run: log: (pid 914) 10764s 
> > > 
> > > 
> > > How can we begin to troubleshoot the issue, which logs to view...? 
> > > 
>
>


Re: [graylog2] Graylog stopped working

2016-12-28 Thread cypherbit
Thank you Edmundo.

It appears we ran out of space.

df -h
Filesystem  Size  Used Avail Use% Mounted on
udev1.5G  4.0K  1.5G   1% /dev
tmpfs   300M  388K  300M   1% /run
/dev/dm-015G   15G 0 100% /
none4.0K 0  4.0K   0% /sys/fs/cgroup
none5.0M 0  5.0M   0% /run/lock
none1.5G 0  1.5G   0% /run/shm
none100M 0  100M   0% /run/user
/dev/sda1   236M  121M  103M  55% /boot

We don't mind losing all the history, we just want the server up and 
running. If the space available can be extended even better (keep in mind 
this is OVA). Any suggestions?

On Wednesday, December 28, 2016 at 9:18:24 AM UTC+1, Edmundo Alvarez wrote:

> Hello, 
>
> I would start by looking into your logs in /var/log/graylog, especially 
> those in the "server" folder, which may give you some errors to start 
> debugging the issue. 
>
> Hope that helps. 
>
> Regards, 
> Edmundo 
>
> > On 27 Dec 2016, at 20:55, cyph...@gmail.com  wrote: 
> > 
> > We've been using Graylog OVA 2.1 for a while now, but it stopped working 
> all of a sudden. 
> > 
> > We're getting: 
> > 
> >  Server currently unavailable 
> > We are experiencing problems connecting to the Graylog server running on 
> https://graylog:443/api. Please verify that the server is healthy and 
> working correctly. 
> > You will be automatically redirected to the previous page once we can 
> connect to the server. 
> > Do you need a hand? We can help you. 
> > Less details 
> > This is the last response we received from the server: 
> > Error message 
> > cannot GET https://graylog:443/api/system/cluster/node (500) 
> > 
> > 
> > ubuntu@graylog:~$ sudo graylog-ctl status 
> > run: elasticsearch: (pid 32780) 74s; run: log: (pid 951) 10764s 
> > down: etcd: 0s, normally up, want up; run: log: (pid 934) 10764s 
> > run: graylog-server: (pid 33146) 35s; run: log: (pid 916) 10764s 
> > down: mongodb: 0s, normally up, want up; run: log: (pid 924) 10764s 
> > run: nginx: (pid 32974) 57s; run: log: (pid 914) 10764s 
> > 
> > 
> > How can we begin to troubleshoot the issue, which logs to view...? 
> > 
>
>



[graylog2] Graylog stopped working

2016-12-27 Thread cypherbit
We've been using Graylog OVA 2.1 for a while now, but it stopped working 
all of a sudden.

We're getting:

 Server currently unavailable
We are experiencing problems connecting to the Graylog server running on 
https://graylog:443/api. Please verify that the server is healthy and 
working correctly.
You will be automatically redirected to the previous page once we can 
connect to the server.
Do you need a hand? We can help you.
Less details
This is the last response we received from the server:
Error message
cannot GET https://graylog:443/api/system/cluster/node (500)


ubuntu@graylog:~$ sudo graylog-ctl status
run: elasticsearch: (pid 32780) 74s; run: log: (pid 951) 10764s
down: etcd: 0s, normally up, want up; run: log: (pid 934) 10764s
run: graylog-server: (pid 33146) 35s; run: log: (pid 916) 10764s
down: mongodb: 0s, normally up, want up; run: log: (pid 924) 10764s
run: nginx: (pid 32974) 57s; run: log: (pid 914) 10764s


How can we begin to troubleshoot the issue? Which logs should we look at?
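
As the replies further up this thread point out, the place to start is 
/var/log/graylog; for example (paths as they appear on the OVA):

tail -n 100 /var/log/graylog/server/current
tail -n 100 /var/log/graylog/elasticsearch/current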



[graylog2] Re: No space, manually deleted indexes

2016-11-24 Thread cypherbit
Thank you very, very much! All is good.

On Wednesday, 23 November 2016 at 16:32:56 UTC+1, Sypris wrote:
>
> Try removing the entire nodes directory and restarting Elasticsearch.  ES 
> should rebuild the nodes directory. 
>
> On Wednesday, November 23, 2016 at 4:17:11 AM UTC-6, cyph...@gmail.com 
> wrote:
>>
>> We're using 2.1 OVA and ran out of space. Due to a lack of knowledge all 
>> the index directories under 
>> /var/opt/graylog/data/elasticsearch/graylog/nodes/0/indices/graylog_0 were 
>> deleted. We now only have _state in there, before there were three 
>> directories 0, 1 & 3.
>>
>> We can't go to System / Indices, and when choosing Search we get:
>> Error Message: Unable to execute search
>> Exception: org.elasticsearch.action.search.SearchPhaseExecutionException
>>
>> The log under /var/log/graylog/server/current shows:
>>
>> 2016-11-23_04:51:30.13483 2016-11-23 05:51:30,134 ERROR: 
>> org.graylog2.indexer.rotation.strategies.AbstractRotationStrategy - Cannot 
>> perform rotation at this moment.
>> 2016-11-23_04:51:40.13652 2016-11-23 05:51:40,136 ERROR: 
>> org.graylog2.indexer.rotation.strategies.MessageCountRotationStrategy - 
>> Unknown index, cannot perform rotation
>> 2016-11-23_04:51:40.13654 org.graylog2.indexer.IndexNotFoundException: 
>> Couldn't find index graylog_0
>>
>>
>> We tried manually cycling the deflector, but it didn't help:
>> curl -XPOST http://127.0.0.1:9000/api/system/deflector/cycle
>> 
>>
>> Multiple reboots were performed, but the situation is the same. We don't 
>> care about the previous data, but would just like to get the server up and 
>> running.
>>
>> Please assist.
>>
>
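
The fix Sypris describes comes down to something like this (a sketch; the 
nodes path is the one quoted above, and this throws away all indexed data):

sudo graylog-ctl stop
sudo rm -rf /var/opt/graylog/data/elasticsearch/graylog/nodes
sudo graylog-ctl start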



[graylog2] No space, manually deleted indexes

2016-11-23 Thread cypherbit
We're using 2.1 OVA and ran out of space. Due to a lack of knowledge all 
the index directories under 
/var/opt/graylog/data/elasticsearch/graylog/nodes/0/indices/graylog_0 were 
deleted. We now only have _state in there, before there were three 
directories 0, 1 & 3.

We can't go to System / Indices, and when choosing Search we get:
Error Message: Unable to execute search
Exception: org.elasticsearch.action.search.SearchPhaseExecutionException

The log under /var/log/graylog/server/current shows:

2016-11-23_04:51:30.13483 2016-11-23 05:51:30,134 ERROR: 
org.graylog2.indexer.rotation.strategies.AbstractRotationStrategy - Cannot 
perform rotation at this moment.
2016-11-23_04:51:40.13652 2016-11-23 05:51:40,136 ERROR: 
org.graylog2.indexer.rotation.strategies.MessageCountRotationStrategy - Unknown 
index, cannot perform rotation
2016-11-23_04:51:40.13654 org.graylog2.indexer.IndexNotFoundException: Couldn't 
find index graylog_0


We tried manually cycling the deflector, but it didn't help:
curl -XPOST http://127.0.0.1:9000/api/system/deflector/cycle


Multiple reboots were performed, but the situation is the same. We don't 
care about the previous data, but would just like to get the server up and 
running.

Please assist.



Re: [graylog2] Nessus vulnerability scanner and Graylog

2016-07-04 Thread cypherbit
Thank you Marius, I implemented the suggestions listed under 
http://docs.graylog.org/en/2.0/pages/configuration/graylog_ctl.html#production-readiness 
apart from: "Separate the box network-wise from the outside, otherwise 
Elasticsearch can be reached by anyone".

I'd like to limit access to our Graylog server from one VLAN (users) to 
another (servers, where Graylog is) so that only SSH is available (that is 
easy), but we also need to view the web page. Which ports must be 
accessible (HTTPS, anything else)?
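
For what it's worth, a sketch of such a restriction, assuming the OVA's nginx 
serves both the web interface and the API on 443 (as the https://graylog:443/api 
URLs elsewhere in this thread suggest), that ufw is in use on the Ubuntu-based 
OVA, and that 10.10.1.0/24 stands in for your user VLAN:

sudo ufw allow proto tcp from 10.10.1.0/24 to any port 22
sudo ufw allow proto tcp from 10.10.1.0/24 to any port 443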


On Wednesday, 29 June 2016 at 21:14:17 UTC+2, Marius Sturm wrote:

> Hi,
> the OVAs in general are made for ease of setup and a quick getting-started 
> experience with Graylog. The trade-off is that some services are less 
> restricted than in a setup that is optimized for security. Elasticsearch 
> and MongoDB should always be placed in a separate network, as documented 
> here: 
> http://docs.graylog.org/en/2.0/pages/configuration/graylog_ctl.html#production-readiness
>
> If you have higher security needs, please consider a manual setup of 
> Graylog and make sure that all services are secured as much as possible: 
> http://docs.graylog.org/en/2.0/pages/installation/manual_setup.html
>
> Cheers,
> Marius
>
> On 29 June 2016 at 19:57,  wrote:
>
>> We're using the latest version of Graylog OVA and have recently had a 
>> vulnerability assessment. I'm attaching the findings from the Nessus 
>> scanner. Can someone please shed some light on these results, focusing on 
>> the Medium severity items, esp. MongoDB Service Without Authentication 
>> Detection and Web Server Generic Cookie Injection.
>>
>> Many thanks in advance.
>>
>>
>
>
>
> -- 
> Developer
>
> Tel.: +49 (0)40 609 452 077
> Fax.: +49 (0)40 609 452 078
>
> TORCH GmbH - A Graylog Company
> Poolstraße 21
> 20335 Hamburg
> Germany
>
> https://www.graylog.com 
>
> Commercial Reg. (Registergericht): Amtsgericht Hamburg, HRB 125175
> Geschäftsführer: Lennart Koopmann (CEO)
>



[graylog2] Re: Enable HTTPS for web interface, Server currently unavailable

2016-05-21 Thread cypherbit


On Saturday, 21 May 2016 at 09:47:41 UTC+2, cyph...@gmail.com wrote:
>
> I performed sudo graylog-ctl enforce-ssl and then reconfigure on a 2.0 
> OVA and am now getting Server currently unavailable: We are experiencing 
> problems connecting to the Graylog server running on http://x.x.x.x:12900/. 
> Please verify that the server is healthy and working correctly.
>
> How can I reverse the situation before running enforce-ssl, or resolve the 
> issue?
>


I actually noticed that I can log in to Graylog using Google Chrome if I 
load unsafe scripts. I haven't found a way to do so in IE. After logging in I 
get: "The Elasticsearch cluster state is RED which means shards are 
unassigned. This usually indicates a crashed and corrupt cluster and needs to 
be investigated. Graylog will write into the local disk journal."



[graylog2] Enable HTTPS for web interface, Server currently unavailable

2016-05-21 Thread cypherbit
I performed sudo graylog-ctl enforce-ssl and then reconfigure on a 2.0 
OVA and am now getting Server currently unavailable: We are experiencing 
problems connecting to the Graylog server running on http://x.x.x.x:12900/. 
Please verify that the server is healthy and working correctly.

How can I reverse the situation before running enforce-ssl, or resolve the 
issue?
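
A possible way back, assuming the 2.x OVA keeps its graylog-ctl flags in 
/etc/graylog/graylog-settings.json and that the key is named enforce_ssl 
(both assumptions worth verifying against the graylog-ctl documentation):

sudo sed -i 's/"enforce_ssl": true/"enforce_ssl": false/' /etc/graylog/graylog-settings.json
sudo graylog-ctl reconfigure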



[graylog2] EventID 4720 not on Graylog

2016-05-20 Thread cypherbit
I've been testing Windows Event Forwarding and then sending the events using 
nxlog to Graylog. It works very well, but I'm not seeing EventID 4720 in 
Graylog. It appears under Forwarded Events, but I'm not sure whether nxlog or 
Graylog is to blame, and where/how to even begin to troubleshoot.

All other events, such as 4726 and 5136 (which occur at the same time as 
4720), are available in Graylog; it's just 4720 that's missing.
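
One way to narrow down which side drops the event is to query the collector's 
Forwarded Events channel directly on the WEF server: if 4720 shows up there, 
the Windows side is fine and nxlog/Graylog is where to look (a sketch; the 
channel name is assumed to be the standard ForwardedEvents):

wevtutil qe ForwardedEvents /q:"*[System[(EventID=4720)]]" /c:5 /f:text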

nxlog config is as follows (so I'm forwarding everything):

Moduledir %ROOT%\modules
CacheDir  %ROOT%\data
Pidfile   %ROOT%\data\nxlog.pid
SpoolDir  %ROOT%\data
LogFile   %ROOT%\data\nxlog.log

# NB: the angle-bracket block tags and the Query XML were garbled by the list
# archive; the tags below are restored, and the original query selected "*"
# (everything) from a channel whose name did not survive.
<Extension gelf>
    Module          xm_gelf
#   ShortMessageLength 42000
</Extension>

<Input in>
    Module          im_msvistalog
    # For windows 2003 and earlier use the following:
    #   Module      im_mseventlog
    Query           <QueryList> \
                        <Query Id="0"> \
                            <Select Path="..."> * </Select> \
                        </Query> \
                    </QueryList>
</Input>

<Output out>
    Module          om_udp
    Host            10.10.0.47
    Port            12201
    OutputType      GELF
</Output>

<Route 1>
    Path            in => out
</Route>




[graylog2] Re: Message truncated, WEF, nxlog, Graylog

2016-05-03 Thread cypherbit
Hello,

thank you so much for this; changing the System Locale was just what was 
needed.
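
For reference, the directive Jochen points at below lives in the xm_gelf 
extension block of the nxlog config (a sketch mirroring the config posted 
further down this archive; the length value is only illustrative):

<Extension gelf>
    Module              xm_gelf
    ShortMessageLength  32000
</Extension>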

On Saturday, 30 April 2016 at 13:35:32 UTC+2, Jochen Schalanda wrote:

> Hi,
>
> that might be caused by a setting in nxlog. See the description of the 
> ShortMessageLength directive in 
> https://nxlog.co/docs/nxlog-ce/nxlog-reference-manual.html#xm_gelf
>
> Cheers,
> Jochen
>
> On Friday, 29 April 2016 22:31:31 UTC+2, cyph...@gmail.com wrote:
>>
>> I'm using Windows Event Forwarding (WEF) to collect the events on one 
>> server and then forward them using nxlog to Graylog.
>>
>> The default input and extractors are used, but the problem is the messages 
>> are truncated (I'm not seeing the data that is needed):
>>
>> [screenshots not preserved in the archive]
>>
>> Can someone please assist me so I get the full message, keeping in mind 
>> I'm a newbie.
>>
>



[graylog2] Message truncated, WEF, nxlog, Graylog

2016-04-29 Thread cypherbit
I'm using Windows Event Forwarding (WEF) to collect the events on one server 
and then forward them using nxlog to Graylog.

The default input and extractors are used, but the problem is the messages 
are truncated (I'm not seeing the data that is needed):

[screenshots not preserved in the archive]

Can someone please assist me so I get the full message, keeping in mind I'm 
a newbie.
