Re: Upgrading to Elasticsearch 5.6

2018-10-03 Thread Vets, Laurens
Did you upgrade Metron?

On 02-Oct-18 23:50, Farrukh Naveed Anjum wrote:
> I upgraded Elasticsearch to 5.6 and also updated the templates.
>
> However, my Elasticsearch logs keep saying:
>
> java.lang.IllegalStateException: Received message from unsupported
> version: [2.0.0] minimal compatible version is: [5.0.0]
>
>
>
> and the Storm indexing topology says:
>
> NoNodeAvailableException[None of the configured nodes are available:
> [{#transport#-1}{127.0.0.1}{cogito/127.0.0.1:9300}]] at
> org.elasticsearch.client.transport.TransportClientNodesService.ensureNodes
>
>
>
>
> Any idea how to resolve this?
>
> -- 
> With Regards
> Farrukh Naveed Anjum
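
In case it helps anyone hitting this later: the "[2.0.0]" in that exception is the version of the *client* that connected, which suggests a 2.x transport client (i.e. a Metron build still bundling the Elasticsearch 2.x jars) is talking to the 5.6 cluster. A quick sanity check of what the cluster itself is running (a sketch; host and port are assumptions):

```shell
# Ask the cluster for its own version over the HTTP API (9200); the transport
# port the error mentions (9300) speaks a binary protocol, not HTTP.
curl -s http://localhost:9200/ | grep '"number"'
```

If the cluster reports 5.6.x, the mismatch is on the client side, i.e. the Metron upgrade is the thing to check.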


Re: Indexing topology keeps crashing

2018-09-14 Thread Vets, Laurens
For the record, here is how I 'fixed' this:

1. Stop Storm; it's crashing constantly anyway. Stop sending messages to
your Metron installation.

2. Export the messages from the Kafka topic that's crashing Storm so
that they're not lost. In my case that's the indexing topic. I have no
idea yet on how to re-ingest them.
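
For step 2, one way to dump the topic is the console consumer that ships with Kafka (a sketch; the output path and idle timeout are assumptions, and partition/ordering metadata is lost in this form):

```shell
KAFKA_BIN=/usr/hdp/current/kafka-broker/bin
# Read the topic from the beginning and exit after it has been idle for 60s;
# each message ends up as one line in the backup file.
"$KAFKA_BIN/kafka-console-consumer.sh" --zookeeper localhost:2181 \
  --topic indexing --from-beginning --timeout-ms 60000 \
  > /tmp/indexing-backup.txt
```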

3. Set the 'retention.ms' Kafka configuration setting to a small value,
then wait a minute. The command for this is
"/usr/hdp/current/kafka-broker/bin/kafka-configs.sh --zookeeper
localhost:2181 --entity-type topics --alter --add-config
retention.ms=1000 --entity-name indexing".

4. Make sure that the 'retention.ms' value is set:
"/usr/hdp/current/kafka-broker/bin/kafka-configs.sh --zookeeper
localhost:2181 --entity-type topics --describe --entity-name indexing"

5. Wait a couple of minutes; the Kafka log files should then be empty. You
can check this with "ls -altr /tmp/kafka-logs/indexing/" or "du -h
/tmp/kafka-logs/indexing/". Replace "/tmp/kafka-logs/" with the correct
path to your Kafka logs directory. In my case, there was approx. 11GB of
data in the indexing topic.

6. Restore the default retention time:
"/usr/hdp/current/kafka-broker/bin/kafka-configs.sh --zookeeper
localhost:2181 --entity-type topics --alter --delete-config retention.ms
--entity-name indexing".

(7. Try to re-index the lost data; I have not found a way to do this yet.)
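
For step 7, an untested idea: since the indexing topic carries plain JSON messages, the dump from step 2 could in principle be replayed with the console producer once the topology is healthy again (broker port and file path are assumptions; replaying will re-trigger whatever overwhelmed Storm, so it may be worth doing in batches):

```shell
KAFKA_BIN=/usr/hdp/current/kafka-broker/bin
# Feed the backed-up messages (one JSON document per line) back into the
# indexing topic; 6667 is the usual HDP broker port.
"$KAFKA_BIN/kafka-console-producer.sh" --broker-list localhost:6667 \
  --topic indexing < /tmp/indexing-backup.txt
```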

At this point, start Storm again. It shouldn't crash anymore as there's
no data to index.

Does this seem like a sane way to 'fix' these kinds of problems? I
suspect that I received a big burst of logs (Kibana seems to support
this) that Storm couldn't handle. Is there a way to handle big bursts
better? Or a rate control mechanism of some sort?
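
For what it's worth, Storm's built-in back-pressure knob is topology.max.spout.pending, which caps the number of un-acked tuples each spout keeps in flight, so during a burst the spout pauses instead of flooding the bolts. A sketch of setting a cluster-wide default (the path and value are assumptions, and the value needs tuning for your workload):

```shell
# Append a default to storm.yaml (HDP-ish path assumed); topologies pick it
# up on restart unless they override it in their own config.
echo 'topology.max.spout.pending: 500' >> /etc/storm/conf/storm.yaml
```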

On 13-Sep-18 11:39, Vets, Laurens wrote:
> 1. worker.childopts: -Xmx2048m
>
> 2. As in individual messages? Just small(-ish) JSON messages. A few KBytes?
>
> On 13-Sep-18 11:21, Casey Stella wrote:
>> Two questions:
>> 1. How much memory are you giving the workers for the indexing topology?
>> 2. how large are the messages you're sending through?
>>
>> On Thu, Sep 13, 2018 at 2:00 PM Vets, Laurens <laur...@daemon.be> wrote:
>>
>> Hello list,
>>
>> I've installed OS updates on my Metron 0.4.2 yesterday, restarted all
>> nodes and now my indexing topology keeps crashing.
>>
>> This is what I see in the Storm UI for the indexing topology:
>>
>> Topology stats:
>> 10m 0s    1304380    1953520    12499.833    1320   
>> 3h 0m 0s    1304380    1953520    12499.833    1320   
>> 1d 0h 0m 0s    1304380    1953520    12499.833    1320   
>> All time    1304380    1953520    12499.833    1320
>>
>> Spouts:
>> kafkaSpout    1    1    1299940    1949080    12499.833    1320   
>> 0   
>> metron3    6702    java.lang.OutOfMemoryError: GC overhead limit
>> exceeded at java.lang.Long.valueOf(Long.java:840) at
>> 
>> org.apache.storm.kafka.spout.KafkaSpoutRetryExponentialBackoff$RetryEntryTimeStampComparator.compar
>>
>> Bolts:
>> hdfsIndexingBolt    1    1    1800    1800    0.278    7.022   
>> 1820   
>> 38.633    1800    0    metron3    6702   
>> java.lang.NullPointerException
>> at
>> org.apache.metron.writer.hdfs.SourceHandler.handle(SourceHandler.java:80)
>> at org.apache.metron.writer.hdfs.HdfsWriter.write(HdfsWriter.java:113)
>> at org.apache.metr    Thur, 13 Sep 2018 07:35:02
>> indexingBolt    1    1    1320    1320    0.217    7.662    1300   
>> 47.815    1300    0    metron3    6702   
>> java.lang.OutOfMemoryError: GC
>> overhead limit exceeded at
>> java.util.Arrays.copyOfRange(Arrays.java:3664) at
>> java.lang.String.(String.java:207) at
>> org.json.simple.parser.Yylex.yytext(Yylex.jav    Thur, 13 Sep 2018
>> 07:37:33
>>
>> When I check the Kafka topic, I can see that there's at least 3
>> million
>> messages in the kafka indexing topic... I _suspect_ that the indexing
>> topology tries to write those but fails, restarts, tries to write,
>> fails, etc... Metron is currently not ingesting any additional
>> messages,
>> but also can't seem to index the current ones...
>>
>> Any idea on how to proceed?
>>



Re: Indexing topology keeps crashing

2018-09-13 Thread Vets, Laurens
1. worker.childopts: -Xmx2048m

2. As in individual messages? Just small(-ish) JSON messages. A few KBytes?

On 13-Sep-18 11:21, Casey Stella wrote:
> Two questions:
> 1. How much memory are you giving the workers for the indexing topology?
> 2. how large are the messages you're sending through?
>
> On Thu, Sep 13, 2018 at 2:00 PM Vets, Laurens <laur...@daemon.be> wrote:
>
> Hello list,
>
> I've installed OS updates on my Metron 0.4.2 yesterday, restarted all
> nodes and now my indexing topology keeps crashing.
>
> This is what I see in the Storm UI for the indexing topology:
>
> Topology stats:
> 10m 0s    1304380    1953520    12499.833    1320   
> 3h 0m 0s    1304380    1953520    12499.833    1320   
> 1d 0h 0m 0s    1304380    1953520    12499.833    1320   
> All time    1304380    1953520    12499.833    1320
>
> Spouts:
> kafkaSpout    1    1    1299940    1949080    12499.833    1320   
> 0   
> metron3    6702    java.lang.OutOfMemoryError: GC overhead limit
> exceeded at java.lang.Long.valueOf(Long.java:840) at
> 
> org.apache.storm.kafka.spout.KafkaSpoutRetryExponentialBackoff$RetryEntryTimeStampComparator.compar
>
> Bolts:
> hdfsIndexingBolt    1    1    1800    1800    0.278    7.022   
> 1820   
> 38.633    1800    0    metron3    6702   
> java.lang.NullPointerException
> at
> org.apache.metron.writer.hdfs.SourceHandler.handle(SourceHandler.java:80)
> at org.apache.metron.writer.hdfs.HdfsWriter.write(HdfsWriter.java:113)
> at org.apache.metr    Thur, 13 Sep 2018 07:35:02
> indexingBolt    1    1    1320    1320    0.217    7.662    1300   
> 47.815    1300    0    metron3    6702   
> java.lang.OutOfMemoryError: GC
> overhead limit exceeded at
> java.util.Arrays.copyOfRange(Arrays.java:3664) at
> java.lang.String.(String.java:207) at
> org.json.simple.parser.Yylex.yytext(Yylex.jav    Thur, 13 Sep 2018
> 07:37:33
>
> When I check the Kafka topic, I can see that there's at least 3
> million
> messages in the kafka indexing topic... I _suspect_ that the indexing
> topology tries to write those but fails, restarts, tries to write,
> fails, etc... Metron is currently not ingesting any additional
> messages,
> but also can't seem to index the current ones...
>
> Any idea on how to proceed?
>


Indexing topology keeps crashing

2018-09-13 Thread Vets, Laurens
Hello list,

I've installed OS updates on my Metron 0.4.2 yesterday, restarted all
nodes and now my indexing topology keeps crashing.

This is what I see in the Storm UI for the indexing topology:

Topology stats:
10m 0s    1304380    1953520    12499.833    1320   
3h 0m 0s    1304380    1953520    12499.833    1320   
1d 0h 0m 0s    1304380    1953520    12499.833    1320   
All time    1304380    1953520    12499.833    1320

Spouts:
kafkaSpout    1    1    1299940    1949080    12499.833    1320    0   
metron3    6702    java.lang.OutOfMemoryError: GC overhead limit
exceeded at java.lang.Long.valueOf(Long.java:840) at
org.apache.storm.kafka.spout.KafkaSpoutRetryExponentialBackoff$RetryEntryTimeStampComparator.compar

Bolts:
hdfsIndexingBolt    1    1    1800    1800    0.278    7.022    1820   
38.633    1800    0    metron3    6702    java.lang.NullPointerException
at
org.apache.metron.writer.hdfs.SourceHandler.handle(SourceHandler.java:80)
at org.apache.metron.writer.hdfs.HdfsWriter.write(HdfsWriter.java:113)
at org.apache.metr    Thur, 13 Sep 2018 07:35:02
indexingBolt    1    1    1320    1320    0.217    7.662    1300   
47.815    1300    0    metron3    6702    java.lang.OutOfMemoryError: GC
overhead limit exceeded at
java.util.Arrays.copyOfRange(Arrays.java:3664) at
java.lang.String.(String.java:207) at
org.json.simple.parser.Yylex.yytext(Yylex.jav    Thur, 13 Sep 2018 07:37:33

When I check the Kafka topic, I can see that there's at least 3 million
messages in the kafka indexing topic... I _suspect_ that the indexing
topology tries to write those but fails, restarts, tries to write,
fails, etc... Metron is currently not ingesting any additional messages,
but also can't seem to index the current ones...

Any idea on how to proceed?
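
(For reference, one way to see exactly how much is sitting in the topic is to diff the earliest and latest offsets with GetOffsetShell, which ships with Kafka; the broker port is an assumption:)

```shell
KAFKA_BIN=/usr/hdp/current/kafka-broker/bin
# --time -1 prints the latest offset per partition, --time -2 the earliest;
# the per-partition differences sum to the number of messages retained.
"$KAFKA_BIN/kafka-run-class.sh" kafka.tools.GetOffsetShell \
  --broker-list localhost:6667 --topic indexing --time -1
"$KAFKA_BIN/kafka-run-class.sh" kafka.tools.GetOffsetShell \
  --broker-list localhost:6667 --topic indexing --time -2
```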



Re: Good press for Metron!

2018-08-09 Thread Vets, Laurens
I was just reading this, see the IRC channel :)

On 09-Aug-18 08:21, Casey Stella wrote:
> https://www.darkreading.com/endpoint/oh-no-not-another-security-product/a/d-id/1332453



Re: Some Metron Alerts UI questions

2018-01-23 Thread Vets, Laurens

Thanks for the answers Simon!

On 22-Jan-18 10:05, Simon Elliston Ball wrote:

Hi Laurens,

A few quick answers inline…

Simon

On 20 Jan 2018, at 00:37, Laurens Vets wrote:


Hi list,

I have some general Alerts UI questions/comments/remarks, I hope you 
don't mind :) I'm using the UI that's part of Metron 0.4.2. These 
apply to my specific use case, so I might be completely wrong in how 
I use the UI…


Comment and feedback are always welcome!



- When you're talking about 'alerts', from what I can see in the UI, 
that's synonymous with just events in elasticsearch right? Wouldn't 
it make more sense to treat alerts as events where "is_alert" == True?


At present the search does not exclude non-alerts… it’s maybe a little 
odd to call it the alerts view, but right now it’s the only way to see 
everything, so this should probably separate out into an ‘everything’ 
hunting-focused view and an alerts-only view.


The reason I kinda like the current approach is that it’s good for 
picking up things that have become alerts because they’re in threat 
intel, for example, along with things clustered against them by 
something like the new TLSH functions, which makes it easier to 
combine known alerts with un-detected events in a meta alert.


- It seems that everything I do in the UI is only stored locally? See 
https://github.com/apache/metron/tree/master/metron-interface/metron-alerts. 
Can this be made persistent for multiple people?


Yep. A lot of the preferences, saved searches, column layouts etc. are 
stored in local storage by the browser right now. We need a REST 
endpoint and to figure out how to store them (against a user / against a 
group / globally??? thoughts?) server side. A lot of the mechanism to do 
that is in; it’s just not quite done yet because of those open 
questions, I expect.



- How can I change the contents of the "Filters" panel on the left of the UI?


You wait for https://github.com/apache/metron/pull/853 to land.


- How do I create a MetaAlert?


You can create a meta-alert from a grouped set of alerts: use the 
grouping buttons at the top and you’ll find a merge option. It’s a 
slightly odd process at the moment, true, but a button to create a 
meta-alert from all the selected, or all the visible, alerts on the 
results page might be a good addition. What do you think?


Very quick video of the current method here: https://youtu.be/JkFeNKTOd38


- What's the plan regarding notifying someone when alerts triggers?


Currently there is no external notification, but the answer here would 
likely be to consume the indexing topic in Kafka and integrate with an 
enterprise alarm or monitoring system. (Alerting and alarms is a 
massive topic which probably deserves its own project beyond Metron, 
and I’ve seen people use all sorts of things for this, usually some 
big enterprisey thing mandated by IT.)
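
A rough sketch of that approach, consuming the indexing topic and forwarding anything flagged as an alert (jq, the broker port, the is_alert representation, and the webhook URL are all assumptions here, not Metron APIs):

```shell
KAFKA_BIN=/usr/hdp/current/kafka-broker/bin
"$KAFKA_BIN/kafka-console-consumer.sh" --bootstrap-server localhost:6667 \
  --topic indexing |
while read -r msg; do
  # Only forward messages Metron has marked as alerts (the field may be a
  # bare boolean in your data; adjust the jq test accordingly).
  if echo "$msg" | jq -e 'select(.is_alert == "true")' > /dev/null; then
    curl -s -X POST -H 'Content-Type: application/json' \
      -d "$msg" http://alerting.example.internal/webhook
  fi
done
```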