Re: Drop events from Metron parser

2020-05-11 Thread Otto Fowler
 NiFi’s Syslog 5424 support is based on the same library that Metron uses.

On May 5, 2020 at 22:02:11, Dima Kovalyov (dimdr...@gmail.com) wrote:

Hello Tom,

Exactly, NiFi has a range of ingest-capable processors, including a Syslog
server.

- Dima

On Tue, May 5, 2020, 20:00 Yerex, Tom  wrote:

> Hi Dima,
>
> Thanks for this. I have some knowledge of Nifi, but I'm still early on the
> learning curve.
>
> Our current implementation plan is to use a collection of pre-existing log
> servers and feed that into a Kafka cluster. In the model you describe would
> that mean inserting NiFi between the log servers and Kafka?
>
> Cheers,
>
> Tom.
>
>
> On 2020-05-05 17:25:01-07:00 Dima Kovalyov wrote:
>
> I would drop them on ingestion using NiFi's RouteOnContent.
>
> On Tue, May 5, 2020, 17:53 Yerex, Tom  wrote:
>
>> Good afternoon,
>>
>> Our incoming data is not always perfect, in some cases events are simply
>> missing fields. We would like a way to drop events when particular fields
>> are empty (or have values we don't care about).
>>
>> One way we thought to do this might be to write a custom Stellar
>> function. Does anyone know of another solution?
>>
>> Thank you,
>>
>> Tom.
>>
> - Dima
>
>
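As a follow-up to the Stellar question above: Metron parsers can also drop messages at parse time with a Stellar message filter, so a custom function may not be needed for simple cases. A hedged sketch; the parser class, topic, and field name `src_field` are placeholders, and `filterClassName`/`filter.query` are the keys described in the Metron parser docs:

```json
{
  "parserClassName": "org.apache.metron.parsers.GrokParser",
  "sensorTopic": "yoursensor",
  "filterClassName": "STELLAR",
  "parserConfig": {
    "filter.query": "exists(src_field) && LENGTH(TRIM(src_field)) > 0"
  }
}
```

Messages for which the query evaluates to false would be dropped before they reach enrichment.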


Re: conn.log unable to parse in apache metron

2020-03-06 Thread Otto Fowler
I’m confused at what you are doing here. What parser are you using? grok or
bro?

The bro parser works on bro JSON output. Your logs don’t look like they are
output as JSON; I would guess that is why it is failing.
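If the goal is to keep the Bro parser, the usual fix is to have bro emit JSON logs instead of TSV. A sketch for the site policy file (`local.zeek` / `local.bro`); the load path can vary by version, and the `redef` is the portable form:

```zeek
# Switch Zeek/Bro ASCII logs to JSON output so BasicBroParser can read them.
@load tuning/json-logs
# Equivalently:
# redef LogAscii::use_json = T;
```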




On March 5, 2020 at 08:30:58, updates on tube (abrahamfik...@gmail.com)
wrote:

##sample log or input log


1583402931.976871 CCBAYr2KnmpaWDtxO2 xx.xx.xx.xx 65184 xx.xx.xx.xx 4200 tcp
- 1.855212 503 0 SH T T 0 ScADaF 5 715 2 80 -
1583402933.241900 C6C59e3TdNbeTTBZ7j xx.xx.xx.xx 16020 xx.xx.xx.xx 34032
tcp - 0.015988 2981 0 OTH T T 0 HcADC 6 352 0 0 -

##grok pattern that i used ( https://grokconstructor.appspot.com/groklib/bro)


BRO_CONN
%{NUMBER:ts}\t%{NOTSPACE:uid}\t%{IP:orig_h}\t%{INT:orig_p}\t%{IP:resp_h}\t%{INT:resp_p}\t%{WORD:proto}\t%{GREEDYDATA:service}\t%{NUMBER:duration}\t%{NUMBER:orig_bytes}\t%{NUMBER:resp_bytes}\t%{GREEDYDATA:conn_state}\t%{GREEDYDATA:local_orig}\t%{GREEDYDATA:missed_bytes}\t%{GREEDYDATA:history}\t%{GREEDYDATA:orig_pkts}\t%{GREEDYDATA:orig_ip_bytes}\t%{GREEDYDATA:resp_pkts}\t%{GREEDYDATA:resp_ip_bytes}\t%{GREEDYDATA:tunnel_parents}
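One sanity check worth running on the pattern above: the sample lines appear to carry 21 whitespace-separated columns, while the pattern captures only 20 fields; a default conn.log also has a `local_resp` column between `local_orig` and `missed_bytes`. A sketch illustrating the count (field names mirror the pattern, tab separation and documentation IP addresses assumed in place of the masked `xx.xx.xx.xx`):

```python
# Mirror the grok pattern's field list against a sample conn.log record.
# "local_resp" is the column the pattern does not capture (an assumption
# based on the default Zeek conn.log schema).
FIELDS = [
    "ts", "uid", "orig_h", "orig_p", "resp_h", "resp_p", "proto", "service",
    "duration", "orig_bytes", "resp_bytes", "conn_state", "local_orig",
    "local_resp", "missed_bytes", "history", "orig_pkts", "orig_ip_bytes",
    "resp_pkts", "resp_ip_bytes", "tunnel_parents",
]

def parse_conn_line(line: str) -> dict:
    """Split one TSV conn.log record into named fields."""
    values = line.rstrip("\n").split("\t")
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} columns, got {len(values)}")
    return dict(zip(FIELDS, values))

# Sample record modeled on the first log line in the thread.
sample = "\t".join([
    "1583402931.976871", "CCBAYr2KnmpaWDtxO2", "10.0.0.1", "65184",
    "10.0.0.2", "4200", "tcp", "-", "1.855212", "503", "0", "SH", "T", "T",
    "0", "ScADaF", "5", "715", "2", "80", "-",
])
record = parse_conn_line(sample)
```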



##the error shown in metron-rest.log
Caused by: java.lang.IllegalStateException: Unable to parse Message:
1583402939.738024 CTGU7D24R7NL5eTGef xx.xx.xx.xx 50998 xx.xx.xx.xx 6188 tcp
- - - - OTH T T 0C 0 0 0 0 -
at
org.apache.metron.parsers.bro.BasicBroParser.parse(BasicBroParser.java:145)
~[metron-parsing-storm-0.7.1.1.9.1.0-6-uber.jar:?]
at
org.apache.metron.parsers.interfaces.MessageParser.parseOptional(MessageParser.java:54)
~[metron-parsing-storm-0.7.1.1.9.1.0-6-uber.jar:?]
at
org.apache.metron.parsers.interfaces.MessageParser.parseOptionalResult(MessageParser.java:67)
~[metron-parsing-storm-0.7.1.1.9.1.0-6-uber.jar:?]
at
org.apache.metron.rest.service.impl.SensorParserConfigServiceImpl.parseMessage(SensorParserConfigServiceImpl.java:155)
~[metron-rest-0.7.1.1.9.1.0-6.jar:?]
... 94 more
Caused by: org.json.simple.parser.ParseException
at org.json.simple.parser.Yylex.yylex(Yylex.java:610)
~[metron-rest-0.7.1.1.9.1.0-6.jar:?]
at org.json.simple.parser.JSONParser.nextToken(JSONParser.java:269)
~[metron-rest-0.7.1.1.9.1.0-6.jar:?]
at org.json.simple.parser.JSONParser.parse(JSONParser.java:118)
~[metron-rest-0.7.1.1.9.1.0-6.jar:?]
at org.json.simple.parser.JSONParser.parse(JSONParser.java:81)
~[metron-rest-0.7.1.1.9.1.0-6.jar:?]
at org.json.simple.parser.JSONParser.parse(JSONParser.java:75)
~[metron-rest-0.7.1.1.9.1.0-6.jar:?]
at org.apache.metron.parsers.bro.JSONCleaner.clean(JSONCleaner.java:49)
~[metron-parsing-storm-0.7.1.1.9.1.0-6-uber.jar:?]
at
org.apache.metron.parsers.bro.BasicBroParser.parse(BasicBroParser.java:68)
~[metron-parsing-storm-0.7.1.1.9.1.0-6-uber.jar:?]
at
org.apache.metron.parsers.interfaces.MessageParser.parseOptional(MessageParser.java:54)
~[metron-parsing-storm-0.7.1.1.9.1.0-6-uber.jar:?]
at
org.apache.metron.parsers.interfaces.MessageParser.parseOptionalResult(MessageParser.java:67)
~[metron-parsing-storm-0.7.1.1.9.1.0-6-uber.jar:?]
#i need your help as always.


Re: linux-syslog(centos 7) parsing in apache metron error

2020-02-27 Thread Otto Fowler
 org.apache.metron.parsers.syslog.Syslog3164Parser
is the classname.

You have confused me with your description.

1st. The exception you show points to you using some version of a syslog
parser.

2nd. You only talk about using grok after that.

I have tried your sample string with the above parser and it works.
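For reference, a minimal sensor config using that class instead of grok might look like this (a sketch; the topic name is taken from the thread, anything else your deployment needs is omitted):

```json
{
  "parserClassName": "org.apache.metron.parsers.syslog.Syslog3164Parser",
  "sensorTopic": "linuxsyslog"
}
```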

On February 27, 2020 at 09:19:08, updates on tube (abrahamfik...@gmail.com)
wrote:

but i can't get the parser?

On 2020/02/27 12:13:35, Otto Fowler wrote:
> Parsing these messages works with the Syslog3164Parser. Maybe you could
> use that.
>
> On February 27, 2020 at 02:03:50, updates on tube (abrahamfik...@gmail.com)
> wrote:
>
> ## I really appreciate your quick responses. Please tell us the
> valid grok patterns for such kind of log.
> # this is my parser configuration
> {
> "parserClassName": "org.apache.metron.parsers.GrokParser",
> "sensorTopic": "linuxsyslog",
> "parserConfig": {
> "grokPath": "/apps/metron/patterns/linuxsyslog",
> "patternLabel": "SYSLOGBASE2",
> "timestampField": "timestamp"
> },
>
> "fieldTransformations" : [
>
> {
>
> "transformation" : "STELLAR"
> ,"output" : [ "full_hostname", "domain_without_subdomains" ]
> ,"config" : {
> "full_hostname" : "URL_TO_HOST(url)"
> ,"domain_without_subdomains" : "DOMAIN_REMOVE_SUBDOMAINS(full_hostname)"
> }
> }
> ]
>
> }
>
> ## this is my grok pattern
> (?:%{SYSLOGTIMESTAMP:timestamp}|%{TIMESTAMP_ISO8601:timestamp8601})
> (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}
>
> ##this is the sample log that causes the error
> Feb 16 08:00:23
> myhostname NetworkManager[1686]:  [1581858023.4306] dhcp4 (eth0):
> address xxx.xxx.xxx.xxx
> Feb 16 08:00:23 myhostname dhclient[1710]: DHCPREQUEST on eth0 to
> xxx.xxx.xxx.xxx port 67 (xid=0x170e0b99)
>
> ##this is the error message found in kibana
> Syntax error @ 1:0 no viable alternative at input 'F'
>
> ## detail error found in kibana shows as follows
> com.github.palindromicity.syslog.dsl.ParseException: Syntax error @ 1:0 no
> viable alternative at input 'F'
> at com.github.palindromicity.syslog.dsl.DefaultErrorListener.syntaxError(DefaultErrorListener.java:33)
> at org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:65)
> at org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:558)
> at org.antlr.v4.runtime.DefaultErrorStrategy.reportNoViableAlternative(DefaultErrorStrategy.java:310)
> at org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:147)
> at com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.header(Rfc5424Parser.java:412)
> at com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.syslog_msg(Rfc5424Parser.java:273)
> at com.github.palindromicity.syslog.Rfc5424SyslogParser.parseLine(Rfc5424SyslogParser.java:66)
> at com.github.palindromicity.syslog.AbstractSyslogParser.lambda$parseLines$0(AbstractSyslogParser.java:144)
> at java.util.ArrayList.forEach(ArrayList.java:1249)
> at com.github.palindromicity.syslog.AbstractSyslogParser.parseLines(AbstractSyslogParser.java:142)
> at org.apache.metron.parsers.syslog.BaseSyslogParser.parseOptionalResult(BaseSyslogParser.java:116)
> at org.apache.metron.parsers.ParserRunnerImpl.execute(ParserRunnerImpl.java:144)
> at org.apache.metron.parsers.bolt.ParserBolt.execute(ParserBolt.java:257)
> at org.apache.storm.daemon.executor$fn__10195$tuple_action_fn__10197.invoke(executor.clj:735)
> at org.apache.storm.daemon.executor$mk_task_receiver$fn__10114.invoke(executor.clj:466)
> at org.apache.storm.disruptor$clojure_handler$reify__4137.onEvent(disruptor.clj:40)
> at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472)
> at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451)
> at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
> at org.apache.storm.daemon.executor$fn__10195$fn__10208$fn__10263.invoke(executor.clj:855)
> at org.apache.storm.util$async_loop$fn__1221.invoke(

Re: linux-syslog(centos 7) parsing in apache metron error

2020-02-27 Thread Otto Fowler
 Parsing these messages works with the Syslog3164Parser. Maybe you could
use that.

On February 27, 2020 at 02:03:50, updates on tube (abrahamfik...@gmail.com)
wrote:


# I really appreciate your quick responses. Please tell us the
valid grok patterns for such kind of log.
# this is my parser configuration
{
"parserClassName": "org.apache.metron.parsers.GrokParser",
"sensorTopic": "linuxsyslog",
"parserConfig": {
"grokPath": "/apps/metron/patterns/linuxsyslog",
"patternLabel": "SYSLOGBASE2",
"timestampField": "timestamp"
},

"fieldTransformations" : [

{

"transformation" : "STELLAR"
,"output" : [ "full_hostname", "domain_without_subdomains" ]
,"config" : {
"full_hostname" : "URL_TO_HOST(url)"
,"domain_without_subdomains" : "DOMAIN_REMOVE_SUBDOMAINS(full_hostname)"
}
}
]

}

# this is my grok pattern
(?:%{SYSLOGTIMESTAMP:timestamp}|%{TIMESTAMP_ISO8601:timestamp8601})
(?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}


#this is the sample log that causes the error
Feb 16 08:00:23
myhostname NetworkManager[1686]:  [1581858023.4306] dhcp4 (eth0):
address xxx.xxx.xxx.xxx
Feb 16 08:00:23 myhostname dhclient[1710]: DHCPREQUEST on eth0 to
xxx.xxx.xxx.xxx port 67 (xid=0x170e0b99)


#this is the error message found in kibana
Syntax error @ 1:0 no viable alternative at input 'F'

# detail error found in kibana shows as follows
com.github.palindromicity.syslog.dsl.ParseException: Syntax error @ 1:0 no
viable alternative at input 'F'
at
com.github.palindromicity.syslog.dsl.DefaultErrorListener.syntaxError(DefaultErrorListener.java:33)

at
org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:65)

at org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:558)
at
org.antlr.v4.runtime.DefaultErrorStrategy.reportNoViableAlternative(DefaultErrorStrategy.java:310)

at
org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:147)

at
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.header(Rfc5424Parser.java:412)

at
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.syslog_msg(Rfc5424Parser.java:273)

at
com.github.palindromicity.syslog.Rfc5424SyslogParser.parseLine(Rfc5424SyslogParser.java:66)

at
com.github.palindromicity.syslog.AbstractSyslogParser.lambda$parseLines$0(AbstractSyslogParser.java:144)

at java.util.ArrayList.forEach(ArrayList.java:1249)
at
com.github.palindromicity.syslog.AbstractSyslogParser.parseLines(AbstractSyslogParser.java:142)

at
org.apache.metron.parsers.syslog.BaseSyslogParser.parseOptionalResult(BaseSyslogParser.java:116)

at
org.apache.metron.parsers.ParserRunnerImpl.execute(ParserRunnerImpl.java:144)

at org.apache.metron.parsers.bolt.ParserBolt.execute(ParserBolt.java:257)
at
org.apache.storm.daemon.executor$fn__10195$tuple_action_fn__10197.invoke(executor.clj:735)

at
org.apache.storm.daemon.executor$mk_task_receiver$fn__10114.invoke(executor.clj:466)

at
org.apache.storm.disruptor$clojure_handler$reify__4137.onEvent(disruptor.clj:40)

at
org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472)

at
org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451)

at
org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)

at
org.apache.storm.daemon.executor$fn__10195$fn__10208$fn__10263.invoke(executor.clj:855)

at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:484)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.antlr.v4.runtime.NoViableAltException
at
org.antlr.v4.runtime.atn.ParserATNSimulator.noViableAlt(ParserATNSimulator.java:1894)

at
org.antlr.v4.runtime.atn.ParserATNSimulator.execATN(ParserATNSimulator.java:498)

at
org.antlr.v4.runtime.atn.ParserATNSimulator.adaptivePredict(ParserATNSimulator.java:424)

at
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.header(Rfc5424Parser.java:373)

... 18 more

On 2020/02/24 19:31:36, Michael Miklavcic wrote:
> That's how we route errors. Looks like the syslog parser had trouble with
> one of your syslog messages
>
> On Mon, Feb 24, 2020, 5:41 AM updates on tube <abrahamfik...@gmail.com>
> wrote:
>
> > i get such error on kibana dashboard, no error in storm
> > com.github.palindromicity.syslog.dsl.ParseException: Syntax error @ 1:0
no
> > viable alternative at input 'F'
> > at
> >
com.github.palindromicity.syslog.dsl.DefaultErrorListener.syntaxError(DefaultErrorListener.java:33)

> > at
> >
org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:65)

> > at
> > org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:558)
> > at
> >
org.antlr.v4.runtime.DefaultErrorStrategy.reportNoViableAlternative(DefaultErrorStrategy.java:310)

> > at
> >
org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:147)

> > at
> >
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.header(Rfc5424Parser.java:412)

> > at

Re: linux-syslog(centos 7) parsing in apache metron error

2020-02-26 Thread Otto Fowler
 Can you provide an example of a syslog line that fails?  Clean of personal
data of course.
Also what is your parser configuration?

On February 25, 2020 at 01:05:00, updates on tube (abrahamfik...@gmail.com)
wrote:



On 2020/02/24 19:31:36, Michael Miklavcic wrote:
> That's how we route errors. Looks like the syslog parser had trouble with
> one of your syslog messages
>
> On Mon, Feb 24, 2020, 5:41 AM updates on tube <abrahamfik...@gmail.com>
> wrote:
>
> > i get such error on kibana dashboard, no error in storm
> > com.github.palindromicity.syslog.dsl.ParseException: Syntax error @ 1:0
no
> > viable alternative at input 'F'
> > at
> >
com.github.palindromicity.syslog.dsl.DefaultErrorListener.syntaxError(DefaultErrorListener.java:33)

> > at
> >
org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:65)

> > at
> > org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:558)
> > at
> >
org.antlr.v4.runtime.DefaultErrorStrategy.reportNoViableAlternative(DefaultErrorStrategy.java:310)

> > at
> >
org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:147)

> > at
> >
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.header(Rfc5424Parser.java:412)

> > at
> >
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.syslog_msg(Rfc5424Parser.java:273)

> > at
> >
com.github.palindromicity.syslog.Rfc5424SyslogParser.parseLine(Rfc5424SyslogParser.java:66)

> > at
> >
com.github.palindromicity.syslog.AbstractSyslogParser.lambda$parseLines$0(AbstractSyslogParser.java:144)

> > at java.util.ArrayList.forEach(ArrayList.java:1249)
> > at
> >
com.github.palindromicity.syslog.AbstractSyslogParser.parseLines(AbstractSyslogParser.java:142)

> > at
> >
org.apache.metron.parsers.syslog.BaseSyslogParser.parseOptionalResult(BaseSyslogParser.java:116)

> > at
> >
org.apache.metron.parsers.ParserRunnerImpl.execute(ParserRunnerImpl.java:144)

> > at
> > org.apache.metron.parsers.bolt.ParserBolt.execute(ParserBolt.java:257)
> > at
> >
org.apache.storm.daemon.executor$fn__10195$tuple_action_fn__10197.invoke(executor.clj:735)

> > at
> >
org.apache.storm.daemon.executor$mk_task_receiver$fn__10114.invoke(executor.clj:466)

> > at
> >
org.apache.storm.disruptor$clojure_handler$reify__4137.onEvent(disruptor.clj:40)

> > at
> >
org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472)

> > at
> >
org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451)

> > at
> >
org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)

> > at
> >
org.apache.storm.daemon.executor$fn__10195$fn__10208$fn__10263.invoke(executor.clj:855)

> > at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:484)
> > at clojure.lang.AFn.run(AFn.java:22)
> > at java.lang.Thread.run(Thread.java:745)
> > Caused by: org.antlr.v4.runtime.NoViableAltException
> > at
> >
org.antlr.v4.runtime.atn.ParserATNSimulator.noViableAlt(ParserATNSimulator.java:1894)

> > at
> >
org.antlr.v4.runtime.atn.ParserATNSimulator.execATN(ParserATNSimulator.java:498)

> > at
> >
org.antlr.v4.runtime.atn.ParserATNSimulator.adaptivePredict(ParserATNSimulator.java:424)

> > at
> >
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.header(Rfc5424Parser.java:373)

> > ... 18 more
> >
> >
> >
> >
okay so my log file looks like this, found in /var/log/messages (CentOS 7)
Feb 25 00:54:55 master3 dbus[1615]: [system] Successfully activated service
'org.freedesktop.nm_dispatcher'
Feb 25 00:54:55 master3 systemd: Started Network Manager Script Dispatcher
Service.
Feb 25 00:54:55 master3 nm-dispatcher: req:1 'dhcp4-change' [eth0]: new
request (5 scripts)
Feb 25 00:54:55 master3 nm-dispatcher: req:1 'dhcp4-change' [eth0]: start
running ordered scripts...
Feb 25 00:55:23 master3 su: (to root) root on none
Feb 25 00:55:23 master3 systemd: Started Session c212834 of user root.
Feb 25 00:55:28 master3 su: (to kibana) root on none
Feb 25 00:55:28 master3 systemd: Created slice User Slice of kibana.
Feb 25 00:55:28 master3 systemd: Started Session c212835 of user kibana.
Feb 25 00:55:28 master3 /etc/init.d/kibana: kibana is running
Feb 25 00:55:28 master3 systemd: Removed slice User Slice of kibana.
Feb 25 00:55:39 master3 su: (to metron) root on none
and i use the following grok pattern, which works in http://grokdebug.herokuapp.com/
but not in metron:


(?:%{SYSLOGTIMESTAMP:timestamp}|%{TIMESTAMP_ISO8601:timestamp8601})
(?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}
what should I do?


Re: zeek metron-bro-plugin-kafka plugin build errors

2020-02-11 Thread Otto Fowler
What version of bro are you using?




On February 10, 2020 at 18:20:11, Beneduce, Kristen (kben...@sandia.gov)
wrote:

Hello,



I’m trying to configure Metron bro plugin by following instructions here:
https://github.com/apache/metron-bro-plugin-kafka/.



I’m unable to build the plugin. I built Zeek from source and tried using
‘zkg’ (the Zeek package manager) as well as manually installing the plugin. During
manual installation the flag in “./configure —bro-dist=…” is not recognized.



Any assistance you can provide much appreciated!



Thank you,

Kris







user@host:~/zeek/librdkafka-0.11.5# zkg install
apache/metron-bro-plugin-kafka --version master

The following packages will be INSTALLED:

  zeek/apache/metron-bro-plugin-kafka (master)



Verify the following REQUIRED external dependencies:

(Ensure their installation on all relevant systems before proceeding):

  from zeek/apache/metron-bro-plugin-kafka (master):

librdkafka ~0.11.5



Proceed? [Y/n] Y

zeek/apache/metron-bro-plugin-kafka asks for LIBRDKAFKA_ROOT (Path to
librdkafka installation tree) ? [/root/zeek/librdkafka-0.11.5]
/usr/local/lib

Saved answers to config file: /root/.zkg/config

*Running unit tests for "zeek/apache/metron-bro-plugin-kafka"*

*error: failed to run tests for zeek/apache/metron-bro-plugin-kafka:
package build_command failed, see log in
/root/.zkg/logs/metron-bro-plugin-kafka-build.log*

Proceed to install anyway? [N/y]



user@host:~/zeek/librdkafka-0.11.5# less
/root/.zkg/logs/metron-bro-plugin-kafka-build.log

=== STDERR ===

=== STDOUT ===

*Cannot determine Bro source directory, use --bro-dist=DIR.*

*/root/.zkg/logs/metron-bro-plugin-kafka-build.log (END)*
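One thing worth checking in the manual build: the build log itself says the script accepts `--bro-dist=DIR`, while the command quoted above shows an em-dash (—) rather than two ASCII hyphens, which a shell would pass through unrecognized. A sketch with placeholder paths (not a verified recipe; checkouts made after the Bro-to-Zeek rename may expect `--zeek-dist` instead):

```
cd metron-bro-plugin-kafka
# Point at the Zeek/Bro *source* tree (not the install prefix):
./configure --bro-dist=$HOME/zeek    # or: --zeek-dist=$HOME/zeek
make
sudo make install
```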


Re: Mysterious Metron UI screenshot

2020-01-08 Thread Otto Fowler
I added you to slack, look out for the invite




On January 8, 2020 at 16:07:28, Dima Kovalyov (dimdr...@gmail.com) wrote:

Hello, Metron community,

Here are two screenshots from Slideshare:
https://www.slideshare.net/hortonworks/combating-phishing-attacks-how-big-data-helps-detect-impersonators
Screen1

and
Screen2


It looks like the Alerts UI, but it was early in development at the time (the screens
are dated 2016). Is it the Metron Management UI + Kibana iframes?
Can anyone shed more light on how these screens were created?

Thank you.

p.s. Can you please invite me to the slack channel?

- Dima


Re: streaming rsyslog metron using asa parser

2019-12-27 Thread Otto Fowler
Please look at this recent explanation:
http://mail-archives.apache.org/mod_mbox/metron-user/201912.mbox/%3ccamccojq8qwnomevvyih_xwq_c8hgbvbvhynzr6hqcvez4mr...@mail.gmail.com%3e




On December 27, 2019 at 00:33:31, updates on tube (abrahamfik...@gmail.com)
wrote:


On 2019/12/26 14:19:09, Otto Fowler  wrote:
> You are saying different things that are confusing me.
> You seemed to be saying that you couldn’t parse, but now you are saying
you
> can parse, and see things in kibana but they are not in the alert ui?

> yes, based on what you suggested before, i can push the sample log from (
https://github.com/apache/metron/blob/master/metron-platform/metron-integration-test/src/main/sample/data/asa/raw/asa_raw)
to the kafka topic and storm parses it and i see it in kibana ui; but i can't
see it on the metron alert ui, that is the problem. parsing is going well.
>
> On December 25, 2019 at 10:47:54, updates on tube (abrahamfik...@gmail.com)

> wrote:
>
> On 2019/12/23 11:25:45, Otto Fowler  wrote:
> > That doesn’t look like ASA data.
> >
>
https://github.com/apache/metron/blob/master/metron-platform/metron-integration-test/src/main/sample/data/asa/raw/asa_raw
> >
> > Are you trying to do regular syslog, or ASA.
> >
> >
> >
> >
> > On December 23, 2019 at 01:57:38, updates on tube (
abrahamfik...@gmail.com)
>
> > wrote:
> >
> > i was trying to stream rsyslog log data to apache metron using asa
> parser.
> > the log look like down below
> >
> > 2019-12-20T07:06:41-05:00 ab TESTING: Fri 20 Dec 2019 07:06:41 AM EST
> > the log 2019-12-20T07:06:41-05:00 ab rsyslogd: action
> > 'action-13-builtin:omfwd' resumed (module 'builtin:omfwd') [v8.1911.0
try
> > https://www.rsyslog.com/e/2359 ]
> > 2019-12-20T07:08:04-05:00 ab TESTING: Fri 20 Dec 2019 07:08:04 AM EST
> > 2019-12-20T07:08:05-05:00 ab TESTING: Fri 20 Dec 2019 07:08:05 AM EST
> > 2019-12-20T07:08:06-05:00 ab TESTING: Fri 20 Dec 2019 07:08:06 AM EST
> > 2019-12-20T07:08:06-05:00 ab TESTING: Fri 20 Dec 2019 07:08:06 AM EST
> > 2019-12-20T07:08:08-05:00 ab TESTING: Fri 20 Dec 2019 07:08:08 AM EST
> > 2019-12-20T07:08:08-05:00 ab TESTING: Fri 20 Dec 2019 07:08:08 AM EST
> > 2019-12-20T07:08:09-05:00 ab TESTING: Fri 20 Dec 2019 07:08:09 AM EST
> > 2019-12-20T07:08:09-05:00 ab TESTING: Fri 20 Dec 2019 07:08:09 AM EST
> > 2019-12-20T07:09:01-05:00 ab CRON[3174]: pam_unix(cron:session):
session
> > opened for user root by (uid=0)
> > 2019-12-20T07:09:01-05:00 ab CRON[3175]: (root) CMD ( [ -x
> > /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then
> > /usr/lib/php/sessionclean; fi)
> > 2019-12-20T07:09:01-05:00 ab CRON[3174]: pam_unix(cron:session):
session
> > closed for user root
> > 2019-12-20T07:09:01-05:00 ab systemd[1]: Starting Clean php session
> > files...
> > 2019-12-20T07:09:01-05:00 ab systemd[1]: phpsessionclean.service:
> > Succeeded.
> > 2019-12-20T07:09:01-05:00 ab systemd[1]: Started Clean php session
files.
> > 2019-12-20T07:10:04-05:00 ab TESTING: Fri 20 Dec 2019 07:10:04 AM EST
> > 2019-12-20T07:10:05-05:00 ab TESTING: Fri 20 Dec 2019 07:10:05 AM EST
> > 2019-12-20T07:10:05-05:00 ab TESTING: Fri 20 Dec 2019 07:10:05 AM EST
> > 2019-12-20T07:10:06-05:00 ab TESTING: Fri 20 Dec 2019 07:10:06 AM EST
> > 2019-12-20T07:10:07-05:00 ab TESTING: Fri 20 Dec 2019 07:10:07 AM EST
> > 2019-12-20T07:10:07-05:00 ab TESTING: Fri 20 Dec 2019 07:10:07 AM EST
> > 2019-12-20T07:10:08-05:00 ab TESTING: Fri 20 Dec 2019 07:10:08 AM EST
> > 2019-12-20T07:10:08-05:00 ab TESTING: Fri 20 Dec 2019 07:10:08 AM EST
> > 2019-12-20T07:10:09-05:00 ab TESTING: Fri 20 Dec 2019 07:10:09 AM EST
> > 2019-12-20T07:10:09-05:00 ab TESTING: Fri 20 Dec 2019 07:10:09 AM EST
> > 2019-12-20T07:10:10-05:00 ab TESTING: Fri 20 Dec 2019 07:10:10 AM EST
> > 2019-12-20T07:10:10-05:00 ab TESTING: Fri 20 Dec 2019 07:10:10 AM EST
> > 2019-12-20T07:10:10-05:00 ab TESTING: Fri 20 Dec 2019 07:10:10 AM EST
> > 2019-12-20T07:10:11-05:00 ab TESTING: Fri 20 Dec 2019 07:10:11 AM EST
> > 2019-12-20T07:10:11-05:00 ab TESTING: Fri 20 Dec 2019 07:10:11 AM EST
> > 2019-12-20T07:10:11-05:00 ab TESTING: Fri 20 Dec 2019 07:10:11 AM EST
> > 2019-12-20T07:10:12-05:00 ab TESTING: Fri 20 Dec 2019 07:10:12 AM EST
> > 2019-12-20T07:10:12-05:00 ab TESTING: Fri 20 Dec 2019 07:10:12 AM EST
> > 2019-12-20T07:10:12-05:00 ab TESTING: Fri 20 Dec 2019 07:10:12 AM EST
> > 2019-12-20T07:10:13-05:00 ab TESTING: Fri 20 Dec 2019 07:10:13 AM EST
> > 2019-12-20T07:10:13-05:00 ab TESTING: Fri 20 Dec 2019 07:10:13 AM EST
> > 2019-12-20T07:10:14-05:00 ab TESTING: Fri 20 Dec 2019 07:10:14 AM EST
> > 2019-12-20T07:10:14-05:00 ab 

Re: streaming rsyslog metron using asa parser

2019-12-26 Thread Otto Fowler
You are saying different things that are confusing me.
You seemed to be saying that you couldn’t parse, but now you are saying you
can parse, and see things in kibana but they are not in the alert ui?


On December 25, 2019 at 10:47:54, updates on tube (abrahamfik...@gmail.com)
wrote:

On 2019/12/23 11:25:45, Otto Fowler  wrote:
> That doesn’t look like ASA data.
>
https://github.com/apache/metron/blob/master/metron-platform/metron-integration-test/src/main/sample/data/asa/raw/asa_raw
>
> Are you trying to do regular syslog, or ASA.
>
>
>
>
> On December 23, 2019 at 01:57:38, updates on tube (abrahamfik...@gmail.com)

> wrote:
>
> i was trying to stream rsyslog log data to apache metron using asa
parser.
> the log look like down below
>
> 2019-12-20T07:06:41-05:00 ab TESTING: Fri 20 Dec 2019 07:06:41 AM EST
> the log 2019-12-20T07:06:41-05:00 ab rsyslogd: action
> 'action-13-builtin:omfwd' resumed (module 'builtin:omfwd') [v8.1911.0 try
> https://www.rsyslog.com/e/2359 ]
> 2019-12-20T07:08:04-05:00 ab TESTING: Fri 20 Dec 2019 07:08:04 AM EST
> 2019-12-20T07:08:05-05:00 ab TESTING: Fri 20 Dec 2019 07:08:05 AM EST
> 2019-12-20T07:08:06-05:00 ab TESTING: Fri 20 Dec 2019 07:08:06 AM EST
> 2019-12-20T07:08:06-05:00 ab TESTING: Fri 20 Dec 2019 07:08:06 AM EST
> 2019-12-20T07:08:08-05:00 ab TESTING: Fri 20 Dec 2019 07:08:08 AM EST
> 2019-12-20T07:08:08-05:00 ab TESTING: Fri 20 Dec 2019 07:08:08 AM EST
> 2019-12-20T07:08:09-05:00 ab TESTING: Fri 20 Dec 2019 07:08:09 AM EST
> 2019-12-20T07:08:09-05:00 ab TESTING: Fri 20 Dec 2019 07:08:09 AM EST
> 2019-12-20T07:09:01-05:00 ab CRON[3174]: pam_unix(cron:session): session
> opened for user root by (uid=0)
> 2019-12-20T07:09:01-05:00 ab CRON[3175]: (root) CMD ( [ -x
> /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then
> /usr/lib/php/sessionclean; fi)
> 2019-12-20T07:09:01-05:00 ab CRON[3174]: pam_unix(cron:session): session
> closed for user root
> 2019-12-20T07:09:01-05:00 ab systemd[1]: Starting Clean php session
> files...
> 2019-12-20T07:09:01-05:00 ab systemd[1]: phpsessionclean.service:
> Succeeded.
> 2019-12-20T07:09:01-05:00 ab systemd[1]: Started Clean php session files.
> 2019-12-20T07:10:04-05:00 ab TESTING: Fri 20 Dec 2019 07:10:04 AM EST
> 2019-12-20T07:10:05-05:00 ab TESTING: Fri 20 Dec 2019 07:10:05 AM EST
> 2019-12-20T07:10:05-05:00 ab TESTING: Fri 20 Dec 2019 07:10:05 AM EST
> 2019-12-20T07:10:06-05:00 ab TESTING: Fri 20 Dec 2019 07:10:06 AM EST
> 2019-12-20T07:10:07-05:00 ab TESTING: Fri 20 Dec 2019 07:10:07 AM EST
> 2019-12-20T07:10:07-05:00 ab TESTING: Fri 20 Dec 2019 07:10:07 AM EST
> 2019-12-20T07:10:08-05:00 ab TESTING: Fri 20 Dec 2019 07:10:08 AM EST
> 2019-12-20T07:10:08-05:00 ab TESTING: Fri 20 Dec 2019 07:10:08 AM EST
> 2019-12-20T07:10:09-05:00 ab TESTING: Fri 20 Dec 2019 07:10:09 AM EST
> 2019-12-20T07:10:09-05:00 ab TESTING: Fri 20 Dec 2019 07:10:09 AM EST
> 2019-12-20T07:10:10-05:00 ab TESTING: Fri 20 Dec 2019 07:10:10 AM EST
> 2019-12-20T07:10:10-05:00 ab TESTING: Fri 20 Dec 2019 07:10:10 AM EST
> 2019-12-20T07:10:10-05:00 ab TESTING: Fri 20 Dec 2019 07:10:10 AM EST
> 2019-12-20T07:10:11-05:00 ab TESTING: Fri 20 Dec 2019 07:10:11 AM EST
> 2019-12-20T07:10:11-05:00 ab TESTING: Fri 20 Dec 2019 07:10:11 AM EST
> 2019-12-20T07:10:11-05:00 ab TESTING: Fri 20 Dec 2019 07:10:11 AM EST
> 2019-12-20T07:10:12-05:00 ab TESTING: Fri 20 Dec 2019 07:10:12 AM EST
> 2019-12-20T07:10:12-05:00 ab TESTING: Fri 20 Dec 2019 07:10:12 AM EST
> 2019-12-20T07:10:12-05:00 ab TESTING: Fri 20 Dec 2019 07:10:12 AM EST
> 2019-12-20T07:10:13-05:00 ab TESTING: Fri 20 Dec 2019 07:10:13 AM EST
> 2019-12-20T07:10:13-05:00 ab TESTING: Fri 20 Dec 2019 07:10:13 AM EST
> 2019-12-20T07:10:14-05:00 ab TESTING: Fri 20 Dec 2019 07:10:14 AM EST
> 2019-12-20T07:10:14-05:00 ab TESTING: Fri 20 Dec 2019 07:10:14 AM EST
> 2019-12-20T07:10:14-05:00 ab TESTING: Fri 20 Dec 2019 07:10:14 AM EST
> 2019-12-20T07:10:15-05:00 ab systemd[1]: Stopping System Logging
Service...
> 2019-12-20T07:10:15-05:00 ab rsyslogd: [origin software="rsyslogd"
> swVersion="8.1911.0" x-pid="3071" x-info="https://www.rsyslog.com"]
exiting
> on signal 15.
> 2019-12-20T07:10:15-05:00 ab systemd[1]: rsyslog.service: Succeeded.
> 2019-12-20T07:10:15-05:00 ab systemd[1]: Stopped System Logging Service.
> 2019-12-20T07:10:15-05:00 ab systemd[1]: Starting System Logging
Service...
> 2019-12-20T07:10:15-05:00 ab rsyslogd: imuxsock: Acquired UNIX socket
> '/run/systemd/journal/syslog' (fd 3) from systemd. [v8.1911.0]
> 2019-12-20T07:10:15-05:00 ab rsyslogd: [origin software="rsyslogd"
> swVersion="8.1911.0" x-pid="3270" x-info="https://www.rsyslog.com"] start
> 2019-12-20T07:10:15-05:00 ab systemd[1]: Sta

Re: streaming rsyslog metron using asa parser

2019-12-23 Thread Otto Fowler
That doesn’t look like ASA data.
https://github.com/apache/metron/blob/master/metron-platform/metron-integration-test/src/main/sample/data/asa/raw/asa_raw

Are you trying to do regular syslog, or ASA?
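For contrast with the rsyslog lines quoted below, Cisco ASA records are syslog messages tagged with an `%ASA-<severity>-<message-id>` prefix. An illustrative example (addresses made up, not taken from the thread):

```
%ASA-6-302013: Built outbound TCP connection 447 for outside:203.0.113.7/80 (203.0.113.7/80) to inside:192.168.1.2/1026 (198.51.100.2/1026)
```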




On December 23, 2019 at 01:57:38, updates on tube (abrahamfik...@gmail.com)
wrote:

i was trying to stream rsyslog log data to apache metron using the asa parser.
the logs look like the ones down below

2019-12-20T07:06:41-05:00 ab TESTING: Fri 20 Dec 2019 07:06:41 AM EST
the log 2019-12-20T07:06:41-05:00 ab rsyslogd: action
'action-13-builtin:omfwd' resumed (module 'builtin:omfwd') [v8.1911.0 try
https://www.rsyslog.com/e/2359 ]
2019-12-20T07:08:04-05:00 ab TESTING: Fri 20 Dec 2019 07:08:04 AM EST
2019-12-20T07:08:05-05:00 ab TESTING: Fri 20 Dec 2019 07:08:05 AM EST
2019-12-20T07:08:06-05:00 ab TESTING: Fri 20 Dec 2019 07:08:06 AM EST
2019-12-20T07:08:06-05:00 ab TESTING: Fri 20 Dec 2019 07:08:06 AM EST
2019-12-20T07:08:08-05:00 ab TESTING: Fri 20 Dec 2019 07:08:08 AM EST
2019-12-20T07:08:08-05:00 ab TESTING: Fri 20 Dec 2019 07:08:08 AM EST
2019-12-20T07:08:09-05:00 ab TESTING: Fri 20 Dec 2019 07:08:09 AM EST
2019-12-20T07:08:09-05:00 ab TESTING: Fri 20 Dec 2019 07:08:09 AM EST
2019-12-20T07:09:01-05:00 ab CRON[3174]: pam_unix(cron:session): session
opened for user root by (uid=0)
2019-12-20T07:09:01-05:00 ab CRON[3175]: (root) CMD ( [ -x
/usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then
/usr/lib/php/sessionclean; fi)
2019-12-20T07:09:01-05:00 ab CRON[3174]: pam_unix(cron:session): session
closed for user root
2019-12-20T07:09:01-05:00 ab systemd[1]: Starting Clean php session
files...
2019-12-20T07:09:01-05:00 ab systemd[1]: phpsessionclean.service:
Succeeded.
2019-12-20T07:09:01-05:00 ab systemd[1]: Started Clean php session files.
2019-12-20T07:10:04-05:00 ab TESTING: Fri 20 Dec 2019 07:10:04 AM EST
2019-12-20T07:10:05-05:00 ab TESTING: Fri 20 Dec 2019 07:10:05 AM EST
2019-12-20T07:10:05-05:00 ab TESTING: Fri 20 Dec 2019 07:10:05 AM EST
2019-12-20T07:10:06-05:00 ab TESTING: Fri 20 Dec 2019 07:10:06 AM EST
2019-12-20T07:10:07-05:00 ab TESTING: Fri 20 Dec 2019 07:10:07 AM EST
2019-12-20T07:10:07-05:00 ab TESTING: Fri 20 Dec 2019 07:10:07 AM EST
2019-12-20T07:10:08-05:00 ab TESTING: Fri 20 Dec 2019 07:10:08 AM EST
2019-12-20T07:10:08-05:00 ab TESTING: Fri 20 Dec 2019 07:10:08 AM EST
2019-12-20T07:10:09-05:00 ab TESTING: Fri 20 Dec 2019 07:10:09 AM EST
2019-12-20T07:10:09-05:00 ab TESTING: Fri 20 Dec 2019 07:10:09 AM EST
2019-12-20T07:10:10-05:00 ab TESTING: Fri 20 Dec 2019 07:10:10 AM EST
2019-12-20T07:10:10-05:00 ab TESTING: Fri 20 Dec 2019 07:10:10 AM EST
2019-12-20T07:10:10-05:00 ab TESTING: Fri 20 Dec 2019 07:10:10 AM EST
2019-12-20T07:10:11-05:00 ab TESTING: Fri 20 Dec 2019 07:10:11 AM EST
2019-12-20T07:10:11-05:00 ab TESTING: Fri 20 Dec 2019 07:10:11 AM EST
2019-12-20T07:10:11-05:00 ab TESTING: Fri 20 Dec 2019 07:10:11 AM EST
2019-12-20T07:10:12-05:00 ab TESTING: Fri 20 Dec 2019 07:10:12 AM EST
2019-12-20T07:10:12-05:00 ab TESTING: Fri 20 Dec 2019 07:10:12 AM EST
2019-12-20T07:10:12-05:00 ab TESTING: Fri 20 Dec 2019 07:10:12 AM EST
2019-12-20T07:10:13-05:00 ab TESTING: Fri 20 Dec 2019 07:10:13 AM EST
2019-12-20T07:10:13-05:00 ab TESTING: Fri 20 Dec 2019 07:10:13 AM EST
2019-12-20T07:10:14-05:00 ab TESTING: Fri 20 Dec 2019 07:10:14 AM EST
2019-12-20T07:10:14-05:00 ab TESTING: Fri 20 Dec 2019 07:10:14 AM EST
2019-12-20T07:10:14-05:00 ab TESTING: Fri 20 Dec 2019 07:10:14 AM EST
2019-12-20T07:10:15-05:00 ab systemd[1]: Stopping System Logging Service...
2019-12-20T07:10:15-05:00 ab rsyslogd: [origin software="rsyslogd"
swVersion="8.1911.0" x-pid="3071" x-info="https://www.rsyslog.com"] exiting
on signal 15.
2019-12-20T07:10:15-05:00 ab systemd[1]: rsyslog.service: Succeeded.
2019-12-20T07:10:15-05:00 ab systemd[1]: Stopped System Logging Service.
2019-12-20T07:10:15-05:00 ab systemd[1]: Starting System Logging Service...
2019-12-20T07:10:15-05:00 ab rsyslogd: imuxsock: Acquired UNIX socket
'/run/systemd/journal/syslog' (fd 3) from systemd. [v8.1911.0]
2019-12-20T07:10:15-05:00 ab rsyslogd: [origin software="rsyslogd"
swVersion="8.1911.0" x-pid="3270" x-info="https://www.rsyslog.com"] start
2019-12-20T07:10:15-05:00 ab systemd[1]: Started System Logging Service.
2019-12-20T07:10:18-05:00 ab TESTING: Fri 20 Dec 2019 07:10:18 AM EST
2019-12-20T07:15:01-05:00 ab CRON[3283]: pam_unix(cron:session): session
opened for user root by (uid=0)
2019-12-20T07:15:01-05:00 ab CRON[3284]: (root) CMD (command -v debian-sa1
> /dev/null && debian-sa1 1 1)
2019-12-20T07:15:01-05:00 ab CRON[3283]: pam_unix(cron:session): session
closed for user root
2019-12-20T07:17:01-05:00 ab CRON[3323]: pam_unix(cron:session): session
opened for user root by (uid=0)
2019-12-20T07:17:01-05:00 ab CRON[3324]: (root) CMD ( cd / && run-parts
--report /etc/cron.hourly)
2019-12-20T07:17:01-05:00 ab CRON[3323]: pam_unix(cron:session): session
closed for user root
2019-12-20T07:25:01-05:00 ab CRON[]: pam_unix(cron:session): session

Re: Feature request: "outputIndexFunction" for Elasticsearch writer

2019-12-19 Thread Otto Fowler
What might even be more interesting would be to have stellar evaluate
conditions and set the index based on the evaluation:

pseudo:

IF ( parser == BRO ) THEN match(FIELD =x) index = y

or something




On December 19, 2019 at 05:14:01, Vladimir Mikhailov (
v.mikhai...@content-media.ru) wrote:

Hi

HDFS Writer has great functionality for defining the destination folder for
indexing data:

{
"index": "bro",
"batchSize": 5,
"outputPathFunction": "FORMAT('uid-%s', uid)"
}

https://github.com/apache/metron/blob/master/metron-platform/metron-writer/README.md#hdfs-writer

Is it possible to make similar functionality for Elasticsearch Writer?
(something like "outputIndexFunction")

Typical usage scenarios for this:

- different indices of the same data type for different clients,
- separate indices for pilot projects,
- different rollover policies for indices, depending on the condition.

It would also be convenient to have such setting at the global level so
that it affects all active parsers.
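A hypothetical configuration for the requested feature, mirroring the HDFS writer example above. The `outputIndexFunction` key does not exist in Metron; this is only a sketch of what the proposal might look like, and `client_id` is an illustrative field name:

```json
{
  "index": "bro",
  "batchSize": 5,
  "outputIndexFunction": "FORMAT('bro-%s', client_id)"
}
```

As with the HDFS writer's `outputPathFunction`, the Stellar expression would be evaluated per message, so any enrichment field could drive index selection.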


Re: Metron with Zeek not working.

2019-12-05 Thread Otto Fowler
I don’t think we support newer versions of bro (i.e. zeek) yet.




On December 5, 2019 at 10:31:12, Farrukh Naveed Anjum (
anjum.farr...@gmail.com) wrote:

Hi,
I am trying to use the upgraded version of Bro, i.e. Zeek. I am unable to
receive data into Kafka.

 @load packages/metron-bro-plugin-kafka/Apache/Kafka
redef Kafka::logs_to_send = set(SSH::LOG, RDP::LOG, KRB::LOG, SSL::LOG,
DHCP::LOG, Cluster::LOG, Syslog::LOG, SNMP::LOG, Reporter::LOG, DNP3::LOG,
RADIUS::LOG, Tunnel::LOG, Conn::LOG, HTTP::LOG, DNS::LOG, Software::LOG,
Intel::LOG,  Notice::LOG, Signatures::LOG);
redef Kafka::send_all_active_logs = T;
redef Kafka::topic_name = "bro";
redef Kafka::tag_json = T;
redef Kafka::kafka_conf = table(
["metadata.broker.list"] = "localhost:6667",
["client.id"] = "bro"
);

I have 1 name node, 2 data nodes. Kafka does not seem to be receiving data
from either Zeek or Snort.
It keeps saying the broker may not be available. Any suggestions?
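Broker-unavailable errors like the ones described above are often just a connectivity problem between the Zeek host and the broker listed in `metadata.broker.list`. A minimal sketch for checking that (plain TCP reachability only, not a full Kafka protocol handshake; the host and port are taken from the config above):

```python
import socket

def broker_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to the broker endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "metadata.broker.list" from the config above, e.g. "localhost:6667"
host, _, port = "localhost:6667".partition(":")
print(broker_reachable(host, int(port)))
```

If this prints False from the Zeek host, the problem is network- or listener-level (wrong advertised listener, firewall, broker down) rather than anything in the plugin configuration.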

--
*Best Regards*
Farrukh Naveed Anjum
*M:* +92 321 5083954 (WhatsApp Enabled)
*W:* https://www.farrukh.cc/


Re: metron-bro-plugin-kafka error

2019-12-05 Thread Otto Fowler
Please start a new thread




On December 5, 2019 at 02:07:53, Farrukh Naveed Anjum (
anjum.farr...@gmail.com) wrote:

I am not receiving data from Bro to Kafka

# @load packages/metron-bro-plugin-kafka/Apache/Kafka
redef Kafka::logs_to_send = set(SSH::LOG, RDP::LOG, KRB::LOG, SSL::LOG,
DHCP::LOG, Cluster::LOG, Syslog::LOG, SNMP::LOG, Reporter::LOG, DNP3::LOG,
RADIUS::LOG, Tunnel::LOG, Conn::LOG, HTTP::LOG, DNS::LOG, Software::LOG,
Intel::LOG,  Notice::LOG, Signatures::LOG);
redef Kafka::send_all_active_logs = T;
redef Kafka::topic_name = "bro";
redef Kafka::tag_json = T;
redef Kafka::kafka_conf = table(
["metadata.broker.list"] = "localhost:6667",
["client.id"] = "bro"
);

Commented out the line as per your recommendation. Still not getting any
data in the Kafka topic. Any suggestions?

On Thu, Jul 4, 2019 at 5:08 PM zeo...@gmail.com  wrote:

> If you had the all active logs set to true it should send everything.
> What is the latest commit of the version of plugin are you running?  I see
> it's 0.3 but since that hasn't been "released" (tagged) I'm assuming you
> are installing from master?
>
> Jon Zeolla
>
> On Wed, Jul 3, 2019, 5:57 PM Sanket Sharma 
> wrote:
>
>> Seems like all I had to do was to specify the exact logs that I wanted to
>> export. All working now.
>>
>>
>>
>> Thanks for the help @Jon Zeolla
>>
>>
>>
>>
>>
>> Best regards,
>>
>> Sanket
>>
>>
>>
>>
>>
>> *From:* Sanket Sharma 
>> *Reply-To:* "user@metron.apache.org" 
>> *Date:* Wednesday, 03 July 2019 at 19:47
>> *To:* "user@metron.apache.org" 
>> *Subject:* Re: metron-bro-plugin-kafka error
>>
>>
>>
>> Okay, I figured it out. There was a mismatch between my bro install (yum
>> installed), the source (git cloned) and the plugin version. I removed
>> everything and then compiled both zeek and the plugin from source and the
>> issue seems to have gone. When I run the test command I get the following
>> output.
>>
>>
>>
>> # zeek -N Apache::Kafka
>>
>> Apache::Kafka - Writes logs to Kafka (dynamic, version 0.3.0)
>>
>>
>>
>> However, now I can't seem to get alerts/logs to Kafka. Here's the config
>> I'm using in /usr/local/zeek/share/zeek/site/local.zeek
>>
>>
>>
>> #This doesn't work in the new version anymore.
>>
>> #@load packages/metron-bro-plugin-kafka/Apache/Kafka
>>
>>
>>
>> #Tried adding this line to ensure all packages are automatically loaded.
>>
>> #@load packages
>>
>>
>>
>> #Then tried loading the specific module
>>
>> #@load metron-bro-plugin-kafka
>>
>> #And then I eventually removed the three previous load lines
>>
>>
>>
>> redef Kafka::send_all_active_logs = T;
>>
>> redef Kafka::tag_json = T;
>>
>> redef Kafka::kafka_conf = table(
>>
>> ["metadata.broker.list"] = "mysecrethost:6667",
>>
>> ["client.id"] = "bro"
>>
>> );
>>
>>
>>
>> Even when I have the `@loads` disabled, I still see the script being
>> loaded (see logs below).
>>
>>
>>
>> To start, I did the following:
>>
>>
>>
>> zeekctl> deploy
>>
>> zeekctl> restart --clean
>>
>> zeekctl> start
>>
>>
>>
>> I can see the following in startup logs:
>>
>>
>>
>> starting ...
>>
>> starting zeek ...
>>
>> [ZeekControl] > diag
>>
>> [zeek]
>>
>>
>>
>> No core file found.
>>
>>
>>
>> Zeek 2.6-558
>>
>> Linux 3.10.0-957.21.3.el7.x86_64
>>
>>
>>
>> Zeek plugins:
>>
>> Apache::Kafka - Writes logs to Kafka (dynamic, version 0.3.0)
>>
>>
>>
>>  No reporter.log
>>
>>
>>
>>  stderr.log
>>
>> listening on em1
>>
>>
>>
>>
>>
>>  stdout.log
>>
>> max memory size (kbytes, -m) unlimited
>>
>> data seg size   (kbytes, -d) unlimited
>>
>> virtual memory  (kbytes, -v) unlimited
>>
>> core file size  (blocks, -c) unlimited
>>
>>
>>
>>  .cmdline
>>
>> -i em1 -U .status -p zeekctl -p zeekctl-live -p standalone -p local -p
>> zeek local.zeek zeekctl zeekctl/standalone zeekctl/auto
>>
>>
>>
>>  .env_vars
>>
>>
>> PATH=/usr/local/zeek/bin:/usr/local/zeek/share/zeekctl/scripts:/usr/local/openssl/bin:/opt/apache-maven-3.3.9/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/zeek/bin
>>
>>
>> ZEEKPATH=/usr/local/zeek/spool/installed-scripts-do-not-touch/site::/usr/local/zeek/spool/installed-scripts-do-not-touch/auto:/usr/local/zeek/share/zeek:/usr/local/zeek/share/zeek/policy:/usr/local/zeek/share/zeek/site
>>
>> CLUSTER_NODE=
>>
>>
>>
>>  .status
>>
>> RUNNING [net_run]
>>
>>
>>
>>  No prof.log
>>
>>
>>
>>  packet_filter.log
>>
>> #separator \x09
>>
>> #set_separator  ,
>>
>> #empty_field(empty)
>>
>> #unset_field-
>>
>> #path   packet_filter
>>
>> #open   2019-07-03-19-36-56
>>
>> #fields ts  node  filter  init  success
>>
>> #types  time  string  string  bool  bool
>>
>> 1562175416.590048  zeek  ip or not ip  T  T
>>
>>
>>
>>  loaded_scripts.log
>>
>> #separator \x09
>>
>> #set_separator  ,
>>
>> #empty_field(empty)
>>
>> #unset_field-
>>
>> #path   loaded_scripts
>>
>> #open   2019-07-03-19-36-56
>>
>> #fields name
>>
>> #types  string
>>
>> 

Re: Enable optional fields in csv parser

2019-11-17 Thread Otto Fowler
I think what he is saying is that the csv files may not always have all the
columns, in which case they won’t parse, which is before stellar can do
anything.




On November 16, 2019 at 11:30:25, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

A better way of doing this would be to use the fieldTransformation setting
and the REMOVE method to get rid of the extraneous fields. Docs are
included at
https://metron.apache.org/current-book/metron-platform/metron-parsers/index.html#

That way you don’t need a separate preprocessing step.

Simon
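Based on the parser docs linked above, the `REMOVE` transformation suggested here takes roughly this shape in the sensor's parser config (the field names are illustrative, not from the original message):

```json
{
  "fieldTransformations": [
    {
      "transformation": "REMOVE",
      "input": ["unwanted_field_1", "unwanted_field_2"]
    }
  ]
}
```

The listed input fields are stripped from each parsed message before it moves downstream, which is what makes a separate preprocessing step unnecessary.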

On Sat, 16 Nov 2019 at 16:09, Hema malini  wrote:

> Thanks ..will do preprocessing of data..
>
> On Sat, 16 Nov, 2019, 9:25 PM Otto Fowler, 
> wrote:
>
>> No, there is no way to do this currently.
>>
>> The parser parses the line into an array of strings that must match the
>> size of the columns.
>>
>> The underlying opencsv parser does not support this either.  You may have
>> to do some normalization work on your data if you need to account for this.
>>
>>
>>
>> On November 16, 2019 at 08:49:36, Hema malini (nhemamalin...@gmail.com)
>> wrote:
>>
>> Hi all,
>>
>> Is there any way to mark some columns as optional in column mapping in
>> CSV parser.
>>
>> Thanks and Regards,
>> Hema
>>
>> --
--
simon elliston ball
@sireb


Re: Enable optional fields in csv parser

2019-11-16 Thread Otto Fowler
No, there is no way to do this currently.

The parser parses the line into an array of strings that must match the
size of the columns.

The underlying opencsv parser does not support this either.  You may have
to do some normalization work on your data if you need to account for this.
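One way to do the normalization suggested here, before the data reaches the Metron CSV parser: pad short rows out to the configured column count. A sketch under the assumption of a four-column mapping (the column names are hypothetical):

```python
import csv
import io

EXPECTED_COLUMNS = ["ip", "user", "action", "status"]  # hypothetical mapping

def normalize_rows(raw: str) -> list:
    """Pad short rows (and drop extras) so every row matches the column
    count the CSV parser's column mapping expects."""
    fixed = []
    for row in csv.reader(io.StringIO(raw)):
        padded = (row + [""] * len(EXPECTED_COLUMNS))[:len(EXPECTED_COLUMNS)]
        fixed.append(padded)
    return fixed

rows = normalize_rows("10.0.0.1,alice,login\n10.0.0.2,bob,logout,200\n")
print(rows)
# → [['10.0.0.1', 'alice', 'login', ''], ['10.0.0.2', 'bob', 'logout', '200']]
```

Running a step like this (e.g. in NiFi or a small Kafka consumer/producer) upstream of the parser keeps the parser's strict column matching intact while tolerating ragged input.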




On November 16, 2019 at 08:49:36, Hema malini (nhemamalin...@gmail.com)
wrote:

Hi all,

Is there any way to mark some columns as optional in column mapping in CSV
parser.

Thanks and Regards,
Hema


RE: Invite for Merton slack channel

2019-10-18 Thread Otto Fowler
Will do




On October 18, 2019 at 11:46:44, Landricombe, Tobin (
tobin.landrico...@roke.co.uk) wrote:

Hi,



Please could you add me.



Thanks,

Tobin



--

*From:* zeo...@gmail.com 
*Sent:* 08 October 2019 13:44
*To:* user@metron.apache.org
*Subject:* Re: Invite for Merton slack channel



Invite sent


- Jon Zeolla
zeo...@gmail.com





On Tue, Oct 8, 2019 at 6:23 AM Sanket Sharma 
wrote:

Hi,



Can you please add me to the slack channel?



Best regards,

Sanket
------

*From:* Otto Fowler 
*Sent:* Wednesday, August 21, 2019 11:16 PM
*To:* Wan Nabe ; user@metron.apache.org <
user@metron.apache.org>
*Subject:* Re: Invite for Merton slack channel



Done, join the metron channel







On August 21, 2019 at 00:54:08, Wan Nabe (wanna...@ymail.com) wrote:

Hi,



Please add me to the channel.



Thank you & Regards,

Hans


On 14 Aug 2019, at 17:55, R K Sharma  wrote:

Hi,

 Could you please add me to Metron Slack channel ?



Regards

Rinkesh Sharma



On Tue, Aug 6, 2019 at 8:53 PM Otto Fowler  wrote:

sure, give it a sec







On August 6, 2019 at 10:09:36, Thiago Rahal Disposti (
thiago.ra...@kryptus.com) wrote:



Can you please add me ?

thiago.ra...@kryptus.com



Thanks.

Thiago Rahal





On Thu, Jul 18, 2019 at 10:44 PM Otto Fowler 
wrote:

Both of you are all set, join the metron slack channel







On July 18, 2019 at 20:15:33, Aman Diwakar (aman.diwa...@gmail.com) wrote:

Me too please



On Thu, Jul 18, 2019, 12:32 PM Satish Abburi 
wrote:



Can you please add me also. Thanks.



satish.abb...@sstech.us



*From:* "zeo...@gmail.com" 
*Reply-To:* "user@metron.apache.org" 
*Date:* Tuesday, July 9, 2019 at 2:31 AM
*To:* "user@metron.apache.org" 
*Subject:* Re: Invite for Merton slack channel



You got it.  Sent

Jon Zeolla



On Tue, Jul 9, 2019, 12:55 AM Rendi 7936  wrote:

Good Morning,

Hi there

Can i join Apache Metron Slack Channel ?
My e-mail is rendi.7...@gmail.com

On 2019/07/08 13:29:42, "z...@gmail.com" wrote:
> Done
>
> - Jon Zeolla
> zeo...@gmail.com
>
>
> On Mon, Jul 8, 2019 at 9:18 AM Srikanth Nagarajan
> wrote:
>
> > Hi
> >
> > I would appreciate an invite to the Metron slack channel.
> >
> > Thank you
> > Srikanth
> >
> > __
> > *Srikanth Nagarajan *
> > Principal
> > *Gandiva Networks Inc*
> > *732.690.1884* Mobile
> > s...@gandivanetworks.com
> > www.gandivanetworks.com
> >


Re: [ANNOUNCE] Apache Metron-bro-plugin-kafka release 0.3.0

2019-10-17 Thread Otto Fowler
Just a reminder, if you used my script to verify the RC, please comment :
https://github.com/apache/metron-bro-plugin-kafka/pull/38




On October 16, 2019 at 17:19:24, Justin Leet (l...@apache.org) wrote:

Hi all,

I’m pleased to announce the release of metron-bro-plugin-kafka 0.3.0! It's
been a little while coming, but there's a good number of improvements and
fixes, both around functionality and testing. Thanks to everyone who's
contributing to and using the plugin!

Details:
The official release source code tarballs may be obtained at any of the
mirrors listed in
http://www.apache.org/dyn/closer.cgi/metron/metron-bro-plugin-kafka/0.3.0/

As usual, the secure signatures and confirming hashes may be obtained at
https://dist.apache.org/repos/dist/release/metron/metron-bro-plugin-kafka/0.3.0/

The release branch in GitHub is
https://github.com/apache/metron-bro-plugin-kafka/tree/Metron-bro-plugin-kafka_0.3.0
(tag
apache-metron-bro-plugin-kafka_0.3.0-release)

Change lists and Release Notes may be obtained at the same locations as the
tarballs.
For your reading pleasure, the change list is appended to this message.

CHANGES (in reverse chronological order):

METRON-2269 Cannot run Docker tests if src is not a git repo
(ottobackwards) closes apache/metron-bro-plugin-kafka#37
METRON-2069 Add btests for bro plugin topic_name selection
(ottobackwards) closes apache/metron-bro-plugin-kafka#36
METRON-2045 Pass a version argument to the bro plugin docker
scripts (JonZeolla) closes apache/metron-bro-plugin-kafka#35
METRON-2003 Bro plugin topic should fall back to the log writer's
path (JonZeolla) closes apache/metron-bro-plugin-kafka#26
METRON-2025 Bro Kafka Plugin Docker should yum clean
(ottobackwards) closes apache/metron-bro-plugin-kafka#33
METRON-2021 Add screen to bro docker image (ottobackwards) closes
apache/metron-bro-plugin-kafka#32
METRON-2013 The bro plugin docker scripts topic name should
be configurable (JonZeolla via ottobackwards) closes
apache/metron-bro-plugin-kafka#27
METRON-2020 Running run_end_to_end.sh with docker give warning if
bash < 4.0 (JonZeolla via ottobackwards) closes
apache/metron-bro-plugin-kafka#31
METRON-1991 Bro plugin docker scripts should exit nonzero when bro
and kafka counts differ (JonZeolla via ottobackwards) closes
apache/metron-bro-plugin-kafka#29
METRON-2017 The Bro plugin docker data processing script
incorrectly runs bro (JonZeolla via ottobackwards) closes
apache/metron-bro-plugin-kafka#30
METRON-1990 Bro plugin docker should exit nonzero if it encounters
issues (JonZeolla) closes apache/metron-bro-plugin-kafka#28
METRON-2004 Bro plugin kafka docker_execute_shell.sh workdir
should be unspecified (JonZeolla via ottobackwards) closes
apache/metron-bro-plugin-kafka#25
METRON-2000 Fix bro plugin docker line counting for BRO_COUNT
(JonZeolla via jonzeolla) closes apache/metron-bro-plugin-kafka#24
METRON-1992 Support sending a log to multiple topics (JonZeolla)
closes apache/metron-bro-plugin-kafka#23
METRON-1910 bro plugin segfaults on src/KafkaWriter.cc:72
(JonZeolla) closes apache/metron-bro-plugin-kafka#20
METRON-1911 Create Docker based test environment for Bro Kafka
Plugin (ottobackwards) closes apache/metron-bro-plugin-kafka#21
METRON-1885 Remove version from bro plugin btest (JonZeolla)
closes apache/metron-bro-plugin-kafka#19
METRON-1827 Update librdkafka in metron-bro-plugin-kafka
(JonZeolla via jonzeolla) closes apache/metron-bro-plugin-kafka#13
METRON-1866 Improve metron-bro-plugin-kafka documentation
(JonZeolla via jonzeolla) closes apache/metron-bro-plugin-kafka#17
METRON-1304 Allow metron-bro-plugin-kafka to include or exclude
logs (JonZeolla via nickwallen) closes
apache/metron-bro-plugin-kafka#2
METRON-1865 Fix metron-bro-plugin-kafka tests (JonZeolla via
jonzeolla) closes apache/metron-bro-plugin-kafka#16
METRON-1828 Improve bro plugin contributing documentation
(JonZeolla) closes apache/metron-bro-plugin-kafka#14
METRON-1818 Remove config_files from bro-pkg.meta (JonZeolla)
closes apache/metron-bro-plugin-kafka#11
METRON-1800 Increment metron-bro-plugin-kafka version (JonZeolla
via jonzeolla) closes apache/metron-bro-plugin-kafka#10
METRON-1773 Bro plugin docs should refer to Apache Metron project
(nickwallen) closes apache/metron-bro-plugin-kafka#9


Re: Help deploying in AWS

2019-09-13 Thread Otto Fowler
I believe that the EC2 script has been tested before every release. The
issue here is probably the build.

We need the real errors, maybe you can post it as a gist or something.

Also, maybe you can log into the machine and see if you can just build
metron (without the ansible and deploy stuff).




On September 13, 2019 at 06:57:30, Otto Fowler (ottobackwa...@gmail.com)
wrote:

So you are using
https://github.com/apache/metron/tree/master/metron-deployment/amazon-ec2 ?



On September 12, 2019 at 16:27:43, Eric Jacksch (e...@jacksch.com) wrote:

Greetings,

I've been trying to deploy in AWS to ec2 instances using the playbook.

The VPC is created, instances spun up, etc, but I get hundreds (or
thousands) of lines of errors during the build.

Can anyone help?

I'm running the playbook on another EC2 instance, so it is certainly
possible that I am missing a dependency, etc...

Thanks!



TASK [metron-builder : Build Metron]
**
failed: [ec2-35-183-121-120.ca-central-1.compute.amazonaws.com ->
localhost] (item=mvn package -DskipTests -T 2C -P HDP-2.5.0.0,mpack)
=> {"changed": true, "cmd": "mvn package -DskipTests -T 2C -P
HDP-2.5.0.0,mpack", "delta": "0:01:12.526650", "end": "2019-09-12
19:58:48.231138", "failed": true, "item": "mvn package -DskipTests -T
2C -P HDP-2.5.0.0,mpack", "msg": "non-zero return code", "rc": 1,
"start": "2019-09-12 19:57:35.704488", "stderr": "warning: No
SupportedSourceVersion annotation found on
org.adrianwalker.multilinestring.MultilineProcessor, returning
RELEASE_6.\nwarning: Supported source version 'RELEASE_6' from
annotation processor
'org.adrianwalker.multilinestring.MultilineProcessor' less than
-source '1.8'\n2 warnings\nwarning: No SupportedSourceVersion
annotation found on
org.adrianwalker.multilinestring.MultilineProcessor, returning
RELEASE_6.\nwarning: Supported source version 'RELEASE_6' from
annotation processor
'org.adrianwalker.multilinestring.MultilineProcessor' less than
-source
'1.8'\n/home/ec2-user/metron/metron-platform/metron-test-utilities/src/main/java/org/apache/metron/test/bolt/BaseBoltTest.java:87:
warning: [unchecked] unchecked method invocation: method copyOf in
class ImmutableSet is applied to given types\n ImmutableSet keys =
ImmutableSet.copyOf(message.keySet());\n
^\n required: Collection\n found: Set\n
where E is a type-variable:\n E extends Object declared in method
copyOf(Collection)\n/home/ec2-user/metron/metron-platform/metron-test-utilities/src/main/java/org/apache/metron/test/bolt/BaseBoltTest.java:87:
warning: [unchecked] unchecked conversion\n ImmutableSet keys =
ImmutableSet.copyOf(message.keySet());\n
^\n required: Collection\n
found: Set\n where E is a type-variable:\n E extends Object
declared in method copyOf(Collection)\n/home/ec2-user/metron/metron-platform/metron-test-utilities/src/main/java/org/apache/metron/test/utils/KafkaLoader.java:67:
warning: [unchecked] unchecked call to send(ProducerRecord) as a
member of the raw type KafkaProducer\n kafkaProducer.send(new
ProducerRecord(topic, line));\n
^\n where K,V are type-variables:\n K extends Object declared
in class KafkaProducer\n V extends Object declared in class
KafkaProducer\n5
warnings\n/home/ec2-user/metron/metron-stellar/stellar-common/src/main/java/org/apache/metron/stellar/dsl/functions/RestFunctions.java:174:
warning: [unchecked] unchecked conversion\n functionRestConfig
= getArg(1, Map.class, args);\n ^\n
required: Map\n found:
Map\n/home/ec2-user/metron/metron-stellar/stellar-common/src/main/java/org/apache/metron/stellar/dsl/functions/RestFunctions.java:176:
warning: [unchecked] unchecked conversion\n queryParameters =
getArg(2, Map.class, args);\n ^\n
required: Map\n found:
Map\n/home/ec2-user/metron/metron-stellar/stellar-common/src/main/java/org/apache/metron/stellar/dsl/functions/RestFunctions.java:181:
warning: [unchecked] unchecked cast\n Map
globalRestConfig = (Map)
getGlobalConfig(context).get(STELLAR_REST_SETTINGS);\n

^\n required: Map\n found:
Object\n/home/ec2-user/metron/metron-stellar/stellar-common/src/main/java/org/apache/metron/stellar/dsl/functions/RestFunctions.java:182:
warning: [unchecked] unchecked cast\n Map
getRestConfig = (Map)
getGlobalConfig(context).get(STELLAR_REST_GET_SETTINGS);\n

^\n required: Map\n found:
Object\n/home/ec2-user/metron/metron-stellar/stellar-common/src/main/java/org/apache/metron/stellar/dsl/functions/RestFunctions.java:183:
warning: [unchecked] unchecked generic array creation for varargs
parameter of type Map[]\n RestConfig restConfig =
buildRestConfig(globalRestConfig, getRestConfig,
functionRestConfig);\n
^\n/home/ec2-user/metron/metron-stellar/stellar-common/src/main/java/org/apache/metron/stellar/dsl/functions/RestFunctions.java:254:
warning: [unchecked] 

Re: Help deploying in AWS

2019-09-13 Thread Otto Fowler
So you are using
https://github.com/apache/metron/tree/master/metron-deployment/amazon-ec2 ?




On September 12, 2019 at 16:27:43, Eric Jacksch (e...@jacksch.com) wrote:

Greetings,

I've been trying to deploy in AWS to ec2 instances using the playbook.

The VPC is created, instances spun up, etc, but I get hundreds (or
thousands) of lines of errors during the build.

Can anyone help?

I'm running the playbook on another EC2 instance, so it is certainly
possible that I am missing a dependency, etc...

Thanks!



TASK [metron-builder : Build Metron]
**

failed: [ec2-35-183-121-120.ca-central-1.compute.amazonaws.com ->
localhost] (item=mvn package -DskipTests -T 2C -P HDP-2.5.0.0,mpack)
=> {"changed": true, "cmd": "mvn package -DskipTests -T 2C -P
HDP-2.5.0.0,mpack", "delta": "0:01:12.526650", "end": "2019-09-12
19:58:48.231138", "failed": true, "item": "mvn package -DskipTests -T
2C -P HDP-2.5.0.0,mpack", "msg": "non-zero return code", "rc": 1,
"start": "2019-09-12 19:57:35.704488", "stderr": "warning: No
SupportedSourceVersion annotation found on
org.adrianwalker.multilinestring.MultilineProcessor, returning
RELEASE_6.\nwarning: Supported source version 'RELEASE_6' from
annotation processor
'org.adrianwalker.multilinestring.MultilineProcessor' less than
-source '1.8'\n2 warnings\nwarning: No SupportedSourceVersion
annotation found on
org.adrianwalker.multilinestring.MultilineProcessor, returning
RELEASE_6.\nwarning: Supported source version 'RELEASE_6' from
annotation processor
'org.adrianwalker.multilinestring.MultilineProcessor' less than
-source
'1.8'\n/home/ec2-user/metron/metron-platform/metron-test-utilities/src/main/java/org/apache/metron/test/bolt/BaseBoltTest.java:87:

warning: [unchecked] unchecked method invocation: method copyOf in
class ImmutableSet is applied to given types\n ImmutableSet keys =
ImmutableSet.copyOf(message.keySet());\n
^\n required: Collection\n found: Set\n
where E is a type-variable:\n E extends Object declared in method
copyOf(Collection)\n/home/ec2-user/metron/metron-platform/metron-test-utilities/src/main/java/org/apache/metron/test/bolt/BaseBoltTest.java:87:

warning: [unchecked] unchecked conversion\n ImmutableSet keys =
ImmutableSet.copyOf(message.keySet());\n
^\n required: Collection\n
found: Set\n where E is a type-variable:\n E extends Object
declared in method copyOf(Collection)\n/home/ec2-user/metron/metron-platform/metron-test-utilities/src/main/java/org/apache/metron/test/utils/KafkaLoader.java:67:

warning: [unchecked] unchecked call to send(ProducerRecord) as a
member of the raw type KafkaProducer\n kafkaProducer.send(new
ProducerRecord(topic, line));\n
^\n where K,V are type-variables:\n K extends Object declared
in class KafkaProducer\n V extends Object declared in class
KafkaProducer\n5
warnings\n/home/ec2-user/metron/metron-stellar/stellar-common/src/main/java/org/apache/metron/stellar/dsl/functions/RestFunctions.java:174:

warning: [unchecked] unchecked conversion\n functionRestConfig
= getArg(1, Map.class, args);\n ^\n
required: Map\n found:
Map\n/home/ec2-user/metron/metron-stellar/stellar-common/src/main/java/org/apache/metron/stellar/dsl/functions/RestFunctions.java:176:

warning: [unchecked] unchecked conversion\n queryParameters =
getArg(2, Map.class, args);\n ^\n
required: Map\n found:
Map\n/home/ec2-user/metron/metron-stellar/stellar-common/src/main/java/org/apache/metron/stellar/dsl/functions/RestFunctions.java:181:

warning: [unchecked] unchecked cast\n Map
globalRestConfig = (Map)
getGlobalConfig(context).get(STELLAR_REST_SETTINGS);\n

^\n required: Map\n found:
Object\n/home/ec2-user/metron/metron-stellar/stellar-common/src/main/java/org/apache/metron/stellar/dsl/functions/RestFunctions.java:182:

warning: [unchecked] unchecked cast\n Map
getRestConfig = (Map)
getGlobalConfig(context).get(STELLAR_REST_GET_SETTINGS);\n

^\n required: Map\n found:
Object\n/home/ec2-user/metron/metron-stellar/stellar-common/src/main/java/org/apache/metron/stellar/dsl/functions/RestFunctions.java:183:

warning: [unchecked] unchecked generic array creation for varargs
parameter of type Map[]\n RestConfig restConfig =
buildRestConfig(globalRestConfig, getRestConfig,
functionRestConfig);\n
^\n/home/ec2-user/metron/metron-stellar/stellar-common/src/main/java/org/apache/metron/stellar/dsl/functions/RestFunctions.java:254:

warning: [unchecked] unchecked conversion\n functionRestConfig
= getArg(2, Map.class, args);\n ^\n
required: Map\n found:
Map\n/home/ec2-user/metron/metron-stellar/stellar-common/src/main/java/org/apache/metron/stellar/dsl/functions/RestFunctions.java:256:

warning: [unchecked] unchecked conversion\n queryParameters =
getArg(3, Map.class, args);\n ^\n
required: Map\n found:
Map\n/home/ec2-user/metron/metron-stellar/stellar-common/src/main/java/org/apache/metron/stellar/dsl/functions/RestFunctions.java:261:

warning: [unchecked] unchecked cast\n Map

Re: [DISCUSS] HDP 3.1 Upgrade and release strategy

2019-08-27 Thread Otto Fowler
If anyone can think of the things that need to be backed up, please comment
the jira.




On August 27, 2019 at 17:07:20, Otto Fowler (ottobackwa...@gmail.com) wrote:

Good idea: METRON-2239 [blocker].



On August 27, 2019 at 16:30:13, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

You could always submit a Jira :)

On Tue, 27 Aug 2019 at 21:27, Otto Fowler  wrote:

> You are right, that is much better than backup_metron_configs.sh.
>
>
>
> On August 27, 2019 at 16:05:38, Simon Elliston Ball (
> si...@simonellistonball.com) wrote:
>
> You can do this with zk_load_configs and Ambari’s blueprint api, so we
> kinda already do.
>
> Simon
>
> On Tue, 27 Aug 2019 at 20:19, Otto Fowler  wrote:
>
>> Maybe we need some automated method to backup configurations and restore
>> them.
>>
>>
>>
>> On August 27, 2019 at 14:46:58, Michael Miklavcic (
>> michael.miklav...@gmail.com) wrote:
>>
>> > Once you back up your metron configs, the same configs that worked on
>> the previous version will continue to work on the version running on HDP
>> 3.x.  If there is any discrepancy between the two or additional settings
>> will be required, those will be documented in the release notes.  From the
>> Metron perspective, this upgrade would be no different than simply
>> upgrading to the new Metron version.
>>
>> This upgrade cannot be performed the same way we've done it in the past.
>> A number of platform upgrades, including OS, are required:
>>
>>1. Requires the OS to be updated on all nodes because there are no
>>Centos6 RPMs provided in HDP 3.1. Must bump to Centos7.
>>2. The final new HBase code will not run on HDP 2.6
>>3. The MPack changes for new Ambari are not backwards compatible
>>4. YARN and HDFS/MR are also at risk to be backwards incompatible
>>
>>
>> On Tue, Aug 27, 2019 at 12:39 PM Michael Miklavcic <
>> michael.miklav...@gmail.com> wrote:
>>
>>> Adding the dev list back into the thread (a reply-all was missed).
>>>
>>> On Tue, Aug 27, 2019 at 10:49 AM James Sirota 
>>> wrote:
>>>
>>>> I agree with Simon.  HDP 2.x platform is rapidly approaching EOL and
>>>> everyone will likely need to migrate by end of year.  Doing this platform
>>>> upgrade sooner will give everyone visibility into what Metron on HDP 3.x
>>>> looks like so they have time to plan and upgrade at their own pace.
>>>> Feature-wise, the Metron application itself will be unchanged.  It is
>>>> merely the platform underneath that is changing.  HDP itself can be
>>>> upgraded per instructions here:
>>>> https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/release-notes/content/upgrading_parent.html
>>>>
>>>> Once you back up your metron configs, the same configs that worked on
>>>> the previous version will continue to work on the version running on HDP
>>>> 3.x.  If there is any discrepancy between the two or additional settings
>>>> will be required, those will be documented in the release notes.  From the
>>>> Metron perspective, this upgrade would be no different than simply
>>>> upgrading to the new Metron version.
>>>>
>>>> James
>>>>
>>>>
>>>> 27.08.2019, 07:09, "Simon Elliston Ball" :
>>>>
>>>> Something worth noting here is that HDP 2.6.5 is quite old and
>>>> approaching EoL rapidly, so the issue of upgrade is urgent. I am aware of a
>>>> large number of users who require this upgrade ASAP, and in fact am aware
>>>> of zero users who wish to remain on HDP 2.
>>>>
>>>> Perhaps those users who want to stay on the old platform can stick
>>>> their hands up and raise concerns, but this move will likely have to happen
>>>> very soon.
>>>>
>>>> Simon
>>>>
>>>> On Tue, 27 Aug 2019 at 15:04, Otto Fowler 
>>>> wrote:
>>>>
>>>> Although we had the discussion, and some great ideas were passed
>>>> around, I do not believe we came to some kind of consensus on what 1.0
>>>> should look like. So that discussion would have to be picked up again so
>>>> that we could know where we are at, and make it an actual thing if we were
>>>> going to make this a 1.0 release.
>>>>
>>>> I believe that the issues raised in that discussion gating 1.0 are
>>>> still largely applicable, including upgrade.
>>>>
>>>> Right now we have *ZERO* HDP 3.1 users. We will go from that to *only*
>>>> supporting 3.1 work and releases. So every user and deployment we
>>>> currently have will feel real pain, have to slay real dragons to move
>>>> forward with metron.

Re: [DISCUSS] HDP 3.1 Upgrade and release strategy

2019-08-27 Thread Otto Fowler
You are right, that is much better than backup_metron_configs.sh.




On August 27, 2019 at 16:05:38, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

You can do this with zk_load_configs and Ambari’s blueprint api, so we
kinda already do.

Simon
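
[Editor's note: the backup flow Simon describes — zk_load_configs plus
Ambari's blueprint API — can be sketched as below. The METRON_HOME path,
ZooKeeper quorum, Ambari URL, and cluster name are illustrative assumptions,
not values taken from this thread.]

```python
# Sketch of the config backup flow described above: pull Metron's
# ZooKeeper-held configs with zk_load_configs.sh and export the cluster
# layout via Ambari's blueprint API. All hosts/paths are placeholders.

def backup_commands(metron_home="/usr/metron/0.7.2",
                    zookeeper="node1:2181",
                    ambari_url="http://node1:8080",
                    cluster="metron_cluster",
                    backup_dir="/tmp/metron-config-backup"):
    """Return the shell commands an operator would run for a config backup."""
    return [
        # Pull parser/enrichment/indexing configs from ZooKeeper to disk.
        f"{metron_home}/bin/zk_load_configs.sh -m PULL"
        f" -z {zookeeper} -o {backup_dir}/zookeeper -f",
        # Export the cluster definition as an Ambari blueprint (JSON).
        f"curl -u admin -o {backup_dir}/blueprint.json"
        f" '{ambari_url}/api/v1/clusters/{cluster}?format=blueprint'",
    ]

for cmd in backup_commands():
    print(cmd)
```

[Restore is the reverse direction: the same script with `-m PUSH` loads the
dumped configs back into ZooKeeper.]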

On Tue, 27 Aug 2019 at 20:19, Otto Fowler  wrote:

> Maybe we need some automated method to backup configurations and restore
> them.
>
>
>
> On August 27, 2019 at 14:46:58, Michael Miklavcic (
> michael.miklav...@gmail.com) wrote:
>
> > Once you back up your metron configs, the same configs that worked on
> the previous version will continue to work on the version running on HDP
> 3.x.  If there is any discrepancy between the two or additional settings
> will be required, those will be documented in the release notes.  From the
> Metron perspective, this upgrade would be no different than simply
> upgrading to the new Metron version.
>
> This upgrade cannot be performed the same way we've done it in the past. A
> number of platform upgrades, including OS, are required:
>
>1. Requires the OS to be updated on all nodes because there are no
>Centos6 RPMs provided in HDP 3.1. Must bump to Centos7.
>2. The final new HBase code will not run on HDP 2.6
>3. The MPack changes for new Ambari are not backwards compatible
>4. YARN and HDFS/MR are also at risk to be backwards incompatible
>
>
> On Tue, Aug 27, 2019 at 12:39 PM Michael Miklavcic <
> michael.miklav...@gmail.com> wrote:
>
>> Adding the dev list back into the thread (a reply-all was missed).
>>
>> On Tue, Aug 27, 2019 at 10:49 AM James Sirota  wrote:
>>
>>> I agree with Simon.  HDP 2.x platform is rapidly approaching EOL and
>>> everyone will likely need to migrate by end of year.  Doing this platform
>>> upgrade sooner will give everyone visibility into what Metron on HDP 3.x
>>> looks like so they have time to plan and upgrade at their own pace.
>>> Feature-wise, the Metron application itself will be unchanged.  It is
>>> merely the platform underneath that is changing.  HDP itself can be
>>> upgraded per instructions here:
>>> https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/release-notes/content/upgrading_parent.html
>>>
>>> Once you back up your metron configs, the same configs that worked on
>>> the previous version will continue to work on the version running on HDP
>>> 3.x.  If there is any discrepancy between the two or additional settings
>>> will be required, those will be documented in the release notes.  From the
>>> Metron perspective, this upgrade would be no different than simply
>>> upgrading to the new Metron version.
>>>
>>> James
>>>
>>>
>>> 27.08.2019, 07:09, "Simon Elliston Ball" :
>>>
>>> Something worth noting here is that HDP 2.6.5 is quite old and
>>> approaching EoL rapidly, so the issue of upgrade is urgent. I am aware of a
>>> large number of users who require this upgrade ASAP, and in fact am aware
>>> of zero users who wish to remain on HDP 2.
>>>
>>> Perhaps those users who want to stay on the old platform can stick their
>>> hands up and raise concerns, but this move will likely have to happen very
>>> soon.
>>>
>>> Simon
>>>
>>> On Tue, 27 Aug 2019 at 15:04, Otto Fowler 
>>> wrote:
>>>
>>> Although we had the discussion, and some great ideas were passed
>>> around, I do not believe we came to some kind of consensus on what 1.0
>>> should look like. So that discussion would have to be picked up again so
>>> that we could know where we are at, and make it an actual thing if we were
>>> going to make this a 1.0 release.
>>>
>>> I believe that the issues raised in that discussion gating 1.0 are still
>>> largely applicable, including upgrade.
>>>
>>> Right now we have *ZERO* HDP 3.1 users. We will go from that to *only*
>>> supporting 3.1 work and releases. So every user and deployment we currently
>>> have will feel real pain, have to slay real dragons to move forward with
>>> metron.
>>>
>>> With regards to support for older versions, the “backward capability”
>>> that has been mentioned, I would not say that I have any specific plan for
>>> that in mind. What I would say rather, is that I believe that we must be
>>> explicit, setting expectations correctly and clearly with regards to our
>>> intent while demonstrating that we have thought through the situation. That
>>> discussion has not happened, at least I do not believe that the prior dev
>>> thread really handles it in context.

Re: [DISCUSS] HDP 3.1 Upgrade and release strategy

2019-08-27 Thread Otto Fowler
Maybe we need some automated method to backup configurations and restore
them.




On August 27, 2019 at 14:46:58, Michael Miklavcic (
michael.miklav...@gmail.com) wrote:

> Once you back up your metron configs, the same configs that worked on the
previous version will continue to work on the version running on HDP 3.x.
If there is any discrepancy between the two or additional settings will be
required, those will be documented in the release notes.  From the Metron
perspective, this upgrade would be no different than simply upgrading to
the new Metron version.

This upgrade cannot be performed the same way we've done it in the past. A
number of platform upgrades, including OS, are required:

   1. Requires the OS to be updated on all nodes because there are no
   Centos6 RPMs provided in HDP 3.1. Must bump to Centos7.
   2. The final new HBase code will not run on HDP 2.6
   3. The MPack changes for new Ambari are not backwards compatible
   4. YARN and HDFS/MR are also at risk to be backwards incompatible


On Tue, Aug 27, 2019 at 12:39 PM Michael Miklavcic <
michael.miklav...@gmail.com> wrote:

> Adding the dev list back into the thread (a reply-all was missed).
>
> On Tue, Aug 27, 2019 at 10:49 AM James Sirota  wrote:
>
>> I agree with Simon.  HDP 2.x platform is rapidly approaching EOL and
>> everyone will likely need to migrate by end of year.  Doing this platform
>> upgrade sooner will give everyone visibility into what Metron on HDP 3.x
>> looks like so they have time to plan and upgrade at their own pace.
>> Feature-wise, the Metron application itself will be unchanged.  It is
>> merely the platform underneath that is changing.  HDP itself can be
>> upgraded per instructions here:
>> https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/release-notes/content/upgrading_parent.html
>>
>> Once you back up your metron configs, the same configs that worked on the
>> previous version will continue to work on the version running on HDP 3.x.
>> If there is any discrepancy between the two or additional settings will be
>> required, those will be documented in the release notes.  From the Metron
>> perspective, this upgrade would be no different than simply upgrading to
>> the new Metron version.
>>
>> James
>>
>>
>> 27.08.2019, 07:09, "Simon Elliston Ball" :
>>
>> Something worth noting here is that HDP 2.6.5 is quite old and
>> approaching EoL rapidly, so the issue of upgrade is urgent. I am aware of a
>> large number of users who require this upgrade ASAP, and in fact am aware
>> of zero users who wish to remain on HDP 2.
>>
>> Perhaps those users who want to stay on the old platform can stick their
>> hands up and raise concerns, but this move will likely have to happen very
>> soon.
>>
>> Simon
>>
>> On Tue, 27 Aug 2019 at 15:04, Otto Fowler 
>> wrote:
>>
>> Although we had the discussion, and some great ideas were passed around,
>> I do not believe we came to some kind of consensus on what 1.0 should look
>> like. So that discussion would have to be picked up again so that we could
>> know where we are at, and make it an actual thing if we were going to make
>> this a 1.0 release.
>>
>> I believe that the issues raised in that discussion gating 1.0 are still
>> largely applicable, including upgrade.
>>
>> Right now we have *ZERO* HDP 3.1 users. We will go from that to *only*
>> supporting 3.1 work and releases. So every user and deployment we currently
>> have will feel real pain, have to slay real dragons to move forward with
>> metron.
>>
>> With regards to support for older versions, the “backward capability”
>> that has been mentioned, I would not say that I have any specific plan for
>> that in mind. What I would say rather, is that I believe that we must be
>> explicit, setting expectations correctly and clearly with regards to our
>> intent while demonstrating that we have thought through the situation. That
>> discussion has not happened, at least I do not believe that the prior dev
>> thread really handles it in context.
>>
>> Depending on the upgrade situation for going to 3.1, it may be that a
>> dual stream of releases or fixes or new features to the extent that we can
>> do it will greatly reduce the pain for folks, or make it viable to stick
>> with metron until they can upgrade.
>>
>> The issue of what metron *is* features wise may be another one we want
>> to take up at some point. The idea being can we separate the metron
>> _integration parts from the metron core functionality such that we can work
>> on them separately and thus support multiple platforms through
>> integrations/applications. Of course definition of metron's value beyond
>> integration, and what those features and application boundaries are would
>> be necessary.

Re: [DISCUSS] HDP 3.1 Upgrade and release strategy

2019-08-27 Thread Otto Fowler
Although we had the discussion, and some great ideas were passed around, I
do not believe we came to some kind of consensus on what 1.0 should look
like. So that discussion would have to be picked up again so that we could
know where we are at, and make it an actual thing if we were going to make
this a 1.0 release.

I believe that the issues raised in that discussion gating 1.0 are still
largely applicable, including upgrade.

Right now we have *ZERO* HDP 3.1 users. We will go from that to *only*
supporting 3.1 work and releases. So every user and deployment we currently
have will feel real pain, have to slay real dragons to move forward with
metron.

With regards to support for older versions, the “backward capability” that
has been mentioned, I would not say that I have any specific plan for that
in mind. What I would say rather, is that I believe that we must be
explicit, setting expectations correctly and clearly with regards to our
intent while demonstrating that we have thought through the situation. That
discussion has not happened, at least I do not believe that the prior dev
thread really handles it in context.

Depending on the upgrade situation for going to 3.1, it may be that a dual
stream of releases or fixes or new features to the extent that we can do it
will greatly reduce the pain for folks, or make it viable to stick with
metron until they can upgrade.

The issue of what metron *is* features wise may be another one we want to
take up at some point. The idea being can we separate the metron
_integration parts from the metron core functionality such that we can work
on them separately and thus support multiple platforms through
integrations/applications. Of course definition of metron’s value beyond
integration, and what those features and application boundaries are would
be necessary.




On August 26, 2019 at 18:52:57, Michael Miklavcic (
michael.miklav...@gmail.com) wrote:

Hi devs and users,

Some questions were asked in the Slack channel about our ongoing HDP/Hadoop
upgrade and I'd like to get a discussion rolling. The original Hadoop
upgrade discuss thread can be found here
https://lists.apache.org/thread.html/37cc29648f0592cc39d3c78a0d07fce38521bdbbc4cf40e022a7a8ea@%3Cdev.metron.apache.org%3E

The major issue and impact with upgrading the Hadoop platform is that there
are breaking changes. Code that runs on HDP 3.1 will not run on 2.x. Here
is a sampling of core components we depend on that we know of so far that
are not backwards compatible:

   1. The Core OS - we currently base our builds and test deployment off of
   artifacts pulled from HDP. The latest rev of HDP no longer ships RPMs for
   Centos 6, which means we need to upgrade to Centos 7
   2. Ambari
   3. HBase

This differs from individual components we've upgraded in the past in that
our code could still be deployed on the old as well as new version of the
component in a backwards compatible way. Based on semantic versioning, I
don't know if we can introduce this level of change in a point release,
which is the reason for kicking off this discussion. In the past, users and
developers in the community have suggested that they are -1 on a 1.x
release that does not provide an upgrade
https://lists.apache.org/thread.html/eb1a8df2d0a6a79c5d50540d1fdbf215ec83d831ff15d3117c2592cc@%3Cdev.metron.apache.org%3E
.

Is there a way we can avoid a 1.x release? If we do need 1.x, do we still
see upgrades as a gating function? The main issue is that this has the
potential to drag out the upgrade and further couple it with other
features. And with Storm 1.x being eol'ed, I'm not sure this is something
we can wait much longer for. I'll think on this and send out my own
thoughts once folks have had a chance to review.

Best,
Mike Miklavcic
Apache Metron, PMC, committer


Re: Invite for Merton slack channel

2019-08-21 Thread Otto Fowler
Done, join the metron channel




On August 21, 2019 at 00:54:08, Wan Nabe (wanna...@ymail.com) wrote:

Hi,

Please add me to the channel.


Thank you & Regards,
Hans

On 14 Aug 2019, at 17:55, R K Sharma  wrote:

Hi,
 Could you please add me to Metron Slack channel ?

Regards
Rinkesh Sharma

On Tue, Aug 6, 2019 at 8:53 PM Otto Fowler  wrote:

> sure, give it a sec
>
>
>
> On August 6, 2019 at 10:09:36, Thiago Rahal Disposti (
> thiago.ra...@kryptus.com) wrote:
>
>
> Can you please add me ?
>
> thiago.ra...@kryptus.com
>
>
> Thanks.
> Thiago Rahal
>
>
> On Thu, Jul 18, 2019 at 10:44 PM Otto Fowler 
> wrote:
>
>> Both of you are all set, join the metron slack channel
>>
>>
>>
>> On July 18, 2019 at 20:15:33, Aman Diwakar (aman.diwa...@gmail.com)
>> wrote:
>>
>> Me too please
>>
>> On Thu, Jul 18, 2019, 12:32 PM Satish Abburi 
>> wrote:
>>
>>>
>>>
>>> Can you please add me also. Thanks.
>>>
>>>
>>>
>>> satish.abb...@sstech.us
>>>
>>>
>>>
>>> *From:* "zeo...@gmail.com" 
>>> *Reply-To:* "user@metron.apache.org" 
>>> *Date:* Tuesday, July 9, 2019 at 2:31 AM
>>> *To:* "user@metron.apache.org" 
>>> *Subject:* Re: Invite for Merton slack channel
>>>
>>>
>>>
>>> You got it.  Sent
>>>
>>> Jon Zeolla
>>>
>>>
>>>
>>> On Tue, Jul 9, 2019, 12:55 AM Rendi 7936  wrote:
>>>
>>> Good Morning,
>>>
>>> Hi there
>>>
>>> Can i join Apache Metron Slack Channel ?
>>> My e-mail is rendi.7...@gmail.com
>>>
>>> On 2019/07/08 13:29:42, "z...@gmail.com" wrote:
>>> > Done
>>> >
>>> > - Jon Zeolla
>>> > zeo...@gmail.com
>>> >
>>> > On Mon, Jul 8, 2019 at 9:18 AM Srikanth Nagarajan
>>> > wrote:
>>> >
>>> > > Hi
>>> > >
>>> > > I would appreciate an invite to the Metron slack channel.
>>> > >
>>> > > Thank you
>>> > > Srikanth
>>> > >
>>> > > __
>>> > > *Srikanth Nagarajan *
>>> > > Principal
>>> > > *Gandiva Networks Inc*
>>> > > *732.690.1884* Mobile
>>> > > s...@gandivanetworks.com
>>> > > www.gandivanetworks.com
>>>
>>>


Re: Adding to metron slack channel

2019-08-14 Thread Otto Fowler
invited, head over to the metron channel




On August 13, 2019 at 23:39:35, Mrinal Pande (
mrinal.pa...@st.niituniversity.in) wrote:

Hi,

Please add me to the metron slack channel.



Regards,
Mrinal


Re: Invite for Merton slack channel

2019-08-14 Thread Otto Fowler
invited, head over to the metron channel




On August 14, 2019 at 05:55:36, R K Sharma (rksu...@gmail.com) wrote:

Hi,
 Could you please add me to Metron Slack channel ?

Regards
Rinkesh Sharma

On Tue, Aug 6, 2019 at 8:53 PM Otto Fowler  wrote:

> sure, give it a sec
>
>
>
> On August 6, 2019 at 10:09:36, Thiago Rahal Disposti (
> thiago.ra...@kryptus.com) wrote:
>
>
> Can you please add me ?
>
> thiago.ra...@kryptus.com
>
>
> Thanks.
> Thiago Rahal
>
>
> On Thu, Jul 18, 2019 at 10:44 PM Otto Fowler 
> wrote:
>
>> Both of you are all set, join the metron slack channel
>>
>>
>>
>> On July 18, 2019 at 20:15:33, Aman Diwakar (aman.diwa...@gmail.com)
>> wrote:
>>
>> Me too please
>>
>> On Thu, Jul 18, 2019, 12:32 PM Satish Abburi 
>> wrote:
>>
>>>
>>>
>>> Can you please add me also. Thanks.
>>>
>>>
>>>
>>> satish.abb...@sstech.us
>>>
>>>
>>>
>>> *From:* "zeo...@gmail.com" 
>>> *Reply-To:* "user@metron.apache.org" 
>>> *Date:* Tuesday, July 9, 2019 at 2:31 AM
>>> *To:* "user@metron.apache.org" 
>>> *Subject:* Re: Invite for Merton slack channel
>>>
>>>
>>>
>>> You got it.  Sent
>>>
>>> Jon Zeolla
>>>
>>>
>>>
>>> On Tue, Jul 9, 2019, 12:55 AM Rendi 7936  wrote:
>>>
>>> Good Morning,
>>>
>>> Hi there
>>>
>>> Can i join Apache Metron Slack Channel ?
>>> My e-mail is rendi.7...@gmail.com
>>>
>>> On 2019/07/08 13:29:42, "z...@gmail.com" wrote:
>>> > Done
>>> >
>>> > - Jon Zeolla
>>> > zeo...@gmail.com
>>> >
>>> > On Mon, Jul 8, 2019 at 9:18 AM Srikanth Nagarajan
>>> > wrote:
>>> >
>>> > > Hi
>>> > >
>>> > > I would appreciate an invite to the Metron slack channel.
>>> > >
>>> > > Thank you
>>> > > Srikanth
>>> > >
>>> > > __
>>> > > *Srikanth Nagarajan *
>>> > > Principal
>>> > > *Gandiva Networks Inc*
>>> > > *732.690.1884* Mobile
>>> > > s...@gandivanetworks.com
>>> > > www.gandivanetworks.com
>>>
>>>


Re: Invite for Merton slack channel

2019-08-06 Thread Otto Fowler
sure, give it a sec




On August 6, 2019 at 10:09:36, Thiago Rahal Disposti (
thiago.ra...@kryptus.com) wrote:


Can you please add me ?

thiago.ra...@kryptus.com


Thanks.
Thiago Rahal


On Thu, Jul 18, 2019 at 10:44 PM Otto Fowler 
wrote:

> Both of you are all set, join the metron slack channel
>
>
>
> On July 18, 2019 at 20:15:33, Aman Diwakar (aman.diwa...@gmail.com) wrote:
>
> Me too please
>
> On Thu, Jul 18, 2019, 12:32 PM Satish Abburi 
> wrote:
>
>>
>>
>> Can you please add me also. Thanks.
>>
>>
>>
>> satish.abb...@sstech.us
>>
>>
>>
>> *From:* "zeo...@gmail.com" 
>> *Reply-To:* "user@metron.apache.org" 
>> *Date:* Tuesday, July 9, 2019 at 2:31 AM
>> *To:* "user@metron.apache.org" 
>> *Subject:* Re: Invite for Merton slack channel
>>
>>
>>
>> You got it.  Sent
>>
>> Jon Zeolla
>>
>>
>>
>> On Tue, Jul 9, 2019, 12:55 AM Rendi 7936  wrote:
>>
>> Good Morning,
>>
>> Hi there
>>
>> Can i join Apache Metron Slack Channel ?
>> My e-mail is rendi.7...@gmail.com
>>
>> On 2019/07/08 13:29:42, "z...@gmail.com" wrote:
>> > Done
>> >
>> > - Jon Zeolla
>> > zeo...@gmail.com
>> >
>> > On Mon, Jul 8, 2019 at 9:18 AM Srikanth Nagarajan
>> > wrote:
>> >
>> > > Hi
>> > >
>> > > I would appreciate an invite to the Metron slack channel.
>> > >
>> > > Thank you
>> > > Srikanth
>> > >
>> > > __
>> > > *Srikanth Nagarajan *
>> > > Principal
>> > > *Gandiva Networks Inc*
>> > > *732.690.1884* Mobile
>> > > s...@gandivanetworks.com
>> > > www.gandivanetworks.com
>>
>>


Re: Invite for Merton slack channel

2019-07-18 Thread Otto Fowler
Both of you are all set, join the metron slack channel




On July 18, 2019 at 20:15:33, Aman Diwakar (aman.diwa...@gmail.com) wrote:

Me too please

On Thu, Jul 18, 2019, 12:32 PM Satish Abburi 
wrote:

>
>
> Can you please add me also. Thanks.
>
>
>
> satish.abb...@sstech.us
>
>
>
> *From:* "zeo...@gmail.com" 
> *Reply-To:* "user@metron.apache.org" 
> *Date:* Tuesday, July 9, 2019 at 2:31 AM
> *To:* "user@metron.apache.org" 
> *Subject:* Re: Invite for Merton slack channel
>
>
>
> You got it.  Sent
>
> Jon Zeolla
>
>
>
> On Tue, Jul 9, 2019, 12:55 AM Rendi 7936  wrote:
>
> Good Morning,
>
> Hi there
>
> Can i join Apache Metron Slack Channel ?
> My e-mail is rendi.7...@gmail.com
>
> On 2019/07/08 13:29:42, "z...@gmail.com" wrote:
> > Done
> >
> > - Jon Zeolla
> > zeo...@gmail.com
> >
> > On Mon, Jul 8, 2019 at 9:18 AM Srikanth Nagarajan
> > wrote:
> >
> > > Hi
> > >
> > > I would appreciate an invite to the Metron slack channel.
> > >
> > > Thank you
> > > Srikanth
> > >
> > > __
> > > *Srikanth Nagarajan *
> > > Principal
> > > *Gandiva Networks Inc*
> > > *732.690.1884* Mobile
> > > s...@gandivanetworks.com
> > > www.gandivanetworks.com
>
>


Re: batch indexing in JSON format

2019-07-15 Thread Otto Fowler
We could do something like have some other topology or job that kicks off
when an HDFS file is closed.

So before we start a new file, we “queue” a log to some conversion
topology/job whatever or something like that.




On July 15, 2019 at 10:04:08, Michael Miklavcic (michael.miklav...@gmail.com)
wrote:

Adding to what Ryan said (and I agree), there are a couple additional
consequences:

   1. There are questions around just how optimal an ORC file written in
   real-time can actually be. In order to get columns of data striped
   effectively, you need a sizable number of rows, which is unlikely to
   accumulate in real-time. Some of these storage formats also have "engines"
   running that manage compactions (like HBase does), but I haven't checked on
   this in a while. I think Kudu may do this, actually, but again that's a
   whole new storage engine, not just a format.
   2. More importantly - loss of data - HDFS is the source of truth. We
   guarantee at-least-once processing. In order to achieve the efficient
   columnar storage that makes a columnar format feasible, it's likely that
   we'd have to make larger batches in indexing. This creates a potential for
   lag in the system, where we would have to worry more about Storm failures
   than we do currently. With HDFS writing, our partial files are still written
   even if there's a failure in the topology or elsewhere. It does take
   up more disk space, but we felt this was a reasonable tradeoff
   architecturally for something that should be feasible to write ad-hoc.

That being said, you could certainly write conversion jobs that should be
able to lag the real-time processing just enough to get the benefits of
real-time and still do a decent job of getting your data into a more
efficient storage format, if you choose.
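Such a lagging conversion job could be sketched with Spark roughly as follows. This is a hedged, untested sketch: the HDFS paths are illustrative placeholders, and the actual indexing output directory depends on your Metron configuration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metron-json-to-orc").getOrCreate()

# Metron's HDFS indexer writes newline-delimited JSON, one directory
# per sensor; the path below is an assumed example, not a guaranteed default.
df = spark.read.json("hdfs:///apps/metron/indexing/indexed/bro")

# Rewrite as ORC (or use .parquet(...) instead). Schedule this to lag
# real-time ingest so files are already closed before conversion.
df.write.mode("append").orc("hdfs:///apps/metron/archive/bro_orc")
```

Run on a schedule (e.g. hourly via Oozie or cron), this keeps the JSON source of truth intact while producing a compact columnar copy for ad-hoc queries.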

Cheers,
Mike


On Mon, Jul 15, 2019 at 7:00 AM Ryan Merriman  wrote:

> The short answer is no.  Offline conversion to other formats (as you
> describe) is a better approach anyways.  Writing to a Parquet/ORC file is
> more compute intensive than just writing JSON data directly to HDFS and not
> something you need to do in real-time since you have the same data
> available in ES/Solr.  This would slow down the batch indexing topology for
> no real gain.
>
> On Jul 15, 2019, at 7:25 AM,  <
> stephane.d...@orange.com> wrote:
>
> Hello all,
>
>
>
> I have a question regarding batch indexing. As far as I can see, data are
> stored in JSON format in HDFS. Nevertheless, this uses a lot of storage
> because of JSON verbosity, enrichment, etc. Is there any way to use Parquet,
> for example? I guess it's possible to do it the day after, i.e. read
> the JSON and save it as another format with Spark, but is it possible to
> choose the format at the batch indexing configuration level?
>
>
>
> Thanks a lot
>
>
>
> Stéphane
>
>
>
>
>
>


Re: Build Failed for 0.7.2

2019-05-28 Thread Otto Fowler
https://github.com/apache/metron/pull/1419

This is the PR to follow


On May 28, 2019 at 00:34:10, Farrukh Naveed Anjum (anjum.farr...@gmail.com)
wrote:

Hi,
Can you please tell me if this issue is fixed? Even today I get this error
after pulling the latest Metron; issuing the build command gives this
error.

On Wed, May 22, 2019 at 4:12 PM Otto Fowler  wrote:

> Thanks!  I’ll create the issue
>
>
> On May 22, 2019 at 01:42:15, Farrukh Naveed Anjum (anjum.farr...@gmail.com)
> wrote:
>
> Requires: /bin/bash
> Checking for unpackaged file(s): /usr/lib/rpm/check-files
> /root/BUILDROOT/metron-0.7.2-root
> error: Installed (but unpackaged) file(s) found:
>/usr/metron/0.7.2/config/zookeeper/parsers/leef.json
> Macro %_prerelease has empty body
> Macro %_prerelease has empty body
> Installed (but unpackaged) file(s) found:
>/usr/metron/0.7.2/config/zookeeper/parsers/leef.json
>
>

--
*Best Regards*
Farrukh Naveed Anjum
*M:* +92 321 5083954 (WhatsApp Enabled)
*W:* https://www.farrukh.cc/


Re: Build Failed for 0.7.2

2019-05-22 Thread Otto Fowler
Thanks!  I’ll create the issue


On May 22, 2019 at 01:42:15, Farrukh Naveed Anjum (anjum.farr...@gmail.com)
wrote:

Requires: /bin/bash
Checking for unpackaged file(s): /usr/lib/rpm/check-files
/root/BUILDROOT/metron-0.7.2-root
error: Installed (but unpackaged) file(s) found:
   /usr/metron/0.7.2/config/zookeeper/parsers/leef.json
Macro %_prerelease has empty body
Macro %_prerelease has empty body
Installed (but unpackaged) file(s) found:
   /usr/metron/0.7.2/config/zookeeper/parsers/leef.json


Re: Issue when trying to load JSON

2019-04-25 Thread Otto Fowler
On April 25, 2019 at 12:05:45, Nick Allen (n...@nickallen.org) wrote:

> Otto: I’m not sure this is an enveloped issue, or a new feature for the
json map parser

This is not an issue with JSONMapParser.  This is an issue with the
"enveloping" mechanism, prior to when the JSONMapParser gets the message.

The entire message has been parsed as a JSON object including the value of
the "_source" field.  Since the "_source" field itself contains valid JSON,
the parser transformed it into a Map, rather than the String that it
expects.

In my opinion, the ENVELOPE strategy needs to not parse the contents of
that "_source" field.  The ENVELOPE strategy should work for JSON and
non-JSON content alike.


The envelope doesn’t parse the contents of the source field; it was just
written to support string fields as the source, not JSON objects.
The source field ends up parsed because the JSON document containing it is
parsed.

This is a design limitation of the envelope functionality, not a bug.

The issue _may_ be one that can be addressed in the JSONMap parser or
transformations, if the use case is not really an envelope use case but one
where envelope is being used to ‘get at’ the inner JSON in order to
transform it. I don’t mean to say this is a bug in JSONMap either.
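The mismatch described here can be seen with a few lines of plain Python (an illustration outside Metron, not Metron code): once the outer document is parsed, "_source" is already a map, not the string the ENVELOPE strategy expects.

```python
import json

# The outer document, as it arrives on the Kafka topic (shortened sample).
raw = '{"_index": "indexing", "_source": {"dst": "127.0.0.1", "srcPort": "80"}}'
outer = json.loads(raw)

# Parsing the outer JSON has already turned "_source" into a map
# (on the Java side, a LinkedHashMap), which is exactly what the
# ENVELOPE strategy's String cast trips over.
print(type(outer["_source"]).__name__)  # dict, not str

# What the strategy was written for: a string-valued envelope field.
wrapped = json.loads('{"_source": "Feb 21 12:46:50 suricata[72280]: ..."}')
print(type(wrapped["_source"]).__name__)  # str
```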



On Thu, Apr 25, 2019 at 11:31 AM Otto Fowler 
wrote:

> I’m not sure about the name, I’m more thinking about the case.
> I’m not sure this is an enveloped issue, or a new feature for the json map
> parser ( or if you could do it with the jsonMap parser and JSONPath )
>
>
>
> On April 25, 2019 at 11:23:25, Simon Elliston Ball (
> si...@simonellistonball.com) wrote:
>
> Seems like this would a good additional strategy, something like
> ENVELOPE_PARSED? Any thoughts on a good name?
>
> On Thu, 25 Apr 2019 at 16:20, Otto Fowler  wrote:
>
>> So,  the enveloped message doesn’t support getting an already parsed json
>> object from the enveloped json, we would have to do some work to support
>> this,  Even if we _could_ wrangle it in there now, from what I can see we
>> would still  have to serialize to bytes to pass to the actual parser and
>> that would be inefficient.
>> Can you open a jira with the information you provided?
>>
>>
>>
>> On April 25, 2019 at 11:12:38, Otto Fowler (ottobackwa...@gmail.com)
>> wrote:
>>
>> Raw message in this case assumes that the raw message is a String
>> embedded in the json field that you supply, not a nested json object, so it
>> is looking for
>>
>>
>> “_source” : “some other embedded string of some format like syslog in
>> json”
>>
>> There are other message strategies, but I’m not sure they would work in
>> this instance.  I’ll keep looking. hopefully someone more familiar will
>> jump in.
>>
>>
>> On April 25, 2019 at 10:48:06, stephane.d...@orange.com (
>> stephane.d...@orange.com) wrote:
>>
>> Hello,
>>
>>
>>
>> I’m trying to load some JSON data which has the following structure (this
>> is a sample):
>>
>>
>>
>> {
>>
>>   "_index": "indexing",
>>
>>   "_type": "Event",
>>
>>   "_id": "AWAkTAefYn0uCUpkHmCy",
>>
>>   "_score": 1,
>>
>>   "_source": {
>>
>> "dst": "127.0.0.1",
>>
>> "devTimeEpoch": "151243734",
>>
>> "dstPort": "0",
>>
>> "srcPort": "80",
>>
>> "src": "194.51.198.185"
>>
>>   }
>>
>> }
>>
>>
>>
>> In my file, everything is on the same line. My parser config is the
>> following:
>>
>>
>>
>> {
>>
>>   "parserClassName": "org.apache.metron.parsers.json.JSONMapParser",
>>
>>   "filterClassName": null,
>>
>>   "sensorTopic": "my_topic",
>>
>>   "outputTopic": null,
>>
>>   "errorTopic": null,
>>
>>   "writerClassName": null,
>>
>>   "errorWriterClassName": null,
>>
>>   "readMetadata": true,
>>
>>   "mergeMetadata": true,
>>
>>   "numWorkers": 2,
>>
>>   "numAckers": null,
>>
>>   "spoutParallelism": 1,
>>
>>   "spoutNumTasks": 1,
>>
>>   "parserParallelism": 2,
>>
>>   "parserNumTasks": 2,
>>
>>   "errorWriterParallelism

Re: Issue when trying to load JSON

2019-04-25 Thread Otto Fowler
Also, our support for nested, unflattened json isn’t great to begin with.

Stephane,  can you state your use case?

Do you want to get _source only to transform it?  or do you want to use
source as the message and discard the top level fields?  other?



On April 25, 2019 at 11:31:36, Otto Fowler (ottobackwa...@gmail.com) wrote:

I’m not sure about the name, I’m more thinking about the case.
I’m not sure this is an enveloped issue, or a new feature for the json map
parser ( or if you could do it with the jsonMap parser and JSONPath )



On April 25, 2019 at 11:23:25, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

Seems like this would a good additional strategy, something like
ENVELOPE_PARSED? Any thoughts on a good name?

On Thu, 25 Apr 2019 at 16:20, Otto Fowler  wrote:

> So,  the enveloped message doesn’t support getting an already parsed json
> object from the enveloped json, we would have to do some work to support
> this,  Even if we _could_ wrangle it in there now, from what I can see we
> would still  have to serialize to bytes to pass to the actual parser and
> that would be inefficient.
> Can you open a jira with the information you provided?
>
>
>
> On April 25, 2019 at 11:12:38, Otto Fowler (ottobackwa...@gmail.com)
> wrote:
>
> Raw message in this case assumes that the raw message is a String embedded
> in the json field that you supply, not a nested json object, so it is
> looking for
>
>
> “_source” : “some other embedded string of some format like syslog in json”
>
> There are other message strategies, but I’m not sure they would work in
> this instance.  I’ll keep looking. hopefully someone more familiar will
> jump in.
>
>
> On April 25, 2019 at 10:48:06, stephane.d...@orange.com (
> stephane.d...@orange.com) wrote:
>
> Hello,
>
>
>
> I’m trying to load some JSON data which has the following structure (this
> is a sample):
>
>
>
> {
>
>   "_index": "indexing",
>
>   "_type": "Event",
>
>   "_id": "AWAkTAefYn0uCUpkHmCy",
>
>   "_score": 1,
>
>   "_source": {
>
> "dst": "127.0.0.1",
>
> "devTimeEpoch": "151243734",
>
> "dstPort": "0",
>
> "srcPort": "80",
>
> "src": "194.51.198.185"
>
>   }
>
> }
>
>
>
> In my file, everything is on the same line. My parser config is the
> following:
>
>
>
> {
>
>   "parserClassName": "org.apache.metron.parsers.json.JSONMapParser",
>
>   "filterClassName": null,
>
>   "sensorTopic": "my_topic",
>
>   "outputTopic": null,
>
>   "errorTopic": null,
>
>   "writerClassName": null,
>
>   "errorWriterClassName": null,
>
>   "readMetadata": true,
>
>   "mergeMetadata": true,
>
>   "numWorkers": 2,
>
>   "numAckers": null,
>
>   "spoutParallelism": 1,
>
>   "spoutNumTasks": 1,
>
>   "parserParallelism": 2,
>
>   "parserNumTasks": 2,
>
>   "errorWriterParallelism": 1,
>
>   "errorWriterNumTasks": 1,
>
>   "spoutConfig": {},
>
>   "securityProtocol": null,
>
>   "stormConfig": {},
>
>   "parserConfig": {
>
>   },
>
>   "fieldTransformations": [
>
>{
>
>  "transformation":"RENAME",
>
>  "config": {
>
> "dst": "ip_dst_addr",
>
> "src": "ip_src_addr",
>
> "srcPort": "ip_src_port",
>
> "dstPort": "ip_dst_port",
>
> "devTimeEpoch": "timestamp"
>
>  }
>
>}
>
>   ],
>
>   "cacheConfig": {},
>
>   "rawMessageStrategy": "ENVELOPE",
>
>   "rawMessageStrategyConfig": {
>
> "messageField": "_source"
>
>   }
>
> }
>
>
>
> But in Storm I get the following errors:
>
>
>
> 2019-04-25 16:45:22.225 o.a.s.d.executor Thread-5-parserBolt-executor[8 8]
> [ERROR]
>
> java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to
> java.lang.String
>
> at
> org.apache.metron.common.message.metadata.EnvelopedRawMessageStrategy.get(EnvelopedRawMessageStrategy.java:78)
> ~[stormjar.jar:?]
>
> at
> org.apache.metron.common.message.metadata.RawMe

Re: Issue when trying to load JSON

2019-04-25 Thread Otto Fowler
I’m not sure about the name, I’m more thinking about the case.
I’m not sure this is an enveloped issue, or a new feature for the json map
parser ( or if you could do it with the jsonMap parser and JSONPath )



On April 25, 2019 at 11:23:25, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

Seems like this would a good additional strategy, something like
ENVELOPE_PARSED? Any thoughts on a good name?

On Thu, 25 Apr 2019 at 16:20, Otto Fowler  wrote:

> So,  the enveloped message doesn’t support getting an already parsed json
> object from the enveloped json, we would have to do some work to support
> this,  Even if we _could_ wrangle it in there now, from what I can see we
> would still  have to serialize to bytes to pass to the actual parser and
> that would be inefficient.
> Can you open a jira with the information you provided?
>
>
>
> On April 25, 2019 at 11:12:38, Otto Fowler (ottobackwa...@gmail.com)
> wrote:
>
> Raw message in this case assumes that the raw message is a String embedded
> in the json field that you supply, not a nested json object, so it is
> looking for
>
>
> “_source” : “some other embedded string of some format like syslog in json”
>
> There are other message strategies, but I’m not sure they would work in
> this instance.  I’ll keep looking. hopefully someone more familiar will
> jump in.
>
>
> On April 25, 2019 at 10:48:06, stephane.d...@orange.com (
> stephane.d...@orange.com) wrote:
>
> Hello,
>
>
>
> I’m trying to load some JSON data which has the following structure (this
> is a sample):
>
>
>
> {
>
>   "_index": "indexing",
>
>   "_type": "Event",
>
>   "_id": "AWAkTAefYn0uCUpkHmCy",
>
>   "_score": 1,
>
>   "_source": {
>
> "dst": "127.0.0.1",
>
> "devTimeEpoch": "151243734",
>
> "dstPort": "0",
>
> "srcPort": "80",
>
> "src": "194.51.198.185"
>
>   }
>
> }
>
>
>
> In my file, everything is on the same line. My parser config is the
> following:
>
>
>
> {
>
>   "parserClassName": "org.apache.metron.parsers.json.JSONMapParser",
>
>   "filterClassName": null,
>
>   "sensorTopic": "my_topic",
>
>   "outputTopic": null,
>
>   "errorTopic": null,
>
>   "writerClassName": null,
>
>   "errorWriterClassName": null,
>
>   "readMetadata": true,
>
>   "mergeMetadata": true,
>
>   "numWorkers": 2,
>
>   "numAckers": null,
>
>   "spoutParallelism": 1,
>
>   "spoutNumTasks": 1,
>
>   "parserParallelism": 2,
>
>   "parserNumTasks": 2,
>
>   "errorWriterParallelism": 1,
>
>   "errorWriterNumTasks": 1,
>
>   "spoutConfig": {},
>
>   "securityProtocol": null,
>
>   "stormConfig": {},
>
>   "parserConfig": {
>
>   },
>
>   "fieldTransformations": [
>
>{
>
>  "transformation":"RENAME",
>
>  "config": {
>
> "dst": "ip_dst_addr",
>
> "src": "ip_src_addr",
>
> "srcPort": "ip_src_port",
>
> "dstPort": "ip_dst_port",
>
> "devTimeEpoch": "timestamp"
>
>  }
>
>}
>
>   ],
>
>   "cacheConfig": {},
>
>   "rawMessageStrategy": "ENVELOPE",
>
>   "rawMessageStrategyConfig": {
>
> "messageField": "_source"
>
>   }
>
> }
>
>
>
> But in Storm I get the following errors:
>
>
>
> 2019-04-25 16:45:22.225 o.a.s.d.executor Thread-5-parserBolt-executor[8 8]
> [ERROR]
>
> java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to
> java.lang.String
>
> at
> org.apache.metron.common.message.metadata.EnvelopedRawMessageStrategy.get(EnvelopedRawMessageStrategy.java:78)
> ~[stormjar.jar:?]
>
> at
> org.apache.metron.common.message.metadata.RawMessageStrategies.get(RawMessageStrategies.java:54)
> ~[stormjar.jar:?]
>
> at
> org.apache.metron.common.message.metadata.RawMessageUtil.getRawMessage(RawMessageUtil.java:55)
> ~[stormjar.jar:?]
>
> at
> org.apache.metron.parsers.bolt.ParserBolt.execute(ParserBolt.java:251)
> [stormjar.jar:?]
>
> at

Re: Issue when trying to load JSON

2019-04-25 Thread Otto Fowler
Raw message in this case assumes that the raw message is a String embedded
in the json field that you supply, not a nested json object, so it is
looking for


“_source” : “some other embedded string of some format like syslog in json”

There are other message strategies, but I’m not sure they would work in
this instance. I’ll keep looking; hopefully someone more familiar will
jump in.


On April 25, 2019 at 10:48:06, stephane.d...@orange.com (
stephane.d...@orange.com) wrote:

Hello,



I’m trying to load some JSON data which has the following structure (this
is a sample):



{

  "_index": "indexing",

  "_type": "Event",

  "_id": "AWAkTAefYn0uCUpkHmCy",

  "_score": 1,

  "_source": {

"dst": "127.0.0.1",

"devTimeEpoch": "151243734",

"dstPort": "0",

"srcPort": "80",

"src": "194.51.198.185"

  }

}



In my file, everything is on the same line. My parser config is the
following:



{

  "parserClassName": "org.apache.metron.parsers.json.JSONMapParser",

  "filterClassName": null,

  "sensorTopic": "my_topic",

  "outputTopic": null,

  "errorTopic": null,

  "writerClassName": null,

  "errorWriterClassName": null,

  "readMetadata": true,

  "mergeMetadata": true,

  "numWorkers": 2,

  "numAckers": null,

  "spoutParallelism": 1,

  "spoutNumTasks": 1,

  "parserParallelism": 2,

  "parserNumTasks": 2,

  "errorWriterParallelism": 1,

  "errorWriterNumTasks": 1,

  "spoutConfig": {},

  "securityProtocol": null,

  "stormConfig": {},

  "parserConfig": {

  },

  "fieldTransformations": [

   {

 "transformation":"RENAME",

 "config": {

"dst": "ip_dst_addr",

"src": "ip_src_addr",

"srcPort": "ip_src_port",

"dstPort": "ip_dst_port",

"devTimeEpoch": "timestamp"

 }

   }

  ],

  "cacheConfig": {},

  "rawMessageStrategy": "ENVELOPE",

  "rawMessageStrategyConfig": {

"messageField": "_source"

  }

}



But in Storm I get the following errors:



2019-04-25 16:45:22.225 o.a.s.d.executor Thread-5-parserBolt-executor[8 8]
[ERROR]

java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to
java.lang.String

at
org.apache.metron.common.message.metadata.EnvelopedRawMessageStrategy.get(EnvelopedRawMessageStrategy.java:78)
~[stormjar.jar:?]

at
org.apache.metron.common.message.metadata.RawMessageStrategies.get(RawMessageStrategies.java:54)
~[stormjar.jar:?]

at
org.apache.metron.common.message.metadata.RawMessageUtil.getRawMessage(RawMessageUtil.java:55)
~[stormjar.jar:?]

at
org.apache.metron.parsers.bolt.ParserBolt.execute(ParserBolt.java:251)
[stormjar.jar:?]

at
org.apache.storm.daemon.executor$fn__10195$tuple_action_fn__10197.invoke(executor.clj:735)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]

at
org.apache.storm.daemon.executor$mk_task_receiver$fn__10114.invoke(executor.clj:466)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]

at
org.apache.storm.disruptor$clojure_handler$reify__4137.onEvent(disruptor.clj:40)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]

at
org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]

at
org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]

at
org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]

at
org.apache.storm.daemon.executor$fn__10195$fn__10208$fn__10263.invoke(executor.clj:855)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]

at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:484)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]

at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]

at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]





How can I debug this?



Thanks



Stéphane

_

Ce message et ses pieces jointes peuvent contenir des informations
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez
recu ce message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les
messages electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere,
deforme ou falsifie. Merci.

This message and its attachments may contain confidential or
privileged information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and
delete this message and its attachments.
As emails may be altered, Orange is not liable for messages that have
been modified, changed or 

Re: Help regarding Parser Configuration

2019-02-26 Thread Otto Fowler
OK,
So the data you want is embedded inside the message field after parsing.
Bro's syslog output is a generic format: it parses out the message field,
but doesn't parse the message contents itself. If you need to parse out the
message contents, it will be more work.

ip dest and ip source are there for you

 "ip_dst_addr": "172.16.4.18",
> "ip_src_addr": "10.2.2.1",
>

As for priority and classification, I think you can get them using two
Stellar REGEXP_GROUP_VAL() calls.
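For the sample message above, those group extractions would look roughly like REGEXP_GROUP_VAL(message, '\\[Classification: ([^\\]]+)\\]', 1) in Stellar (an untested sketch). The same regexes are shown below in plain Python so they can be verified outside Metron:

```python
import re

message = ("Feb 26 00:00:01 suricata[1546]: [Drop] [1:5103:0] "
           "OPN_Social_Media - Facebook - DNS request for facebook.com "
           "[Classification: Social-Media app detection by OPNsense] "
           "[Priority: 2] {UDP} 10.2.2.153:39445 -> 8.8.8.8:53")

# Group-capturing regexes of the kind REGEXP_GROUP_VAL would take.
classification = re.search(r"\[Classification: ([^\]]+)\]", message).group(1)
priority = re.search(r"\[Priority: (\d+)\]", message).group(1)
src, dst = re.search(r"\{\w+\} (\S+) -> (\S+)", message).groups()

print(classification)  # Social-Media app detection by OPNsense
print(priority)        # 2
print(src, dst)        # 10.2.2.153:39445 8.8.8.8:53
```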



On February 26, 2019 at 00:41:47, Farrukh Naveed Anjum (
anjum.farr...@gmail.com) wrote:

If you are asking for Syslog from Bro following is
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path syslog
#open 2019-02-26-00-00-00
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto facility
severity message
#types time string addr port addr port enum string string string

1551121201.339249 CEyre817MtElGE642a 10.2.2.1 514 172.16.4.18 514 udp
LOCAL5 NOTICE Feb 26 00:00:01 suricata[1546]: [Drop] [1:5103:0]
OPN_Social_Media - Facebook - DNS request for facebook.com [Classification:
Social-Media app detection by OPNsense] [Priority: 2] {UDP} 10.2.2.153:39445
-> 8.8.8.8:53



On Tue, Feb 26, 2019 at 10:32 AM Farrukh Naveed Anjum <
anjum.farr...@gmail.com> wrote:

> {
>   "_index": "bro_index_2019.02.26.10",
>   "_type": "bro_doc",
>   "_id": "2ecb0750-00c5-4617-95d7-eeaba539d12d",
>   "_version": 1,
>   "_score": null,
>   "_source": {
> "bro_timestamp": "1551159048.102709",
> "ip_dst_port": 514,
> "adapter:geoadapter:begin:ts": "1551159049014",
> "parallelenricher:enrich:end:ts": "1551159049017",
> "uid": "CEyre817MtElGE642a",
> "protocol": "syslog",
> "source:type": "bro",
> "adapter:threatinteladapter:end:ts": "1551159049016",
> "original_string": "SYSLOG | severity:NOTICE uid:CEyre817MtElGE642a
> id.orig_p:514 id.resp_p:514 proto:udp id.orig_h:10.2.2.1 message:Feb 26
> 10:30:48 suricata[1546]: [Drop] [1:5103:0] OPN_Social_Media - Facebook
> - DNS request for facebook.com [Classification: Social-Media app
> detection by OPNsense] [Priority: 2] {UDP} 10.2.2.115:25269 -> 8.8.8.8:53
> facility:LOCAL5 ts:1551159048.102709 id.resp_h:172.16.4.18",
> "ip_dst_addr": "172.16.4.18",
> "adapter:hostfromjsonlistadapter:end:ts": "1551159049014",
> "adapter:geoadapter:end:ts": "1551159049015",
> "ip_src_addr": "10.2.2.1",
> "timestamp": 1551159048102,
> "severity": "NOTICE",
> "parallelenricher:enrich:begin:ts": "1551159049016",
> "adapter:hostfromjsonlistadapter:begin:ts": "1551159049014",
> "message": "Feb 26 10:30:48 suricata[1546]: [Drop] [1:5103:0]
> OPN_Social_Media - Facebook - DNS request for facebook.com
> [Classification: Social-Media app detection by OPNsense] [Priority: 2]
> {UDP} 10.2.2.115:25269 -> 8.8.8.8:53",
> "parallelenricher:splitter:begin:ts": "1551159049016",
> "ip_src_port": 514,
> "proto": "udp",
> "parallelenricher:splitter:end:ts": "1551159049016",
> "adapter:threatinteladapter:begin:ts": "1551159049016",
> "guid": "2ecb0750-00c5-4617-95d7-eeaba539d12d",
> "facility": "LOCAL5"
>   },
>   "fields": {
> "parallelenricher:enrich:begin:ts": [
>   1551159049016
> ],
> "adapter:geoadapter:begin:ts": [
>   1551159049014
> ],
> "adapter:hostfromjsonlistadapter:begin:ts": [
>   1551159049014
> ],
> "parallelenricher:enrich:end:ts": [
>   1551159049017
> ],
> "parallelenricher:splitter:begin:ts": [
>   1551159049016
> ],
> "adapter:threatinteladapter:end:ts": [
>   1551159049016
> ],
> "adapter:hostfromjsonlistadapter:end:ts": [
>   1551159049014
> ],
> "parallelenricher:splitter:end:ts": [
>   1551159049016
> ],
> "adapter:threatinteladapter:begin:ts": [
>   1551159049016
> ],
> "adapter:geoadapter:end:ts": [
>   1551159049015
> ],
> "timestamp": [
>   1551159048102
> ]
>   },
>   "highlight": {
> "original_string"

Re: Help regarding Parser Configuration

2019-02-21 Thread Otto Fowler
Can you find an instance of one of these logs in Kibana or ES and give us a
sanitized version of that?



On February 21, 2019 at 02:55:09, Farrukh Naveed Anjum (
anjum.farr...@gmail.com) wrote:

Hi this is the original event received to bro

SYSLOG | *severity:*NOTICE uid:CN4kU02atBGK0qlA5g *id.orig_p*:514
*id.resp_p*:514 *proto*:udp id.orig_h:10.2.2.1 *message*:Feb 21 12:46:50
suricata[72280]: [Drop] [1:5103:0] OPN_Social_Media - Facebook - DNS
request for facebook.com [Classification: Social-Media app detection by
OPNsense] [Priority: 2] {UDP} 10.2.2.236:11928 -> 114.114.114.114:53
*facility:*LOCAL5 ts:1550735210.67931 id.resp_h:172.16.4.18


All I am asking is to further extract *message*
Feb 21 12:46:50 suricata[72280]: [Drop] [1:5103:0] OPN_Social_Media -
Facebook - DNS request for facebook.com [Classification: Social-Media app
detection by OPNsense] [Priority: 2] {UDP} 10.2.2.236:11928 ->
114.114.114.114:53

Following is the default parser for bro.
{
   "parserClassName":"org.apache.metron.parsers.bro.BasicBroParser",
   "filterClassName":null,
   "sensorTopic":"bro",
   "outputTopic":null,
   "errorTopic":null,
   "writerClassName":null,
   "errorWriterClassName":null,
   "readMetadata":false,
   "mergeMetadata":false,
   "numWorkers":null,
   "numAckers":null,
   "spoutParallelism":1,
   "spoutNumTasks":1,
   "parserParallelism":1,
   "parserNumTasks":1,
   "errorWriterParallelism":1,
   "errorWriterNumTasks":1,
   "spoutConfig":{

   },
   "securityProtocol":null,
   "stormConfig":{

   },
   "parserConfig":{

   },
   "fieldTransformations":[

   ],
   "cacheConfig":{

   },
   "rawMessageStrategy":"DEFAULT",
   "rawMessageStrategyConfig":{

   }
}
Can you please tell me how I can extract *Classification*, *Priority*, and *UDP*
(*From*) --> (*To*) IP.
How can I extract fields and apply parser chaining to it?






On Wed, Feb 20, 2019 at 10:08 PM Simon Elliston Ball <
si...@simonellistonball.com> wrote:

> You might like to look into parser chaining for this:
> https://metron.apache.org/current-book/metron-platform/metron-parsers/ParserChaining.html
>
> Simon
>
> On 20 Feb 2019, at 16:47, Farrukh Naveed Anjum 
> wrote:
>
> Yes, I am using BRO Parser, Can I sub divide the *message* field
>
> On Wed, Feb 20, 2019 at 7:39 PM Otto Fowler 
> wrote:
>
>> Can you print what the fields are after parsing?  These are the fields
>> that you will be able to use Stellar on, to possibly extract your info.
>> Are you using the Bro parser?
>>
>>
>> On February 20, 2019 at 02:14:17, Farrukh Naveed Anjum (
>> anjum.farr...@gmail.com) wrote:
>>
>> Hi,
>> I wanted to know how can I define and extract a field in parser from
>> messages. With If It Exists like option
>>
>> For example. I am using Bro Syslog. Following is a sample data
>>
>> SYSLOG | severity:ERR uid:C5oe7F5SYMWqfVKKj id.orig_p:514 id.resp_p:514
>> proto:udp id.orig_h:10.2.2.1 message:Feb 20 12:11:18 suricata[72950]:
>> [1:2000538:8] ET SCAN NMAP -sA (1) [Classification: Attempted
>> Information Leak] [Priority: 2] {TCP} 74.125.133.189:443 ->
>> 10.2.2.202:52012 facility:LOCAL5 ts:1550646678.442785
>> id.resp_h:172.16.4.18
>>
>> From Message Field, I want to extract Classification, Priority and TCP
>> From -> To IPs.
>>
>> Can I make some kind of configurations in Bro Parser to get this
>> information Back As
>>
>> *Classification* 
>> *Priority* 
>> *TCP* From 
>> *TCP* To 
>>
>> Any guidance will be great help.
>>
>>
>>
>>
>>
>> --
>> With Regards
>> Farrukh Naveed Anjum
>>
>>
>
> --
> With Regards
> Farrukh Naveed Anjum
>
>

--
With Regards
Farrukh Naveed Anjum


Re: Help regarding Parser Configuration

2019-02-20 Thread Otto Fowler
Can you print what the fields are after parsing?  These are the fields that
you will be able to use Stellar on, to possibly extract your info.
Are you using the Bro parser?


On February 20, 2019 at 02:14:17, Farrukh Naveed Anjum (
anjum.farr...@gmail.com) wrote:

Hi,
I wanted to know how I can define and extract a field from messages in the
parser, with an "if it exists"-style option.

For example. I am using Bro Syslog. Following is a sample data

SYSLOG | severity:ERR uid:C5oe7F5SYMWqfVKKj id.orig_p:514 id.resp_p:514
proto:udp id.orig_h:10.2.2.1 message:Feb 20 12:11:18 suricata[72950]:
[1:2000538:8] ET SCAN NMAP -sA (1) [Classification: Attempted Information
Leak] [Priority: 2] {TCP} 74.125.133.189:443 -> 10.2.2.202:52012
facility:LOCAL5 ts:1550646678.442785 id.resp_h:172.16.4.18

From the Message field, I want to extract Classification, Priority, and the
TCP From -> To IPs.

Can I make some kind of configurations in Bro Parser to get this
information Back As

*Classification* 
*Priority* 
*TCP* From 
*TCP* To 

Any guidance will be great help.





--
With Regards
Farrukh Naveed Anjum


Re: Unable to use Syslog Parser

2019-02-15 Thread Otto Fowler
"uid": "Cw7P6g38y3tWWpC9R4",
"protocol": "syslog",
"source:type": "bro",
"adapter:threatinteladapter:end:ts": "1550209569923",
"original_string": "SYSLOG | severity:NOTICE uid:Cw7P6g38y3tWWpC9R4
id.orig_p:60607 id.resp_p:514 proto:udp id.orig_h:10.60.60.81 message:Feb
15 10:49:20 DC12.tap.local MSWinEventLog\t5\tSecurity\t239656\tFri Feb 15
10:49:11 2019\t4634\tMicrosoft-Windows-Security-Auditing\t\tN/A\tAudit
Success\tDC12.tap.local\t12545\tAn account was logged
off.\r\n\r\nSubject:\r\n\tSecurity
ID:\t\tS-1-5-21-761976910-1883327070-1659661340-1104\r\n\tAccount
Name:\t\tEXG$\r\n\tAccount Domain:\t\tTAP\r\n\tLogon
ID:\t\t0x505F5B4\r\n\r\nLogon Type:\t\t\t3\r\n\r\nThis event is generated
when a logon session is destroyed. It may be positively correlated with a
logon event using the Logon ID value. Logon IDs are only unique between
reboots on the same computer.\n facility:KERN ts:1550209568.304029
id.resp_h:172.16.4.18",
"ip_dst_addr": "172.16.4.18",
"adapter:hostfromjsonlistadapter:end:ts": "1550209569921",
"adapter:geoadapter:end:ts": "1550209569921",
"ip_src_addr": "10.60.60.81",
"timestamp": 1550209568304,
"severity": "NOTICE",
"parallelenricher:enrich:begin:ts": "1550209569923",
"adapter:hostfromjsonlistadapter:begin:ts": "1550209569921",
"message": "Feb 15 10:49:20 DC12.tap.local
MSWinEventLog\t5\tSecurity\t239656\tFri Feb 15 10:49:11
2019\t4634\tMicrosoft-Windows-Security-Auditing\t\tN/A\tAudit
Success\tDC12.tap.local\t12545\tAn account was logged
off.\r\n\r\nSubject:\r\n\tSecurity
ID:\t\tS-1-5-21-761976910-1883327070-1659661340-1104\r\n\tAccount
Name:\t\tEXG$\r\n\tAccount Domain:\t\tTAP\r\n\tLogon
ID:\t\t0x505F5B4\r\n\r\nLogon Type:\t\t\t3\r\n\r\nThis event is generated
when a logon session is destroyed. It may be positively correlated with a
logon event using the Logon ID value. Logon IDs are only unique between
reboots on the same computer.\n",
"parallelenricher:splitter:begin:ts": "1550209569923",
"ip_src_port": 60607,
"proto": "udp",
"parallelenricher:splitter:end:ts": "1550209569923",
"adapter:threatinteladapter:begin:ts": "1550209569923",
"guid": "7107a0b8-4999-4956-b20f-40fd666bed46",
"facility": "KERN"
  },
  "fields": {
"parallelenricher:enrich:begin:ts": [
  1550209569923
],
"adapter:geoadapter:begin:ts": [
  1550209569921
],
"adapter:hostfromjsonlistadapter:begin:ts": [
  1550209569921
],
"parallelenricher:enrich:end:ts": [
  1550209569923
],
"parallelenricher:splitter:begin:ts": [
  1550209569923
],
"adapter:threatinteladapter:end:ts": [
  1550209569923
],
"adapter:hostfromjsonlistadapter:end:ts": [
  1550209569921
],
"parallelenricher:splitter:end:ts": [
  1550209569923
],
"adapter:threatinteladapter:begin:ts": [
  1550209569923
],
"adapter:geoadapter:end:ts": [
  1550209569921
],
"timestamp": [
  1550209568304
]
  },
  "highlight": {
"original_string": [
  "SYSLOG | severity:NOTICE uid:Cw7P6g38y3tWWpC9R4 id.orig_p:60607
id.resp_p:514 proto:udp id.orig_h:@kibana-highlighted-field@10.60.60.81@
/kibana-highlighted-field@ message:Feb 15 10:49:20 DC12.tap.local
MSWinEventLog\t5\tSecurity\t239656\tFri Feb 15 10:49:11
2019\t4634\tMicrosoft-Windows-Security-Auditing\t\tN/A\tAudit
Success\tDC12.tap.local\t12545\tAn account was logged
off.\r\n\r\nSubject:\r\n\tSecurity
ID:\t\tS-1-5-21-761976910-1883327070-1659661340-1104\r\n\tAccount
Name:\t\tEXG$\r\n\tAccount Domain:\t\tTAP\r\n\tLogon
ID:\t\t0x505F5B4\r\n\r\nLogon Type:\t\t\t3\r\n\r\nThis event is generated
when a logon session is destroyed. It may be positively correlated with a
logon event using the Logon ID value. Logon IDs are only unique between
reboots on the same computer.\n facility:KERN ts:1550209568.304029
id.resp_h:172.16.4.18"
]
  },
  "sort": [
1550209568304
  ]
}
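The adapter begin/end timestamp bookkeeping fields in the enriched document above can be used to measure per-adapter enrichment latency. A minimal sketch (the helper name `adapter_latencies` is mine; the sample values are copied from the JSON above):

```python
# Sketch: compute per-adapter enrichment latency (ms) from Metron's
# "adapter:<name>:begin:ts" / "adapter:<name>:end:ts" bookkeeping fields.
def adapter_latencies(event):
    latencies = {}
    for key, value in event.items():
        if key.startswith("adapter:") and key.endswith(":begin:ts"):
            name = key.split(":")[1]
            end = event.get("adapter:%s:end:ts" % name)
            if end is not None:
                latencies[name] = int(end) - int(value)
    return latencies

# Values taken from the enriched document shown above.
event = {
    "adapter:geoadapter:begin:ts": "1550209569921",
    "adapter:geoadapter:end:ts": "1550209569921",
    "adapter:threatinteladapter:begin:ts": "1550209569923",
    "adapter:threatinteladapter:end:ts": "1550209569923",
}
```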



On Thu, Feb 14, 2019 at 4:57 PM Otto Fowler  wrote:

> I don’t understand what “Default Bro Syslog parser does not crunch it……”
> means.
>
> Can you explain your data flow?
>
>
>
> On February 14, 2019 at 04:30:52, Farrukh Naveed Anjum (
> anjum.farr...@gmail.com) wrote:
>
> Hi,
>
> Thanks for the reply. I did not make any configuration changes, but I can send
> you sample events.
> For example
> SYSLOG | severity:ERR uid:CvS7064cni4HcD7FU6 id.orig_p:514 id.resp_p:514
> proto:udp 

Re: Unable to use Syslog Parser

2019-02-14 Thread Otto Fowler
I don’t understand what “Default Bro Syslog parser does not crunch it……”
means.

Can you explain your data flow?



On February 14, 2019 at 04:30:52, Farrukh Naveed Anjum (
anjum.farr...@gmail.com) wrote:

Hi,

Thanks for the reply. I did not make any configuration changes, but I can send
you sample events.
For example
SYSLOG | severity:ERR uid:CvS7064cni4HcD7FU6 id.orig_p:514 id.resp_p:514
proto:udp id.orig_h:10.2.2.1 message:Feb 14 13:16:52 suricata[88128]:
[1:2007994:20] ET MALWARE Suspicious User-Agent (1 space) [Classification:
A Network Trojan was Detected] [Priority: 1] {TCP} 10.2.2.229:37423 ->
168.235.205.6:80 facility:LOCAL5 ts:1550132212.404591 id.resp_h:172.16.4.18


The default Bro Syslog parser does not crunch it and just pastes it as this
message:

Feb 14 13:16:52 suricata[88128]: [1:2007994:20] ET MALWARE Suspicious
User-Agent (1 space) [Classification: A Network Trojan was Detected]
[Priority: 1] {TCP} 10.2.2.229:37423 -> 168.235.205.6:80

Now the problem is that IP_SRC and IP_DST are being populated with the local IP
instead of these IPs. Similarly, the classification is not set. Please also
advise on Windows event logs for detecting failed logins:
Feb 14 14:32:18 DC12.tap.local MSWinEventLog 5 Security 182049 Thu Feb 14
14:32:10 2019 4634 Microsoft-Windows-Security-Auditing N/A Audit Success
DC12.tap.local 12545 An account was logged off. Subject: Security ID:
S-1-5-21-761976910-1883327070-1659661340-1104 Account Name: EXG$ Account
Domain: TAP Logon ID: 0x3E3F0A7 Logon Type: 3 This event is generated when
a logon session is destroyed. It may be positively correlated with a logon
event using the Logon ID value. Logon IDs are only unique between reboots
on the same computer.
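As a stopgap for the mispopulated ip_src/ip_dst described above, the real endpoints can be recovered from the embedded Suricata alert text. A minimal sketch in Python, illustrative only — in Metron this would live in a field transformation or Stellar expression, and the function name is made up:

```python
import re

# Extract the true endpoints from the Suricata alert text embedded in the
# syslog message, e.g. "{TCP} 10.2.2.229:37423 -> 168.235.205.6:80".
ENDPOINTS = re.compile(
    r"\{(TCP|UDP)\}\s+(\d{1,3}(?:\.\d{1,3}){3}):(\d+)\s+->\s+(\d{1,3}(?:\.\d{1,3}){3}):(\d+)")

def extract_endpoints(message):
    m = ENDPOINTS.search(message)
    if m is None:
        return None
    return {
        "protocol": m.group(1),
        "ip_src_addr": m.group(2), "ip_src_port": int(m.group(3)),
        "ip_dst_addr": m.group(4), "ip_dst_port": int(m.group(5)),
    }

msg = ("[1:2007994:20] ET MALWARE Suspicious User-Agent (1 space) "
       "[Priority: 1] {TCP} 10.2.2.229:37423 -> 168.235.205.6:80")
```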


On Wed, Feb 13, 2019 at 7:01 PM Otto Fowler  wrote:

> Also include the configuration of the parser please.
>
>
>
> On February 13, 2019 at 09:00:08, Otto Fowler (ottobackwa...@gmail.com)
> wrote:
>
> Farrukh,
>
> This error means that the syslog line you are passing in is not proper per
> the spec.
> Can you create a jira, with this info, and attach or otherwise include a
> SANITIZED (change IP, machine names, business stuff etc since this will be
> on the internet ) version of
> the failing line?
> I’ll be able to tell you what the issue is and what the options are once I
> can test it.
>
> Not everything sends syslog properly formatted to the spec. While
> simple-syslog (the library I wrote that backs this parser) makes
> allowances (for a missing priority, different date formats), it
> obviously cannot handle everything that is possible.
>
> As a note, this same library is used in NiFi for the 5424 processor/record
> reader as well.
>
>
>
>
> On February 13, 2019 at 05:54:42, Farrukh Naveed Anjum (
> anjum.farr...@gmail.com) wrote:
>
> Hi,
> I am trying to use the Syslog5424 parser; I am receiving data from NiFi into
> Kafka.
>
> I am getting a parser exception; any help will be appreciated. Following is
> the error.
>
> nerated.Rfc5424Parser.header(Rfc5424Parser.java:412) ~[stormjar.jar:?]
> at 
> com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.syslog_msg(Rfc5424Parser.java:273)
>  ~[stormjar.jar:?]
> at 
> com.github.palindromicity.syslog.Rfc5424SyslogParser.parseLine(Rfc5424SyslogParser.java:93)
>  ~[stormjar.jar:?]
> at 
> com.github.palindromicity.syslog.Rfc5424SyslogParser.lambda$parseLines$0(Rfc5424SyslogParser.java:130)
>  ~[stormjar.jar:?]
> at java.util.ArrayList.forEach(ArrayList.java:1249) [?:1.8.0_112]
> at 
> com.github.palindromicity.syslog.Rfc5424SyslogParser.parseLines(Rfc5424SyslogParser.java:128)
>  ~[stormjar.jar:?]
> at 
> org.apache.metron.parsers.syslog.Syslog5424Parser.parseOptionalResult(Syslog5424Parser.java:103)
>  ~[stormjar.jar:?]
> at 
> org.apache.metron.parsers.ParserRunnerImpl.execute(ParserRunnerImpl.java:146) 
> ~[stormjar.jar:?]
> at 
> org.apache.metron.parsers.bolt.ParserBolt.execute(ParserBolt.java:253) 
> [stormjar.jar:?]
> at 
> org.apache.storm.daemon.executor$fn__10195$tuple_action_fn__10197.invoke(executor.clj:735)
>  [storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
> at 
> org.apache.storm.daemon.executor$mk_task_receiver$fn__10114.invoke(executor.clj:466)
>  [storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
> at 
> org.apache.storm.disruptor$clojure_handler$reify__4137.onEvent(disruptor.clj:40)
>  [storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
> at 
> org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472)
>  [storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
> at 
> org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(Disrup

Re: Unable to find the paths of YAF

2019-02-13 Thread Otto Fowler
The patterns, if not in HDFS, are loaded from the uber jar itself.
Can you create a jira with the error and a sanitized version of the failing
line, as well as the sensor configuration you have?


On February 11, 2019 at 03:48:36, Farrukh Naveed Anjum (
anjum.farr...@gmail.com) wrote:

Could not find grok statement in HDFS or classpath at /patterns/yaf
I am getting this Error in YAF

--
With Regards
Farrukh Naveed Anjum


Re: Unable to use Syslog Parser

2019-02-13 Thread Otto Fowler
Also include the configuration of the parser please.



On February 13, 2019 at 09:00:08, Otto Fowler (ottobackwa...@gmail.com)
wrote:

Farrukh,

This error means that the syslog line you are passing in is not proper per
the spec.
Can you create a jira, with this info, and attach or otherwise include a
SANITIZED (change IP, machine names, business stuff etc since this will be
on the internet ) version of
the failing line?
I’ll be able to tell you what the issue is and what the options are once I
can test it.

Not everything sends syslog properly formatted to the spec. While
simple-syslog (the library I wrote that backs this parser) makes
allowances (for a missing priority, different date formats), it
obviously cannot handle everything that is possible.

As a note, this same library is used in NiFi for the 5424 processor/record
reader as well.
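For a quick sanity check before digging into the grammar: RFC 5424 messages begin with a "&lt;PRI&gt;VERSION TIMESTAMP" header, which is exactly what an RFC 3164-style line such as the Suricata one above lacks — hence errors like "no viable alternative at input 'F'" in the stack traces. A rough shape check in Python; this is only an approximation of the 5424 header, not the ANTLR grammar simple-syslog actually uses (which is more lenient, e.g. about a missing priority):

```python
import re

# Illustrative approximation: a strict RFC 5424 message starts with
# "<PRI>VERSION SP TIMESTAMP", e.g. "<165>1 2019-02-13T15:52:03.138Z ...".
RFC5424_HEADER = re.compile(r"^<\d{1,3}>\d{1,2} \d{4}-\d{2}-\d{2}T\S+ ")

def looks_like_rfc5424(line):
    return RFC5424_HEADER.match(line) is not None

good = "<165>1 2019-02-13T15:52:03.138Z host app 1234 ID47 - hello"
# RFC 3164-style line: starts with a "Feb ..." timestamp instead of a header.
bad = "Feb 14 13:16:52 suricata[88128]: [1:2007994:20] ET MALWARE ..."
```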




On February 13, 2019 at 05:54:42, Farrukh Naveed Anjum (
anjum.farr...@gmail.com) wrote:

Hi,
I am trying to use the Syslog5424 parser; I am receiving data from NiFi into
Kafka.

I am getting a parser exception; any help will be appreciated. Following is
the error.

nerated.Rfc5424Parser.header(Rfc5424Parser.java:412) ~[stormjar.jar:?]
at 
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.syslog_msg(Rfc5424Parser.java:273)
~[stormjar.jar:?]
at 
com.github.palindromicity.syslog.Rfc5424SyslogParser.parseLine(Rfc5424SyslogParser.java:93)
~[stormjar.jar:?]
at 
com.github.palindromicity.syslog.Rfc5424SyslogParser.lambda$parseLines$0(Rfc5424SyslogParser.java:130)
~[stormjar.jar:?]
at java.util.ArrayList.forEach(ArrayList.java:1249) [?:1.8.0_112]
at 
com.github.palindromicity.syslog.Rfc5424SyslogParser.parseLines(Rfc5424SyslogParser.java:128)
~[stormjar.jar:?]
at 
org.apache.metron.parsers.syslog.Syslog5424Parser.parseOptionalResult(Syslog5424Parser.java:103)
~[stormjar.jar:?]
at 
org.apache.metron.parsers.ParserRunnerImpl.execute(ParserRunnerImpl.java:146)
~[stormjar.jar:?]
at 
org.apache.metron.parsers.bolt.ParserBolt.execute(ParserBolt.java:253)
[stormjar.jar:?]
at 
org.apache.storm.daemon.executor$fn__10195$tuple_action_fn__10197.invoke(executor.clj:735)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at 
org.apache.storm.daemon.executor$mk_task_receiver$fn__10114.invoke(executor.clj:466)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at 
org.apache.storm.disruptor$clojure_handler$reify__4137.onEvent(disruptor.clj:40)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at 
org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at 
org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at 
org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at 
org.apache.storm.daemon.executor$fn__10195$fn__10208$fn__10263.invoke(executor.clj:855)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:484)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: org.antlr.v4.runtime.NoViableAltException
at 
org.antlr.v4.runtime.atn.ParserATNSimulator.noViableAlt(ParserATNSimulator.java:1894)
~[stormjar.jar:?]
at 
org.antlr.v4.runtime.atn.ParserATNSimulator.execATN(ParserATNSimulator.java:498)
~[stormjar.jar:?]
at 
org.antlr.v4.runtime.atn.ParserATNSimulator.adaptivePredict(ParserATNSimulator.java:424)
~[stormjar.jar:?]
at 
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.header(Rfc5424Parser.java:373)
~[stormjar.jar:?]
... 18 more
2019-02-13 15:52:03.138 o.a.s.d.executor
Thread-12-parserBolt-executor[5 5] [ERROR]
com.github.palindromicity.syslog.dsl.ParseException: Syntax error @
1:5 no viable alternative at input 'F'
at 
com.github.palindromicity.syslog.dsl.DefaultErrorListener.syntaxError(DefaultErrorListener.java:17)
~[stormjar.jar:?]
at 
org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:65)
~[stormjar.jar:?]
at org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:558)
~[stormjar.jar:?]
at 
org.antlr.v4.runtime.DefaultErrorStrategy.reportNoViableAlternative(DefaultErrorStrategy.java:310)
~[stormjar.jar:?]
at 
org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:147)
~[stormjar.jar:?]
at 
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.header(Rfc5424Parser.java:412)
~[stormjar.jar:?]
at 
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.syslog_msg(Rfc5424Parser.java

Re: Unable to use Syslog Parser

2019-02-13 Thread Otto Fowler
Farrukh,

This error means that the syslog line you are passing in is not proper per
the spec.
Can you create a jira, with this info, and attach or otherwise include a
SANITIZED (change IP, machine names, business stuff etc since this will be
on the internet ) version of
the failing line?
I’ll be able to tell you what the issue is and what the options are once I
can test it.

Not everything sends syslog properly formatted to the spec. While
simple-syslog (the library I wrote that backs this parser) makes
allowances (for a missing priority, different date formats), it
obviously cannot handle everything that is possible.

As a note, this same library is used in NiFi for the 5424 processor/record
reader as well.




On February 13, 2019 at 05:54:42, Farrukh Naveed Anjum (
anjum.farr...@gmail.com) wrote:

Hi,
I am trying to use the Syslog5424 parser; I am receiving data from NiFi into
Kafka.

I am getting a parser exception; any help will be appreciated. Following is
the error.

nerated.Rfc5424Parser.header(Rfc5424Parser.java:412) ~[stormjar.jar:?]
at 
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.syslog_msg(Rfc5424Parser.java:273)
~[stormjar.jar:?]
at 
com.github.palindromicity.syslog.Rfc5424SyslogParser.parseLine(Rfc5424SyslogParser.java:93)
~[stormjar.jar:?]
at 
com.github.palindromicity.syslog.Rfc5424SyslogParser.lambda$parseLines$0(Rfc5424SyslogParser.java:130)
~[stormjar.jar:?]
at java.util.ArrayList.forEach(ArrayList.java:1249) [?:1.8.0_112]
at 
com.github.palindromicity.syslog.Rfc5424SyslogParser.parseLines(Rfc5424SyslogParser.java:128)
~[stormjar.jar:?]
at 
org.apache.metron.parsers.syslog.Syslog5424Parser.parseOptionalResult(Syslog5424Parser.java:103)
~[stormjar.jar:?]
at 
org.apache.metron.parsers.ParserRunnerImpl.execute(ParserRunnerImpl.java:146)
~[stormjar.jar:?]
at 
org.apache.metron.parsers.bolt.ParserBolt.execute(ParserBolt.java:253)
[stormjar.jar:?]
at 
org.apache.storm.daemon.executor$fn__10195$tuple_action_fn__10197.invoke(executor.clj:735)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at 
org.apache.storm.daemon.executor$mk_task_receiver$fn__10114.invoke(executor.clj:466)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at 
org.apache.storm.disruptor$clojure_handler$reify__4137.onEvent(disruptor.clj:40)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at 
org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at 
org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at 
org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at 
org.apache.storm.daemon.executor$fn__10195$fn__10208$fn__10263.invoke(executor.clj:855)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:484)
[storm-core-1.1.0.2.6.5.1050-37.jar:1.1.0.2.6.5.1050-37]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: org.antlr.v4.runtime.NoViableAltException
at 
org.antlr.v4.runtime.atn.ParserATNSimulator.noViableAlt(ParserATNSimulator.java:1894)
~[stormjar.jar:?]
at 
org.antlr.v4.runtime.atn.ParserATNSimulator.execATN(ParserATNSimulator.java:498)
~[stormjar.jar:?]
at 
org.antlr.v4.runtime.atn.ParserATNSimulator.adaptivePredict(ParserATNSimulator.java:424)
~[stormjar.jar:?]
at 
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.header(Rfc5424Parser.java:373)
~[stormjar.jar:?]
... 18 more
2019-02-13 15:52:03.138 o.a.s.d.executor
Thread-12-parserBolt-executor[5 5] [ERROR]
com.github.palindromicity.syslog.dsl.ParseException: Syntax error @
1:5 no viable alternative at input 'F'
at 
com.github.palindromicity.syslog.dsl.DefaultErrorListener.syntaxError(DefaultErrorListener.java:17)
~[stormjar.jar:?]
at 
org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:65)
~[stormjar.jar:?]
at org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:558)
~[stormjar.jar:?]
at 
org.antlr.v4.runtime.DefaultErrorStrategy.reportNoViableAlternative(DefaultErrorStrategy.java:310)
~[stormjar.jar:?]
at 
org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:147)
~[stormjar.jar:?]
at 
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.header(Rfc5424Parser.java:412)
~[stormjar.jar:?]
at 
com.github.palindromicity.syslog.dsl.generated.Rfc5424Parser.syslog_msg(Rfc5424Parser.java:273)
~[stormjar.jar:?]
at 
com.github.palindromicity.syslog.Rfc5424SyslogParser.parseLine(Rfc5424SyslogParser.java:93)

Re: Centos VM Install Fails, Python Exception Syntax

2019-02-01 Thread Otto Fowler
I don’t think the issue is in the VM though.

"File
"/usr/local/Cellar/ansible/2.7.6/libexec/lib/python3.7/site-packages/ansible/plugins/action/normal.py",
line 46, in run”

that is a home-brew path.

FWIW, https://github.com/apache/metron/pull/1261 is a PR to build the full
dev vm using docker to do the building, so that your local env isn’t such a
pain.
Maybe you can try that out and see if it works?



On February 1, 2019 at 11:33:44, Ryan Sommers (ry...@rpsommers.com) wrote:

File
"/usr/local/Cellar/ansible/2.7.6/libexec/lib/python3.7/site-packages/ansible/plugins/action/normal.py",
line 46, in run


Re: Centos VM Install Fails, Python Exception Syntax

2019-02-01 Thread Otto Fowler
I think you should have Python 2.7.11 at a minimum on the machine running
Ansible.


On February 1, 2019 at 07:55:59, Ryan Sommers (ry...@rpsommers.com) wrote:

When attempting to build the single-vm I am getting an error in what
appears to be command-line python. I added 'ansible.verbose = "vvv"' to get
the verbose output. Any suggestions on remedies would be appreciated!

R


TASK [ambari_config : Deploy cluster with Ambari; http://node1:8080]
***
task path:
/Users/user/github/metron/metron-deployment/ansible/roles/ambari_config/tasks/main.yml:28
 ESTABLISH SSH CONNECTION FOR USER: vagrant
 SSH: EXEC ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes
-o
IdentityFile=/Users/user/github/metron/metron-deployment/development/centos6/.vagrant/machines/node1/virtualbox/private_key
-o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o
KbdInteractiveAuthentication=no -o
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
-o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o
ControlPath=/Users/user/.ansible/cp/%h-%p-%r node1 '/bin/sh -c '"'"'echo
~vagrant && sleep 0'"'"''
 (0, b'/home/vagrant\n', b'')
 ESTABLISH SSH CONNECTION FOR USER: vagrant
 SSH: EXEC ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes
-o
IdentityFile=/Users/user/github/metron/metron-deployment/development/centos6/.vagrant/machines/node1/virtualbox/private_key
-o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o
KbdInteractiveAuthentication=no -o
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
-o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o
ControlPath=/Users/user/.ansible/cp/%h-%p-%r node1 '/bin/sh -c '"'"'( umask
77 && mkdir -p "` echo
/home/vagrant/.ansible/tmp/ansible-tmp-1548991736.285941-79805437559679 `"
&& echo ansible-tmp-1548991736.285941-79805437559679="` echo
/home/vagrant/.ansible/tmp/ansible-tmp-1548991736.285941-79805437559679 `"
) && sleep 0'"'"''
 (0,
b'ansible-tmp-1548991736.285941-79805437559679=/home/vagrant/.ansible/tmp/ansible-tmp-1548991736.285941-79805437559679\n',
b'')
The full traceback is:
Traceback (most recent call last):
  File
"/usr/local/Cellar/ansible/2.7.6/libexec/lib/python3.7/site-packages/ansible/executor/task_executor.py",
line 140, in run
res = self._execute()
  File
"/usr/local/Cellar/ansible/2.7.6/libexec/lib/python3.7/site-packages/ansible/executor/task_executor.py",
line 612, in _execute
result = self._handler.run(task_vars=variables)
  File
"/usr/local/Cellar/ansible/2.7.6/libexec/lib/python3.7/site-packages/ansible/plugins/action/normal.py",
line 46, in run
result = merge_hash(result, self._execute_module(task_vars=task_vars,
wrap_async=wrap_async))
  File
"/usr/local/Cellar/ansible/2.7.6/libexec/lib/python3.7/site-packages/ansible/plugins/action/__init__.py",
line 739, in _execute_module
(module_style, shebang, module_data, module_path) =
self._configure_module(module_name=module_name, module_args=module_args,
task_vars=task_vars)
  File
"/usr/local/Cellar/ansible/2.7.6/libexec/lib/python3.7/site-packages/ansible/plugins/action/__init__.py",
line 178, in _configure_module
environment=final_environment)
  File
"/usr/local/Cellar/ansible/2.7.6/libexec/lib/python3.7/site-packages/ansible/executor/module_common.py",
line 973, in modify_module
environment=environment)
  File
"/usr/local/Cellar/ansible/2.7.6/libexec/lib/python3.7/site-packages/ansible/executor/module_common.py",
line 791, in _find_module_utils
recursive_finder(module_name, b_module_data, py_module_names,
py_module_cache, zf)
  File
"/usr/local/Cellar/ansible/2.7.6/libexec/lib/python3.7/site-packages/ansible/executor/module_common.py",
line 538, in recursive_finder
tree = ast.parse(data)
  File
"/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ast.py",
line 35, in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
  File "", line 230
except requests.ConnectionError, e:
   ^
SyntaxError: invalid syntax

fatal: [node1]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
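The traceback bottoms out in `ast.parse` on `except requests.ConnectionError, e:` — Python 2-only except syntax being compiled by Ansible's module scanner under Python 3.7. The incompatibility can be reproduced in isolation:

```python
import ast

# Python 3 rejects the legacy Python 2 form "except Exc, e:" at parse time,
# which is exactly where the Ansible traceback above fails (ast.parse).
LEGACY = "try:\n    pass\nexcept ValueError, e:\n    pass\n"
MODERN = "try:\n    pass\nexcept ValueError as e:\n    pass\n"

def parses_on_python3(source):
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False
```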


--
Ryan P Sommers
ry...@rpsommers.com


RE: How to provide hbase-site.xml to Stellar Processor Java API

2019-01-25 Thread Otto Fowler
Please link it to https://issues.apache.org/jira/browse/METRON-1409.
This is a jira around hosting stellar that I created a while ago.


On January 25, 2019 at 07:02:30, Anil Donthireddy (
anil.donthire...@sstech.us) wrote:

Hi Mohan DV,



I appreciate your time in sharing this very useful information.



As per my understanding, there seems to be no way to pass my own HBase
configuration object when executing Stellar queries in my extended API. I
may need to rewrite a lot of things in my extended API, in the way the Stellar
processor works, to override the HBase configuration.



@Otto Fowler: I will raise a Jira case with the usecase and the issue.



Thanks,

Anil.



*From:* Otto Fowler [mailto:ottobackwa...@gmail.com]
*Sent:* Thursday, January 24, 2019 11:15 PM
*To:* Anil Donthireddy ; user@metron.apache.org;
d...@metron.apache.org; Mohan Venkateshaiah 
*Cc:* Maxim Dashenko ; Satish Abburi <
satish.abb...@sstech.us>; James Sirota ;
Christopher Berry 
*Subject:* Re: How to provide hbase-site.xml to Stellar Processor Java API



But still file a jira ;)





On January 24, 2019 at 12:19:34, Mohan Venkateshaiah (
mvenkatesha...@hortonworks.com) wrote:

Hi Anil,

I had done something similar to this in the past. In Stellar, to get the HBase
configuration we call HBaseConfiguration.create(); in that call HBase adds
hbase-site and core-site as resources to the config. We probably SHOULD
let people specify a base config.

What I had done was, in the global config, set a property called
hbase.provider.impl; it's the fully qualified class name for a class that
implements the TableProvider interface, which has one method:

public HTableInterface getTable(Configuration config, String tableName)
throws IOException

If you implement your own, where you ignore the config argument and resolve
the HBase table with your own injected config, that will work.
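The provider pattern Mohan describes — ignore the config Metron passes in and resolve the table from your own injected configuration — sketched in Python for illustration only. In Metron the real hook is a Java class implementing TableProvider, registered via hbase.provider.impl in the global config; `FixedConfigTableProvider` and the dict-based config are made-up stand-ins:

```python
# Sketch of the "injected config" provider pattern: the provider is constructed
# with its own configuration and ignores whatever config the caller passes in.
class FixedConfigTableProvider:
    def __init__(self, injected_config):
        # e.g. parsed from an external hbase-site.xml of your choosing
        self.config = injected_config

    def get_table(self, config, table_name):
        # Deliberately ignore the caller-supplied config; use the injected one.
        return (self.config["zookeeper.quorum"], table_name)

provider = FixedConfigTableProvider({"zookeeper.quorum": "zk1:2181"})
```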

Thanks
Mohan DV

On 1/24/19, 8:56 PM, "Otto Fowler"  wrote:

Hi Anil,
Can you create a jira on this with these details and a general overview of
your use case?
It looks like the HbaseConfiguration we use in the HTableConnector
is done using the create() method, which creates from resources.

I think we would need to do some work to support the external file.



On January 24, 2019 at 10:14:46, Anil Donthireddy (
anil.donthire...@sstech.us) wrote:

Hi,



I have written a Java application which uses the Stellar processor and executes
Stellar expressions. The issue I am facing is that I am unable to connect to
HBase unless I place hbase-site.xml in the src/main/resources/ folder of the
code. As that is not the proper way to package hbase-site.xml with the jar,
I would like to understand how hbase-site.xml is put on the classpath
when the profiler topology starts.



The ways I tried are

1) Setting the classpath to hbase conf folder using command “java -cp
$CLASSPATH:/etc/hbase/conf:/etc/Hadoop/conf –jar myJar.jar”

2) Adding Hbase conf folder to HADOOP_CLASSPATH. Below is the Hadoop
classpath

/usr/hdp/2.6.1.0-129/hadoop/conf:/usr/hdp/2.6.1.0-129/hadoop/lib/*:/usr/hdp/2.6.1.0-129/hadoop/.//*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/./:/usr/hdp/2.6.1.0-129/hadoop-hdfs/lib/*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/.//*:/usr/hdp/2.6.1.0-129/hadoop-yarn/lib/*:/usr/hdp/2.6.1.0-129/hadoop-yarn/.//*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/lib/*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/.//*::mysql-connector-java.jar:postgresql-jdbc2ee.jar:postgresql-jdbc2.jar:postgresql-jdbc3.jar:postgresql-jdbc.jar:/etc/hbase/conf/:/usr/hdp/2.6.1.0-129/tez/*:/usr/hdp/2.6.1.0-129/tez/lib/*:/usr/hdp/2.6.1.0-129/tez/conf




One more step I would like to try to fix the issue is to set the property “
*zookeeper.znode.parent*” on the configuration object while instantiating the
HBaseConnector, but that fix is within the scope of the Metron code.



I would like to know if anyone has been able to provide hbase-site.xml to a
Java application, or to extend the Metron Stellar processor and execute
profile definitions successfully.

Please provide any inputs to resolve the issue.



Thanking you.



Thanks,

Anil.


Re: How to provide hbase-site.xml to Stellar Processor Java API

2019-01-24 Thread Otto Fowler
But still file a jira ;)


On January 24, 2019 at 12:19:34, Mohan Venkateshaiah (
mvenkatesha...@hortonworks.com) wrote:

Hi Anil,

I had done something similar to this in the past. In Stellar, to get the HBase
configuration we call HBaseConfiguration.create(); in that call HBase adds
hbase-site and core-site as resources to the config. We probably SHOULD
let people specify a base config.

What I had done was, in the global config, set a property called
hbase.provider.impl; it's the fully qualified class name for a class that
implements the TableProvider interface, which has one method:

public HTableInterface getTable(Configuration config, String tableName)
throws IOException

If you implement your own, where you ignore the config argument and resolve
the HBase table with your own injected config, that will work.

Thanks
Mohan DV

On 1/24/19, 8:56 PM, "Otto Fowler"  wrote:

Hi Anil,
Can you create a jira on this with these details and a general overview of
your use case?
It looks like the HbaseConfiguration we use in the HTableConnector
is done using the create() method, which creates from resources.

I think we would need to do some work to support the external file.



On January 24, 2019 at 10:14:46, Anil Donthireddy (
anil.donthire...@sstech.us) wrote:

Hi,



I have written a Java application which uses the Stellar processor and executes
Stellar expressions. The issue I am facing is that I am unable to connect to
HBase unless I place hbase-site.xml in the src/main/resources/ folder of the
code. As that is not the proper way to package hbase-site.xml with the jar,
I would like to understand how hbase-site.xml is put on the classpath
when the profiler topology starts.



The ways I tried are

1) Setting the classpath to hbase conf folder using command “java -cp
$CLASSPATH:/etc/hbase/conf:/etc/Hadoop/conf –jar myJar.jar”

2) Adding Hbase conf folder to HADOOP_CLASSPATH. Below is the Hadoop
classpath

/usr/hdp/2.6.1.0-129/hadoop/conf:/usr/hdp/2.6.1.0-129/hadoop/lib/*:/usr/hdp/2.6.1.0-129/hadoop/.//*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/./:/usr/hdp/2.6.1.0-129/hadoop-hdfs/lib/*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/.//*:/usr/hdp/2.6.1.0-129/hadoop-yarn/lib/*:/usr/hdp/2.6.1.0-129/hadoop-yarn/.//*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/lib/*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/.//*::mysql-connector-java.jar:postgresql-jdbc2ee.jar:postgresql-jdbc2.jar:postgresql-jdbc3.jar:postgresql-jdbc.jar:/etc/hbase/conf/:/usr/hdp/2.6.1.0-129/tez/*:/usr/hdp/2.6.1.0-129/tez/lib/*:/usr/hdp/2.6.1.0-129/tez/conf




One more step I would like to try to fix the issue is to set the property “
*zookeeper.znode.parent*” on the configuration object while instantiating the
HBaseConnector, but that fix is within the scope of the Metron code.



I would like to know if anyone has been able to provide hbase-site.xml to a
Java application, or to extend the Metron Stellar processor and execute
profile definitions successfully.

Please provide any inputs to resolve the issue.



Thanking you.



Thanks,

Anil.


Re: How to provide hbase-site.xml to Stellar Processor Java API

2019-01-24 Thread Otto Fowler
Hi Anil,
Can you create a jira on this with these details and a general overview of
your use case?
It looks like the HbaseConfiguration we use in the HTableConnector
is done using the create() method, which creates from resources.

I think we would need to do some work to support the external file.



On January 24, 2019 at 10:14:46, Anil Donthireddy (
anil.donthire...@sstech.us) wrote:

Hi,



I have written a Java application which uses the Stellar processor and executes
Stellar expressions. The issue I am facing is that I am unable to connect to
HBase unless I place hbase-site.xml in the src/main/resources/ folder of the
code. As that is not the proper way to package hbase-site.xml with the jar,
I would like to understand how hbase-site.xml is put on the classpath
when the profiler topology starts.



The ways I tried are

1)  Setting the classpath to hbase conf folder using command “java -cp
$CLASSPATH:/etc/hbase/conf:/etc/Hadoop/conf –jar myJar.jar”

2)  Adding Hbase conf folder to HADOOP_CLASSPATH. Below is the Hadoop
classpath

/usr/hdp/2.6.1.0-129/hadoop/conf:/usr/hdp/2.6.1.0-129/hadoop/lib/*:/usr/hdp/2.6.1.0-129/hadoop/.//*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/./:/usr/hdp/2.6.1.0-129/hadoop-hdfs/lib/*:/usr/hdp/2.6.1.0-129/hadoop-hdfs/.//*:/usr/hdp/2.6.1.0-129/hadoop-yarn/lib/*:/usr/hdp/2.6.1.0-129/hadoop-yarn/.//*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/lib/*:/usr/hdp/2.6.1.0-129/hadoop-mapreduce/.//*::mysql-connector-java.jar:postgresql-jdbc2ee.jar:postgresql-jdbc2.jar:postgresql-jdbc3.jar:postgresql-jdbc.jar:/etc/hbase/conf/:/usr/hdp/2.6.1.0-129/tez/*:/usr/hdp/2.6.1.0-129/tez/lib/*:/usr/hdp/2.6.1.0-129/tez/conf



One more step I would like to try to fix the issue is to set the property “
*zookeeper.znode.parent*” on the configuration object while instantiating the
HBaseConnector, but that fix is within the scope of the Metron code.



I would like to know if anyone has been able to provide hbase-site.xml to a
Java application, or to extend the Metron Stellar processor and execute
profile definitions successfully.

Please provide any inputs to resolve the issue.



Thanking you.



Thanks,

Anil.


Re: what version metron on HCP 1.8.0

2019-01-22 Thread Otto Fowler
You should post this to the Hortonworks community forum or contact your
Hortonworks representative.

>%s/Hortonworks/Cloudera/g



On January 22, 2019 at 02:05:06, tkg_cangkul (yuza.ras...@gmail.com) wrote:

Hi,
I've downloaded the HCP 1.8.0 mpack from this link:
https://docs.hortonworks.com/HDPDocuments/HCP1/HCP-1.8.0/release-notes/content/hcp_repositories.html

On the Hortonworks docs website, I've read that the HCP 1.8.0 component is
Metron 0.7.0, but in the mpack.json file of HCP 1.8.0 the Metron version is
service_version" : "0.6.0.1.8.0.0"

I've tried to install it via Ambari and the stack version is Metron 0.6.0.

Please help.


Best Regards,

Tkg_Cangkul


Re: Metron - How to use Java API of Profiler client

2019-01-02 Thread Otto Fowler
Hi Anil,
Can you create a jira to capture your use case?


On January 2, 2019 at 04:41:18, Anil Donthireddy (anil.donthire...@sstech.us)
wrote:

Hi,



As part of our requirements, it will be good if we have an interface to
access Metron profiler statistics from other applications developed in
Java/Python/Spark etc.



While going through the Profiler client documentation, I have seen that the
profiler client is currently supported as a Java API and in Stellar. However,
I could not find more information about how to configure/develop code to query
profiler statistics using the Java API.



If anyone is aware of how to use the Java API to query Metron profiler
statistics, please share the steps.



Thanks,

Anil.


Re: Graphs based on Metron or PCAP data

2019-01-02 Thread Otto Fowler
Pieter,
Can you create a jira with your use case?  It is important to capture.  We
have some outstanding jiras around graph support.


On January 2, 2019 at 04:40:23, Stefan Kupstaitis-Dunkler (
stefan@gmail.com) wrote:

Hi Pieter,



Happy new year!



I believe that always depends on a lot of factors and applies to any kind
of visualization problem with big amounts of data:

   - How fast do you need the visualisations available?
   - How up-to-date do they need to be?
   - How complex?
   - How beautiful/custom modified?
   - How familiar are you with these frameworks? (could be a reason not to
   use a lib if they are otherwise equal in capabilities)



It sounds like you want to create a simple histogram across the full
history of stored data. So I’ll throw in another option, that is commonly
used for such use cases:

   - Zeppelin notebook:
  - Access data stored in HDFS via Hive.
  - A bit of preparation in Hive is required (and can be scheduled),
  e.g. creating external tables and converting data into a more efficient
  format, such as ORC.
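
A minimal sketch of that Hive preparation plus a histogram query (the table
names, column subset, HDFS path, and JSON SerDe choice are all assumptions;
Metron's HDFS index location is configurable per sensor):

```sql
-- external table over Metron's JSON output in HDFS (location assumed)
CREATE EXTERNAL TABLE bro_raw (
  ip_src_addr STRING,
  ip_dst_addr STRING,
  `timestamp` BIGINT
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION '/apps/metron/indexing/indexed/bro';

-- one-off (schedulable) conversion to ORC for faster scans
CREATE TABLE bro_orc STORED AS ORC AS SELECT * FROM bro_raw;

-- histogram over the full stored history
SELECT ip_src_addr, ip_dst_addr, COUNT(*) AS connections
FROM bro_orc
GROUP BY ip_src_addr, ip_dst_addr;
```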



Best,

Stefan



*From: *Pieter Baele 
*Reply-To: *"user@metron.apache.org" 
*Date: *Wednesday, 2. January 2019 at 07:50
*To: *"user@metron.apache.org" 
*Subject: *Graphs based on Metron or PCAP data



Hi,



(and good New Year to all as well!)



What would you consider as the easiest approach to create a Graph based
primarly on ip_dst and ip_src adresses and the number (of connections) of
those?



I was thinking:

- graph functionality in Elastic stack, but limited (ex only recent data in
1 index?)

- interfacing with Neo4J

- GraphX using Spark?

- using R on data stored in HDFS?

- using Python: plotly? Pandas?
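
Whichever tool ends up drawing the graph, the aggregation step itself is
simple; a stand-alone Python sketch over a few inlined events (field names
as in Metron's enriched JSON):

```python
import json
from collections import Counter

# a few enriched events as they might come out of Metron (inlined for the sketch)
records = [
    '{"ip_src_addr": "10.0.0.1", "ip_dst_addr": "10.0.0.2"}',
    '{"ip_src_addr": "10.0.0.1", "ip_dst_addr": "10.0.0.2"}',
    '{"ip_src_addr": "10.0.0.3", "ip_dst_addr": "10.0.0.2"}',
]

# weighted edge list: (src, dst) -> number of connections
edges = Counter()
for line in records:
    event = json.loads(line)
    edges[(event["ip_src_addr"], event["ip_dst_addr"])] += 1

for (src, dst), n in edges.most_common():
    print(f"{src} -> {dst}: {n}")
```

The resulting `(src, dst) -> count` map is exactly the weighted edge list
that Neo4j, GraphX, or plotly would consume.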







Sincerely

Pieter


Re: CEF parser timestamp rt field not present

2018-12-18 Thread Otto Fowler
Pieter,

You can always create jira issues for things that you think are wrong or
missing in the existing parsers, and maybe that work can get done.

There are also things ‘in the pipeline’ that you may want to think about.
- There is a new regex parser that just landed.
- There is a syslog 3164 parser in PR that includes a refactoring of the
5424 parser, such that these parsers can be derived from and used by the
’syslog’-based parsers that parse the syslog themselves ( making them
simpler ).
CEF is a candidate to be redone on this base, and if you do your own, you
may want to think about that approach too.



On December 18, 2018 at 01:59:13, Pieter Baele (pieter.ba...@gmail.com)
wrote:

Hi,

Thank you for the welcome! The concepts behind Metron are nice, very nice.
Finally network log / data analysis is possible  using different approaches.
It has been a few years since I used tools such as Suricata, Bro, Snort,
OSSEC, PF_RING. But the integration of some of those at scale... nice
possibilities!
In our environment we only have some krb issues, maybe related to our
specific choices, but I am sure we will be able to solve those

I think our use cases can perfectly work with the defaults. We probably
need to get the necessary experience.
Both in parsing, using Stellar, writing Elasticsearch templates, Spark
So basically If I set the timestamp field correctly, we can correlate and
profile with other events around the same timestamp, right?

We could also write a third-party parser, almost a copy of the CEF parser,
and modify it specifically for this device (and contribute back...)?
But if it is only about the timestamp, probably only the
fieldTransformation is necessary.
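
A sketch of what that sensor parser config could look like (the source
field deviceCustomDate1 and its date format are hypothetical stand-ins for
whatever the device actually sends):

```json
{
  "parserClassName": "org.apache.metron.parsers.cef.CEFParser",
  "sensorTopic": "cef",
  "fieldTransformations": [
    {
      "transformation": "STELLAR",
      "output": ["timestamp"],
      "config": {
        "timestamp": "TO_EPOCH_TIMESTAMP(deviceCustomDate1, 'yyyy-MM-dd HH:mm:ss', 'UTC')"
      }
    }
  ]
}
```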

I suppose it is also by design that the ASA parser only does some 'basic'
parsing? Currently there is no interpretation of access-list, hit counts,
etcetera.
But I thought it was better to use a Java parser than grok patterns if
possible, because of optimizations?

Pieter






On Mon, Dec 17, 2018 at 5:49 PM Simon Elliston Ball <
si...@simonellistonball.com> wrote:

> Hi Pieter,
>
> Welcome to the Metron community.
>
> The logic used by the CEF parser right now is to populate the timestamp
> field as follows:
> 1. if the rt field exists, use that.
> 2. if there is a field matching the syslogTime pattern (either the old
> pattern of 5424)
> 3. If neither are present, it uses current system time (as a last resort)
>
> You can use stellar fieldTransformations after the fact to change the
> timestamp field to map to anything you like, including any parsed field.
>
> The field type is ignored in custom fields from CEF, since type is not a
> required construct in the Metron JSON object, so we just take the
> corresponding label field and map the value to that, since the Metron
> schema is more flexible than the CEF schema. I can see there may be edge
> cases where this could create a collision, and would be interested to hear
> about your use case around this, and any problems that approach might
> cause, but it seems to have worked for most people so far.
>
> Simon
>
>
> On Mon, 17 Dec 2018 at 15:38, Pieter Baele  wrote:
>
>> Hi,
>>
>> For correlation & profiling, the presence of a correct timestamp /
>> event time is important. What should we do with a device implementing
>> CEF output, but not properly providing the rt field?
>> Also, syslogTime is not parsed by the CEF parser.
>>
>> There is another field present, how can I assure this field is taken as
>> timestamp for further analysis and ingest in ES and HDFS?
>>
>> Also, is it not necessary to have the type of a field set during (CEF)
>> parsing (maybe I am missing something there)
>>
>> Sincerely
>> Pieter
>>
>>
>>
>>
>
> --
> --
> simon elliston ball
> @sireb
>


Re: Metron Upgrade from 0.4.3 to 0.6.0 issues

2018-11-29 Thread Otto Fowler
I’m going to add you to slack as well.



On November 29, 2018 at 19:28:45, Doug Mann (ma...@avalonconsult.com) wrote:

Hi all,

I've been running into lots of issues regarding an installation of Metron
0.6.0 (upgrading from 0.4.3) failing silently during the deployment phase
in Ambari. I've documented in the below document the steps followed and
issues encountered. I'll be available all business hours tomorrow (Friday
11/30) central time.

https://docs.google.com/document/d/1UV222rV3oK2roQBtC_kpjWPyGaWN5qXF7LC9R7eVaUQ/edit?usp=sharing

Any insight would be appreciated. Thanks for your time.
--
Doug Mann
Hadoop Consultant | Avalon Consulting, LLC
P: 812 240 1862
LinkedIn  | Google+
 | Twitter



Re: Syslog parser design using regx

2018-11-01 Thread Otto Fowler
You are welcome to join the palindromicity slack to discuss.
https://join.slack.com/t/palindromicity/shared_invite/enQtNDcxMDE4ODQ5NzAyLTY4ZTIzZWMyNTliZjE5ZjRkNzczZjY3MTAyYWFlYjY1ZjhiMDYxYTJhOGE4ODE3ZTA0MGViN2E5YTJhYjg3MTY
As is anyone.



On November 1, 2018 at 08:38:05, Muhammed Irshad (irshadkt@gmail.com)
wrote:

Thanks a lot Otto. That covers everything.

On Thu, Nov 1, 2018 at 5:16 PM Otto Fowler  wrote:

> simple-syslog-5424 uses antlr4 instead of regex because I was unable to
> find or develop regex’s to single pass parse structured data.  If you look
> around you’ll find that most platform’s support for 5424 does not handle
> structured data, and is implemented as regex.  The legacy NiFi syslog
> support, which takes it’s regex from Flume was like this for example.  Nifi
> now supports structured data because it too uses simple-syslog-5424 for
> that.  Also that lib offers interfaces and base functionality to build new
> parser logic on top of the grammar, on top of the default implementation.
>
> The regex performance, if the regex’s are cached or static should be ok I
> think.
>
> Note that I plan to develop simple-syslog-3164, probably using regex with
> injectable “message” parsing soon ( and a follow on to create a 3rd,
> unified simple-syslog lib ). This will have common headers etc to the 5424
> lib.  This will be done in the https://github.com/palindromicity org.
>
>
> On November 1, 2018 at 01:12:53, Muhammed Irshad (irshadkt@gmail.com)
> wrote:
>
> I have to parse large volumes of syslog data collected in splunk in
> different indexes. Seems splunk can be configured in different ways to
> collect syslog data
> <https://docs.splunk.com/Documentation/Splunk/7.2.0/Data/HowSplunkEnterprisehandlessyslogdata>.
> I have a custom written regex parser. I am planning to use regex ( Single
> pass ) to separate out message and header and use parser chaining to parse
> message content using csv/ regex itself according to the message format. In
> terms of performance considering heavy traffic ( 3 TB/day )  any problem
> with this approach ? I could see existing syslog5424
> <https://github.com/palindromicity/simple-syslog-5424/> uses antlr4
> instead of regex. Any advantage for this in terms of performance ?
>
> --
> Muhammed Irshad K T
> Senior Software Engineer
> +919447946359
> irshadkt@gmail.com
> Skype : muhammed.irshad.k.t
>
>

--
Muhammed Irshad K T
Senior Software Engineer
+919447946359
irshadkt@gmail.com
Skype : muhammed.irshad.k.t


Re: Syslog parser design using regx

2018-11-01 Thread Otto Fowler
simple-syslog-5424 uses antlr4 instead of regex because I was unable to
find or develop regex’s to single pass parse structured data.  If you look
around you’ll find that most platform’s support for 5424 does not handle
structured data, and is implemented as regex.  The legacy NiFi syslog
support, which takes it’s regex from Flume was like this for example.  Nifi
now supports structured data because it too uses simple-syslog-5424 for
that.  Also that lib offers interfaces and base functionality to build new
parser logic on top of the grammar, on top of the default implementation.

The regex performance, if the regex’s are cached or static should be ok I
think.
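
To illustrate the "cached or static" point, a single-pass header/message
split for 3164-style lines (the pattern below is a rough sketch, not a full
RFC 3164 grammar):

```python
import re

# compile once at module load; reuse the same object across millions of lines
SYSLOG_3164 = re.compile(
    r"^<(?P<pri>\d{1,3})>"
    r"(?P<timestamp>\w{3} +\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) "
    r"(?P<msg>.*)$"
)

line = "<34>Oct 11 22:14:15 mymachine su: 'su root' failed for lonvick"
m = SYSLOG_3164.match(line)
print(m.group("pri"), m.group("host"))  # → 34 mymachine
print(m.group("msg"))
```

The separated `msg` group is what would then be handed to a chained
message-specific parser.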

Note that I plan to develop simple-syslog-3164, probably using regex with
injectable “message” parsing soon ( and a follow on to create a 3rd,
unified simple-syslog lib ). This will have common headers etc to the 5424
lib.  This will be done in the https://github.com/palindromicity org.


On November 1, 2018 at 01:12:53, Muhammed Irshad (irshadkt@gmail.com)
wrote:

I have to parse large volumes of syslog data collected in splunk in
different indexes. Seems splunk can be configured in different ways to
collect syslog data
.
I have a custom written regex parser. I am planning to use regex ( Single
pass ) to separate out message and header and use parser chaining to parse
message content using csv/ regex itself according to the message format. In
terms of performance considering heavy traffic ( 3 TB/day )  any problem
with this approach ? I could see existing syslog5424
 uses antlr4 instead
of regex. Any advantage for this in terms of performance ?

--
Muhammed Irshad K T
Senior Software Engineer
+919447946359
irshadkt@gmail.com
Skype : muhammed.irshad.k.t


Re: Syslog parser issue

2018-10-30 Thread Otto Fowler
Per the spec which this is written to, if you don’t have structured data,
you need to have a ‘-‘ marker.  So this is not valid 5424.  That is from a
cursory look.
Metron has a dedicated ISE parser, have you tried that?
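
For illustration, the STRUCTURED-DATA slot in a 5424 header must hold either
SD elements or the NILVALUE '-'; the ISE message puts a bare '0' there
instead (both lines abbreviated):

```
valid:   <182>1 2018-10-05T08:46:06+00:00 host CISE_Profiler 0038547765 1 - Profiler: ...
invalid: <182>1 2018-10-05T08:46:06+00:00 host CISE_Profiler 0038547765 1 0 2018-10-05 ...
```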

If you would like to have the parser have a setting to optionally accept
missing structured data, you can open an issue @
https://github.com/palindromicity/simple-syslog-5424/issues
If/when resolved there, a jira to pick up the change in metron can be
logged.



On October 30, 2018 at 13:38:39, Muhammed Irshad (irshadkt@gmail.com)
wrote:

I am trying to test existing Syslog5424Parser with the logs from my
cisco:ise log data. I am getting the below error message under
MessageParserResult. Is the below format supported by existing syslog
parser ? Or can I configure it to support this format ?

Message sample :
<182>1 2018-10-05T08:46:06+00:00 lxapp1492-admin.in.mycompany.com
CISE_Profiler 0038547765 1 0 2018-10-05 18:46:06.972 +10:00 0538115228
80002 INFO  Profiler: Profiler EndPoint profiling event occurred,
ConfigVersionId=267, OperatingSystem=FreeBSD 10.0-CURRENT (accuracy 92%),
EndpointCertainityMetric=160, EndpointIPAddress=192.168.88.55,
EndpointMacAddress=F8:0D:60:FF:86:E5, EndpointMatchedPolicy=Canon-Printer,

Error message :
com.github.palindromicity.syslog.dsl.ParseException: Syntax error @ 1:93 no
viable alternative at input '1'

--
Muhammed Irshad K T
Senior Software Engineer
+919447946359
irshadkt@gmail.com
Skype : muhammed.irshad.k.t


Re: Build Errors

2018-10-24 Thread Otto Fowler
You can look at the metron-builder role in metron-deployment/ansible to see
how the referenced Vagrant machine is built.


On October 24, 2018 at 11:34:11, Michael Miklavcic (
michael.miklav...@gmail.com) wrote:

Hi David, building the RPMs requires building full Metron first. Switch to
the root project directory for Metron and run "mvn clean install" from
there. It will build and test all modules which are then able to be bundled
up with RPM. You might also check out this page for some options on
deployment - https://github.com/apache/metron/tree/master/metron-deployment.
If you're looking just to get a quickstart sense of Metron, I would spin up
our full dev environment in Vagrant. The link I shared has references to
that. If you're looking to do a full-on bare metal installation, then
you're probably already in the right place, though it looks like those docs
are from 0.4.1 which is from some time ago.

Best,
Mike Miklavcic

On Wed, Oct 24, 2018 at 8:10 AM David Auclair  wrote:

> I'm trying to build Metron and encountering some build errors.
>
> Build environment is CentOS7 (minimal install), configured according to
> the following:
>
> https://cwiki.apache.org/confluence/display/METRON/Metron+0.4.1+with+HDP+2.5+bare-metal+install+on+Centos+7+with+MariaDB+for+Metron+REST
>
> And running the build command, according to this:
>
> https://metron.apache.org/current-book/metron-deployment/packaging/docker/rpm-docker/index.html
>
> I've encountered a build failure on the following step:
> [INFO] metron-rpm . FAILURE [
> 41.832 s]
>
> It looks like it's attempting to package up the components for building in
> Docker, but encountering an issue:
> + tar -xzf /root/SOURCES/metron-common-0.6.1-archive.tar.gz -C
> /root/BUILDROOT/metron-0.6.1-root/usr/metron/0.6.1
> + tar -xzf /root/SOURCES/metron-parsers-0.6.1-archive.tar.gz -C
> /root/BUILDROOT/metron-0.6.1-root/usr/metron/0.6.1
> + tar -xzf /root/SOURCES/metron-elasticsearch-0.6.1-archive.tar.gz -C
> /root/BUILDROOT/metron-0.6.1-root/usr/metron/0.6.1
> + tar -xzf /root/SOURCES/metron-data-management-0.6.1-archive.tar.gz -C
> /root/BUILDROOT/metron-0.6.1-root/usr/metron/0.6.1
> + tar -xzf /root/SOURCES/metron-solr-0.6.1-archive.tar.gz -C
> /root/BUILDROOT/metron-0.6.1-root/usr/metron/0.6.1
> + tar -xzf /root/SOURCES/metron-enrichment-0.6.1-archive.tar.gz -C
> /root/BUILDROOT/metron-0.6.1-root/usr/metron/0.6.1
> + tar -xzf /root/SOURCES/metron-indexing-0.6.1-archive.tar.gz -C
> /root/BUILDROOT/metron-0.6.1-root/usr/metron/0.6.1
> + tar -xzf /root/SOURCES/metron-pcap-backend-0.6.1-archive.tar.gz -C
> /root/BUILDROOT/metron-0.6.1-root/usr/metron/0.6.1
> + tar -xzf /root/SOURCES/metron-profiler-storm-0.6.1-archive.tar.gz -C
> /root/BUILDROOT/metron-0.6.1-root/usr/metron/0.6.1
> + tar -xzf /root/SOURCES/metron-rest-0.6.1-archive.tar.gz -C
> /root/BUILDROOT/metron-0.6.1-root/usr/metron/0.6.1
> tar (child): /root/SOURCES/metron-rest-0.6.1-archive.tar.gz: Cannot open:
> No such file or directory
> tar (child): Error is not recoverable: exiting now
>
> Any advice on how to solve this issue?
>
> Thanks in advance,
> David Auclair
> Information and Technology Services
> University of Toronto
>
> This email and any attachments contain privileged and / or confidential
> information for internal University of Toronto communication only unless
> otherwise indicated.
>
>


Re: Metron dev environments moving to require Ansible 2.4+

2018-09-28 Thread Otto Fowler
Yeah, I thought we had more but maybe they were removed.
There are many places in *.md files referencing Ansible and versions, too.


On September 28, 2018 at 11:45:14, zeo...@gmail.com (zeo...@gmail.com)
wrote:

Do you mean this
<https://cwiki.apache.org/confluence/display/METRON/Downgrade+Ansible>?  It
was the only reference I could find on the wiki.  All of the READMEs should
be updated as a part of the PR, but feel free to provide your input if I
missed anything.

Jon

On Fri, Sep 28, 2018 at 10:15 AM Otto Fowler 
wrote:

> We should make sure the non-source documentation is updated
>
>
> On September 28, 2018 at 09:32:52, zeo...@gmail.com (zeo...@gmail.com)
> wrote:
>
> Hi All,
>
> As it currently sits, once METRON-1758
> <https://github.com/apache/metron/pull/1179> is merged into the code
> base, Ansible 2.4 or later will be required to use any of the Metron
> ansible playbooks.  This is in contrast to the prior version requirements
> outlined in Metron documentation which specifically point to 2.0.0.2 and
> 2.2.0.0 as supported/recommended Ansible versions.  If you install Ansible
> 2.5.0 exactly, you should not experience any issues spinning up pre- and
> post-merge versions of Metron.
>
> I am broadcasting this to both the user and dev communities in advance of
> any changes to provide an opportunity to voice any concerns.  Thanks,
>
> Jon
> --
>
> Jon
>
> --

Jon


Re: WELCOME to user@metron.apache.org

2018-09-09 Thread Otto Fowler
Invite sent


On September 9, 2018 at 07:27:08, siavosh.zarrasv...@gmail.com (
siavosh.zarrasv...@gmail.com) wrote:


Hi all,

While I would still like to be added to the Slack channel, I wonder if this
thread could be deleted as well? I accidentally included my phone number as
part of my signature.


Re: Add account to slack

2018-09-04 Thread Otto Fowler
Done

On September 4, 2018 at 04:13:45, Lehuede sebastien (lehued...@gmail.com)
wrote:

Hi All,

I take the liberty to use Ivan's email to ask for a Slack account to join
the channel too.

Regards,
Sebastien.

On Tue, Sep 4, 2018 at 10:02, Ivan Paterno  wrote:

> Hi, can i have an account to join the slack channel?
>
>
>
>
>
> Ivan Paterno
> Security Specialist
>
> ivan.pate...@elmec.it
>
>
>
> Elmec Informatica SPA
>
> HQ - via Pret, 1
>
> 21020 Brunello (VA)
>
> Tel. +39 0332802627
> Fax +39 0332870430
>
> Follow us on
> elmec.com   | LinkedIn
>   | Twitter
>   | Facebook
>   | YouTube
>   |
> Instagram 
>
>
>
> Questa comunicazione e ogni eventuale file allegato sono confidenziali e 
> destinati all'uso esclusivo del destinatario.
> Se avete ricevuto questo messaggio per errore Vi preghiamo di comunicarlo al 
> mittente e distruggere quanto ricevuto.
> Il mittente, tenuto conto del mezzo utilizzato, non si assume alcuna 
> responsabilità in ordine alla segretezza e riservatezza delle informazioni 
> contenute nella presente comunicazione via e-mail.
>
> The information contained in this e-mail message is confidential and intended 
> only for the use of the individual or entity named above.
> If you are not the intended recipient, please notify us immediately by 
> telephone or e-mail and destroy this communication.
> Due to the way of the transmission, we do not undertake any liability with 
> respect to the secrecy and confidentiality of the information contained in 
> this e-mail message.
>
>


Re: Add account to slack

2018-09-04 Thread Otto Fowler
Done


On September 4, 2018 at 04:02:06, Ivan Paterno (ivan.pate...@elmec.it)
wrote:

Hi, can i have an account to join the slack channel?





Ivan Paterno
Security Specialist

ivan.pate...@elmec.it



Elmec Informatica SPA

HQ - via Pret, 1

21020 Brunello (VA)

Tel. +39 0332802627
Fax +39 0332870430

Follow us on
elmec.com   | LinkedIn
  | Twitter
  | Facebook
  | YouTube
  |
Instagram 





Re: Issue with Enrichment topology: java.lang.OutOfMemoryError: GC overhead limit exceeded

2018-08-21 Thread Otto Fowler
So, before you were doing GEO you did not have the problem?  If you took
the GEO out, would it stop?


On August 21, 2018 at 11:04:56, Anil Donthireddy (anil.donthire...@sstech.us)
wrote:

Hi,



We keep getting the error “java.lang.OutOfMemoryError: GC overhead limit
exceeded” in the Enrichment topology in Storm, at several bolts. Please
find the attached screenshot, which shows the bolts in the topology at
which we are getting the issue.



I tried a couple of configuration changes from the Ambari UI to give more
RAM to the Storm topologies and restarted Storm, but it didn't help. Can I
get some configuration steps to assign more RAM to the Storm topologies to
resolve the issue?
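
For reference, the generic Storm knobs for worker heap (independent of
whichever Ambari property names your HDP version exposes for the Metron
topologies) are worker.childopts and topology.worker.childopts; a sketch,
with an illustrative heap value:

```yaml
# storm.yaml — cluster-wide default worker heap (restart supervisors afterwards)
worker.childopts: "-Xmx2048m"

# or a per-topology override, set in the topology's Config before submission:
# topology.worker.childopts: "-Xmx2048m"
```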



I see in the logs that for each record, it is trying to update the GeoIP
data as below. I wonder if it is causing the issue.

o.a.m.e.a.g.GeoLiteDatabase Thread-16-geoEnrichmentBolt-executor[6 6]
[INFO] [Metron] Update to GeoIP data started with
/apps/metron/geo/default/GeoLite2-City.mmdb.gz



Note: I am facing the issue from the time I implemented the geo alerts rule
as specified in the link
.




Thanks,

Anil.


Re: Google Cloud Platform

2018-08-09 Thread Otto Fowler
I would also recommend creating a jira for the support of metron deployment
to GCP, as a peer deployment to the EC2.  With some of the requirements for
such support


On August 9, 2018 at 09:29:48, Justin Leet (justinjl...@gmail.com) wrote:

Unfortunately, I have no familiarity with GCP at all, but a good place to
start *may* be by reverse engineering some of our EC2 instructions
.
You might be able to sub in GCP steps as needed for provisioning and more
or less follow the internal instructions otherwise.  Keep in mind Metron
itself is deployed via Ambari, so as long as you can get a Hadoop cluster
up and running, the RPMs out and installed + the mpack, you should at least
be able to take a good stab at getting things up and running.

I'd be curious if anyone has any GCP experience at all and would know if
this is a reasonable approach.

If you do make an attempt, I'd definitely like to hear back on how it goes,
and what issues were hit, etc.

Justin

On Thu, Aug 9, 2018 at 1:04 AM Kevin Waterson 
wrote:

> Was hoping somebody else had.. not sure where to start... :)
>
>
> On Thu, Aug 9, 2018 at 2:00 AM James Sirota  wrote:
>
>> Not to my knowledge. Are you trying it?
>>
>>
>> 24.07.2018, 22:19, "Kevin Waterson" :
>>
>> Has anybody been able to deploy Metron using GCP?
>>
>> Thanks
>> Kevin
>>
>>
>>
>> ---
>> Thank you,
>>
>> James Sirota
>> PMC- Apache Metron
>> jsirota AT apache DOT org
>>
>>


Re: CEF Parser not Indexing data via Nifi (SysLogs)

2018-07-20 Thread Otto Fowler
I’ll put some thoughts in METRON-1453, unless we want a discuss thread

On July 20, 2018 at 10:32:48, Casey Stella (ceste...@gmail.com) wrote:

So, I would really love to see METRON-1453 go in, because I'd love to
decouple syslog parsing (very common) from generic grok.

On Fri, Jul 20, 2018 at 10:26 AM Otto Fowler 
wrote:

> Metron does not have a generic Syslog Parser.
>
> Nifi has Syslog parsing ( either Records or standard Processor ), in two
> modes.
>
> ParseSyslog is the original, where regex’s are used to parse the syslog
> RFC3164 and RFC5424, but only extracts the common fields ( so the
> ‘additional info’ like program id, message id, structured data in 5424 is
> in the MSG ). I have recently added a record reader for that method as well
> ( Nifi PR#2900 <https://github.com/apache/nifi/pull/2900>).
>
> Syslog5424Reader(records) and ParseSyslog5424 are new and instead of using
> regexes they use a new library simple-syslog–5424
> <https://github.com/palindromicity/simple-syslog-5424> I wrote that
> parses RFC5424 messages completely ( note properly formatted RFC 5424
> messages ) see Nifi PR#2805 <https://github.com/apache/nifi/pull/2805>
> and Nifi PR#2816 <https://github.com/apache/nifi/pull/2816> using an
> antlr grammar.
>
> You should be able to pick the manner best for you and parse that out in
> Nifi if you choose.
>
> Metron parses syslog as required in specific parsers that have messages
> assumed to be embedded in syslog.
>
> What I have been talking about in METRON–1453
> <https://issues.apache.org/jira/browse/METRON-1453> and other places is
> separating out the syslog from the parser, such that the parsers don’t need
> to know that the message is delivered embedded in syslog.
>
> The new parser chaining work would give us an avenue to this, and as you
> can see here MetronPR#1099
> <https://github.com/apache/metron/pull/1099#issuecomment-405701948> I
> have put that case forward.
>
> If that hits, I think that we’d be able to : 1. parse plain syslog to
> metron 2. parse plain syslog as a transform and then have less complicated,
> more specific parsers for the msg part.
>
> We may end up having syslog parsers and transforms at the end of this.
>
> In the mean time, if you wish to parse plain syslog in Metron, you will
> have to use grok, which doesn’t get structured data.
>
> If you want the complete 5424 set of data, then you can open a jira for
> creating a parser using simple-syslog–5424.
>
>
>
> On July 20, 2018 at 04:23:36, Farrukh Naveed Anjum (
> anjum.farr...@gmail.com) wrote:
>
> Hi,
>
> I am trying to index the Syslog using CEF Parser with Nifi.
>
> It does not give any error though; it transports data to Kafka without
> indexing it. It keeps giving FAILED in the Spout.
>
> I believe indexing syslog is the most basic use case for all. But Metron
> fails to do it in each standard format.
>
> I tried Bro for it. But even it keeps giving a PARSER error.
>
> Any help? A fast reply will be appreciated.
>
>
>
>
> --
> With Regards
> Farrukh Naveed Anjum
>
>


Re: CEF Parser not Indexing data via Nifi (SysLogs)

2018-07-20 Thread Otto Fowler
Metron does not have a generic Syslog Parser.

Nifi has Syslog parsing ( either Records or standard Processor ), in two
modes.

ParseSyslog is the original, where regex’s are used to parse the syslog
RFC3164 and RFC5424, but only extracts the common fields ( so the
‘additional info’ like program id, message id, structured data in 5424 is
in the MSG ). I have recently added a record reader for that method as well
( Nifi PR#2900 ).

Syslog5424Reader(records) and ParseSyslog5424 are new and instead of using
regexes they use a new library simple-syslog-5424
 I wrote that parses
RFC5424 messages completely ( note properly formatted RFC 5424 messages )
see Nifi PR#2805  and Nifi PR#2816
 using an antlr grammar.

You should be able to pick the manner best for you and parse that out in
Nifi if you choose.

Metron parses syslog as required in specific parsers that have messages
assumed to be embedded in syslog.

What I have been talking about in METRON-1453
 and other places is
separating out the syslog from the parser, such that the parsers don’t need
to know that the message is delivered embedded in syslog.

The new parser chaining work would give us an avenue to this, and as you
can see here MetronPR#1099
 I have
put that case forward.

If that hits, I think that we’d be able to : 1. parse plain syslog to
metron 2. parse plain syslog as a transform and then have less complicated,
more specific parsers for the msg part.

We may end up having syslog parsers and transforms at the end of this.

In the mean time, if you wish to parse plain syslog in Metron, you will
have to use grok, which doesn’t get structured data.

If you want the complete 5424 set of data, then you can open a jira for
creating a parser using simple-syslog-5424.




On July 20, 2018 at 04:23:36, Farrukh Naveed Anjum (anjum.farr...@gmail.com)
wrote:

Hi,

I am trying to index the Syslog using CEF Parser with Nifi.

It does not give any error though; it transports data to Kafka without
indexing it. It keeps giving FAILED in the Spout.

I believe indexing syslog is the most basic use case for all. But Metron
fails to do it in each standard format.

I tried Bro for it. But even it keeps giving a PARSER error.

Any help? A fast reply will be appreciated.




--
With Regards
Farrukh Naveed Anjum


Re: How to delete the original message field once the message parsed?

2018-06-25 Thread Otto Fowler
Also, theoretically, ‘not throwing anything away’ allows future
processing/reprocessing of data to gain new insights.  It is not uncommon
for the SIEMs that I’ve seen to store the raw log information, for the
reasons Simon states, for example.


So all these things that Simon and James have mentioned are true, and are
the why from a capabilities perspective.

That doesn’t invalidate your very practical point, Michel, and it is
important to understand field issues as people put Metron into use.  If
these features are not being used, or don’t exist yet (replay), can someone
not tune them down for their scenario with some understanding of the
tradeoffs?
I don’t think there is currently a way to do this, but it is worth having a
discussion on the issue.


On June 25, 2018 at 20:04:16, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

Very sorry... posted on the wrong thread...

The original string serves purposes well beyond debugging. Many users will
need to be able to prove provenance to the raw logs in order to prove or
prosecute an attack from an internal threat, or provide evidence to law
enforcement or an external threat. As such, the original string is
important.

It also provides a valuable source for the free text search where parsing
has not extracted all the necessary tokens for a hunt use case, so it can
be a valuable field to have in Elastic or Solr for text rather than keyword
indexing.

That said, it may make sense to remove a heavy weight processing and
storage field like this from the lucene store. We have been talking for a
while about filtering some of the data out of the realtime index, and
preserving full copies in the batch index, which could meet the forensic
use cases above, and would make it a matter of user choice. That would
probably be configured through indexing config to filter fields.

Simon


On 25 June 2018 at 23:49, Michel Sumbul  wrote:

> Hi James,
>
> Would it not be interesting to have an option to remove that field just
> before indexing? This saves storage space/cost in HDFS and ES?
> For example, during development/debugging you keep that field and when
> everything is ready for prod, you check a box to remove that field before
> indexing?
>
> Michel
>
> 2018-06-25 23:37 GMT+01:00 James Sirota :
>
>> Hi Michael, the original_string is there for a reason. It's an immutable
>> field that preserves the original message. While enrichments are added,
>> various parts of the message are parsed out, changed, filtered out,
>> ocncantenated, etc., you can always recover the original message from the
>> original string.
>>
>> Thanks,
>> James
>>
>>
>> 25.06.2018, 15:18, "Michel Sumbul" :
>>
>> Hello,
>>
>> Is there a way to avoid to keep the field "original message", once the
>> message have been parsed?
>> The objective is to reduce the size of the message stored in HDFS and ES,
>> and the traffic between Storm/Kafka.
>> Currently, we have all the fields + the original message, which means that
>> we are going to use 2 times more space to store the information.
>>
>> Thanks for the help,
>> Michel
>>
>>
>>
>> ---
>> Thank you,
>>
>> James Sirota
>> PMC- Apache Metron
>> jsirota AT apache DOT org
>>
>>
>


--
--
simon elliston ball
@sireb


java-grok awakening

2018-04-13 Thread Otto Fowler
I have been in contact with the maintainer of java-grok about the status of
the project and I am happy to say that there has been
activity today, as well as some steps to move it forward and pull some
forks back in.

https://groups.google.com/forum/#!forum/java-grok has been created to
discuss the project.

I would recommend anyone interested join up and we can see about getting it
active.

java-grok is very important to Metron, so IMHO this is a Good Thing™

ottO


Re: DataWorks Summit San Jose

2018-02-08 Thread Otto Fowler
Sometimes I try a different browser if that happens.  Also, if you are using
Ghostery or something similar, that can cause it.


On February 8, 2018 at 14:15:48, pele_smk (pele...@gmail.com) wrote:

Hey Jon,
I'm trying to submit my abstract, but it seems the datasummit website
submission is broken. It's just spinning around and around when I submit my
biography, name, address, etc.. Any ideas?

Daniel

On Thu, Feb 8, 2018 at 3:14 AM, zeo...@gmail.com  wrote:

> Hi Daniel,
>
> Yes, I think that would perfectly fit the intent of the Cyber security
> track.  Really anything that has to do with the hadoop ecosystem and
> security.
>
> Jon
>
> On Wed, Feb 7, 2018, 19:33 pele_smk  wrote:
>
>> Hey Jon,
>> Would this be a reasonable place to present examples of apache zeppelin
>> used to answer network security related questions?
>>
>> Daniel
>>
>> On Wed, Feb 7, 2018 at 9:17 AM, zeo...@gmail.com 
>> wrote:
>>
>>> Hi All,
>>>
>>> Just a heads up that *the San Jose DataWorks Summit's call for papers
>>> is coming to a close soon* (February 9th, in 2 days!).  If you are
>>> doing anything cool with open source big data and security that you want to
>>> talk about, please submit to the Cyber Security track.  I'm hoping to
>>> attend and I'm sure there will be others from the community in attendance
>>> as well.
>>>
>>> Feel free to reply here (or to me directly) if you have any questions
>>> about submitting or even if you just want to meet up at the event.  For
>>> more information check out the website here
>>> .
>>>
>>> Jon
>>> --
>>>
>>> Jon
>>>
>>
>>
>
> --
>
> Jon
>


Re: CentOS and Ubuntu

2018-02-07 Thread Otto Fowler
The Ubuntu support in Apache Metron is new.  Really new. At the moment,
developers are not going to be required to test things on Ubuntu when
submitting or committing pull requests.   Work is also ongoing to get the
Ambari install complete.

The Ubuntu support should be considered experimental at this time.

You can track
https://issues.apache.org/jira/browse/METRON-1370?jql=project%20%3D%20METRON%20AND%20text%20~%20UBUNTU
for progress on full dev and ambari progress.




On February 7, 2018 at 08:00:29, Helder Reia (helder.r...@gmail.com) wrote:

Hey everyone!
I am new to Apache Metron and I don't know much about this! Are there any
differences between using CentOS or Ubuntu? I am used to working with Ubuntu,
but I can look at CentOS if it is easier to use / has advantages!

Thank you for your help!

--
Helder Reia
ALF-AL TM


Re: Location of Quickstart "full dev platform"

2018-02-06 Thread Otto Fowler
https://github.com/apache/metron/blob/master/CONTRIBUTING.md


On February 6, 2018 at 17:01:09, Jack Hamm (jack.h...@gigamon.com) wrote:

Thank you, Ryan!



-jack



On 2/6/18, 1:56 PM, "Ryan Merriman"  wrote:



https://github.com/apache/metron/tree/master/metron-deployment/development/centos6



That was changed recently and the wiki hasn't been updated yet.  Your best
bet is to rely on the READMEs which are always up to date.



On Tue, Feb 6, 2018 at 3:53 PM, Jack Hamm  wrote:

Hi all,



Just trying to get started with Metron.  I found my way into the docs and
there is a page that describes an all-in-one test image.



The page is: https://cwiki.apache.org/confluence/display/METRON/Quick+Start



Which has a link to:
https://github.com/apache/metron/tree/master/metron-deployment/vagrant/full-dev-platform



However, that GitHub link is 404'd.  Can someone let me know where to find
the dev platform?  Also, what's the process with this project for
submitting a bug on the documentation?


Thanks,

-jack


Re: Define a function that can be used in Stellar

2018-02-02 Thread Otto Fowler
I think if we understand the use case, we may be able to think of a more
general set of functionality for Stellar to meet this and other cases.

Will this configuration change?  Do you need to track that change without
reloading?  How *much* is in the configuration?  Do we want people putting
their own keys in the global config and then having to manage that across
upgrades (I think it will get blown away)?






On February 2, 2018 at 09:38:28, Casey Stella (ceste...@gmail.com) wrote:

We use a guava cache to cache the data for 24 hours.  You can see how it's
done here:
https://github.com/apache/metron/blob/master/metron-platform/metron-enrichment/src/main/java/org/apache/metron/enrichment/stellar/ObjectGet.java

We also do something like this in GEO_GET as well, but it's a bit more
complex.
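For anyone reading along, the expire-after-write behavior Casey describes can be sketched in a few lines. ObjectGet itself uses Guava's CacheBuilder in Java; the Python below is purely an illustration of the idea, with an injectable clock so the expiry can be demonstrated without waiting:

```python
import time

class ExpiringCache:
    """Minimal sketch of a Guava-style expire-after-write cache."""

    def __init__(self, loader, ttl_seconds=24 * 60 * 60, clock=time.monotonic):
        self.loader = loader          # called on a cache miss
        self.ttl = ttl_seconds
        self.clock = clock            # injectable for testing
        self._store = {}              # key -> (value, write_time)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None:
            value, written = entry
            if self.clock() - written < self.ttl:
                return value          # still fresh
        # miss or expired: reload and stamp the write time
        value = self.loader(key)
        self._store[key] = (value, self.clock())
        return value

# Tiny demonstration with a fake clock so no real waiting is needed.
now = [0.0]
loads = []
cache = ExpiringCache(loader=lambda k: loads.append(k) or len(loads),
                      ttl_seconds=10, clock=lambda: now[0])
cache.get("hdfs://some/object")   # miss -> load #1
cache.get("hdfs://some/object")   # hit, no reload
now[0] = 11                       # advance past the TTL
cache.get("hdfs://some/object")   # expired -> load #2
print(len(loads))                 # prints 2
```

The real implementation linked above adds concurrency and size limits for free by delegating to Guava, which is why a hand-rolled version like this is only useful as a mental model.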

On Fri, Feb 2, 2018 at 9:35 AM, Simon Elliston Ball <
si...@simonellistonball.com> wrote:

> I forgot we added OBJECT_GET. How does the caching work on that?
>
> Simon
>
>
> On 2 Feb 2018, at 14:33, Nick Allen  wrote:
>
> There are many functions that use the global configuration.  For example,
> GET_GEO in org.apache.metron.enrichment.stellar.GeoEnrichmentFunctions.
> There might be a better example, but that is one is staring at me at the
> moment.
>
> There is an OBJECT_GET function defined in 
> org.apache.metron.enrichment.stellar.ObjectGet
> that was purpose-built to retrieve files from HDFS.  If you wanted to
> retrieve a configuration from HDFS that would be a good example (if you
> can't just use that functions directly).
>
> On Fri, Feb 2, 2018 at 8:50 AM Ali Nazemian  wrote:
>
>> Is there any Stellar function already implemented in Metron that has
>> a config file associated with it? I am trying to get an idea of how it
>> works.
>>
>> On 3 Feb. 2018 00:44, "Simon Elliston Ball" 
>> wrote:
>>
>>> Depends how you write the function class, but most likely, yes. Hence
>>> global config option.
>>>
>>> Simon
>>>
>>> On 2 Feb 2018, at 13:42, Ali Nazemian  wrote:
>>>
>>> Does it mean every time the function gets called it will load the
>>> config, but if I use the global one it will only read it once and it
>>> will be available in memory?
>>>
>>> On 2 Feb. 2018 21:53, "Simon Elliston Ball" 
>>> wrote:
>>>
 Shouldn’t be. The one thing I would point out though is that you don’t
 necessarily know which supervisor you will be running from, so pulling from
 HDFS would make sense. That said, the performance implications are probably
 not great. A good option here would be to have the config available in the
 global config for example and refer to that, since most instances of
 stellar apply global config to their context.

 Simon
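For context, the global config Simon mentions is a flat JSON document stored in ZooKeeper, so a function-specific setting is just one more top-level key — roughly like the fragment below (the `my.function.config.path` key is invented here purely for illustration; only add keys you are prepared to manage across upgrades):

```json
{
  "es.clustername": "metron",
  "my.function.config.path": "/apps/metron/my-function-config.json"
}
```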


 On 2 Feb 2018, at 07:14, Ali Nazemian  wrote:

 Will there be any problem if the Stellar function we want to implement
 needs to load an external config file?

 Cheers,
 Ali

 On Thu, Jan 18, 2018 at 4:58 PM, Ali Nazemian 
 wrote:

> Thanks, All.
>
> Yes, Nick. It is highly related to our use case and the way that we
> are going to enrich events with assets and vulnerability properties. It is
> not a general case at all.
>
> Cheers,
> Ali
>
> On Thu, Jan 18, 2018 at 5:43 AM, Matt Foley  wrote:
>
>> Besides the example code Simon mentioned at
>> https://github.com/apache/metron/tree/master/metron-
>> stellar/stellar-3rd-party-example ,
>> there is some documentation at http://metron.apache.org/
>> current-book/metron-stellar/stellar-common/3rdPartyStellar.html
>>
>>
>>
>> *From:* Nick Allen 
>> *Reply-To:* "user@metron.apache.org" 
>> *Date:* Wednesday, January 17, 2018 at 4:46 AM
>> *To:* "user@metron.apache.org" 
>> *Subject:* Re: Define a function that can be used in Stellar
>>
>>
>>
>>
>>
>>
>>
>> If something we have already does not fit the bill, I would recommend
>> creating that function in Java.   Since you described it as "a bit 
>> complex"
>> and "the logic would be complicated" I don't see any value in defining
>> something like this in Stellar with named functions.
>>
>>
>>
>> Best
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> On Wed, Jan 17, 2018 at 7:38 AM Simon Elliston Ball <
>> si...@simonellistonball.com> wrote:
>>
>> Have you looked at the recent TLSH functions in Stellar? We already
>> have that for similarity preserving hashes.
>>
>>
>>
>> Simon
>>
>>
>>
>>
>> On 17 Jan 2018, at 12:35, Ali Nazemian  wrote:
>>
>> It is a bit complex. We want to create a function that accepts a list
>> 

[ANNOUNCE] Metron User Community Meeting

2018-01-28 Thread Otto Fowler
Topic: Community zoom meeting

Time:  Wednesday, January 31st at 09:30AM PST


Join from PC, Mac, Linux, iOS or Android:
https://hortonworks.zoom.us/j/658498271

Or join by phone:

+1 669 900 6833  (US Toll) or +1 646 558 8656  (US Toll)

+1 877 853 5247  (US Toll Free)

+1 877 369 0926  (US Toll Free)

Meeting ID: 658 498 271

International numbers available:
https://hortonworks.zoom.us/zoomconference?m=y7M0gPfv8kRv3WvXHjXrpc3n3DyNqTMe



Topics

We have a volunteer for a community member presentation:

Ahmed Shah (PMP, M. Eng.) Cybersecurity Analyst & Developer GCR -
Cybersecurity Operations Center Carleton University - cugcr.com

Ahmed would like to talk to the community about

   - Who the GCR group is
   - How they use Metron 0.4.1
   - Walk through their dashboards, UI management screen, nifi
   - Challenges we faced up until now

I would like to thank Ahmed for stepping forward for this meeting.

If you have something you would like to present or talk about please reply
here! Maybe we can have people ask for “A better explanation of feature
X” type things?
Metron User Community Meetings

User Community Meetings are a means for realtime discussion of experiences
with Apache Metron, or demonstration of how the community is using or will
be using Apache Metron.

These meetings are geared towards:

   - Demonstrations and knowledge sharing as opposed to technical discussion
     or implementation details from members of the Apache Metron Community
   - Existing Feature demonstrations
   - Proposed Feature demonstrations
   - Community feedback

These meetings are *not* for:

   - Support discussions. Those are best left to the mailing lists.
   - Development discussions. There is another type of meeting for that.


Re: Deployment help needed.

2018-01-25 Thread Otto Fowler
at specified path /Library/Java/JavaVirtualMachines/jdk-9.0.4.jdk/Contents/
>> Home
>>
>
We don’t support Java 9.


On January 25, 2018 at 14:16:51, Sujay Jaladi (jsu...@gmail.com) wrote:

I deployed a full development environment, started docker and vagrant. It
still failed. Attached is the ansible log. Thanks for the help.


On Wed, Jan 24, 2018 at 7:38 PM, Gaurav Bapat  wrote:

> 1.>In Full development folder, do vagrant destroy -f
> 2.>In metron root directory do mvn clean compile -DskipTests
> 3.>start docker
> 4.>vagrant up
>
> On 24 January 2018 at 08:13, Sujay Jaladi  wrote:
>
>> Hello,
>>
>> Everytime I attempt to deploy apache metron on AWS, I get the following
>> error and all the servers are up and running expect Metron or its
>> components are not installed. Please help.
>>
>> fatal: [ec2-52-10-94-22.us-west-2.compute.amazonaws.com -> localhost]:
>> FAILED! => {"changed": true, "cmd": "cd /Users/sujay/Downloads/apache-
>> metron-0.4.2-rc2/metron-deployment/amazon-ec2/../playbooks/../.. && mvn
>> clean package -DskipTests -T 2C -P HDP-2.5.0.0,mpack", "delta":
>> "0:00:04.845260", "end": "2018-01-23 18:28:27.608265", "failed": true,
>> "rc": 1, "start": "2018-01-23 18:28:22.763005", "stderr": "", "stdout":
>> "[INFO] Scanning for projects...\n[INFO] --
>> --\n[INFO] Reactor Build
>> Order:\n[INFO] \n[INFO] Metron\n[INFO] metron-stellar\n[INFO]
>> stellar-common\n[INFO] metron-analytics\n[INFO] metron-maas-common\n[INFO]
>> metron-platform\n[INFO] metron-zookeeper\n[INFO]
>> metron-test-utilities\n[INFO] metron-integration-test\n[INFO]
>> metron-maas-service\n[INFO] metron-common\n[INFO] metron-statistics\n[INFO]
>> metron-writer\n[INFO] metron-storm-kafka-override\n[INFO]
>> metron-storm-kafka\n[INFO] metron-hbase\n[INFO]
>> metron-profiler-common\n[INFO] metron-profiler-client\n[INFO]
>> metron-profiler\n[INFO] metron-hbase-client\n[INFO]
>> metron-enrichment\n[INFO] metron-indexing\n[INFO] metron-solr\n[INFO]
>> metron-pcap\n[INFO] metron-parsers\n[INFO] metron-pcap-backend\n[INFO]
>> metron-data-management\n[INFO] metron-api\n[INFO] metron-management\n[INFO]
>> elasticsearch-shaded\n[INFO] metron-elasticsearch\n[INFO]
>> metron-deployment\n[INFO] Metron Ambari Management Pack\n[INFO]
>> metron-contrib\n[INFO] metron-docker\n[INFO] metron-interface\n[INFO]
>> metron-config\n[INFO] metron-alerts\n[INFO] metron-rest-client\n[INFO]
>> metron-rest\n[INFO] site-book\n[INFO] 3rd party Functions (just for
>> tests)\n[INFO] \n[INFO] Using the MultiThreadedBuilder implementation with
>> a thread count of 8\n[INFO]
>> \n[INFO] --
>> --\n[INFO] Building Metron
>> 0.4.2\n[INFO] --
>> --\n[INFO] \n[INFO] ---
>> maven-clean-plugin:2.5:clean (default-clean) @ Metron ---\n[INFO] \n[INFO]
>> --- maven-enforcer-plugin:1.4.1:enforce (enforce-versions) @ Metron
>> ---\n[INFO] \n[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent
>> (default) @ Metron ---\n[INFO] argLine set to -javaagent:/Users/sujay/.m2/re
>> pository/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-
>> 0.7.9-runtime.jar=destfile=/Users/sujay/Downloads/apache-
>> metron-0.4.2-rc2/target/jacoco.exec\n[INFO] \n[INFO] ---
>> jacoco-maven-plugin:0.7.9:report (report) @ Metron ---\n[INFO] Skipping
>> JaCoCo execution due to missing execution data file.\n[INFO]
>> \n[INFO]
>> \n[INFO]
>> Building metron-stellar 0.4.2\n[INFO] --
>> --\n[INFO]
>>   \n[INFO]
>>   \n[INFO]
>> \n[INFO]
>> Building metron-platform 0.4.2\n[INFO] --
>> --\n[INFO]
>> \n[INFO]
>> Building metron-analytics 0.4.2\n[INFO] --
>> --\n[INFO]
>>   \n[INFO]
>> \n[INFO]
>>
>> \n[INFO] 
>> \n[INFO]
>> Building metron-contrib 0.4.2\n[INFO] --
>> --\n[INFO] Building
>> metron-deployment 0.4.2\n[INFO] --
>> --\n[INFO]
>>   \n[INFO]
>> 

Metron User Community Meeting Call

2018-01-25 Thread Otto Fowler
I would like to propose a Metron user community meeting. I propose that we
set the meeting next week, and will throw out Wednesday, January 31st at
09:30AM PST, 12:30 on the East Coast and 5:30 in London Towne. This meeting
will be held over a web-ex, the details of which will be included in the
actual meeting notice.
Topics

We have a volunteer for a community member presentation:

Ahmed Shah (PMP, M. Eng.) Cybersecurity Analyst & Developer GCR -
Cybersecurity Operations Center Carleton University - cugcr.com

Ahmed would like to talk to the community about

   - Who the GCR group is
   - How they use Metron 0.4.1
   - Walk through their dashboards, UI management screen, nifi
   - Challenges we faced up until now

I would like to thank Ahmed for stepping forward for this meeting.

If you have something you would like to present or talk about please reply
here! Maybe we can have people ask for “A better explanation of feature
X” type things?
Metron User Community Meetings

User Community Meetings are a means for realtime discussion of experiences
with Apache Metron, or demonstration of how the community is using or will
be using Apache Metron.

These meetings are geared towards:

   - Demonstrations and knowledge sharing as opposed to technical discussion
     or implementation details from members of the Apache Metron Community
   - Existing Feature demonstrations
   - Proposed Feature demonstrations
   - Community feedback

These meetings are *not* for:

   - Support discussions. Those are best left to the mailing lists.
   - Development discussions. There is another type of meeting for that.


Re: Deployment help needed.

2018-01-24 Thread Otto Fowler
Can you run metron-deployment/scripts/platform_info.sh and send the output?


On January 23, 2018 at 21:43:34, Sujay Jaladi (jsu...@gmail.com) wrote:

Hello,

Everytime I attempt to deploy apache metron on AWS, I get the following
error and all the servers are up and running expect Metron or its
components are not installed. Please help.

fatal: [ec2-52-10-94-22.us-west-2.compute.amazonaws.com -> localhost]:
FAILED! => {"changed": true, "cmd": "cd
/Users/sujay/Downloads/apache-metron-0.4.2-rc2/metron-deployment/amazon-ec2/../playbooks/../..
&& mvn clean package -DskipTests -T 2C -P HDP-2.5.0.0,mpack", "delta":
"0:00:04.845260", "end": "2018-01-23 18:28:27.608265", "failed": true,
"rc": 1, "start": "2018-01-23 18:28:22.763005", "stderr": "", "stdout":
"[INFO] Scanning for projects...\n[INFO]
\n[INFO]
Reactor Build Order:\n[INFO] \n[INFO] Metron\n[INFO] metron-stellar\n[INFO]
stellar-common\n[INFO] metron-analytics\n[INFO] metron-maas-common\n[INFO]
metron-platform\n[INFO] metron-zookeeper\n[INFO]
metron-test-utilities\n[INFO] metron-integration-test\n[INFO]
metron-maas-service\n[INFO] metron-common\n[INFO] metron-statistics\n[INFO]
metron-writer\n[INFO] metron-storm-kafka-override\n[INFO]
metron-storm-kafka\n[INFO] metron-hbase\n[INFO]
metron-profiler-common\n[INFO] metron-profiler-client\n[INFO]
metron-profiler\n[INFO] metron-hbase-client\n[INFO]
metron-enrichment\n[INFO] metron-indexing\n[INFO] metron-solr\n[INFO]
metron-pcap\n[INFO] metron-parsers\n[INFO] metron-pcap-backend\n[INFO]
metron-data-management\n[INFO] metron-api\n[INFO] metron-management\n[INFO]
elasticsearch-shaded\n[INFO] metron-elasticsearch\n[INFO]
metron-deployment\n[INFO] Metron Ambari Management Pack\n[INFO]
metron-contrib\n[INFO] metron-docker\n[INFO] metron-interface\n[INFO]
metron-config\n[INFO] metron-alerts\n[INFO] metron-rest-client\n[INFO]
metron-rest\n[INFO] site-book\n[INFO] 3rd party Functions (just for
tests)\n[INFO] \n[INFO] Using the MultiThreadedBuilder implementation with
a thread count of 8\n[INFO]
\n[INFO]
\n[INFO]
Building Metron 0.4.2\n[INFO]
\n[INFO]
\n[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ Metron
---\n[INFO] \n[INFO] --- maven-enforcer-plugin:1.4.1:enforce
(enforce-versions) @ Metron ---\n[INFO] \n[INFO] ---
jacoco-maven-plugin:0.7.9:prepare-agent (default) @ Metron ---\n[INFO]
argLine set to
-javaagent:/Users/sujay/.m2/repository/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=/Users/sujay/Downloads/apache-metron-0.4.2-rc2/target/jacoco.exec\n[INFO]
\n[INFO] --- jacoco-maven-plugin:0.7.9:report (report) @ Metron ---\n[INFO]
Skipping JaCoCo execution due to missing execution data file.\n[INFO]
  \n[INFO]
\n[INFO]
Building metron-stellar 0.4.2\n[INFO]
\n[INFO]

\n[INFO]
  \n[INFO]
\n[INFO]
Building metron-platform 0.4.2\n[INFO]
\n[INFO]
\n[INFO]
Building metron-analytics 0.4.2\n[INFO]
\n[INFO]

\n[INFO]
\n[INFO]

\n[INFO]
\n[INFO]
Building metron-contrib 0.4.2\n[INFO]
\n[INFO]
Building metron-deployment 0.4.2\n[INFO]
\n[INFO]

\n[INFO]
\n[INFO]
Building metron-interface 0.4.2\n[INFO]
\n[INFO]

\n[INFO]
\n[INFO]
Building site-book 0.4.2\n[INFO]
\n[INFO]
\n[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ metron-contrib
---\n[INFO] \n[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @
metron-deployment ---\n[INFO] \n[INFO] ---
maven-enforcer-plugin:1.4.1:enforce (enforce-versions) @ metron-deployment
---\n[INFO] \n[INFO] --- maven-enforcer-plugin:1.4.1:enforce
(enforce-versions) @ metron-contrib ---\n[INFO] \n[INFO] ---
maven-clean-plugin:2.5:clean (default-clean) @ site-book ---\n[INFO]
\n[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default) @

Re: SysLog using CEF Parser (RSysLogs)

2018-01-22 Thread Otto Fowler
If it reaches the Indexing topology it is not a Parser problem, in almost
all cases.
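For reference, wiring up the CEF parser only needs a small sensor parser config in ZooKeeper — roughly like the sketch below (assuming the stock CEF parser class shipped with Metron; check the class name and topic name against your install):

```json
{
  "parserClassName": "org.apache.metron.parsers.cef.CEFParser",
  "sensorTopic": "cef"
}
```

If messages parse but no index ever appears, the failure is usually visible in the Storm UI on the indexing topology rather than the parser topology, which matches the point above.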



On January 22, 2018 at 03:24:35, Farrukh Naveed Anjum (
anjum.farr...@gmail.com) wrote:

Yes, it's the Storm indexing bolt that is halting it. Is anyone working on the
CEF parser (can syslog work with it, like RSyslog)? We are stuck at that point.

Please see the above error and suggest

On Mon, Jan 22, 2018 at 1:10 PM, Gaurav Bapat  wrote:

> Hi,
>
> Even I am stuck with the same, and dont know how to solve the issue.
>
> Looks like this is a parsing error
>
> On 22 January 2018 at 13:00, Farrukh Naveed Anjum  > wrote:
>
>> Hi,
>>
>> I am trying to ingest syslog using the CEF parser, but it is not creating
>> any Elasticsearch index.
>>
>> Any suggestion on how I can achieve it?
>>
>>
>>
>>
>> --
>> With Regards
>> Farrukh Naveed Anjum
>>
>
>


--
With Regards
Farrukh Naveed Anjum


Re: Stellar on another platform?

2018-01-18 Thread Otto Fowler
Please comment on the jira.  We can come up with what would be a good
example program, obviously massively commented to show this.
Down the line, we could even have archetypes for different application
types… but that is just me thinking down the line ;)


On January 18, 2018 at 07:57:17, Otto Fowler (ottobackwa...@gmail.com)
wrote:

I would also say that you should look at METRON–876
<https://issues.apache.org/jira/browse/METRON-876>.

This is the umbrella jira for the effort to separate stellar into a more
independent module.




On January 18, 2018 at 07:54:38, Otto Fowler (ottobackwa...@gmail.com)
wrote:

I have created METRON–1409
<https://issues.apache.org/jira/browse/METRON-1409>

There are several ways to look at hosting stellar to get examples:

   - The unit tests
   - The shell
   - The storm bolts and transformer classes

From a high level, to host Stellar you need to:

   - Include stellar-common in your pom
   - Create a Context
   - Initialize the function resolver
   - Create the StellarProcessor
   - Create a variable resolver

Then you set everything up, set the vars for the call in the variable
resolver, and have the processor execute a statement.
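Putting those steps together, a minimal host looks roughly like the following Java-flavored pseudocode. The class and method names (Context, StellarProcessor, MapVariableResolver, StellarFunctions) are from the Metron 0.4.x code base as best I recall — treat this as a sketch to check against the unit tests mentioned below, not a verified example:

```
// Sketch: evaluating one Stellar expression from a host application.
Context context = new Context.Builder().build();            // create a Context
StellarFunctions.initialize(context);                       // initialize the function resolver
StellarProcessor processor = new StellarProcessor();        // create the processor

Map<String, Object> vars = new HashMap<>();
vars.put("foo", "metron");
VariableResolver resolver = new MapVariableResolver(vars);  // variable resolver

Object result = processor.parse("TO_UPPER(foo)",            // execute a statement
    resolver, StellarFunctions.FUNCTION_RESOLVER(), context);
// result should be "METRON" if everything is wired up
```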

The issue right now, and the reason we need METRON–1409 is that each of the
things above are *so* integrated into the flow of the host, that it is not
obvious what is going on.

The tests are pretty straight forward, but don’t show the context init very
well.

I would suggest that you start with the unit tests, as they are the most
concise. Look through them, debug through them etc.

Then move onto the shell.

I would look at the bolts/transformers last ( although they are the most
analogous to what I think you want to do ).






On January 17, 2018 at 17:34:45, Ian Abreu (iab...@wayfair.com) wrote:

Hey all,



We’ve come across the design decision where we’d like to use Metron tooling
as a framework to build our SIEM around. This being the case, stellar is
something that we’d like to use, but we’ve currently got different
enrichment and normalization layers.



So my question is this: Has anyone, or could anyone point me to a resource
that’d help to normalize our data in such a way that Stellar could be used
downstream from our data manipulation/normalization layer?



Cheers,

Z0r0


Re: Stellar on another platform?

2018-01-18 Thread Otto Fowler
I would also say that you should look at METRON–876
<https://issues.apache.org/jira/browse/METRON-876>.

This is the umbrella jira for the effort to separate stellar into a more
independent module.




On January 18, 2018 at 07:54:38, Otto Fowler (ottobackwa...@gmail.com)
wrote:

I have created METRON–1409
<https://issues.apache.org/jira/browse/METRON-1409>

There are several ways to look at hosting stellar to get examples:

   - The unit tests
   - The shell
   - The storm bolts and transformer classes

From a high level, to host Stellar you need to:

   - Include stellar-common in your pom
   - Create a Context
   - Initialize the function resolver
   - Create the StellarProcessor
   - Create a variable resolver

Then you set everything up, set the vars for the call in the variable
resolver, and have the processor execute a statement.

The issue right now, and the reason we need METRON–1409 is that each of the
things above are *so* integrated into the flow of the host, that it is not
obvious what is going on.

The tests are pretty straight forward, but don’t show the context init very
well.

I would suggest that you start with the unit tests, as they are the most
concise. Look through them, debug through them etc.

Then move onto the shell.

I would look at the bolts/transformers last ( although they are the most
analogous to what I think you want to do ).






On January 17, 2018 at 17:34:45, Ian Abreu (iab...@wayfair.com) wrote:

Hey all,



We’ve come across the design decision where we’d like to use Metron tooling
as a framework to build our SIEM around. This being the case, stellar is
something that we’d like to use, but we’ve currently got different
enrichment and normalization layers.



So my question is this: Has anyone, or could anyone point me to a resource
that’d help to normalize our data in such a way that Stellar could be used
downstream from our data manipulation/normalization layer?



Cheers,

Z0r0


Re: Stellar on another platform?

2018-01-18 Thread Otto Fowler
I have created METRON–1409


There are several ways to look at hosting stellar to get examples:

   - The unit tests
   - The shell
   - The storm bolts and transformer classes

From a high level, to host Stellar you need to:

   - Include stellar-common in your pom
   - Create a Context
   - Initialize the function resolver
   - Create the StellarProcessor
   - Create a variable resolver

Then you set everything up, set the vars for the call in the variable
resolver, and have the processor execute a statement.

The issue right now, and the reason we need METRON–1409 is that each of the
things above are *so* integrated into the flow of the host, that it is not
obvious what is going on.

The tests are pretty straight forward, but don’t show the context init very
well.

I would suggest that you start with the unit tests, as they are the most
concise. Look through them, debug through them etc.

Then move onto the shell.

I would look at the bolts/transformers last ( although they are the most
analogous to what I think you want to do ).






On January 17, 2018 at 17:34:45, Ian Abreu (iab...@wayfair.com) wrote:

Hey all,



We’ve come across the design decision where we’d like to use Metron tooling
as a framework to build our SIEM around. This being the case, stellar is
something that we’d like to use, but we’ve currently got different
enrichment and normalization layers.



So my question is this: Has anyone, or could anyone point me to a resource
that’d help to normalize our data in such a way that Stellar could be used
downstream from our data manipulation/normalization layer?



Cheers,

Z0r0


Re: Metron Install - Vagrant provision error.

2018-01-17 Thread Otto Fowler
If the newest 8 doesn’t work that would be a bug, imho


On January 17, 2018 at 07:20:35, Srikanth Nagarajan (s...@gandivanetworks.com)
wrote:

What is the highest version of Java supported?

__
*Srikanth Nagarajan *
President
*Gandiva Networks Inc*
*732.690.1884 <732.690.1884>* Mobile
s...@gandivanetworks.com
www.gandivanetworks.com

On Jan 17, 2018, at 5:22 PM, Otto Fowler <ottobackwa...@gmail.com> wrote:

We do not support Java 9 yet.



On January 17, 2018 at 04:25:29, Srikanth Nagarajan (s...@gandivanetworks.com)
wrote:

InvocationTargetException: java.nio.file.NotDirectoryException:
/Library/Java/JavaVirtualMachines/jdk-9.0.1.jdk/Contents/Home/lib/modules


[ALL] List Replies

2018-01-17 Thread Otto Fowler
The goal of the user list is to foster the Apache Metron community by
allowing for common discussion of the uses and application of Apache
Metron.  The list’s archives also provide a valuable resource for people to
look through for ideas and answers to questions.

Unless someone specifically requests an off-list contact, please keep
replies and discussion on the list.  That way everyone gets the benefit (
both now and in the future through the archives ).

ottO


Re: Metron Install - Vagrant provision error.

2018-01-17 Thread Otto Fowler
We do not support Java 9 yet.



On January 17, 2018 at 04:25:29, Srikanth Nagarajan (s...@gandivanetworks.com)
wrote:

InvocationTargetException: java.nio.file.NotDirectoryException:
/Library/Java/JavaVirtualMachines/jdk-9.0.1.jdk/Contents/Home/lib/modules


Re: Metron Install - Vagrant provision error.

2018-01-16 Thread Otto Fowler
   - Is that the complete error? Can you post the ansible.log in that
   directory?
   - Do you have docker installed and running?
   - Can you run METRON_SRC_DIR/metron-deployment/scripts/platform_info.sh
   and put the output in a mail?

ottO




On January 16, 2018 at 02:42:39, Srikanth Nagarajan (s...@gandivanetworks.com)
wrote:

Hi,

I am getting the following error in the full development install (single
box on Mac OSX ) while following the development install procedure.

vagrant provision gives the below error and more below that.

fatal: [node1 -> localhost]: FAILED! => { "changed": true, "cmd": "cd
/Users/sri/metron/metron-deployment/playbooks/../.. && mvn clean package
-DskipTests -T 2C -P HDP-2.5.0.0,mpack", "delta": "0:00:02.478441", "end":
"2018-01-16 13:09:22.953422", "failed": true, "invocation": {
"module_args": { "_raw_params": "cd
/Users/sri/metron/metron-deployment/playbooks/../.. && mvn clean package
-DskipTests -T 2C -P HDP-2.5.0.0,mpack", "_uses_shell": true, "chdir":
null, "creates": null, "executable": null, "removes": null, "warn": true },
"module_name": "command" }, "rc": 1, "start": "2018-01-16 13:09:20.474981",
"stderr": ""

Any help would be appreciated.

Thanks

Srikanth

__

*Srikanth Nagarajan*
*Principal*

*Gandiva Networks Inc*

*732.690.1884* Mobile

s...@gandivanetworks.com

www.gandivanetworks.com

Please consider the environment before printing this. NOTICE: The
information contained in this e-mail message is intended for addressee(s)
only. If you have received this message in error please notify the sender.


Re: Intro & Question

2018-01-10 Thread Otto Fowler
So, what would work would be:

1. Create a jira like “Ability to deploy metron full dev to aws with
vagrant”, with a description of the use case and how the Vagrantfile fills it.
2. Create a PR with the new file in metron-deployment/vagrant/aws.
3. Update the README.

I think


On January 10, 2018 at 10:16:03, Ahmed Shah (ahmeds...@cmail.carleton.ca)
wrote:

Would be glad to.

Where in github should I put it?


-Ahmed
___
Ahmed Shah (PMP, M. Eng.)
Cybersecurity Analyst & Developer
GCR - Cybersecurity Operations Center
Carleton University - cugcr.com <https://cugcr.com/tiki/lce/index.php>


--
*From:* Otto Fowler <ottobackwa...@gmail.com>
*Sent:* January 9, 2018 11:51 AM
*To:* Ahmed Shah; user@metron.apache.org
*Subject:* Re: Intro & Question

Any interest in submitting this?


On January 9, 2018 at 10:42:08, Ahmed Shah (ahmeds...@cmail.carleton.ca)
wrote:

Hello Srikanth,


Our team adapted the Metron 0.4.1 Single Node VM install (Original Code
Here:
https://github.com/apache/metron/tree/master/metron-deployment/vagrant/full-dev-platform)
 to
deploy a single node to AWS.


Our Vagrantfile is here:

https://github.com/LTW-GCR-CSOC/csoc-installation-scripts/blob/master/amazon-deploy/Metron/Vagrantfile

You can define your AWS Elastic IP,  Subnet ID, VPC, and Security Group ID
before running the file.


Hope it helps.


-Ahmed
___
Ahmed Shah (PMP, M. Eng.)
Cybersecurity Analyst & Developer
GCR - Cybersecurity Operations Center
Carleton University - cugcr.com <https://cugcr.com/tiki/lce/index.php>


--
*From:* Srikanth Nagarajan <s...@gandivanetworks.com>
*Sent:* January 9, 2018 2:39 AM
*To:* user@metron.apache.org
*Subject:* Intro & Question


Hi

My name is Srikanth and I work for a cyber security firm.  We are building
Metron to test in our lab environment using AWS.

1. Is there a single VM version for Cloud install available ?   If yes,
please share procedure.

2. During the Amazon-Ec2 install for the multi node version provided in the
metron git-hub docs

https://github.com/apache/metron/tree/master/metron-deployment/amazon-ec2
get an error

[WARNING]:  * Failed to parse
/Users/sri/metron/metron-deployment/amazon-ec2/ec2.py with script plugin:
Inventory script (/Users/sri/metron/metron-deployment/amazon-ec2/ec2.py)
had an execution

error: ERROR: "Forbidden", while: getting ElastiCache clusters

Any assistance would be appreciated.

Thanks

Srikanth

__

*Srikanth Nagarajan*
*Principal*

* Gandiva Networks Inc*

*732.690.1884* Mobile

s...@gandivanetworks.com

www.gandivanetworks.com

Please consider the environment before printing this. NOTICE: The
information contained in this e-mail message is intended for addressee(s)
only. If you have received this message in error please notify the sender.


Re: ElasticSearch Indexing not working (Storm Error)

2018-01-10 Thread Otto Fowler
Can we get the complete exception?  There may be a ‘caused by’ listing that
could help.



On January 10, 2018 at 08:53:37, Farrukh Naveed Anjum (
anjum.farr...@gmail.com) wrote:

Please some one respond

On Mon, Jan 8, 2018 at 1:10 PM, Farrukh Naveed Anjum <
anjum.farr...@gmail.com> wrote:

> Hi,
>
> I am unable to see any ElasticSearch Index in kibana or in elasticsearch
> plugin
>
> http://node1:9200/_plugin/head/
>
> After looking into Storm, it seems like there is a GeoLiteDatabase
> exception in the Storm bolts.
>
> How can I fix it?
>
>
> java.lang.IllegalStateException: [Metron] Unable to update MaxMind
> database at org.apache.metron.enrichment.adapters.geo.GeoLiteDatabase.
> update(GeoLiteDatabase.java:150) at org.apache.metron.enrichm
>
> --
> With Regards
> Farrukh Naveed Anjum
>



--
With Regards
Farrukh Naveed Anjum


Re: Installing Metron 0.4.1

2018-01-09 Thread Otto Fowler
Laurens got to it actually.


On January 9, 2018 at 11:51:58, Ryan Merriman (merrim...@gmail.com) wrote:

Thanks Otto you beat me to it.  Was this not added to our documentation?

On Tue, Jan 9, 2018 at 10:50 AM, Otto Fowler <ottobackwa...@gmail.com>
wrote:

> As answered on irc, updating gcc got Tarik by this.  Laurens FTW!
>
>
> On January 9, 2018 at 11:24:47, Tarik Courdy (tarik.cou...@gmail.com)
> wrote:
>
> Here is the output of the platform_info.sh
>
> Thank you.
>
> -Tarik
>
> On Tue, Jan 9, 2018 at 9:12 AM, Tarik Courdy <tarik.cou...@gmail.com>
> wrote:
>
>> Good morning -
>>
>> I am trying to follow this guide
>> <https://cwiki.apache.org/confluence/display/METRON/Metron+0.4.1+with+HDP+2.5+bare-metal+install+on+Centos+7+with+MariaDB+for+Metron+REST>
>> to installing Metron 0.4.1 on a Virtual Machine running CentOS 7.
>>
>> I am able to get to the section titled "Build Metron Code" and then I
>> fail on this command:
>>
>> ​> ​
>> mvn clean package -DskipTests -T 2C -P HDP-2.5.0.0,mpack
>>
>>
>> I have attached the stack trace at the point where the error is
>> encountered.
>>
>> I have also attached the npm-debug.log
>>
>> I looked at the package.json file in 
>> /home/admin/metron/metron-interface/metron-config/
>> and noticed that tough-cookie is not listed as a dependency.
>>
>> I tried npm install tough-cookie in that directory and when I re-run the
>> mvn command it doesn't fail on the tough-cookie dependency, but it does
>> fail on another missing dependency.
>>
>> If there is any insight you could provide to help me get metron set up,
>> it would be greatly appreciated.
>>
>> Thank you for your time.
>>
>> -Tarik
>>
>
>


Re: Intro & Question

2018-01-09 Thread Otto Fowler
Any interest in submitting this?


On January 9, 2018 at 10:42:08, Ahmed Shah (ahmeds...@cmail.carleton.ca)
wrote:

Hello Srikanth,


Our team adapted the Metron 0.4.1 Single Node VM install (Original Code
Here:
https://github.com/apache/metron/tree/master/metron-deployment/vagrant/full-dev-platform)
 to
deploy a single node to AWS.


Our Vagrant file is here:

https://github.com/LTW-GCR-CSOC/csoc-installation-scripts/blob/master/amazon-deploy/Metron/Vagrantfile

You can define your AWS Elastic IP,  Subnet ID, VPC, and Security Group ID
before running the file.


Hope it helps.


-Ahmed
___
Ahmed Shah (PMP, M. Eng.)
Cybersecurity Analyst & Developer
GCR - Cybersecurity Operations Center
Carleton University - cugcr.com 


--
*From:* Srikanth Nagarajan 
*Sent:* January 9, 2018 2:39 AM
*To:* user@metron.apache.org
*Subject:* Intro & Question


Hi

My name is Srikanth and I work for a cybersecurity firm. We are building
Metron to test in our lab environment using AWS.

1. Is there a single-VM version available for a cloud install? If yes,
please share the procedure.

2. During the Amazon EC2 install for the multi-node version provided in the
Metron GitHub docs

https://github.com/apache/metron/tree/master/metron-deployment/amazon-ec2

I get an error

[WARNING]:  * Failed to parse
/Users/sri/metron/metron-deployment/amazon-ec2/ec2.py with script plugin:
Inventory script (/Users/sri/metron/metron-deployment/amazon-ec2/ec2.py)
had an execution

error: ERROR: "Forbidden", while: getting ElastiCache clusters

Any assistance would be appreciated.

Thanks

Srikanth

__

*Srikanth Nagarajan*
*Principal*

* Gandiva Networks Inc*

*732.690.1884* Mobile

s...@gandivanetworks.com

www.gandivanetworks.com

Please consider the environment before printing this. NOTICE: The
information contained in this e-mail message is intended for addressee(s)
only. If you have received this message in error please notify the sender.
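
The "Forbidden", while: getting ElastiCache clusters error above typically
means ec2.py is querying an AWS API the IAM user has no permission for. One
common workaround (assuming the stock Ansible ec2.py/ec2.ini pair that
ships alongside metron-deployment/amazon-ec2) is to disable the lookups you
do not need in ec2.ini, sketched here against a throwaway copy:

```shell
# Hypothetical workaround: ec2.py only queries the services enabled in
# ec2.ini, so turning off the ElastiCache (and unused RDS) lookups avoids
# the unauthorized API call. Edit your real ec2.ini instead of /tmp/ec2.ini.
cat > /tmp/ec2.ini <<'EOF'
[ec2]
regions = us-east-1
elasticache = False
rds = False
EOF
grep '^elasticache' /tmp/ec2.ini   # → elasticache = False
```

Alternatively, granting the IAM user elasticache:Describe* permissions
makes the unmodified inventory script work.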


Re: Installing Metron 0.4.1

2018-01-09 Thread Otto Fowler
As answered on irc, updating gcc got Tarik by this.  Laurens FTW!


On January 9, 2018 at 11:24:47, Tarik Courdy (tarik.cou...@gmail.com) wrote:

Here is the output of the platform_info.sh

Thank you.

-Tarik

On Tue, Jan 9, 2018 at 9:12 AM, Tarik Courdy  wrote:

> Good morning -
>
> I am trying to follow this guide
> 
> to installing Metron 0.4.1 on a Virtual Machine running CentOS 7.
>
> I am able to get to the section titled "Build Metron Code" and then I fail
> on this command:
>
> ​> ​
> mvn clean package -DskipTests -T 2C -P HDP-2.5.0.0,mpack
>
>
> I have attached the stack trace at the point where the error is
> encountered.
>
> I have also attached the npm-debug.log
>
> I looked at the package.json file in 
> /home/admin/metron/metron-interface/metron-config/
> and noticed that tough-cookie is not listed as a dependency.
>
> I tried npm install tough-cookie in that directory and when I re-run the
> mvn command it doesn't fail on the tough-cookie dependency, but it does
> fail on another missing dependency.
>
> If there is any insight you could provide to help me get metron set up, it
> would be greatly appreciated.
>
> Thank you for your time.
>
> -Tarik
>


Re: Kafka-Kibana Integration

2018-01-08 Thread Otto Fowler
Please see the Subject:Metron Version thread you started for this.



On January 8, 2018 at 02:14:20, Gaurav Bapat (gauravb3...@gmail.com) wrote:

Hi,

I have deployed Metron on a single node but I am not able to visualize logs
in Kibana. My logs go from NiFi to a Kafka topic, but I can't see
them in Kibana.

I think I am missing something: my data is not parsed and indexed, and I
don't know how to connect this.

My logs are in CEF format and I have selected CEF Parser

Please help, I am stuck


Re: Metron Version

2018-01-08 Thread Otto Fowler
There are multiple topologies at work to get the data into elasticsearch.
The flow is basically:

Kafka ( sensor name ) -> parser topology ( sensor name ) -> Kafka
(enrichment) -> enrichment topology -> Kafka (indexing) -> indexing
topology -> ES + HDFS

Each of these topologies are listed in the StormUI, and each needs to be
checked for errors.



On January 5, 2018 at 11:10:26, Gaurav Bapat (gauravb3...@gmail.com) wrote:

There are no errors in Storm; the topic is emitting just like Snort & Bro,
but I still can't understand the problem

On Fri, Jan 5, 2018 at 19:54 zeo...@gmail.com  wrote:

> Are you able to look through the storm UI and identify any errors?  Also,
> did you look at the Metron error dashboard?  Thanks,
>
> Jon
>
> On Thu, Jan 4, 2018, 22:47 Gaurav Bapat  wrote:
>
>> Also when I enter indices in Kibana, it fails to search for my Kafka
>> topic, and I don't know why the CEF logs are not coming into Kibana
>>
>>
>>
>> On 5 January 2018 at 00:23, Simon Elliston Ball <
>> si...@simonellistonball.com> wrote:
>>
>>> Are the logs you’re sending with syslog in CEF format? You will note
>>> that the CEF sensor uses the CEF parser, which means unless your logs are
>>> in CEF format, they will fail to parse and be dropped into the error index
>>> (worth checking the error index in kibana via the Metron Error Dashboard.
>>> That will likely tell you why things aren’t parsing.
>>>
>>> The most likely scenario is that you are sending something non-CEF on
>>> the syslog feed, in which case you will need something like a Grok parser.
>>> I suggest reading through the Squid example in the documentation on how to
>>> do this.
>>>
>>> Simon
>>>
>>> > On 4 Jan 2018, at 18:49, Gaurav Bapat  wrote:
>>> >
>>> > They are syslogs and my topic name is cef; I get one parsed log out
>>> of 1000+, and I want to do analytics using Spark but I can't find a way out.
>>>
>>> --
>
> Jon
>
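
The hop-by-hop flow Otto describes at the top of this message can be
sketched as a toy pipeline: each shell function stands in for one Storm
topology and just tags the message as it passes through (the field values
are illustrative, not Metron's actual output schema):

```shell
# Toy trace of: parser topology -> enrichment topology -> indexing topology.
# Each stage strips the closing brace and appends its own field, mimicking
# how each real topology adds data before handing off via Kafka.
msg='{"ip":"10.0.0.1"}'
parse()  { echo "${1%\}},\"source.type\":\"cef\"}"; }
enrich() { echo "${1%\}},\"geo\":\"US\"}"; }
index()  { echo "${1%\}},\"indexed\":true}"; }
out=$(index "$(enrich "$(parse "$msg")")")
echo "$out"
# → {"ip":"10.0.0.1","source.type":"cef","geo":"US","indexed":true}
```

The practical point stands: a message missing from Elasticsearch may have
been dropped at any of the three hops, so each topology's Storm UI page
needs to be checked for errors separately.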


Full Dev -> Heartbeat issues

2018-01-08 Thread Otto Fowler
I just started up full dev from the 0.4.2 release tag, and ended up with
failed heartbeats for all my services in ambari.
After investigation, I found that my /etc/hosts (on node1) had multiple
entries for node1:

[vagrant@node1 ~]$ cat /etc/hosts
127.0.0.1 node1 node1
127.0.0.1   localhost

## vagrant-hostmanager-start
192.168.66.121 node1

## vagrant-hostmanager-end

After removing the 127.0.0.1 node1 node1 line and restarting the machine
plus all the services, my issues were resolved and my board is green.

I am not sure why this may happen.
Hopefully if you are seeing this, this will help.

Anyone know why this may happen?


ottO
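
The fix described above can be sketched as follows — shown against a
throwaway copy of the file, since on the VM you would edit /etc/hosts
itself and then restart:

```shell
# Reproduce the bad hosts file, then delete the duplicate loopback entry
# for node1 so it resolves only via its real address (192.168.66.121).
cat > /tmp/hosts <<'EOF'
127.0.0.1 node1 node1
127.0.0.1   localhost
192.168.66.121 node1
EOF
sed -i '/^127\.0\.0\.1 node1/d' /tmp/hosts
cat /tmp/hosts   # node1 now appears only on the 192.168.66.121 line
```

A likely cause is Vagrant's hostname plumbing adding a loopback alias
before vagrant-hostmanager writes the real entry, so Ambari agents register
against 127.0.0.1 instead of the cluster address.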


Re: [ANNOUNCE] Apache Metron release 0.4.2 and Apache Metron bro plugin for Kafka release 0.1

2018-01-04 Thread Otto Fowler
Thank you Matt, and congratulations everyone!


On January 4, 2018 at 16:11:50, Matt Foley (ma...@apache.org) wrote:

Metron Community: Happy New Year.

I’m happy to announce the release of Metron 0.4.2. A great deal of work
from across the community went into this, with over 100 enhancements,
improvements, and bug fixes since 0.4.1. Thanks to all contributors, and
may all users enjoy the new features!

This release also includes the first official release of the
apache-metron-bro-plugin-kafka, version 0.1.

Details:
The official release source code tarballs may be obtained at any of the
mirrors listed in
http://www.apache.org/dyn/closer.cgi/metron/0.4.2/

As usual, the secure signatures and confirming hashes may be obtained at
https://dist.apache.org/repos/dist/release/metron/0.4.2/

The release branches in github are
https://github.com/apache/metron/tree/Metron_0.4.2 (tag
apache-metron-0.4.2-release)
https://github.com/apache/metron-bro-plugin-kafka/tree/0.1 (tag 0.1)

The release doc book is at http://metron.apache.org/current-book/index.html
The Apache Metron web site at http://metron.apache.org/ has been updated;
please refresh your web browser cache if the new links do not immediately
appear.

Change lists and Release Notes may be obtained at the same locations as the
tarballs.
For your reading pleasure, the change list is appended to this message.

Best regards,
--Matt Foley
release manager

Metron CHANGES (in reverse chron order):
METRON-1373 RAT failure for metron-interface/metron-alerts (mattf-horton)
closes apache/metron#875
METRON-1313 Update metron-deployment to use bro-pkg to install the kafka
plugin (JonZeolla) closes apache/metron#847
METRON-1346 Add new PMC members to web site (ottobackwards) closes
apache/metron#860
METRON-1336 Patching Can Result in Bad Configuration (nickwallen) closes
apache/metron#851
METRON-1335 Install metron-maas-service RPM as a part of the full-dev
deployment (anandsubbu via ottobackwards) closes apache/metron#850
METRON-1308 Fix Metron Documentation (JonZeolla) closes apache/metron#836
METRON-1338 Rat Check Should Ignore Vagrant Retry Files (nickwallen) closes
apache/metron#855
METRON-1286 Add MIN & MAX Stellar functions (jasper-k via justinleet)
closes apache/metron#823
METRON-1334 Add C++11 Compliance Check to platform-info.sh (nickwallen)
closes apache/metron#849
METRON-1277 Add match statement to Stellar language closes
apache/incubator-metron#814
METRON-1239 Drop extra dev environments (nickwallen) closes
apache/metron#852
METRON-1328 Enhance platform-info.sh script to check if docker daemon is
running (anandsubbu via nickwallen) closes apache/metron#846
METRON-1333 Ansible-Docker can no longer build metron (ottobackwards)
closes apache/metron#848
METRON-1252 Build UI for grouping alerts into meta-alerts (iraghumitra via
nickwallen) closes apache/metron#803
METRON-1316 Fastcapa Fails to Compile in Test Environment (nickwallen)
closes apache/metron#841
METRON-1088 Upgrade bro to 2.5.2 (JonZeolla) closes apache/metron#844
METRON-1319 Column Metadata REST service should use default indices on
empty input (merrimanr) closes apache/metron#843
METRON-1321 Metaalert Threat Score Type Does Not Match Sensor Indices
(nickwallen) closes apache/metron#845
METRON-1301 Alerts UI - Sorting on Triage Score Unexpectedly Filters Some
Records (nickwallen) closes apache/metron#832
METRON-1294 IP addresses are not formatted correctly in facet and group
results (merrimanr) closes apache/metron#827
METRON-1291 Kafka produce REST endpoint does not work in a Kerberized
cluster (merrimanr) closes apache/metron#826
METRON-1290 Only first 10 alerts are update when a MetaAlert status is
changed to inactive (justinleet) closes apache/metron#842
METRON-1311 Service Check Should Check Elasticsearch Index Templates
(nickwallen) closes apache/metron#839
METRON-1289 Alert fields are lost when a MetaAlert is created (merrimanr)
closes apache/metron#824
METRON-1309 Change metron-deployment to pull the plugin from
apache/metron-bro-plugin-kafka (JonZeolla) closes apache/metron#837
METRON-1310 Template Delete Action Deletes Search Indices (nickwallen)
closes apache/metron#838
METRON-1275 Fix Metron Documentation closes apache/incubator-metron#833
METRON-1295 Unable to Configure Logging for REST API (nickwallen) closes
apache/metron#828
METRON-1307 Force install of java8 since java9 does not appear to work with
the scripts (brianhurley via ottobackwards) closes apache/metron#835
METRON-1296 Full Dev Fails to Deploy Index Templates (nickwallen via
cestella) closes apache/incubator-metron#829
METRON-1281 Remove hard-coded indices from the Alerts UI (merrimanr) closes
apache/metron#821
METRON-1287 Full Dev Fails When Installing EPEL Repository (nickwallen)
closes apache/metron#820
METRON-1267 Alerts UI returns a 404 when refreshing the alerts-list page
(iraghumitra via merrimanr) closes apache/metron#819
METRON-1283 Install Elasticsearch template as a part of the mpack startup
scripts (anandsubbu 

RE: Hello and install issue

2017-12-30 Thread Otto Fowler
Can you run docker?


On December 29, 2017 at 22:28:46, James Byrne (
james.by...@intrepidtravel.com) wrote:

Can’t run the Vagrant build, as Ansible won’t run on Windows. For anyone else
having the issue, you need to run mvn package -DskipTests in the Metron
root directory before the deployment directory. Also, you need to upgrade
the gcc component before running Maven, as 4.8 throws build errors.

Cheers



*From:* Michael Miklavcic [mailto:michael.miklav...@gmail.com]
*Sent:* Saturday, 30 December 2017 2:53 AM
*To:* user@metron.apache.org
*Cc:* u...@metron.incubator.apache.org
*Subject:* Re: Hello and install issue



The quickest and easiest way is to run the full Dev build -
metron-deployment/vagrant/full-dev. If you need to deploy on a dedicated
machine, the rpms can be built by following the doc here, under "build
rpms" -
https://github.com/apache/metron/blob/master/metron-deployment/README.md.



Hope this helps.



Best,

Mike



On Dec 28, 2017 5:59 PM, "James Byrne" 
wrote:

Hi there,

  Trying to get a one-node PoC going on CentOS 7, and the
community build says to build the local repo for Metron. Everything else
is up, but Metron fails to install because the RPMs are missing. They also
seem to be missing from the Docker image when I spin it up separately. Has
anyone come up against this too?

Cheers


Re: metron vs ossec

2017-12-21 Thread Otto Fowler
Is it in jira?



On December 21, 2017 at 10:39:46, Ahmed Shah (ahmeds...@cmail.carleton.ca)
wrote:

Hello tuutdo,


We used OSSEC with OSSIM.

My experience with OSSIM is you can't save queries and create elaborate
dashboards like you can with Metron. Metron also seems to have a better
path for integrating your own sensors.


OSSEC integration with Metron is on our wish list.


-Ahmed
___
Ahmed Shah (PMP, M. Eng.)
Cybersecurity Analyst & Developer
GCR - Cybersecurity Operations Center
Carleton University - cugcr.com 


--
*From:* zeo...@gmail.com 
*Sent:* December 21, 2017 8:15 AM
*To:* user@metron.apache.org
*Subject:* Re: metron vs ossec

@Haruo, we haven't tightly integrated them yet, but have plans to do so in
Q1.  We have been running OSSEC for a very long time and are in the middle
of an upgrade/cleanup project that we want to complete before feeding the
data into Metron (v2.9.0 now supports JSON alerts).  Interested to hear
more about your service, feel free to contact me off list if needed.

I agree with Simon's opinion on OSSIM vs Metron.

Jon

On Thu, Dec 21, 2017 at 7:48 AM Simon Elliston Ball <
si...@simonellistonball.com> wrote:

In many ways it’s a matter of scale. OSSIM is a kind of lite version of
AlienVault, and used by them. I’ve seen people move from an OSSIM
architecture to Metron specifically to get better scaling, things like PCAP
capabilities etc. but also retain the OSSEC agents to handle endpoint and
scanning use cases, which they then feed into Metron. In these cases it was
mostly about scalability and flexibility to extend, as well as
manageability of multi-tenant environments.

In functional terms, Metron also emphasises behaviour profiling and machine
learning, whereas OSSIM is a more traditional rules-centric way of looking
at security and log monitoring.

Hope that helps you understand the difference a little better,
Simon


On 21 Dec 2017, at 12:22, moshe jarusalem  wrote:

Jon thanks for the information.

I am indeed trying to learn both of them just wanted to get expert ideas.

OSSEC is also supported by OSSIM, which is somewhat like Metron. I would
like to hear ideas on what may make Metron the better alternative, and/or
on composite usage.

Regards,


On Thu, Dec 21, 2017 at 2:39 PM, zeo...@gmail.com  wrote:

Yes, I run both in my environment, and they are both security products, but
that's about where the similarities end. OSSEC is a host-based solution
that monitors local activity with its tree-based rules engine; Metron is a
distributed solution that handles large sets of data from many sources, and
a lot more. A possible connection between the two may be that OSSEC
logs/alerts could be fed into Metron for enrichment, triage, alerting, and
analysis.

I would recommend either reading the documentation for both of them in more
detail, or spinning them both up to get a better handle on the differences.

Jon

On Thu, Dec 21, 2017, 00:34 moshe jarusalem  wrote:

Hi All,
I have come across OSSEC project and find it similar to metron. I am
confused a bit.
is anyone aware of Ossec and give some comparisons?

Regards,

--

Jon



--

Jon


Re: bro kafka plugin build error on --bro-init=$BRO_SRC option doesn't exist

2017-12-21 Thread Otto Fowler
If you don’t send them through the Kafka topic, and use NiFi to write to
HDFS directly, then you will be skipping the enrichment and ES indexing.
Is that what you want?


On December 21, 2017 at 06:52:37, Gaurav Bapat (gauravb3...@gmail.com)
wrote:

Can I send syslogs to HDFS using NiFi without using Kafka Topic?

On 21 Dec 2017 5:16 p.m., "zeo...@gmail.com"  wrote:

> Where did you get the plugin from, and do you have $BRO_SRC set?  This
> plugin has recently moved, had a release, and became a package.  The
> documentation you point to is outdated at this point, and updated
> documentation is a part of a release that's currently being voted on.
>
> Please use bro-pkg to install this, or go directly to
> https://github.com/apache/metron-bro-plugin-kafka
>
> Bro dist is definitely in configure, https://github.com/apache/
> metron-bro-plugin-kafka/blob/master/configure#L86
>
> Jon
>
> On Thu, Dec 21, 2017, 00:16 pele_smk  wrote:
>
>> I'm following the instructions for building the bro Kafka plugin and I'm
>> getting an error that the --bro-dist option does not exist in ./configure
>>
>> The command:
>> ./configure --bro-dist=$BRO_SRC
>>
>> The instructions I'm following:
>> https://metron.apache.org/current-book/metron-sensors/
>> bro-plugin-kafka/index.html
>>
>> The --enable-sasl option exists in the configure and works fine
>>
>> ./configure --enable-sasl
>>
>>
>>
>> Am I missing something obvious?
>>
>> Thanks,
>> Daniel
>>
> --
>
> Jon
>
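
A quick pre-flight check for the situation above: older checkouts of the
plugin shipped a configure script without the --bro-dist flag, so grepping
for it tells you whether your copy is current. A stub configure stands in
for the real one here; point the grep at your plugin checkout instead:

```shell
# Hypothetical sanity check before building the bro kafka plugin: confirm
# the configure script actually advertises --bro-dist.
cat > /tmp/configure <<'EOF'
#!/bin/sh
# Usage: ./configure --bro-dist=DIR [--enable-sasl]
EOF
if grep -q 'bro-dist' /tmp/configure; then
  echo "configure supports --bro-dist"
else
  echo "no --bro-dist flag: update the plugin, e.g. via bro-pkg"
fi
# → configure supports --bro-dist
```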


Re: machine learning libraries supported

2017-12-07 Thread Otto Fowler
Simon,
What do you think a good example of python, spark and MaaS would look like?


On December 7, 2017 at 07:56:00, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

I would recommend starting out with something like Spark, but the short
answer is that anything that will run inside a yarn container, so the
answer is most ML libraries.

Using Spark to train models on the historical store is a good bet, and then
using the trained models with model as a service.

See
https://github.com/apache/metron/tree/master/metron-analytics/metron-maas-service
for
information on models and some sample boilerplate for deploying your own
python based models.

You could, as some have suggested, use Spark Streaming, but to be honest,
the Spark ML models are not well suited to streaming use cases, and you
would very much be breaking the Metron flow rather than benefitting from
elements like MaaS (you’d basically be building a 100% custom side project,
which would be fine, but you’re missing a lot of the benefits of Metron
that way). If you do go down that route, I would strongly recommend having
the output of your streaming jobs feed back into a Metron sensor. You’re
much better off, though, training in batch and scoring/inferring via the
Model as a Service approach.

Simon


On 6 Dec 2017, at 07:45, moshe jarusalem  wrote:

Hi All,
Would you please suggest some documentation about which machine learning
libraries can be used in the Metron architecture, and how? Any examples
appreciated.

regards,


Re: machine learning libraries supported

2017-12-07 Thread Otto Fowler
Right now, you can look at MaaS, for plugging in machine learning services.

If you want to use Spark, and you have it on your cluster, you could write
your own Spark drivers and have them pull from the Kafka topics (indexing,
for example) and run your Spark jobs there.


On December 7, 2017 at 03:37:00, moshe jarusalem (tuu...@gmail.com) wrote:

Hi all,

ping

On Wed, Dec 6, 2017 at 1:23 PM, Gaurav Bapat  wrote:

> Hi Moshe,
>
> I also want to know about ML libraries on Metron. I think Spark might
> help, but I don't know how I will set up Metron
>
> Be in touch!!
>
> Thank You,
> Gaurav
>
> On 6 December 2017 at 13:15, moshe jarusalem  wrote:
>
>> Hi All,
>> Would you please suggest some documentation about which machine learning
>> libraries can be used in the Metron architecture, and how? Any examples
>> appreciated.
>>
>> regards,
>>
>>
>


Re: Basic analysis

2017-12-06 Thread Otto Fowler
The issue is the requirement for people on the user list to go to the
source.


On December 6, 2017 at 09:16:39, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

No problem, I’ll grant you it’s not in the most intuitive part of the
source tree to go digging in, but you can also get to the zeppelin bits via
the actions button on the Metron config section (Install Notebooks)

If anyone has any good ideas (or code!) for sample zeppelin notebooks that
would be useful, you can add them to a specific instance of the platform
via the config/zeppelin/metron location and run the action again I believe,
and this would be a great place for more security people to contribute
sample run books for example. There are also efforts by commercial support
providers I believe to add more samples of both dashboards and use cases.

Simon

On 6 Dec 2017, at 14:12, Otto Fowler <ottobackwa...@gmail.com> wrote:

Thanks Simon


On December 6, 2017 at 09:11:50, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

In product… Install Zeppelin Notebooks, and the samples including notebooks
at
https://github.com/apache/metron/tree/master/metron-platform/metron-indexing/src/main/config/zeppelin/metron

as of course there are similar Kibana dashboards included, which are
examples of custom visualisation of metron data, there is also the run book
for visualising squid data in kibana on the docs wiki
https://cwiki.apache.org/confluence/display/METRON/Enhancing+Metron+Dashboard

Should at least get us started.

Simon

On 6 Dec 2017, at 14:00, Otto Fowler <ottobackwa...@gmail.com> wrote:

Links?


On December 6, 2017 at 08:18:23, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

We do already have a number of example of exactly this, but sure if someone
feels like adding to those that would be great.

Simon

On 6 Dec 2017, at 13:14, Otto Fowler <ottobackwa...@gmail.com> wrote:

Maybe a Jira logged for an ‘example’ notebook for this would be appropriate
as well?


On December 6, 2017 at 07:06:30, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

Yes. Consider a zeppelin notebook, or kibana dashboard for this.

If you want to use these values for detection, consider building a profile
based on the stats objects (see the profiler section of the documentation
under analytics).

Simon

> On 6 Dec 2017, at 07:42, Syed Hammad Tahir <mscs16...@itu.edu.pk> wrote:
>
> Hi,
>
> Can I set up a custom visualization to show, let's say, the peak network
usage traffic in a certain time?
>
> Regards.
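
For the profile-based route Simon mentions, a minimal profiler definition
for tracking per-host traffic might look like the following sketch. The
profile name and the bytes_in field are illustrative assumptions, not from
this thread; check the profiler documentation for the exact schema:

```json
{
  "profiles": [
    {
      "profile": "bytes_per_host",
      "foreach": "ip_src_addr",
      "onlyif":  "exists(bytes_in)",
      "init":    { "s": "STATS_INIT()" },
      "update":  { "s": "STATS_ADD(s, bytes_in)" },
      "result":  "s"
    }
  ]
}
```

The resulting stats object can then be queried (e.g. for max or percentile
values per time window) from Stellar, a Zeppelin notebook, or a triage rule.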

