Hi,
I want to set up nxlog so that logs are guaranteed to be written to
(rotating) files and sent to an HTTP destination when possible.
Files are backup storage. HTTP is an intermediate step before indexing.
Files don't need buffering, but indexing might.
What would an nxlog conf file that accomplishes the above look like?
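Something along these lines might do it (untested sketch; the paths, URL,
sizes, and rotation count are all placeholders):

<Extension fileop>
    Module      xm_fileop
</Extension>

<Input in>
    Module      im_file
    File        "/var/log/app/input.log"
</Input>

# Backup storage: plain files, size-rotated, written unconditionally
<Output out_file>
    Module      om_file
    File        "/var/log/backup/app.log"
    <Schedule>
        Every   1 hour
        Exec    if out_file->file_size() >= 10M \
                { \
                    file_cycle("/var/log/backup/app.log", 7); \
                    out_file->reopen(); \
                }
    </Schedule>
</Output>

# Disk-backed buffer so the HTTP leg can lag or fail without losing events
<Processor buffer>
    Module      pm_buffer
    Type        Disk
    # size in KB
    MaxSize     102400
</Processor>

<Output out_http>
    Module      om_http
    URL         http://indexer.example.com:8080/ingest
</Output>

<Route backup>
    Path        in => out_file
</Route>

<Route index>
    Path        in => buffer => out_http
</Route>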
--
> You can do the parsing using a regexp:
>
> Exec if $raw_event =~ /^([^|]+)\|([^|]+)\|(.*)$/ { \
>     $Hostname = $1; \
>     $EventTime = parsedate($2); \
>     $Else = $3; \
> }
>
> The above is untested, it's just to give you the idea.
>
> Regards,
> Botond
--
TL;DR: How to ignore columns while parsing CSV?
I'm trying to watch a file in which each line is pipe-separated values. It
looks like this:
RD000D3AC015BF|2015/10/19 15:01:58|Server binding to 0.0.0.0:10001
RD000D3AC015BF|2015/10/16 12:44:22|10008|IP084|ERR|shutdown|shutdown|IP /127.0.0.1|PRT 5210
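If you'd rather not hand-roll the regexp quoted above, xm_csv can parse with
a pipe delimiter: declare names for the leading columns you care about and
simply never use the rest. Untested sketch with made-up field names; I'm not
sure how xm_csv handles the longer records with extra columns, so test those:

<Extension pipe>
    Module      xm_csv
    Fields      $Hostname, $EventTime, $Message
    Delimiter   |
</Extension>

<Input in>
    Module      im_file
    File        "/path/to/app.log"
    Exec        pipe->parse_csv(); \
                if defined($EventTime) $EventTime = parsedate($EventTime);
</Input>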
--
Say my output process, set up with om_exec, exits because of a bug or some
other reason. Is there a restart mechanism? im_exec already has one.
Ideally, it should also skip records that make the process exit after a
certain number of retries, IMHO.
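For reference, the im_exec directive the poster means is Restart, which
relaunches the child process when it exits (the tail invocation here is just
for illustration):

<Input in>
    Module      im_exec
    Command     /usr/bin/tail
    Arg         -f
    Arg         /var/log/app.log
    # relaunch the process if it exits
    Restart     TRUE
</Input>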
--
I want to output a whitelisted set of fields and thought of using the CSV
extension to achieve that. Is that possible?
I've tried a ton of to_csv() / parse_csv() combinations, but to_json()
always includes the original fields. How does to_json() "know" about fields
that were not picked while building the CSV? $raw_event =
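As far as I understand it, to_json() serializes every field currently in the
event record, and parse_csv() only adds fields without removing the
originals, so they all come along for the ride. One way to whitelist is to
delete the unwanted fields before serializing; a sketch, with $Else standing
in for whatever should be dropped:

<Extension json>
    Module      xm_json
</Extension>

<Output out>
    Module      om_http
    URL         http://indexer.example.com:8080/ingest
    # drop the unwanted fields, then serialize what is left
    Exec        delete($Else); to_json();
</Output>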
--
I'm already using nxlog as a (better) alternative to Windows Azure
diagnostics logging. I wish it had support for writing to Azure's Table
storage. Any chance someone here has written an output module for it?
I'm considering writing an output module myself, using the REST API Azure
provides. Biggest challe
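Until a native module exists, one stopgap might be om_exec handing events to
an external uploader; the script below is hypothetical and would have to
implement the SharedKey request signing that the Table service REST API
requires:

<Output azure>
    Module      om_exec
    # hypothetical helper: reads one event per line on stdin and POSTs
    # batches to the Azure Table service REST API
    Command     /usr/local/bin/azure-table-uploader
</Output>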