Yes. The easiest way I have found to do that is to set up a control: send the same data as two streams to two or more different destinations, then compare what each destination ends up with.
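For example, a duplicated-forwarding setup on each Ra could look roughly like the sketch below. This is minimal and untested: the receiver hostnames, port, spool directory and queue settings are placeholders, not recommendations, and would need tuning for your volume.

    # send every message to two independent receivers over TCP,
    # each action with its own disk-assisted queue so one slow or
    # unreachable receiver does not stall or drop the other stream
    global(workDirectory="/var/spool/rsyslog")

    action(type="omfwd" target="rr1.example.com" port="514" protocol="tcp"
           queue.type="LinkedList" queue.filename="fwd_rr1"
           queue.saveOnShutdown="on" action.resumeRetryCount="-1")

    action(type="omfwd" target="rr2.example.com" port="514" protocol="tcp"
           queue.type="LinkedList" queue.filename="fwd_rr2"
           queue.saveOnShutdown="on" action.resumeRetryCount="-1")

Comparing the two destinations line by line (or simply counting lines per host per day on each) then gives the empirical loss figure you are after.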
When rsyslog is handling a large message volume over UDP, the loss has always been noticeable in my experience.

On Fri, Feb 12, 2016 at 11:35 PM, singh.janmejay <[email protected]> wrote:
> Inviting ideas.
>
> Has anyone tried to quantify log-loss (number of lines lost per day
> per sender, etc.) for a log-store?
>
> Let us consider the following setup:
> - An environment has several application nodes. Each app node
>   hands over its logs to a local Rsyslog daemon (let us call it Ra,
>   Rsyslog-application).
> - The environment has one or more Rsyslog receiver nodes (let us call
>   them Rr, Rsyslog-receiver).
> - Rr(s) write received logs to a log-store.
>
> The problem statement is: quantify log-loss (defined as messages that
> are successfully handed over to Ra, but cannot be found in the
> log-store) in log-events lost per day per host.
>
> Log-events may be lost for any reason (in the pipe, or after being
> written to the log-store). It doesn't matter which of the intermediate
> systems lost logs, as long as the loss is bounded (by some empirical
> figure, say less than 0.1%).
>
> --
> Regards,
> Janmejay
> http://codehunk.wordpress.com
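To put a number on the part of the loss that happens inside rsyslog itself, loading impstats on both Ra and Rr helps. A rough sketch follows; the interval, severity and output path are arbitrary choices here:

    # have rsyslog periodically write its own counters (queue sizes,
    # discarded and failed messages) to a local file
    module(load="impstats"
           interval="60"
           severity="7"
           log.syslog="off"
           log.file="/var/log/rsyslog-stats.log")

The queue "discarded.*" and action "failed" counters show what each daemon dropped on its own; loss on the wire (especially over UDP) or inside the log-store still has to be measured end to end, e.g. with the duplicate-stream comparison described above.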

