I think we can come up with an initial version with little effort.
The simplest scenario I can think of is running a Flume instance (with a
SeqGen source and a Null sink) for one minute and then reporting the average
events per second.
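
A minimal agent config for that scenario might look like the sketch below
(untested; the agent/component names a1/r1/c1/k1, the file name, and the
channel sizes are just placeholders to be tuned):

# seqgen-bench.conf (hypothetical name): SeqGen source -> memory channel -> Null sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Sequence generator source: emits an incrementing counter as fast as it can
a1.sources.r1.type = seq
a1.sources.r1.channels = c1

# Memory channel, sized so it is unlikely to become the bottleneck
a1.channels.c1.type = memory
a1.channels.c1.capacity = 100000
a1.channels.c1.transactionCapacity = 10000

# Null sink: discards every event it takes from the channel
a1.sinks.k1.type = null
a1.sinks.k1.channel = c1

The benchmark driver could then start the agent with JSON monitoring enabled,
for example

bin/flume-ng agent -n a1 -c conf -f seqgen-bench.conf \
  -Dflume.monitoring.type=http -Dflume.monitoring.port=34545

let it run for 60 seconds, read the relevant counter from
http://localhost:34545/metrics (e.g. the channel's EventTakeSuccessCount under
CHANNEL.c1), and divide by the elapsed time to get the average events per
second.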

On Thu, Oct 13, 2016 at 6:43 PM, Attila Simon <s...@cloudera.com> wrote:

> Good idea! What would be required to set up something similar for Flume?
> i.e. the initial time cost of setting up the infrastructure and the
> periodic time cost of adding new use cases.
>
> Cheers,
> Attila
>
>
>
> On Thu, Oct 13, 2016 at 5:19 PM, Lior Zeno <liorz...@gmail.com> wrote:
>
> > Hi All,
> >
> > Monitoring Flume's performance over time is important for any
> > production-level deployment. Benchmarking Flume on a nightly basis has
> > the following advantages:
> >
> > * Better understanding of Flume's bottlenecks.
> > * Allowing users to compare the performance of different solutions, such
> > as Logstash and Fluentd.
> > * Better understanding of the influence of recent commits on performance.
> >
> > Logstash already conducts various performance tests; more details are
> > available at this link:
> > http://logstash-benchmarks.elastic.co/
> >
> > I propose adding a few micro-benchmarks that track Flume's TPS by date
> > (assuming, of course, the ideal case where neither the input nor the
> > output bottlenecks the system), e.g. using the SeqGen source.
> >
> > Thoughts?
> >
> > Thanks
> >
>
