Cos,

I think Apache BigTop is using servers provided by Amazon. Can you make a
suggestion on how the Ignite community can get a few servers from Amazon for
benchmarking as well?

D.

On Fri, Sep 15, 2017 at 5:57 AM, Anton Vinogradov <a...@apache.org> wrote:

> Guys,
>
> I fully agree that configured servers at Amazon are the best choice.
>
> But when you need to check that your changes have no performance drop, you
> can use your own PC or PCs to check that.
> All you need is to benchmark the already released version against the version
> with your fix in the same environment.
>
> So, it seems we should have a couple of configuration recommendations:
> - one reasonable for a standalone PC
> - one reasonable for a cluster
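
For illustration only, a minimal single-node put-throughput sketch in that spirit (this is not the project's Yardstick suite; the cache name, payload size, and run durations are arbitrary placeholders). The idea is to run the same class twice on the same otherwise idle machine, once against the released Ignite artifact and once against a locally built version with the fix, and compare the two numbers:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

/** Rough put-throughput probe; run the same class against both Ignite builds. */
public class PutThroughputCheck {
    private static final long WARMUP_MS = 30_000;
    private static final long MEASURE_MS = 60_000;

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, byte[]> cache = ignite.getOrCreateCache("bench");

            byte[] payload = new byte[1024];

            // Warm up the JVM and the cache before measuring.
            putFor(cache, payload, WARMUP_MS);

            long ops = putFor(cache, payload, MEASURE_MS);

            System.out.println("Puts/sec: " + (ops * 1000 / MEASURE_MS));
        }
    }

    /** Runs puts for the given number of milliseconds and returns the operation count. */
    private static long putFor(IgniteCache<Integer, byte[]> cache, byte[] payload, long millis) {
        long deadline = System.currentTimeMillis() + millis;
        long ops = 0;

        while (System.currentTimeMillis() < deadline) {
            cache.put((int)(ops % 100_000), payload);

            ops++;
        }

        return ops;
    }
}

The absolute number is meaningless on its own; only the delta between the two runs on the same hardware matters.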
>
> On Fri, Sep 15, 2017 at 12:20 PM, Nikolay Izhikov <nizhikov....@gmail.com>
> wrote:
>
> > Hello, Dmitriy.
> >
> > I think experienced members of the community have specific numbers for
> > benchmarking.
> >
> > Can we start with a reference hardware configuration: number of CPUs, RAM,
> > HDD (SSD?) configuration, network configs, etc.?
> >
> > Can someone share that kind of knowledge: which hardware is best for
> > Ignite benchmarking?
> >
> > I found some numbers here [1]. Is that setup well suited for Apache Ignite?
> >
> > [1] https://www.gridgain.com/resources/benchmarks/gridgain-vs-hazelcast-benchmarks
> >
> > On 14.09.2017 23:27, Dmitriy Setrakyan wrote:
> >
> >> Alexey, I completely agree. However, for the benchmarks to be useful, they
> >> need to be run on the same hardware all the time. Apache Ignite does not
> >> have servers sitting around, available to run the benchmarks.
> >>
> >> It would be nice to see how other projects address this. Can Amazon donate
> >> servers for the Apache projects?
> >>
> >> D.
> >>
> >> On Thu, Sep 14, 2017 at 6:25 AM, Aleksei Zaitsev <ign...@alexzaitzev.pro>
> >> wrote:
> >>
> >>> Hi, Igniters.
> >>>
> >>> Recently I’ve done some research into benchmarks for Ignite and noticed
> >>> that we don’t have any rules for running benchmarks and collecting
> >>> results from them, although sometimes we have tasks whose results need to
> >>> be measured. I propose to formalize such things as:
> >>>   * set of benchmarks,
> >>>   * parameters of launching them,
> >>>   * way of result collection and interpretation,
> >>>   * Ignite cluster configuration.
> >>>
> >>> I don’t think we need to run benchmarks before every merge into master,
> >>> but in some cases it should be mandatory to compare new results with
> >>> reference values to make sure the changes do not lead to performance
> >>> degradation.
> >>>
> >>> What do you think?
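
As one possible illustration of the last item in that list, a minimal sketch of a fixed cluster configuration expressed in Java (the host list, cache name, and backup count are placeholder assumptions, not recommended values):

import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

/** One fixed, documented configuration that every benchmark run would reuse. */
public class BenchmarkNodeConfig {
    public static Ignite startBenchmarkNode() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Static IP finder so the benchmark topology is always the same set of hosts.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("10.0.0.1:47500..47509", "10.0.0.2:47500..47509"));

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);
        cfg.setDiscoverySpi(discoSpi);

        // One well-known cache configuration shared by all benchmarks.
        CacheConfiguration<Integer, byte[]> cacheCfg = new CacheConfiguration<>("bench");
        cacheCfg.setCacheMode(CacheMode.PARTITIONED);
        cacheCfg.setBackups(1);
        cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
        cfg.setCacheConfiguration(cacheCfg);

        return Ignition.start(cfg);
    }
}

Whether such a reference configuration is kept in Java or in a Spring XML file, the point is that every benchmark run uses the same versioned configuration, so results stay comparable over time.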
> >>>
> >>>
> >>
>
