Re: [vdsm] Profiling and benchmarking VDSM

2014-03-18 Thread Francesco Romani
- Original Message -
> From: "Saggi Mizrahi" 
> To: "Francesco Romani" 
> Cc: "vdsm-devel" , "ybronhei" 
> 
> Sent: Tuesday, March 18, 2014 12:11:14 PM
> Subject: Re: [vdsm] Profiling and benchmarking VDSM

> > > Ignore http://www.ovirt.org/Vdsm_Developers#Performance_and_scalability
> > 
> > Not sure I understood correctly. You mean I should drop my additions to the
> > Vdsm_Developers page?
> Don't drop it, just don't have it as a priority over actual work.
> I'd much rather have benchmarks and no WIKI than the other way around. :)

Agreed. I just reorganized the existing pages a bit, to have a starting point
and a place where I can document and point to progress, temporary scripts and
configs. From now on, I'll focus on the actual benchmarking :)

Bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Profiling and benchmarking VDSM

2014-03-18 Thread Saggi Mizrahi


- Original Message -
> From: "Francesco Romani" 
> To: "vdsm-devel" 
> Cc: "ybronhei" , "Saggi Mizrahi" 
> Sent: Tuesday, March 18, 2014 12:47:55 PM
> Subject: Re: [vdsm] Profiling and benchmarking VDSM
> 
> 
> - Original Message -
> > From: "Saggi Mizrahi" 
> > To: "Francesco Romani" 
> > Cc: "vdsm-devel" , "ybronhei"
> > 
> > Sent: Tuesday, March 18, 2014 10:18:16 AM
> > Subject: Re: [vdsm] Profiling and benchmarking VDSM
> > 
> > Thank you for taking the initiative.
> > Just reminding you that the test framework is owned
> > by infra, so don't forget to put Yaniv and me in the CC
> > for all future correspondence regarding this feature,
> > as I will be the one responsible for the final
> > approval.
> 
> Yes, of course I will.
> At the moment I'm using "unofficial"/out-of-tree decorators and support code,
> just because I've only just started the exploration and the work.
> In the meantime, we can and should discuss the best long-term/official
> approach to measuring performance and benchmarking things.
> 
> > Ignore http://www.ovirt.org/Vdsm_Developers#Performance_and_scalability
> 
> Not sure I understood correctly. You mean I should drop my additions to the
> Vdsm_Developers page?
Don't drop it, just don't have it as a priority over actual work.
I'd much rather have benchmarks and no WIKI than the other way around. :)
> 
> > Also, we don't want to do this per test, since it's meaningless for most
> > tests: they only run through the code once.
> > 
> > I started investigating how to solve this issue a while back, and this
> > is what I came up with.
> > 
> > What we need to do is create a decorator that wraps the test with cProfile.
> > We also want to create a generator that takes its configuration from nose.
> > 
> > def BenchmarkIter():
> >     start = time.time()
> >     i = 0
> >     while i < MIN_ITERATIONS or (time.time() - start) < MIN_TIME_RUNNING:
> >         yield i
> >         i += 1
> > 
> > So that writing a benchmark is just:
> > 
> > @benchmark([min_iter[, min_time_running]])
> > def testSomething(self):
> >     something()
> > 
> > That way we are sure we have a statistically significant sample for all
> > tests.
> 
> Agreed
> 
> > A nose plugin will need to be created that skips @benchmark tests if
> > benchmarks are not turned on and that can generate output for the Jenkins
> > performance plugin [1]. That way we can run them every night; the benchmarks
> > will be slow to run, since they will intentionally take a few seconds each
> > and try to hammer the CPU/disk, so people would probably not run the entire
> > suite themselves.
> > 
> > [1] https://wiki.jenkins-ci.org/display/JENKINS/Performance+Plugin
> 
> This looks very nice.
> 
> Thanks and bests,
> 
> --
> Francesco Romani
> RedHat Engineering Virtualization R & D
> Phone: 8261328
> IRC: fromani
> 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Profiling and benchmarking VDSM

2014-03-18 Thread Francesco Romani

- Original Message -
> From: "Saggi Mizrahi" 
> To: "Francesco Romani" 
> Cc: "vdsm-devel" , "ybronhei" 
> 
> Sent: Tuesday, March 18, 2014 10:18:16 AM
> Subject: Re: [vdsm] Profiling and benchmarking VDSM
> 
> Thank you for taking the initiative.
> Just reminding you that the test framework is owned
> by infra, so don't forget to put Yaniv and me in the CC
> for all future correspondence regarding this feature,
> as I will be the one responsible for the final
> approval.

Yes, of course I will.
At the moment I'm using "unofficial"/out-of-tree decorators and support code,
just because I've only just started the exploration and the work.
In the meantime, we can and should discuss the best long-term/official
approach to measuring performance and benchmarking things.

> Ignore http://www.ovirt.org/Vdsm_Developers#Performance_and_scalability

Not sure I understood correctly. You mean I should drop my additions to the
Vdsm_Developers page?

> Also, we don't want to do this per test, since it's meaningless for most
> tests: they only run through the code once.
> 
> I started investigating how to solve this issue a while back, and this
> is what I came up with.
> 
> What we need to do is create a decorator that wraps the test with cProfile.
> We also want to create a generator that takes its configuration from nose.
> 
> def BenchmarkIter():
>     start = time.time()
>     i = 0
>     while i < MIN_ITERATIONS or (time.time() - start) < MIN_TIME_RUNNING:
>         yield i
>         i += 1
> 
> So that writing a benchmark is just:
> 
> @benchmark([min_iter[, min_time_running]])
> def testSomething(self):
>     something()
> 
> That way we are sure we have a statistically significant sample for all
> tests.

Agreed

> A nose plugin will need to be created that skips @benchmark tests if
> benchmarks are not turned on and that can generate output for the Jenkins
> performance plugin [1]. That way we can run them every night; the benchmarks
> will be slow to run, since they will intentionally take a few seconds each
> and try to hammer the CPU/disk, so people would probably not run the entire
> suite themselves.
> 
> [1] https://wiki.jenkins-ci.org/display/JENKINS/Performance+Plugin

This looks very nice.

Thanks and bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Profiling and benchmarking VDSM

2014-03-18 Thread Saggi Mizrahi
Thank you for taking the initiative.
Just reminding you that the test framework is owned
by infra, so don't forget to put Yaniv and me in the CC
for all future correspondence regarding this feature,
as I will be the one responsible for the final
approval.

Ignore http://www.ovirt.org/Vdsm_Developers#Performance_and_scalability

Also, we don't want to do this per test, since it's meaningless for most
tests: they only run through the code once.

I started investigating how to solve this issue a while back, and this
is what I came up with.

What we need to do is create a decorator that wraps the test with cProfile.
We also want to create a generator that takes its configuration from nose.

def BenchmarkIter():
    start = time.time()
    i = 0
    while i < MIN_ITERATIONS or (time.time() - start) < MIN_TIME_RUNNING:
        yield i
        i += 1

So that writing a benchmark is just:

@benchmark([min_iter[, min_time_running]])
def testSomething(self):
    something()

That way we are sure we have a statistically significant sample for all tests.
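
To make this concrete, here is a minimal, untested sketch of how the
decorator could tie BenchmarkIter and cProfile together. The defaults,
the .prof file naming and the VDSM_RUN_BENCHMARKS switch are assumptions
on my side, not settled details:

import cProfile
import functools
import os
import time

from nose import SkipTest

MIN_ITERATIONS = 100    # assumed defaults; the real values would
MIN_TIME_RUNNING = 5.0  # come from the nose configuration

def BenchmarkIter(min_iter=MIN_ITERATIONS, min_time=MIN_TIME_RUNNING):
    # yield until we have both enough iterations and enough wall time
    start = time.time()
    i = 0
    while i < min_iter or (time.time() - start) < min_time:
        yield i
        i += 1

def benchmark(min_iter=MIN_ITERATIONS, min_time_running=MIN_TIME_RUNNING):
    def decorator(test):
        @functools.wraps(test)
        def wrapper(self):
            if not os.environ.get('VDSM_RUN_BENCHMARKS'):
                raise SkipTest('benchmarks are not enabled')
            profiler = cProfile.Profile()
            profiler.enable()
            for _ in BenchmarkIter(min_iter, min_time_running):
                test(self)
            profiler.disable()
            # one stats file per test; a plugin can aggregate these later
            profiler.dump_stats('%s.prof' % test.__name__)
        return wrapper
    return decorator

With that in place, a benchmark reads exactly like the testSomething
example above.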

A nose plugin will need to be created that skips @benchmark tests if
benchmarks are not turned on and that can generate output for the Jenkins
performance plugin [1]. That way we can run them every night; the benchmarks
will be slow to run, since they will intentionally take a few seconds each
and try to hammer the CPU/disk, so people would probably not run the entire
suite themselves.

[1] https://wiki.jenkins-ci.org/display/JENKINS/Performance+Plugin
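
For reference, a bare-bones version of such a plugin could look like the
sketch below. The plugin name and the environment variable are guesses at
this stage, and the real thing would also need a hook to convert the
collected .prof files into the output format the performance plugin expects:

import os

from nose.plugins import Plugin

class BenchmarkPlugin(Plugin):
    """Enable benchmark tests when nose runs with --with-benchmark."""
    name = 'benchmark'  # Plugin.options() derives --with-benchmark from this

    def configure(self, options, conf):
        Plugin.configure(self, options, conf)
        if self.enabled:
            # the @benchmark decorator checks this variable and raises
            # SkipTest when it is unset (see the sketch above)
            os.environ['VDSM_RUN_BENCHMARKS'] = '1'

The plugin would be registered through the usual nose.plugins.0.10
setuptools entry point.
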
- Original Message -
> From: "ybronhei" 
> To: "Francesco Romani" , "vdsm-devel" 
> 
> Sent: Monday, March 17, 2014 1:57:34 PM
> Subject: Re: [vdsm] Profiling and benchmarking VDSM
> 
> On 03/17/2014 01:03 PM, Francesco Romani wrote:
> > - Original Message -
> >> From: "Francesco Romani" 
> >> To: "Antoni Segura Puimedon" 
> >> Cc: "vdsm-devel" 
> >> Sent: Monday, March 17, 2014 10:32:40 AM
> >> Subject: Re: [vdsm] Profiling and benchmarking VDSM
> >
> >> next immediate steps will be
> >>
> >> - have a summary page to collect all performance/profiling/benchmarking
> >> pages
> >
> > Links added at the bottom of the VDSM developer page:
> > http://www.ovirt.org/Vdsm_Developers
> > see item #15
> http://www.ovirt.org/Vdsm_Developers#Performance_and_scalability
> 
> >
> >> - document and detail the scenarios the way you described (which I like)
> >> the benchmark templates will be attached/documented on this page
> >
> > Started to sketch our "Monday Morning" test scenario here
> > http://www.ovirt.org/VDSM_benchmarks
> >
> > (yes, looks quite ugly, no attached template yet. Will add).
> >
> > I'll wait a few hours to let things cool down a bit and see if something
> > is missing, then start with the benchmarks using the new, proper
> > definitions
> > and a more structured approach like the one documented on the wiki.
> >
> > http://gerrit.ovirt.org/#/c/25678/ is the first in queue.
> >
> Can we add the profiling decorator to each nose test function and share
> a link to the results with each push to Gerrit?
> The issue is that it collects profiling for only one function in a file;
> we need to somehow integrate all the outputs.
> 
> The nose tests might be a good way to check the profiling status; they
> should cover most of the flows (especially if we enforce adding
> unit tests for each new change).
> 
> --
> Yaniv Bronhaim.
> ___
> vdsm-devel mailing list
> vdsm-devel@lists.fedorahosted.org
> https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
> 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Profiling and benchmarking VDSM

2014-03-17 Thread ybronhei

On 03/17/2014 01:03 PM, Francesco Romani wrote:
> - Original Message -
> > From: "Francesco Romani" 
> > To: "Antoni Segura Puimedon" 
> > Cc: "vdsm-devel" 
> > Sent: Monday, March 17, 2014 10:32:40 AM
> > Subject: Re: [vdsm] Profiling and benchmarking VDSM
>
> > next immediate steps will be
> >
> > - have a summary page to collect all performance/profiling/benchmarking pages
>
> Links added at the bottom of the VDSM developer page:
> http://www.ovirt.org/Vdsm_Developers
> see item #15
http://www.ovirt.org/Vdsm_Developers#Performance_and_scalability

> > - document and detail the scenarios the way you described (which I like)
> > the benchmark templates will be attached/documented on this page
>
> Started to sketch our "Monday Morning" test scenario here
> http://www.ovirt.org/VDSM_benchmarks
>
> (yes, looks quite ugly, no attached template yet. Will add).
>
> I'll wait a few hours to let things cool down a bit and see if something
> is missing, then start with the benchmarks using the new, proper
> definitions and a more structured approach like the one documented on
> the wiki.
>
> http://gerrit.ovirt.org/#/c/25678/ is the first in queue.

Can we add the profiling decorator to each nose test function and share
a link to the results with each push to Gerrit?
The issue is that it collects profiling for only one function in a file;
we need to somehow integrate all the outputs.
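Maybe pstats can help here: it can merge several dump files into one
report, something like the lines below (assuming the per-test dumps are
named *.prof):

import glob
import pstats

profiles = glob.glob('*.prof')  # one dump per profiled test function
merged = pstats.Stats(profiles[0])  # assumes at least one dump exists
for prof in profiles[1:]:
    merged.add(prof)
merged.sort_stats('cumulative')
merged.print_stats(20)  # top 20 entries across the whole run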


The nose tests might be a good way to check the profiling status; they
should cover most of the flows (especially if we enforce adding
unit tests for each new change).


--
Yaniv Bronhaim.
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Profiling and benchmarking VDSM

2014-03-17 Thread Francesco Romani
- Original Message -
> From: "Francesco Romani" 
> To: "Antoni Segura Puimedon" 
> Cc: "vdsm-devel" 
> Sent: Monday, March 17, 2014 10:32:40 AM
> Subject: Re: [vdsm] Profiling and benchmarking VDSM

> next immediate steps will be
> 
> - have a summary page to collect all performance/profiling/benchmarking pages

Links added at the bottom of the VDSM developer page:
http://www.ovirt.org/Vdsm_Developers
see item #15

> - document and detail the scenarios the way you described (which I like)
> the benchmark templates will be attached/documented on this page

Started to sketch our "Monday Morning" test scenario here
http://www.ovirt.org/VDSM_benchmarks

(yes, looks quite ugly, no attached template yet. Will add).

I'll wait a few hours to let things cool down a bit and see if something
is missing, then start with the benchmarks using the new, proper definitions
and a more structured approach like the one documented on the wiki.

http://gerrit.ovirt.org/#/c/25678/ is the first in queue.

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Profiling and benchmarking VDSM

2014-03-17 Thread Francesco Romani
- Original Message -
> From: "Antoni Segura Puimedon" 
> To: "Francesco Romani" 
> Cc: "vdsm-devel" 
> Sent: Monday, March 17, 2014 10:13:28 AM
> Subject: Re: [vdsm] Profiling and benchmarking VDSM

> > At the moment, the plan is:
> > - define and share (wiki page?) benchmarking/profiling scenarios,
> > aiming first to reproduce the "Monday Morning" effect (mass startup of many
> > VMs), possibly
> > the sampling threading battle and after that anything else which may be
> > useful.
> 
> Great, I'd recommend the wiki page have something like:
> 
> virt scenarios:
> - Monday morning effect: *Here goes the description of it*
> networking scenarios:
> - Massive network configuring: Adding 200+ networks with a single command.
> - Massive network removal: Deleting 200+ networks with a single command.
> storage scenarios:
> - X: Y

Will do.
 
> > 
> > - (maybe?) provide a benchmarking results template, something like a
> > spreadsheet, in order
> > to make the results easily processable and shareable (e.g. 'A' does the
> > tests, 'B' analyzes
> > the results and so on)
> 
> I'd propose a profiling decorator that outputs into a specific format.

I agree :)
I am starting a 'tools' page here http://www.ovirt.org/Profiling_Vdsm

next immediate steps will be

- have a summary page to collect all performance/profiling/benchmarking pages
- document and detail the scenarios the way you described (which I like)
the benchmark templates will be attached/documented on this page

> > - find new bottlenecks
> 
> - help bisecting performance regressions.

That's a good point.

I think we can achieve this with small additions to our current rules since,
for example, patches to be merged are already required to be self-contained,
to not break the tests, and so on.

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Profiling and benchmarking VDSM

2014-03-17 Thread Antoni Segura Puimedon


- Original Message -
> From: "Francesco Romani" 
> To: "vdsm-devel" 
> Sent: Monday, March 17, 2014 8:37:10 AM
> Subject: [vdsm] Profiling and benchmarking VDSM
> 
> Hello everyone,
> 
> In the coming weeks, starting today, I'm beginning an effort to profile and
> benchmark VDSM in order to make it scale better and to address some
> performance issues that are surfacing.
> 
> On this mail (thread) I'd like to share and discuss my plan.
> 
> At the moment, the plan is:
> - define and share (wiki page?) benchmarking/profiling scenarios,
> aiming first to reproduce the "Monday Morning" effect (mass startup of
> many VMs), possibly the sampling threads battle, and after that anything
> else that may be useful.

Great, I'd recommend the wiki page have something like:

virt scenarios:
- Monday morning effect: *Here goes the description of it*
networking scenarios:
- Massive network configuring: Adding 200+ networks with a single command.
- Massive network removal: Deleting 200+ networks with a single command.
storage scenarios:
- X: Y

> 
> - (maybe?) provide a benchmarking results template, something like a
> spreadsheet, in order
> to make the results easily processable and shareable (e.g. 'A' does the
> tests, 'B' analyzes
> the results and so on)

I'd propose a profiling decorator that outputs into a specific format.
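
Something along these lines could be a starting point; it's only a sketch,
and the report file name and the CSV columns are placeholders rather than
a format proposal:

import cProfile
import csv
import functools
import pstats
import time

def profiled(report='profile_results.csv'):
    # append one row per run, so results stay easy to diff and share
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            profiler = cProfile.Profile()
            start = time.time()
            try:
                return profiler.runcall(func, *args, **kwargs)
            finally:
                elapsed = time.time() - start
                stats = pstats.Stats(profiler)
                with open(report, 'a') as rep:
                    csv.writer(rep).writerow(
                        [func.__name__, '%.6f' % elapsed, stats.total_calls])
        return wrapper
    return decorator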

> 
> - create minimal support infrastructure (scripts) to run the benchmarks
> defined on the bullet point above, gather the profile data and present it
> 
> - measure the impact of performance improvements, like
> http://gerrit.ovirt.org/#/c/25678/
> 
> - find new bottlenecks

- help bisecting performance regressions.

> - share the results
> 
> The goal is to have a set of tools which allow us to find possible
> bottlenecks and to easily
> reproduce the results.
> 
> For the long term, my not-so-secret wish is to have something like
> http://speed.pypy.org
> (http://bench.ovirt.org ? :) ) which is really nice and, AFAIK, fully
> automated :)
> 
> Any suggestion or comment is most welcome.
> I'll update this thread with further information in the coming days.
> 
> Bests,
> 
> --
> Francesco Romani
> RedHat Engineering Virtualization R & D
> Phone: 8261328
> IRC: fromani
> ___
> vdsm-devel mailing list
> vdsm-devel@lists.fedorahosted.org
> https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
> 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


[vdsm] Profiling and benchmarking VDSM

2014-03-17 Thread Francesco Romani
Hello everyone,

In the coming weeks, starting today, I'm beginning an effort to profile and
benchmark VDSM in order to make it scale better and to address some
performance issues that are surfacing.

On this mail (thread) I'd like to share and discuss my plan.

At the moment, the plan is:
- define and share (wiki page?) benchmarking/profiling scenarios,
aiming first to reproduce the "Monday Morning" effect (mass startup of
many VMs), possibly the sampling threads battle, and after that anything
else that may be useful.

- (maybe?) provide a benchmarking results template, something like a 
spreadsheet, in order
to make the results easily processable and shareable (e.g. 'A' does the tests, 
'B' analyzes
the results and so on)

- create minimal support infrastructure (scripts) to run the benchmarks
defined on the bullet point above, gather the profile data and present it

- measure the impact of performance improvements, like 
http://gerrit.ovirt.org/#/c/25678/

- find new bottlenecks

- share the results

The goal is to have a set of tools which allow us to find possible bottlenecks 
and to easily
reproduce the results.

For the long term, my not-so-secret wish is to have something like
http://speed.pypy.org
(http://bench.ovirt.org ? :) ) which is really nice and, AFAIK, fully automated 
:)

Any suggestion or comment is most welcome.
I'll update this thread with further information in the coming days.

Bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel