[The Java Posse] Re: Problems with continuous performance testing, solutions anyone?

2009-09-15 Thread Marcelo Fukushima

Where I work, we use an isolated (non-virtual) Hudson instance for performance
tests. It's an old machine, but we're only interested in relative times
(each test is run around six times). You might want to virtualize the OSes
and use Hudson locks.
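
A minimal sketch of the relative-times idea, with hypothetical workloads
standing in for the real tests: time the baseline and the current build on
the same box and report only the ratio, so the machine's absolute speed
drops out of the comparison.

    // Sketch: benchmark a baseline and the current build on the same machine
    // and report only the ratio, so absolute machine speed cancels out.
    public final class RelativeTimer {

        /** Average wall-clock time of a task over several runs, in nanoseconds. */
        static long averageNanos(Runnable task, int runs) {
            long total = 0;
            for (int i = 0; i < runs; i++) {
                long start = System.nanoTime();
                task.run();
                total += System.nanoTime() - start;
            }
            return total / runs;
        }

        public static void main(String[] args) {
            Runnable baseline = new Runnable() { // hypothetical reference version
                public void run() {
                    double x = 0;
                    for (int i = 0; i < 2000000; i++) x += Math.sqrt(i);
                    if (x < 0) System.out.println(x); // defeat dead-code elimination
                }
            };
            Runnable current = new Runnable() {  // hypothetical latest version
                public void run() {
                    double x = 0;
                    for (int i = 0; i < 1000000; i++) x += Math.sqrt(i);
                    if (x < 0) System.out.println(x);
                }
            };

            // A ratio above 1.0 means the current build is slower than the baseline.
            double ratio = (double) averageNanos(current, 6) / averageNanos(baseline, 6);
            System.out.printf("relative time (current/baseline): %.2f%n", ratio);
        }
    }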

On 9/16/09, Patrick wrote:
>
> You might take a look at Japex, which was developed at Sun for
> benchmarking some of the XML libraries. It offers a harness in which
> you can run tests, gives you a framework for handling initialization
> and warmup issues, and can compare runs against each other and against
> a baseline. I don't know of a good solution for the concurrent-tests
> problem, though.
>
>
> Patrick


-- 
http://mapsdev.blogspot.com/
Marcelo Takeshi Fukushima




[The Java Posse] Re: Problems with continuous performance testing, solutions anyone?

2009-09-15 Thread Patrick

You might take a look at Japex, which was developed at Sun for
benchmarking some of the XML libraries. It offers a harness in which
you can run tests, gives you a framework for handling initialization
and warmup issues, and can compare runs against each other and against
a baseline. I don't know of a good solution for the concurrent-tests
problem, though.
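
For a flavor of the harness, here is a rough sketch of a Japex driver from
memory; the class and method names (JapexDriverBase, TestCase, the
prepare/run lifecycle) and the "inputFile" parameter should be verified
against the Japex release you actually use.

    import java.io.ByteArrayInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;

    import com.sun.japex.JapexDriverBase;
    import com.sun.japex.TestCase;

    // Rough sketch of a Japex driver (API names from memory -- verify against
    // your Japex release). Japex handles warmup iterations and timing; the
    // body of run() is the code being measured.
    public class XmlParseDriver extends JapexDriverBase {

        private DocumentBuilder builder;
        private byte[] document;

        public void prepare(TestCase testCase) {
            try {
                builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
                // "inputFile" would be a parameter in the test-suite configuration.
                File file = new File(testCase.getParam("inputFile"));
                document = new byte[(int) file.length()];
                FileInputStream in = new FileInputStream(file);
                in.read(document); // sketch: assumes the file is read in one call
                in.close();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        public void run(TestCase testCase) {
            try {
                // Parse from memory so only parsing, not disk I/O, is timed.
                builder.parse(new ByteArrayInputStream(document));
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }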


Patrick



[The Java Posse] Re: Problems with continuous performance testing, solutions anyone?

2009-09-15 Thread Joshua Marinacci

Yep. Pretty much you need to create a Hudson slave just for running
your tests. Performance tests should always be run on an isolated server.
Fortunately, Hudson makes this pretty easy.

- Josh

On Sep 15, 2009, at 10:35 AM, Fabrizio Giudici wrote:

> [...]





[The Java Posse] Re: Problems with continuous performance testing, solutions anyone?

2009-09-15 Thread Fabrizio Giudici

Robert Casto wrote:
> That depends, of course, on what you are trying to do.
>
> Joshua wants to measure average system performance while things are
> humming along.
>
> If you want to know how long it takes to start up, then you keep the
> data. I tend to separate the two in reports I give to companies. Very
> different work is done to speed up one or the other.
>
Exactly. If you're profiling a server, the performance at startup is 
completely useless. For a client (desktop or applet) it depends, but in 
most cases I think it is still not relevant.

Anyway, this is not my main problem; I always throw away the startup 
measurements. The idea of averaging a large number of results could indeed 
alleviate my measurement problem, but it would make things more complex in 
other respects. Let's say a typical test run takes 1h (not parallelized); 
I have to run it for two JDKs (5 and 6; when 7 is near, I'll drop 5) and 
at least three operating systems. That means 6h, not parallelized. Running 
the suite 10 times would take 60h :-(( Even running 8 tests in parallel, 
one per CPU, it would be 7.5 hours, and during that period I couldn't run 
anything else on the CI server.
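
That last constraint amounts to a machine-wide critical section. As a
sketch of the idea only (not Fabrizio's actual setup), an OS-level file
lock can serialize benchmark jobs across executors without any Hudson
plugin; the lock path and suite entry point below are hypothetical.

    import java.io.File;
    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;

    // Sketch: serialize performance jobs across processes with an OS file lock.
    // FileChannel.lock() blocks until no other process holds the lock, so two
    // executors on the same box cannot time-slice each other's benchmarks.
    public final class SerializedBenchmark {

        public static void main(String[] args) throws Exception {
            RandomAccessFile lockFile =
                    new RandomAccessFile(new File("/tmp/perf-tests.lock"), "rw");
            FileChannel channel = lockFile.getChannel();
            FileLock lock = channel.lock(); // blocks until the machine is free
            try {
                runPerformanceSuite();      // hypothetical suite entry point
            } finally {
                lock.release();
                channel.close();
            }
        }

        private static void runPerformanceSuite() {
            System.out.println("running benchmarks with exclusive machine access");
        }
    }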



-- 
Fabrizio Giudici - Java Architect, Project Manager
Tidalwave s.a.s. - "We make Java work. Everywhere."
weblogs.java.net/blog/fabriziogiudici - www.tidalwave.it/blog
fabrizio.giud...@tidalwave.it - mobile: +39 348.150.6941





[The Java Posse] Re: Problems with continuous performance testing, solutions anyone?

2009-09-15 Thread Joshua Marinacci
Yep. It all depends on what you are trying to measure.

- Josh, on the go

On Sep 15, 2009, at 9:38 AM, Robert Casto wrote:

> [...]

[The Java Posse] Re: Problems with continuous performance testing, solutions anyone?

2009-09-15 Thread Robert Casto
That depends, of course, on what you are trying to do.

Joshua wants to measure average system performance while things are humming
along.

If you want to know how long it takes to start up, then you keep the data. I
tend to separate the two in reports I give to companies. Very different work
is done to speed up one or the other.

On Tue, Sep 15, 2009 at 12:26 PM, Alexey Zinger wrote:

> [...]

[The Java Posse] Re: Problems with continuous performance testing, solutions anyone?

2009-09-15 Thread Alexey Zinger
Interesting, but don't you think that for certain situations, throwing away
results that might be affected by start-up times is exactly the wrong thing
to do?

Alexey


From: Joshua Marinacci
To: javaposse@googlegroups.com
Sent: Tuesday, September 15, 2009 10:43:06 AM
Subject: [The Java Posse] Re: Problems with continuous performance testing,
solutions anyone?


[...]

[The Java Posse] Re: Problems with continuous performance testing, solutions anyone?

2009-09-15 Thread Joshua Marinacci

When performance-testing the client JRE we do two things which seem to
help:

1) Check out both the latest and your older/baseline releases of your
code, and test them *both*. This lets you plot how you have improved,
regardless of what computer your tests are running on. It's also the
only way to test things which might vary from computer to computer.

2) Always run each test a bunch of times, throw away the first result,
then average the rest. This gives you a more consistent result.
Throwing away the first one lets you ignore runs where HotSpot hadn't
kicked in yet, or where you were still thrashing in JVM startup.
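
A minimal sketch of that measurement pattern (the workload and run count
are hypothetical placeholders): time several runs, discard the first so
class loading and HotSpot compilation don't pollute the numbers, and
average the rest.

    // Sketch: run a task several times, drop the first (cold) run, average the rest.
    public final class WarmupAwareTimer {

        /** Average wall-clock time in milliseconds, ignoring run 0. */
        static double averageIgnoringFirstRun(Runnable task, int runs) {
            long total = 0;
            for (int i = 0; i < runs; i++) {
                long start = System.nanoTime();
                task.run();
                long elapsed = System.nanoTime() - start;
                if (i > 0) {        // run 0 pays for class loading and JIT warmup
                    total += elapsed;
                }
            }
            return total / 1000000.0 / (runs - 1);
        }

        public static void main(String[] args) {
            Runnable task = new Runnable() { // hypothetical workload
                public void run() {
                    double x = 0;
                    for (int i = 0; i < 5000000; i++) x += Math.sqrt(i);
                    if (x < 0) System.out.println(x); // defeat dead-code elimination
                }
            };
            System.out.printf("average: %.2f ms%n", averageIgnoringFirstRun(task, 10));
        }
    }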

- Josh

On Sep 15, 2009, at 12:45 AM, Fabrizio Giudici wrote:

> Working with imaging, I came to the conclusion more than one year ago
> that I need continuous performance testing
> (http://netbeans.dzone.com/news/stopwatches-anyone-or-about-co). Of
> course, the idea is not mine, and it seems surprisingly "old" (2003,
> http://www.devx.com/Java/Article/16755). As you can read in my article,
> my continuous performance testing was initially manual; 1.5 years ago I
> wrote some trivial code to at least collect the results automatically
> (they were then manually inserted into an Excel sheet). Since Hudson
> can plot arbitrary data, the next step I'm going to complete is to
> feed those data to Hudson. Due to the very nature of my functions, I'm
> not going to strictly assert that a task completes within a certain
> time, but I'd be satisfied to plot the trend over time, so I can see
> the impact of performance optimizations and, above all, make sure that
> performance isn't slowly but inexorably getting worse, refactoring
> after refactoring.
>
> Since the time of my article, I've got one more problem. The machine
> on which I compute and compare timings has so far been my laptop. The
> number of tests is increasing and it has become impossible to run
> everything on my laptop each time (otherwise I couldn't use it for
> hours), so I've moved the tests to a Hudson slave (a good 8-processor
> box, where I'm going to exploit the parallelism to run multiple tests
> at the same time). And here is the problem: scheduling parallel tasks
> is an excellent way to screw up measurements (whereas on my laptop I
> made sure that everything was executed serially and there were no
> other processes consuming CPU in the background). While for at least
> some tasks I could strictly measure the CPU time (by means of JMX),
> parts of the tests are I/O-bound (loading and decoding files), and
> running many of them at the same time will clearly make them interfere
> with each other. BTW, I suspect that even pure computation tests can
> interfere, as they work with large (about 100 MB) rasters in memory,
> so loading multiple ones could lead to memory swapping and cache
> interference. Furthermore, since the host is a Hudson slave, other
> projects can get scheduled for a build there, making things even more
> complex.
>
> What to do? At the moment, the only thing I can think of is to use
> Hudson locks to properly serialize the performance tests; with a
> multi-stage approach I can reduce the "critical section" of the tests,
> but resorting to such a brutal solution hurts me. I'd like to know
> whether somebody else has done, or is doing, public work in this area.
>
> PS There is a very recent (JavaZone '09) presentation about "testing in
> the cloud" which could address some of these problems, but I think the
> JavaZone '09 slides are not available yet:
>
> http://javazone.no/incogito09/events/JavaZone%202009/sessions/Continuous%20Performance%20Testing%20in%20the%20Cloud
>
> In any case, it seems to refer mostly to JEE testing, where one would
> indeed expect the most significant tests to be those with multiple
> clients in parallel, which is not my primary case.
>
> PS Yes, I know that parallelizing across 8 different computers rather
> than the 8 CPUs of a single computer would be a good idea, but I can't
> afford it :-) In any case, that would bring the problem of having 8
> perfectly identical computers.
>
> -- 
> Fabrizio Giudici - Java Architect, Project Manager
> Tidalwave s.a.s. - "We make Java work. Everywhere."
> weblogs.java.net/blog/fabriziogiudici - www.tidalwave.it/blog
> fabrizio.giud...@tidalwave.it - mobile: +39 348.150.6941
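
Since the quoted message mentions measuring CPU time via JMX and plotting
arbitrary data in Hudson, here is a minimal sketch combining the two,
assuming the Hudson Plot plugin's properties format (YVALUE=...) as I
recall it; the workload and output file name are hypothetical. ThreadMXBean
reports CPU time consumed by the current thread, so a competing process
inflates wall-clock time but not this number.

    import java.io.FileWriter;
    import java.io.IOException;
    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    // Sketch: measure the CPU time of the current thread via JMX and emit a
    // properties file that a Hudson plot job can graph over time.
    public final class CpuTimedTest {

        public static void main(String[] args) throws IOException {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            if (!mx.isCurrentThreadCpuTimeSupported()) {
                throw new IllegalStateException("CPU time not supported on this JVM");
            }

            long start = mx.getCurrentThreadCpuTime(); // nanoseconds of CPU time
            runWorkload();                             // hypothetical test body
            long cpuMillis = (mx.getCurrentThreadCpuTime() - start) / 1000000;

            // Assumed Plot plugin format: a properties file with a YVALUE entry.
            FileWriter out = new FileWriter("cpu-time.properties");
            out.write("YVALUE=" + cpuMillis + "\n");
            out.close();
        }

        private static void runWorkload() {
            double x = 0;
            for (int i = 0; i < 10000000; i++) x += Math.sqrt(i);
            if (x < 0) System.out.println(x); // defeat dead-code elimination
        }
    }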

