Before tracking detailed EM/JIT profiling information (which we may need at
some point), it may be useful to start by just tracking raw benchmark scores
weekly, to see overall progress/regression, and to make the results publicly
available. If there are licensing issues with SPECjvm and SPECjbb, we could
use a broad public suite like JavaGrande. We could simply run this externally
each week on the same reference machine and post the numbers.
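
As a rough illustration of how little tooling this needs (everything here is
hypothetical: the VM path, the benchmark jar, and the score-line format are
placeholders, not actual Harmony artifacts), a weekly job could be a small
harness like this:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.time.LocalDate;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Hypothetical weekly runner: launches the VM under test on a benchmark
    // jar, scrapes a score line from its output, and appends "date,score" to
    // a CSV history file that a public results page could be generated from.
    public class WeeklyBench {
        // Placeholder pattern: whatever score line the chosen suite prints.
        private static final Pattern SCORE =
                Pattern.compile("Score:\\s*([0-9.]+)");

        public static void main(String[] args) throws Exception {
            ProcessBuilder pb = new ProcessBuilder(
                    "/opt/drlvm/bin/java",     // VM under test (placeholder)
                    "-jar", "benchmark.jar");  // placeholder benchmark jar
            pb.redirectErrorStream(true);
            Process p = pb.start();

            double score = -1;
            BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = r.readLine()) != null) {
                Matcher m = SCORE.matcher(line);
                if (m.find()) {
                    score = Double.parseDouble(m.group(1));
                }
            }
            p.waitFor();

            // Append one row per weekly run so the history can be plotted.
            Files.write(Paths.get("scores.csv"),
                    (LocalDate.now() + "," + score + "\n").getBytes(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }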

We need to decide what we want to use benchmarks for. If they are to be
used primarily for internal performance health, we could consider requiring
before-and-after scores as part of JIRA code submissions, along with smoke
test logs. For this, in addition to the JIT-oriented benchmarks like
Linpack and SciMark, we would also need memory benchmarks like DaCapo. We
could also start filing perf bugs. But given that DRLVM will change
significantly for some time, it seems too early to do this.
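
For what it's worth, the before/after check itself could be trivial. Here is
a minimal sketch, assuming a made-up "benchmark,score" CSV format and an
arbitrary 2% noise margin (neither is an agreed convention):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical before/after gate for a code submission: reads two files
    // of "benchmark,score" lines and flags any benchmark whose score dropped
    // by more than a tolerance. The tolerance and file format are placeholders
    // for whatever the community would actually agree on.
    public class PerfGate {
        private static final double TOLERANCE = 0.02; // assumed 2% margin

        static Map<String, Double> load(String file) throws IOException {
            Map<String, Double> scores = new HashMap<>();
            for (String line : Files.readAllLines(Paths.get(file))) {
                String[] parts = line.split(",");
                scores.put(parts[0].trim(), Double.parseDouble(parts[1].trim()));
            }
            return scores;
        }

        public static void main(String[] args) throws IOException {
            Map<String, Double> before = load("before.csv");
            Map<String, Double> after  = load("after.csv");
            boolean regressed = false;
            for (Map.Entry<String, Double> e : before.entrySet()) {
                Double newScore = after.get(e.getKey());
                if (newScore != null
                        && newScore < e.getValue() * (1 - TOLERANCE)) {
                    System.out.printf("REGRESSION %s: %.2f -> %.2f%n",
                            e.getKey(), e.getValue(), newScore);
                    regressed = true;
                }
            }
            System.exit(regressed ? 1 : 0);
        }
    }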

To report/publish competitive scores we will need the rights to run
SPECjvm98, SPECjbb2005, SPECjAppServer, etc. In addition to licensing, this
also has some minimal infrastructure needs. Again, we can do the prep
work, but it may be too early to post competitive scores.

Thanks,
Rana


On 8/2/06, Mikhail Fursov <[EMAIL PROTECTED]> wrote:

In my opinion it is a very good idea to have a public performance profile
with hotspots identified.
So, if this idea is accepted by the community, we can start a discussion
about which kind of profile would be useful.

I know that the execution manager and the optimizing JIT in DRLVM have
command line keys to dump a lot of useful profiling information. I hope that
other components have such switches too. So the first thing we need to do
(if your proposal is accepted) is to write a tool that parses this data and
shows it as a webpage. I can help anyone with this task (importing the
profile from the DRLVM JIT/EM), or just find the time and do it myself if no
volunteers are found.
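
To make the idea concrete, here is a minimal sketch of such a tool. It
assumes an invented dump format of lines like "method <name>
compileTimeMs=<n>"; the real DRLVM EM/JIT switches and their output format
would dictate the actual parsing:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.AbstractMap;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Hypothetical profile-to-webpage converter. Assumes a text dump with
    // lines like "method <name> compileTimeMs=<n>" (a made-up format) and
    // emits a simple HTML table of the slowest-to-compile methods.
    public class ProfileToHtml {
        public static void main(String[] args) throws IOException {
            // Collect method name -> compile time from the dump.
            List<Map.Entry<String, Long>> rows = new ArrayList<>();
            for (String line : Files.readAllLines(Paths.get("jit_dump.txt"))) {
                if (!line.startsWith("method ")) continue;
                String[] parts = line.split("\\s+");
                String name = parts[1];
                long ms = Long.parseLong(
                        parts[2].substring("compileTimeMs=".length()));
                rows.add(new AbstractMap.SimpleEntry<>(name, ms));
            }
            // Slowest compiles first.
            rows.sort((a, b) -> Long.compare(b.getValue(), a.getValue()));

            StringBuilder html = new StringBuilder(
                    "<html><body><table border=\"1\">\n"
                    + "<tr><th>Method</th><th>Compile time (ms)</th></tr>\n");
            for (Map.Entry<String, Long> r : rows) {
                html.append("<tr><td>").append(r.getKey())
                    .append("</td><td>").append(r.getValue())
                    .append("</td></tr>\n");
            }
            html.append("</table></body></html>\n");
            Files.write(Paths.get("profile.html"), html.toString().getBytes());
        }
    }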

On 8/2/06, Stefano Mazzocchi <[EMAIL PROTECTED]> wrote:
>
> one thing that happened in mozilla-land that catalyzed the community into
> fixing leaks and performance issues was adding profiling information to
> the tests and starting to plot it over time.
>
> Not only does that give an idea of the evolution of the program's
> performance over time, but it also keeps people honest, because profiling
> is not something that should be done once and then forgotten, but
> something that should be considered part of the program's feature set.
>
> --
> Stefano.
>
>

--
Mikhail Fursov

