On Sun, Apr 18, 2010 at 8:32 PM, Stefan Dösinger <[email protected]> wrote:
>
> On 18.04.2010, at 18:33, Roderick Colenbrander wrote:
>> There's not much better that can be done. Some games use the same
>> calculation as in Wine, while others take previous results into
>> account. Here I have adjusted the code to basically: current_fps =
>> 0.1*last_fps + 0.9*frames/(current_time - prev_time), and adjusted the
>> sample period to 1 s. This made the fps a bit more accurate.
> My main concern here is separating different parts of the runs. A while ago
> the ATI developers asked for an fps average over the entire runtime of the
> app. That is good for their small samples, but if you use this with e.g. HL2
> the time it takes to load the game from disk comes into play.
>
> I'm not saying that benchmarking load times is bad, but you want to separate
> that from the gameplay benchmark. With the apps I use, the game knows when it
> stops loading and when it starts rendering, which is why I consider the
> game's own results more trustworthy than the +fps channel.
>
> --snip 1--
>
> While we're talking about it, I've brought my "amd64" test box back online,
> so we should get fresh test results from a GF7600 card. I have a new Linux
> box with an ATI card, and I've set up the tests there as well, but before I
> can run them properly I have to wait for the next fglrx release to get the
> FBO crash fixed.
>
> --snip 2--
>
> I am thinking about extending this a bit and writing a wrapper d3d9.dll that
> can monitor d3d calls and catch e.g. a shader that signals that the main
> game rendering has started / stopped, etc. With such a setup we can start
> thinking about taking screenshots (in a separate run, of course) to see
> whether the rendering result is roughly what we expect. If I extend the idea
> a little, make it slightly less Wine- and D3D-specific and find a fancy
> title (e.g. "Quality Assurance in interactive 3D applications"), I might get
> a master's thesis out of it.
>
>
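
The usual way to build such a wrapper is a proxy d3d9.dll that loads the
real DLL and forwards its exports, wrapping the returned COM interfaces to
spy on calls. A minimal sketch of the entry point (illustrative only; the
interface wrapping and the export .def file are omitted):

    /* Proxy d3d9.dll entry point: load the real DLL and forward.
     * Sketch only; a real version would wrap the returned IDirect3D9
     * (and the IDirect3DDevice9 from CreateDevice) in spying objects. */
    #include <windows.h>
    #include <d3d9.h>

    typedef IDirect3D9 * (WINAPI *Direct3DCreate9_t)(UINT);

    IDirect3D9 * WINAPI Direct3DCreate9(UINT sdk_version)
    {
        static Direct3DCreate9_t real_create;

        if (!real_create)
        {
            HMODULE real = LoadLibraryA("c:\\windows\\system32\\d3d9.dll");
            real_create = (Direct3DCreate9_t)GetProcAddress(real,
                                                            "Direct3DCreate9");
        }
        /* A wrapper object returned here could log calls and watch
         * e.g. SetPixelShader for a marker shader that flags the start
         * and end of the main game rendering. */
        return real_create(sdk_version);
    }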

Implementation details aside, it would be very useful if such a mechanism
could take advantage of GPU monitoring APIs like NVIDIA PerfKit and
GL_AMD_performance_monitor. That would also let us see what is holding up
the GPU.

I have been looking into NVPerfKit over the past few days. The current
Linux version is based on the 173.x drivers (ancient, but they would work
fine for your GeForce 7600). As a workaround I have been experimenting a
bit on Windows, mostly in the hope of getting NVPerfKit running on WineD3D
there. In the end WineD3D worked and PerfKit installed as well (there were
some issues with 32-bit apps on Win7), but it turned out that my GPU (a
GTS 360M) is too new, so it doesn't work yet. For AMD no special drivers
are needed, so perhaps I'll give GL_AMD_performance_monitor a shot, though
reportedly it is a bit hard to use on Linux due to the lack of good tools.
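
Going by the extension spec, the basic flow would look something like the
following. Untested sketch: it assumes a current GL context, glext.h
prototypes, and that the first group/counter happens to be interesting:

    /* GL_AMD_performance_monitor flow, per the extension spec. */
    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    void profile_one_frame(void)
    {
        GLuint group, counter, monitor, avail = 0;
        GLint num_groups, num_counters, max_active, written;
        GLuint result[3]; /* stream: group id, counter id, value
                           * (value size depends on the counter type;
                           * a 32-bit counter is assumed here) */

        /* Pick the first group and its first counter, just for the sketch. */
        glGetPerfMonitorGroupsAMD(&num_groups, 1, &group);
        glGetPerfMonitorCountersAMD(group, &num_counters, &max_active,
                                    1, &counter);

        glGenPerfMonitorsAMD(1, &monitor);
        glSelectPerfMonitorCountersAMD(monitor, GL_TRUE, group, 1, &counter);

        glBeginPerfMonitorAMD(monitor);
        /* ... render the frame we want to measure ... */
        glEndPerfMonitorAMD(monitor);

        /* Results arrive asynchronously; poll until available. */
        while (!avail)
            glGetPerfMonitorCounterDataAMD(monitor,
                                           GL_PERFMON_RESULT_AVAILABLE_AMD,
                                           sizeof(avail), &avail, NULL);
        glGetPerfMonitorCounterDataAMD(monitor, GL_PERFMON_RESULT_AMD,
                                       sizeof(result), result, &written);

        glDeletePerfMonitorsAMD(1, &monitor);
    }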

Roderick

