On 2018-04-10 at 17:35, Cyr, Aric wrote:
-----Original Message-----
From: Wentland, Harry
Sent: Tuesday, April 10, 2018 11:08
To: Michel Dänzer <mic...@daenzer.net>; Koenig, Christian <christian.koe...@amd.com>; Manasi Navare <manasi.d.nav...@intel.com>
Cc: Haehnle, Nicolai <nicolai.haeh...@amd.com>; Daniel Vetter <daniel.vet...@ffwll.ch>; Daenzer, Michel <michel.daen...@amd.com>; dri-devel <dri-de...@lists.freedesktop.org>; amd-gfx mailing list <amd-gfx@lists.freedesktop.org>; Deucher, Alexander <alexander.deuc...@amd.com>; Cyr, Aric <aric....@amd.com>; Koo, Anthony <anthony....@amd.com>
Subject: Re: RFC for a render API to support adaptive sync and VRR

On 2018-04-10 03:37 AM, Michel Dänzer wrote:
On 2018-04-10 08:45 AM, Christian König wrote:
On 2018-04-09 at 23:45, Manasi Navare wrote:
Thanks for initiating the discussion. Find my comments below:
On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:
On 2018-04-09 03:56 PM, Harry Wentland wrote:
=== A DRM render API to support variable refresh rates ===

In order to benefit from adaptive sync and VRR, userland needs a way
to let us know whether to vary frame timings or to target a
different frame time. These can be provided as atomic properties on
a CRTC:
   * bool    variable_refresh_compatible
   * int    target_frame_duration_ns (nanosecond frame duration)

This gives us the following cases:

variable_refresh_compatible = 0, target_frame_duration_ns = 0
   * drive monitor at timing's normal refresh rate

variable_refresh_compatible = 1, target_frame_duration_ns = 0
   * send new frame to monitor as soon as it's available, if within
min/max of monitor's reported capabilities

variable_refresh_compatible = 0/1, target_frame_duration_ns > 0
   * send new frame to monitor with the specified
target_frame_duration_ns

When a target_frame_duration_ns or variable_refresh_compatible
cannot be supported, the atomic check will reject the commit.
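The three property combinations above boil down to a simple decision. As a minimal sketch (hypothetical helper and enum names, not code from the proposal or any driver):

```c
#include <stdint.h>

/* Hypothetical classification of the three cases described above. */
enum vrr_mode {
	VRR_FIXED_REFRESH,    /* drive monitor at the timing's normal rate */
	VRR_VARIABLE_REFRESH, /* flip as soon as a new frame is available  */
	VRR_TARGET_DURATION,  /* honour an explicit frame duration         */
};

static enum vrr_mode
classify(int variable_refresh_compatible, uint64_t target_frame_duration_ns)
{
	/* A non-zero duration takes precedence in both the 0 and 1 cases. */
	if (target_frame_duration_ns > 0)
		return VRR_TARGET_DURATION;
	return variable_refresh_compatible ? VRR_VARIABLE_REFRESH
					   : VRR_FIXED_REFRESH;
}
```
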

What I would like is two sets of properties on a CRTC or preferably on
a connector:

KMD properties that UMD can query:
* vrr_capable - an immutable property exposing the hardware's
capability of supporting VRR. This will be set by the kernel after
reading the EDID mode information and monitor range capabilities.
* vrr_vrefresh_max, vrr_vrefresh_min - to expose the max and min
refresh rates supported.
These properties are optional and will be created and attached to the
DP/eDP connector when the connector is getting initialized.
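How userspace might use those (assumed) min/max properties: the refresh-rate bounds translate into a window of valid frame durations. A sketch with hypothetical names, not actual kernel or libdrm code:

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* Check whether a requested frame duration falls inside the monitor's
 * reported VRR range; vrr_vrefresh_min/max are in Hz, as the proposed
 * properties suggest. The max refresh rate gives the shortest allowed
 * duration, and vice versa. Hypothetical helper for illustration. */
static int
duration_in_vrr_range(uint64_t duration_ns,
		      uint32_t vrr_vrefresh_min, uint32_t vrr_vrefresh_max)
{
	uint64_t min_duration_ns = NSEC_PER_SEC / vrr_vrefresh_max;
	uint64_t max_duration_ns = NSEC_PER_SEC / vrr_vrefresh_min;

	return duration_ns >= min_duration_ns && duration_ns <= max_duration_ns;
}
```

This is essentially the check the atomic commit would have to perform before accepting a target_frame_duration_ns.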
Mhm, aren't those properties actually per mode and not per CRTC/connector?

Properties that you mentioned above that the UMD can set before the
kernel can enable VRR functionality:
* bool vrr_enable or vrr_compatible
* target_frame_duration_ns
Yeah, that certainly makes sense. But target_frame_duration_ns is
problematic in both name and semantics.

We should use an absolute timestamp where the frame should be presented,
otherwise you could run into a bunch of trouble with IOCTL restarts or
missed blanks.
Also, a fixed target frame duration isn't suitable even for video
playback, due to drift between the video and audio clocks.
Why?  Even if they drift, you know you want to show your 24Hz video frame for 
41.667ms, and adaptive sync can ensure that with reasonable accuracy.
All we're doing is eliminating the need for frame rate converters from the 
application and offloading that to hardware.
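The arithmetic behind that claim: a 24 fps frame wants a ~41.67 ms duration, which a fixed 60 Hz display can only approximate via 3:2 pulldown, holding frames for alternately 3 and 2 vblanks, neither of which matches. A quick check of the numbers (illustrative helper only):

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* Duration (ns) a video frame is held when shown for `vblanks` refresh
 * cycles of a fixed-rate display. Used to show 3:2 pulldown judder. */
static uint64_t held_ns(unsigned int vblanks, uint64_t refresh_hz)
{
	return vblanks * (NSEC_PER_SEC / refresh_hz);
}
```

The ideal 24 fps duration is NSEC_PER_SEC / 24 ≈ 41666666 ns; at 60 Hz, held_ns(3, 60) overshoots it and held_ns(2, 60) undershoots it, which is exactly the judder adaptive sync removes by letting the monitor hold each frame for the requested duration.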

Time-based presentation seems to be the right approach for preventing
micro-stutter in games as well, Croteam developers have been researching
this.

I'm not sure the driver can ever guarantee the exact time a flip 
occurs. What we have control over with our HW is frame duration.

Are Croteam devs trying to predict render times? I'm not sure how that would 
work. We've had bad experiences in the past with games that try to do frame 
pacing, as that's usually not accurate and tends to lead to more problems 
than benefits.
For gaming, it doesn't make sense, nor is it feasible, to know exactly how long a 
render will take with microsecond precision; very coarse guesses are the best we can 
do.  The point of adaptive sync is that it works *transparently* for the majority of 
cases, within the capability of the HW and driver.  We don't want every game to 
rewrite its engine to support this, but we do want the majority to "just work".

The only exception is the video case where an application may want to request a 
fixed frame duration aligned to the video content.  This requires an explicit 
interface for the video app, and our proposal is to keep it simple:  app knows 
how long a frame should be presented for, and we try to honour that.

Well I strongly disagree on that.

See VDPAU for example: https://http.download.nvidia.com/XFree86/vdpau/doxygen/html/group___vdp_presentation_queue.html#ga5bd61ca8ef5d1bc54ca6921aa57f835a

[in] earliest_presentation_time - The timestamp associated with the surface. The presentation queue will not display the surface until the presentation queue's current time is at least this value.


Video players especially want an interface where they can specify exactly when a frame should show up on the display and then get feedback on when it actually was displayed.

For video games we have a similar situation: a frame is rendered for a certain world time, and in the ideal case we would actually display the frame at that world time.
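The drift concern can be made concrete: if the video clock runs slightly fast or slow relative to the audio clock and each frame is scheduled by a fixed duration, the error accumulates without bound, whereas absolute timestamps re-anchor every frame to the shared clock. A hedged sketch of the arithmetic (hypothetical helper, not code from any driver or player):

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000LL

/* Accumulated A/V offset (ns) after `seconds` of playback when the
 * video clock runs `ppm` parts-per-million off the audio clock and
 * frames are paced by a fixed duration instead of absolute
 * presentation timestamps. */
static int64_t av_drift_ns(int64_t seconds, int64_t ppm)
{
	return seconds * NSEC_PER_SEC / 1000000 * ppm;
}
```

At a modest 100 ppm clock mismatch, an hour of duration-paced playback ends up 360 ms out of sync, which is why VDPAU-style earliest_presentation_time scheduling avoids the problem by construction.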

We have the guys from Valve on this mailing list, so I think we should just get feedback from them and see what they prefer.

Regards,
Christian.


-Aric

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
