Re: HDR support in Wayland/Weston

2019-02-18 Thread Chris Murphy
On Fri, Feb 1, 2019 at 3:43 AM Pekka Paalanen  wrote:
>
> On Thu, 31 Jan 2019 12:03:25 -0700
> Chris Murphy  wrote:
>
> > I'm pretty sure most every desktop environment and distribution have
> > settled on colord as the general purpose service.
> > https://github.com/hughsie/colord
> > https://www.freedesktop.org/software/colord/
>
> FWIW, Weston already has a small plugin to use colord. The only thing
> it does to apply anything is to set the simplest form of the gamma
> ramps.

Short version:
Having just briefly looked at that code, my best guess is that colord
is probably reading the vcgt tag in the display's ICC profile and
applying it to the video card LUT (or one of them, anyway).
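
As a rough sketch of what that amounts to (assuming Little CMS; this is
illustrative, not the actual colord code):

  #include <lcms2.h>
  #include <stdint.h>

  /* Fill video card gamma ramps from a profile's vcgt tag, if present. */
  static int load_vcgt(const char *path, uint16_t *r, uint16_t *g,
                       uint16_t *b, int ramp_size)
  {
          cmsHPROFILE profile = cmsOpenProfileFromFile(path, "r");
          if (!profile)
                  return -1;

          /* lcms2 exposes vcgt as three tone curves (R, G, B). */
          cmsToneCurve **vcgt = cmsReadTag(profile, cmsSigVcgtTag);
          if (!vcgt) {
                  cmsCloseProfile(profile);
                  return -1;
          }

          for (int i = 0; i < ramp_size; i++) {
                  float x = (float)i / (ramp_size - 1);
                  r[i] = (uint16_t)(cmsEvalToneCurveFloat(vcgt[0], x) * 65535.0f);
                  g[i] = (uint16_t)(cmsEvalToneCurveFloat(vcgt[1], x) * 65535.0f);
                  b[i] = (uint16_t)(cmsEvalToneCurveFloat(vcgt[2], x) * 65535.0f);
          }

          cmsCloseProfile(profile);
          return 0;
  }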

Super extra long version:
In ancient times (two decades+) there was a clear separation between
display calibration (change the device) and characterization (record
its behavior). Calibration was a combination of resetting and fiddling
with display controls like brightness and contrast, and then also
leveraging the (at best) 8-bit-per-channel LUT in the video card to
achieve the desired white point and tone curve per channel.
Characterization, which results in an ICC profile, happens on top of
that. The profile is valid only when the calibration is applied: both
the knob-fiddling part and the applicable LUT in the video card. The
LUT information used to be kept in a separate file, and then circa 15
years ago Apple started to embed this information into the ICC profile
as the vcgt tag, and the operating system display manager reads that
tag and applies it to the video card LUT prior to login time. This has
become fairly widespread, even though I'm not finding vcgt in the
published ICC v4.3 spec. But they do offer this document:
www.color.org/groups/medical/displays/controllingVCGT.pdf

There are some test profiles that contain various vcgt tags here:
http://www.brucelindbloom.com/index.html?Vcgt.html

You really must have a reliable central service everyone agrees on to
apply such a LUT, and then also ban anything else from setting a
conflicting LUT. Again, in ancient times we had all sorts of problems
with applications messing around with the LUT: instead of reading it
first and later restoring it exactly as found, they would just reset it
to some default, thereby making the ICC profile invalid.
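
In modern KMS terms, the polite pattern (read the current state first,
restore exactly what was found) would look something like this; a
sketch, with fd and crtc_id assumed to be an open DRM device and a CRTC
id, and error handling elided:

  #include <stdlib.h>
  #include <xf86drmMode.h>

  /* Save the current ramps before touching anything. */
  drmModeCrtc *crtc = drmModeGetCrtc(fd, crtc_id);
  uint32_t size = crtc->gamma_size;
  uint16_t *r = calloc(3 * size, sizeof(*r));
  uint16_t *g = r + size, *b = g + size;
  drmModeCrtcGetGamma(fd, crtc_id, size, r, g, b);

  /* ... temporarily fiddle with the LUT here ... */

  /* Put back exactly what was there, not some default. */
  drmModeCrtcSetGamma(fd, crtc_id, size, r, g, b);
  drmModeFreeCrtc(crtc);
  free(r);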

The primary reason, again historically, for setting the white point
outside of software (ideally it is set correctly in the display itself;
less ideal is using a video card LUT) is that mismatched white points
are really distracting: they prevent proper adaptation, and therefore
everything looks wrong. Ironically, the color-managed content is then
quite likely to look more wrong than the non-color-managed content. Why
would there be mismatched white points? Because the fully color-managed
content in an application window has the correct white point, while
other applications and the surrounding desktop environment UI do not.

Ergo, some kind of "calibration" of white point independent of the
color management system. Sometimes this is just a preset in the
display's on-screen menu. Getting the display white point into the
ballpark of the target white point means a less aggressive LUT in the
video card, or ideally even a linear LUT.

Alternatively, you decide you're going to have some master of all
pixels. That's the concept of full display compensation, where every
pixel is subject to color management transforms regardless of its
source application, all normalized to a single intermediate color
space. In theory, if you throw enough bits at this intermediate space,
you could forgo the video card LUT-based calibration.

The next workflow gotcha is multiple displays. In designing a color
management system for an OS you have to decide if applications will
have the option to display across multiple displays, each of which
could have their own display profile.

I agree with Graeme that having different pipelines for calibration or
characterization is asking for big trouble. The thing I worry about is
whether it's possible for each application to effectively have a unique
pipeline, because they're all using different rendering libraries. The
idea that we'd have application-specific characterization to account
for each application's pipeline just spells doom. The return of
conflicting video card LUTs would be a nightmare.

--
Chris Murphy

Re: [PATCH v3] protocol: warn clients about some wl_output properties

2019-02-18 Thread Jonas Ådahl
On Mon, Feb 18, 2019 at 04:28:18PM +, Daniel Stone wrote:
> Hi Simon,
> 
> On Mon, 18 Feb 2019 at 16:22, Simon Ser  wrote:
> > On Friday, October 26, 2018 11:13 AM, Simon Ser  wrote:
> > > Compositors also expose output refresh rate, which shouldn't be used
> > > for synchronization purposes.
> >
> > I'd like to bump this patch, because people apparently do use wl_output's
> > refresh rate for synchronization purposes in Firefox [1]. I think it
> > would be a good thing to make sure people are aware they should be using
> > frame callbacks instead.
> 
> Thanks a lot for bumping this. Patch is:
> Reviewed-by: Daniel Stone 

This is

Reviewed-by: Jonas Ådahl 

too.


Jonas

> 
> Cheers,
> Daniel

Re: [PATCH v3] protocol: warn clients about some wl_output properties

2019-02-18 Thread Daniel Stone
Hi Simon,

On Mon, 18 Feb 2019 at 16:22, Simon Ser  wrote:
> On Friday, October 26, 2018 11:13 AM, Simon Ser  wrote:
> > Compositors also expose output refresh rate, which shouldn't be used
> > for synchronization purposes.
>
> I'd like to bump this patch, because people apparently do use wl_output's
> refresh rate for synchronization purposes in Firefox [1]. I think it
> would be a good thing to make sure people are aware they should be using
> frame callbacks instead.

Thanks a lot for bumping this. Patch is:
Reviewed-by: Daniel Stone 

Cheers,
Daniel

Re: [PATCH v3] protocol: warn clients about some wl_output properties

2019-02-18 Thread Simon Ser
On Friday, October 26, 2018 11:13 AM, Simon Ser  wrote:
> Compositors also expose output refresh rate, which shouldn't be used
> for synchronization purposes.

I'd like to bump this patch, because people apparently do use wl_output's
refresh rate for synchronization purposes in Firefox [1]. I think it
would be a good thing to make sure people are aware they should be using
frame callbacks instead.

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1515448#c13
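
For reference, the basic frame callback pattern on the client side looks
roughly like this (a sketch; draw_frame() and struct app_state are
hypothetical placeholders):

  static void frame_done(void *data, struct wl_callback *cb, uint32_t time);

  static const struct wl_callback_listener frame_listener = {
          .done = frame_done,
  };

  static void frame_done(void *data, struct wl_callback *cb, uint32_t time)
  {
          struct app_state *state = data;
          wl_callback_destroy(cb);

          /* Request the next frame event before committing this one. */
          cb = wl_surface_frame(state->surface);
          wl_callback_add_listener(cb, &frame_listener, state);

          draw_frame(state); /* render into the next buffer */
          wl_surface_attach(state->surface, state->buffer, 0, 0);
          wl_surface_damage(state->surface, 0, 0, INT32_MAX, INT32_MAX);
          wl_surface_commit(state->surface);
  }

The compositor fires the callback when it actually wants a new frame, so
a throttled or occluded surface never needs to guess at a refresh rate.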


Re: Question about linux-explicit-synchronization

2019-02-18 Thread Scott Anderson

On 18/02/19 11:02 pm, Daniel Stone wrote:

> Hi Scott,
>
> On Mon, 18 Feb 2019 at 03:27, Scott Anderson
>  wrote:
> > In the Weston implementation, it's simply a case of the compositor
> > getting the fence from the client, using eglWaitSyncKHR (or equivalent)
> > on it, and passing back a release fence from OUT_FENCE_FD.
>
> Yep, that's pretty much the MVP ...
>
> > However, is the protocol intended to allow a compositor which takes a
> > more active role in monitoring the fences? For example, a compositor
> > could check the state of the fence at drawing time and decide to use the
> > client's previous buffer if the fence hasn't been signalled.
> >
> > Is it worth a compositor being implemented like this? As far as I
> > understand, doing this would stop any particularly slow clients from
> > stalling the compositor's drawing too much and possibly missing vblank.
> > But I'm not an expert on fences or how eglWaitSyncKHR affects anything
> > at a driver level.
>
> No, you're totally right. The intention is that compositors should be
> able to use this to schedule composition. It's still helpful without
> doing that - you get more information for tracing - but in an ideal
> world, compositors would be able to delay presentation if it would
> endanger full frame rate.
>
> Doing this gets _really_ tricky as you start considering things like
> synchronised subsurfaces (which always seems to be the case), since
> you have to make sure you're not presenting incoherent results. But
> for unrelated surfaces, you can definitely delay presentation.


Ok, cool. That actually answers a follow-up question I would've had. 
I've been looking at implementing this in wlroots, and subsurfaces 
certainly were a point I was wondering about.


The case in particular would be:
- Parent surface is "ready" (signaled fence or no fence)
- Subsurface is synchronized and fence is not signaled

The intuitive thing would be to delay the parent's content from being 
updated until the subsurface is ready, but the protocol itself is a bit 
underspecified when it comes to this.
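
In other words, I'd picture something like this recursive readiness
check, though every name here is made up:

  /* Hypothetical: a surface tree is ready only when its own fence and
   * those of all of its synchronized subsurfaces have signalled.
   * fence_signalled() would poll the dma-fence fd with a zero timeout. */
  static bool surface_tree_ready(struct my_surface *surf)
  {
          if (surf->acquire_fence_fd >= 0 &&
              !fence_signalled(surf->acquire_fence_fd))
                  return false;

          struct my_subsurface *sub;
          wl_list_for_each(sub, &surf->synchronized_children, link) {
                  if (!surface_tree_ready(sub->surface))
                          return false;
          }
          return true;
  }

If any node in the tree isn't ready, the compositor would keep
presenting the whole tree's previous state to stay coherent.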



> FWIW, eglWaitSyncKHR just ensures that the GPU waits for completion of
> the sync object before executing any of the commands you give it
> subsequently: it's a guaranteed stall. For making that decision on the
> compositor side, you just want to be using the dma-fence FD. You can
> add dma-fence FDs to a poll loop, where they become 'readable' when
> the fence has been signalled.
>
> Cheers,
> Daniel



Re: Question about linux-explicit-synchronization

2019-02-18 Thread Daniel Stone
Hi Scott,

On Mon, 18 Feb 2019 at 03:27, Scott Anderson
 wrote:
> In the Weston implementation, it's simply a case of the compositor
> getting the fence from the client, using eglWaitSyncKHR (or equivalent)
> on it, and passing back a release fence from OUT_FENCE_FD.

Yep, that's pretty much the MVP ...
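
For concreteness, that import-and-wait step might look like this
(assuming the EGL_ANDROID_native_fence_sync and EGL_KHR_wait_sync
extensions, with the function pointers already loaded via
eglGetProcAddress; a sketch, not Weston's actual code):

  EGLint attribs[] = {
          EGL_SYNC_NATIVE_FENCE_FD_ANDROID, fence_fd,
          EGL_NONE,
  };
  /* EGL takes ownership of fence_fd on success. */
  EGLSyncKHR sync = eglCreateSyncKHR(dpy, EGL_SYNC_NATIVE_FENCE_ANDROID,
                                     attribs);

  /* Queue a GPU-side wait: subsequent GL commands won't execute until
   * the fence signals. The CPU does not block here. */
  eglWaitSyncKHR(dpy, sync, 0);
  eglDestroySyncKHR(dpy, sync);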

> However, is the protocol intended to allow a compositor which takes a
> more active role in monitoring the fences? For example, a compositor
> could check the state of the fence at drawing time and decide to use the
> client's previous buffer if the fence hasn't been signalled.
>
> Is it worth a compositor being implemented like this? As far as I
> understand, doing this would stop any particularly slow clients from
> stalling the compositor's drawing too much and possibly missing vblank.
> But I'm not an expert on fences or how eglWaitSyncKHR affects anything
> at a driver level.

No, you're totally right. The intention is that compositors should be
able to use this to schedule composition. It's still helpful without
doing that - you get more information for tracing - but in an ideal
world, compositors would be able to delay presentation if it would
endanger full frame rate.

Doing this gets _really_ tricky as you start considering things like
synchronised subsurfaces (which always seems to be the case), since
you have to make sure you're not presenting incoherent results. But
for unrelated surfaces, you can definitely delay presentation.

FWIW, eglWaitSyncKHR just ensures that the GPU waits for completion of
the sync object before executing any of the commands you give it
subsequently: it's a guaranteed stall. For making that decision on the
compositor side, you just want to be using the dma-fence FD. You can
add dma-fence FDs to a poll loop, where they become 'readable' when
the fence has been signalled.
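
A minimal version of that check, assuming the fence arrived as a
sync_file fd:

  #include <poll.h>

  /* Returns 1 if the dma-fence has signalled, 0 if it is still pending. */
  static int fence_signalled(int fence_fd)
  {
          struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };
          /* Zero timeout: query the current state without blocking. */
          return poll(&pfd, 1, 0) > 0 && (pfd.revents & POLLIN);
  }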

Cheers,
Daniel