RE: [PATCH] unstable: add HDR Mastering Meta-data Protocol

2019-03-04 Thread Sharma, Shashank
Regards
Shashank

> -Original Message-
> From: Graeme Gill [mailto:grae...@argyllcms.com]
> Sent: Tuesday, March 5, 2019 9:35 AM
> To: Pekka Paalanen ; Nautiyal, Ankit K
> 
> Cc: e.bur...@gmail.com; niels_...@salscheider-online.de; wayland-
> de...@lists.freedesktop.org; sebast...@sebastianwick.net; Kps, Harish Krupo
> ; Sharma, Shashank 
> Subject: Re: [PATCH] unstable: add HDR Mastering Meta-data Protocol
> 
> Pekka Paalanen wrote:
> > On Wed, 27 Feb 2019 10:27:16 +0530
> > "Nautiyal, Ankit K"  wrote:
> >
> >> From: Ankit Nautiyal 
> >>
> >> This protocol enables a client to send the HDR metadata:
> >> MAX-CLL, MAX-FALL, Max Luminance and Min Luminance as defined by
> >> SMPTE ST.2086.
> 
> Hmm. I'm wondering what the intent of this is.
> 
> If it is to feed these values through to the display itself so that it can do 
> dynamic HDR
> rendering, then this tends to cut across color management, since color 
> management
> as it is currently formulated depends on reproducability in color devices. If 
> really
> desired, I guess this would have to be some kind of special mode that 
> bypasses the
> usual rendering pipeline.
> 
The main reason is to pass it to the compositor so that it can do the tone 
mapping properly. As you know, there could be cases where we are dealing with 
one HDR and multiple SDR frames. The HDR content comes with its own metadata, 
which needs to be passed to the display using the AVI infoframes for the 
correct HDR experience, but we also need to make the non-HDR frames compatible 
with the brightness range described by that metadata, so we do tone mapping. 
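For reference, the static metadata being discussed corresponds to the SMPTE ST 2086 mastering-display fields plus the CTA-861.3 content light levels (MAX-CLL / MAX-FALL). A hedged sketch of the shape of that data, with a struct modeled loosely on the kernel's hdr_output_metadata rather than taken from the proposed protocol:

```c
#include <stdint.h>

/* Illustrative sketch of ST 2086 static metadata plus CTA-861.3
 * content light levels; the struct and field names are made up for
 * this example, not taken from the proposed Wayland protocol.
 * Units follow the specs: mastering min luminance in 0.0001 cd/m2
 * steps, primaries/white point in 0.00002 chromaticity units. */
struct hdr_static_metadata {
	uint16_t display_primaries_x[3];
	uint16_t display_primaries_y[3];
	uint16_t white_point_x;
	uint16_t white_point_y;
	uint16_t max_display_mastering_luminance; /* cd/m2 */
	uint16_t min_display_mastering_luminance; /* 0.0001 cd/m2 */
	uint16_t max_cll;  /* maximum content light level, cd/m2 */
	uint16_t max_fall; /* maximum frame-average light level, cd/m2 */
};
```

The compositor would forward these values to the sink via the HDR/AVI infoframes and use them as inputs to its tone-mapping decisions.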

We are writing compositor code which will also decide the target output's 
content brightness level (along with its colorspace) by applying appropriate 
tone mapping (SDR to HDR, HDR to HDR, or HDR to SDR) using libVA or GL 
extensions. This will make sure that everything shown on the display falls 
into the same brightness range. 
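To illustrate the shape of such tone mapping (a minimal sketch with a basic Reinhard-style curve and made-up parameters, not the actual libVA/GL implementation being discussed):

```c
/* HDR -> SDR: compress luminance (cd/m2) with a basic Reinhard-style
 * curve so that arbitrarily bright input stays below the SDR peak.
 * Real compositors use more elaborate curves; this only shows the idea. */
static float tonemap_hdr_to_sdr(float lum, float sdr_peak)
{
	float n = lum / sdr_peak;
	return sdr_peak * (n / (1.0f + n));
}

/* SDR -> HDR: linearly scale SDR content so its reference white lands
 * at the brightness chosen for SDR inside the HDR scene (the 203 cd/m2
 * figure below is an illustrative assumption). */
static float map_sdr_to_hdr(float lum, float sdr_peak, float hdr_sdr_white)
{
	return lum * (hdr_sdr_white / sdr_peak);
}
```

This is what makes one HDR surface and several SDR surfaces land in a common brightness range before scanout.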
 
> If the idea is that somehow the compositor is to do dynamic HDR rendering, 
> then I
> have my doubts about its suitability and practicality, unless you are 
> prepared to spell
> out in detail the algorithm it should use for performing this.
> 
The GL tone-mapping algorithms will be open-source methods, available for 
code review. HW tone mapping uses the libVA interface and the media-driver 
handler.  
> My assumption was that the compositor would only be expected to do well
> understood fallback color conversions, and that anything that needed special
> functionality or that was going to do experimental color handling (such as 
> dynamic
> HDR rendering) would do so in the client.
> 
Yes, we are trying our best here to make the compositor fair and intelligent, 
and I believe that with proper discussions and code reviews we should be able 
to reach that level. 
> For the purposes of mixing SDR and HDR in the compositor as a fallback, a much
> simpler set of transforms should be sufficient, namely a brightness for 
> converting
> SDR into HDR, and a tone mapping curve or equivalent settings for compressing 
> HDR
> into SDR.
> 
Agree. 
> > Is there a case for compositors that implement color management but
> > not HDR support? Would it be unreasonable to make HDR parameters a
> > part of the color management extension? Such that it would be optional
> > for a client to set the HDR parameters.
> 
Right now we are not targeting this, but eventually we want to get there. As 
we all know, it's a huge area to cover in a single attempt, and it gets very 
cumbersome. So the plan is to first implement a modular, limited-capability 
HDR video playback, use that as a driving and testing vehicle for color 
management, and then enhance it later by adding more and more modules.  
> I think that at this point in time it is entirely reasonable that a 
> compositing system have
> some level of HDR awareness.
> 
Yes, the idea is to make it completely aware of HDR scenarios over time, as 
the stack slowly matures.  

- Shashank
> > Btw. do HDR videos contain ICC profiles? Where would a client get the
> > ICC profile for a video it wants to show? Do we need to require the
> > compositor to expose some commonly used specific profiles?
> 
> As I understand it, no they don't. They either rely on a stated encoding 
> (i.e. REC2020
> based), or sometimes have simple meta-data such as primary coordinates & 
> transfer
> curve specifications. A client wanting to support such meta-data can simply 
> create a
> matching ICC profile on the fly if it wishes.
> 
> Cheers,
>   Graeme Gill.
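A client that builds such a matching profile on the fly essentially derives the RGB->XYZ matrix from the stated primaries and white point (plus a transfer curve). A hedged sketch of that standard matrix-profile derivation in C, independent of any particular CMM API:

```c
#include <math.h>

struct xy { double x, y; };

/* Invert a 3x3 matrix via the adjugate; returns 0 on success. */
static int invert3x3(const double m[3][3], double inv[3][3])
{
	double det =
		m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1]) -
		m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0]) +
		m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
	if (fabs(det) < 1e-12)
		return -1;
	double id = 1.0 / det;
	inv[0][0] =  (m[1][1] * m[2][2] - m[1][2] * m[2][1]) * id;
	inv[0][1] = -(m[0][1] * m[2][2] - m[0][2] * m[2][1]) * id;
	inv[0][2] =  (m[0][1] * m[1][2] - m[0][2] * m[1][1]) * id;
	inv[1][0] = -(m[1][0] * m[2][2] - m[1][2] * m[2][0]) * id;
	inv[1][1] =  (m[0][0] * m[2][2] - m[0][2] * m[2][0]) * id;
	inv[1][2] = -(m[0][0] * m[1][2] - m[0][2] * m[1][0]) * id;
	inv[2][0] =  (m[1][0] * m[2][1] - m[1][1] * m[2][0]) * id;
	inv[2][1] = -(m[0][0] * m[2][1] - m[0][1] * m[2][0]) * id;
	inv[2][2] =  (m[0][0] * m[1][1] - m[0][1] * m[1][0]) * id;
	return 0;
}

/* Build the RGB -> XYZ matrix from chromaticities of the primaries and
 * white point -- the heart of a matrix/TRC display profile. */
static int rgb_to_xyz_matrix(struct xy r, struct xy g, struct xy b,
			     struct xy w, double m[3][3])
{
	/* Primaries as XYZ columns with Y = 1. */
	double p[3][3] = {
		{ r.x / r.y, g.x / g.y, b.x / b.y },
		{ 1.0, 1.0, 1.0 },
		{ (1.0 - r.x - r.y) / r.y, (1.0 - g.x - g.y) / g.y,
		  (1.0 - b.x - b.y) / b.y },
	};
	double pinv[3][3];
	if (invert3x3(p, pinv))
		return -1;
	/* White point XYZ, normalized to Y = 1. */
	double wx = w.x / w.y, wy = 1.0, wz = (1.0 - w.x - w.y) / w.y;
	/* Scale each primary column so RGB(1,1,1) maps to the white. */
	double s[3];
	for (int i = 0; i < 3; i++)
		s[i] = pinv[i][0] * wx + pinv[i][1] * wy + pinv[i][2] * wz;
	for (int i = 0; i < 3; i++)
		for (int j = 0; j < 3; j++)
			m[i][j] = p[i][j] * s[j];
	return 0;
}
```

Wrap this matrix and the stated transfer curve in an ICC container (or hand them to a CMM as a virtual profile) and you have the "matching profile on the fly".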
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/wayland-devel

Re: HDR support in Wayland/Weston

2019-03-04 Thread Chris Murphy
On Mon, Mar 4, 2019 at 3:20 AM Pekka Paalanen  wrote:
>
> X11 apps have (hopefully) done hardware LUT manipulation only through
> X11. There was no other way AFAIK, unless the app started essentially
> hacking the system via root privileges.
>
> The current DRM KMS kernel design has always had a "lock" (the DRM
> master concept) so that if one process (e.g. the X server) is in
> control of the display, then no other process can touch the display
> hardware at the same time.
>
> Before DRM KMS, the video mode programming was in the Xorg video
> drivers, so if an app wanted to bypass the X server, it would have had
> to poke the hardware "directly" bypassing any drivers, which is even
> more horrible than it might sound.

Sounds pretty horrible. Anyway, I'm definitely a fan of the answer to
the question found here: http://www.islinuxaboutchoice.com/

It sounds like legacy applications use XWayland, and for the edge case of an
app requesting a literal video hardware LUT, this could be some kind of
surface effect for that app's windows. That seems sane to me. A way to make
such computations almost free is important to those apps; I think they only
ever cared about doing it with a hardware LUT because it required no CPU or
GPU time. In really ancient times, display compensation (i.e. doing a
transform from sRGB to mydisplayRGB to compensate for the fact that my
display is not really an sRGB display) had variable performance. A few
companies figured out a way to do this really cheaply; even Apple had a way
to apply a non-LUT, lower-quality profile to do display compensation with
live QuickTime video, over 20 years ago. Meanwhile, one of the arguments the
Mozilla Firefox folks had for moving away from lcms2 in favor of qcms was
performance, but even that wasn't good enough, performance-wise, for
always-on display compensation. I still don't know why, other than
recognizing that imaging pipelines are complicated and really hard work.

Also in that era, before OS X, there were configurable transform
quality/performance settings: fast, good, best - or something like that. For
all I know, best is just as cheap these days as fast and you don't need to
distinguish such things. But if you did, I think the historic evidence shows
only fast and best matter. Fast might have meant taking a LUT display
profile and describing the TRC with a gamma function instead, or using 4
bits per channel instead of 8, and 8 instead of 16. These days I've heard
there are hardware optimizations for floating point that make it pointless
to do integer as a performance-saving measure.

Back then we were really worried about getting the "correct 8 bits per
channel" to the display, since that was the pipeline we had; any video
hardware LUT used for calibration took away bits from that pipeline.

And that's gotten quite a lot easier these days, because at the not even
super high end there are commodity displays that are calibrated internally
and supply a minimum 8 bits per channel, often now 10 bits per channel,
pipeline. On those displays, we don't even worry about calibration on the
desktop. And that means you get the high end almost for free from an
application standpoint. The thing to worry about is shitty laptop displays,
which might be 8 bits per channel addressable but aren't in any sense really
giving us that much precision; it might be 6 or 7. And there's a bunch of
abstraction baked into the panel that you have no control over that limits
this further. So you kinda have to be careful about doing something
seemingly rudimentary like changing its white point from D75 to D55.
Hilariously, it can be the crap displays that'll cause the most grief, not
the high end use case.

OK, so how do you deal with that? Well, it might in fact be that you don't
want to force accuracy, but rather re-render with a purpose: make it look as
decent as possible on that display, even if the transforms you're doing
aren't achieving a colorimetrically accurate result that you can measure,
but do achieve a pleasing result. I know it's funny - how the frak do we
distinguish between these use cases? And then what happens if you have what
seems to be a mixed case of a higher-end self-calibrating display connected
to a laptop with a shitty display? Haha, yeah, you can be farkakte almost no
matter what choices you make. That's my typical setup, by the way.


> > Even if it turns out the application tags its content with displayRGB,
> > thereby in effect getting a null transform, (or a null transform with
> > whatever quantization happens through 32bpc float intermediate color
> > image encoding), that's functionally a do not color manage deviceRGB
> > path.
>
> What is "displayRGB"? Does it essentially mean "write these pixel
> values to any monitor as is"? What if the channel value's data type
> does not match?

Good question. I use 'displayRGB' as generic shorthand for the display
profile, which is different on every system. On my system right now
it's /home/chris/.local/share/icc/edid-388f82e68786f1c5ac552f0b4d0c945f.icc
but it's 

Re: [PATCH v8 2/6] tests: support waitpid

2019-03-04 Thread Leonid Bobrov
You know, I think this whole patch is silly, because waitid() is part of
POSIX, FreeBSD and NetBSD implement it while OpenBSD and DragonFly BSD
don't. I'll ask OpenBSD upstream to merge NetBSD's implementation and
DragonFly BSD upstream to merge FreeBSD's implementation.

OpenBSD aims for standards compliance, so we should blame OpenBSD.

Re: HDR support in Wayland/Weston

2019-03-04 Thread Chris Murphy
On Mon, Mar 4, 2019 at 1:32 AM Simon Ser  wrote:
>
> On Monday, March 4, 2019 8:13 AM, Graeme Gill  wrote:
> > And the current favorite is "blue light filter" effects, for which numerous
> > applications are currently available. They tweak the white point
> > of the display by arbitrarily modifying the hardware per channel LUTs.
> > (i.e. f.lux, Redshift, SunsetScreen, Iris, Night Shift, Twilight etc.)
> >
> > Such applications have their place for those who like the effect, but
> > ideally such usage would not simply blow color management away.
>
> FWIW wlroots has a protocol for this [1]. GNOME and KDE have this feature
> directly integrated in their compositor.

Another interesting use case, Flatpaks. Say I have a flatpak
application which depends on KDE, so I have org.kde.Platform runtime
installed, how does an application that wants to alter display white
point work in such a case? It'll presumably go through the KDE runtime
to try to do this, but then how does that get communicated when this
is actually running on GNOME? And then how does it properly get reset
once that application quits?

I don't really need a literal explanation. I'm fine with simplistic
trust based explanations.

> Is there a "good way" to combine multiple LUTs?

Whether 1D, 2D, or 3D LUTs, they can be concatenated, yes. I'll let Graeme
answer the math aspect of it; he probably knows it off the top of his head.
I don't. But yes, it's possible, and yes, it's really easy to do it wrong,
in particular when there are white point differences. It's likely also more
complicated in an HDR context. So yeah, you'll probably want an API for it.
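For the simplest case, folding two per-channel 1D LUTs into one is just resampling the first through the second. A hedged sketch of that naive approach (linear interpolation; it says nothing about the white-point pitfalls mentioned above):

```c
#include <stddef.h>

/* Sample a 1D LUT (entries in 0..1) at position t with linear interpolation. */
static float lut_sample(const float *lut, size_t size, float t)
{
	if (t <= 0.0f)
		return lut[0];
	if (t >= 1.0f)
		return lut[size - 1];
	float pos = t * (float)(size - 1);
	size_t i = (size_t)pos;
	float frac = pos - (float)i;
	return lut[i] + (lut[i + 1] - lut[i]) * frac;
}

/* Concatenate: out(x) = b(a(x)), baked into a single LUT of 'size' entries. */
static void lut_concat(const float *a, size_t asize,
		       const float *b, size_t bsize,
		       float *out, size_t size)
{
	for (size_t i = 0; i < size; i++) {
		float t = (float)i / (float)(size - 1);
		out[i] = lut_sample(b, bsize, lut_sample(a, asize, t));
	}
}
```

The resampling loses precision if the intermediate LUTs are coarse, which is one reason a compositor would want to combine curves at high resolution.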

Another use case for this would be simulating different standard
observers, including dichromacy and anomalous trichromacy (of which
there are several each but only a couple are particularly common).
When designing user interfaces, or signage, or advertising, it's a
really good idea to check if some substantial number of users are
going to get hit with confusion because they can't distinguish
elements in a design. This is normally implemented in the application
rather than in the OS, but macOS does have an option for this.

And now that I think of it, I'm not sure they're even using 'space' class
ICC profiles but rather 'abstract' class. You can think of the 'abstract'
class as an analog to 'device link': where a device link implies a direct
deviceA>deviceB, or even a direct deviceA>deviceC (the A>B>C concatenation
is precomputed when the device link is built), the abstract profile works in
non-device color spaces such as L*a*b* or XYZ. You can think of them as a
kind of effects filter.

And for that matter, the "blue light filter" can be described with an
abstract profile, or even two of them defining the endpoints of a continuum,
with something like a slider that effectively concatenates them with a
variable coefficient to weight them however the user wants.
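That slider idea is easy to picture as a weighted blend between two endpoint corrections. A hedged sketch using plain per-channel gains rather than literal abstract ICC profiles (the warm gain values below are made-up illustrative numbers, not colorimetry):

```c
/* Per-channel gains approximating "no filter" and "full warm" endpoints.
 * The warm values are illustrative only, not derived from a real
 * correlated color temperature. */
static const float endpoint_neutral[3] = { 1.00f, 1.00f, 1.00f };
static const float endpoint_warm[3]    = { 1.00f, 0.82f, 0.58f };

/* Blend the two endpoints with slider position t in [0, 1]:
 * t = 0 gives the neutral display, t = 1 the full "night" effect. */
static void blue_light_gains(float t, float gains[3])
{
	for (int i = 0; i < 3; i++)
		gains[i] = endpoint_neutral[i] +
			   (endpoint_warm[i] - endpoint_neutral[i]) * t;
}
```

A compositor could concatenate the resulting curve with the calibration LUT instead of letting apps overwrite the hardware LUT outright.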

These things don't necessarily need to be literal ICC profiles as files on
disk if you don't want. The non-LUT display class (primaries plus tone
curve, which can be defined as a gamma function or even parametrically), the
space class, and the abstract class are typically pretty tiny; they could be
created on the fly and fed into lcms2 as virtual profiles, and it would
handle all the transforms with its well-tested math. The named color, device
link, output, and input classes can be quite a bit larger, even massive in
some cases - but offhand I don't see a use case for them here.

-- 
Chris Murphy
-- 
Chris Murphy

Re: HDR support in Wayland/Weston

2019-03-04 Thread Chris Murphy
On Mon, Mar 4, 2019 at 12:13 AM Graeme Gill  wrote:
>
> Chris Murphy wrote:
>
> > A common offender were games. They'd try to access the video card LUT
> > directly for effects, but then not reset it back to what it was,
> > rather reset it to a hardwired assumption the game makes,
>
> And the current favorite is "blue light filter" effects, for which numerous
> applications are currently available. They tweak the white point
> of the display by arbitrarily modifying the hardware per channel LUTs.
> (i.e. f.lux, Redshift, SunsetScreen, Iris, Night Shift, Twilight etc.)

It's a really good example. I have no idea how they work.  GNOME has
one built-in called "Night Light" and I use it on Wayland. So if
Wayland no longer supports applications altering video hardware LUTs,
then I'm not sure how it does it for every pixel from every
application, both Wayland and XWayland.


> Such applications have their place for those who like the effect, but
> ideally such usage would not simply blow color management away.

I've never been suspicious upon reset but I admit I've never measured
a before and after.


> In order of desirability it would be nice to:
>
> 3) Have the hardware Luts restored after an application that uses them
>exits (i.e. like OS X handles it).
>
> 2) Implement virtual per channel LUTs, with the compositor combining them
>together in some way, and have some means of the color management 
> applications
>being aware when the display is being interfered with by another 
> application,
>so that the user can be warned that the color management state is invalid.
>
> 1) A color managed API that lets an application shift the display white point
>using chromatic adaptation, so that such blue light filter applications
>can operate more predictably, as well as some means of the color management
>applications being aware of when this is happening.
>

Yep this could be done with the ICC 'color space' class very cheaply,
built on the fly even. And applied in the intermediate color space so
it affects every pixel for every display regardless of source.
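In its crudest form, shifting the white point via chromatic adaptation reduces to von Kries-style diagonal scaling. A hedged sketch doing the scaling directly in XYZ (real implementations transform into a sharpened cone space such as Bradford first; this only shows the shape of the computation):

```c
/* Crude von Kries-style adaptation: scale XYZ channels by the ratio of
 * destination to source white point. Proper implementations go through
 * a cone space (e.g. Bradford); direct XYZ scaling is illustrative. */
static void adapt_white_point(const double src_white[3],
			      const double dst_white[3],
			      const double in[3], double out[3])
{
	for (int i = 0; i < 3; i++)
		out[i] = in[i] * (dst_white[i] / src_white[i]);
}
```

Exposing something like this as a compositor API would let a "blue light filter" express its effect in color-managed terms instead of scribbling on the hardware LUTs.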

As for notification, I think it's more practical and useful for this to live
in a system UI element related to the display profile and calibration state,
like Settings. Getting messages between applications is harder; there isn't
universal agreement on using D-Bus, or even on asking colord what the state
is. But the system settings (Devices > Displays > Color panel) could
certainly put up a ! in a red triangle, and if you click it or hover you get
a notice that the display state is not calibrated, or some such. It may in
fact still be calibrated, but chromatically adapted with a different white
point in mind; still, I think most users will better grok "not calibrated"
than some more technically accurate description.


-- 
Chris Murphy
-- 
Chris Murphy

Re: [RFC wayland-protocols v2 1/1] Add the color-management protocol

2019-03-04 Thread Kai-Uwe
Hello Sebastian and Pekka,

Am 04.03.19 um 15:44 schrieb Sebastian Wick:

> On 2019-03-04 14:45, Pekka Paalanen wrote:
...
> Not requiring a color space transformation at all (in the best case)
> by supplying a surface with the color space of the output.
>
>> Especially if device link profiles are taken out, which removes the
>> possibility of the application to provide its own color transformation.
With either a
* device link profile, or
* color transform inside the application plus attaching the output
profile to the surface,
the application provides its own color transform in both cases. The
difference is that in the former case the transform usually runs pretty
slowly on the CPU, and memory optimisation for the whole system is not
possible. Each application is cooking on its own. In the latter case the
conversion is performed by the compositor on the GPU, which is way
faster, and it can be more easily optimised both in terms of performance
and memory.
> If the application has a device link profile it can just do the color
> space conversion on its own and assign an ICC profile of the resulting
> color space to the surface.

See above. Without offloading the conversion to the compositor, it will
typically be slower. Maybe I am wrong? But I do not see too many
applications doing GPU color management on their own. They certainly do
not share the color transforms in memory.

As an implementation detail, the compositor needs to cache the color
transforms in memory, as they are expensive on first run - or even cache
them on disk. The latter typically takes the form of a device link profile.
So the compositor might choose to support device links anyway. In my
experience with desktop color management and application color management,
the implementation effort with device links versus without is similar.
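Such caching is essentially a lookup keyed on (source profile, destination profile, intent). A minimal in-memory sketch with hypothetical types (not Weston or colord code):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical handle for a compiled color transform (e.g. a 3D LUT). */
struct color_transform { int placeholder; };

struct cached_transform {
	char src_id[33];  /* e.g. hex digest identifying the source profile */
	char dst_id[33];
	int intent;
	struct color_transform *xform;
	struct cached_transform *next;
};

static struct cached_transform *cache;

static struct color_transform *
get_transform(const char *src_id, const char *dst_id, int intent)
{
	struct cached_transform *c;

	for (c = cache; c; c = c->next)
		if (c->intent == intent &&
		    !strcmp(c->src_id, src_id) && !strcmp(c->dst_id, dst_id))
			return c->xform;   /* cache hit: reuse */

	/* Miss: build the (expensive) transform once and remember it. */
	c = calloc(1, sizeof(*c));
	strncpy(c->src_id, src_id, sizeof(c->src_id) - 1);
	strncpy(c->dst_id, dst_id, sizeof(c->dst_id) - 1);
	c->intent = intent;
	c->xform = calloc(1, sizeof(*c->xform)); /* stand-in for real work */
	c->next = cache;
	cache = c;
	return c->xform;
}
```

A disk cache would serialize the built transform (which is why it tends to look like a device link profile), but the keying is the same.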

thanks,
Kai-Uwe Behrmann


Re: [RFC wayland-protocols v2 1/1] Add the color-management protocol

2019-03-04 Thread Sebastian Wick

On 2019-03-04 14:45, Pekka Paalanen wrote:
> Hi Sebastian and Graeme
>
> On Mon, 04 Mar 2019 13:37:06 +0100
> Sebastian Wick  wrote:
>
> > On 2019-03-04 12:27, Pekka Paalanen wrote:
> > > On Mon, 4 Mar 2019 19:04:11 +1100
> > > Graeme Gill  wrote:
> > >
> > >> Pekka Paalanen wrote:
> > >>
> > >> > My failing is that I haven't read about what ICC v4 definition actually
> > >> > describes, does it characterise content or a device, or is it more
> > >> > about defining a transformation from something to something without
> > >> > saying what something is.
> > >>
> > >> The ICC format encompasses several related forms. The one
> > >> that is pertinent to this discussion is ICC device profiles.
> > >>
> > >> At a minimum an ICC device profile characterizes a devices color
> > >> response by encoding a model of device values (i.e. RGB value
> > >> combinations)
> > >> to device independent color values (i.e. values related to device
> > >> independent CIE XYZ, called Profile Connection Space values in ICC
> > >> terms). A simple model such as color primary values, white point
> > >> and per channel responses is easily invertible to allow transformation
> > >> both directions.
> > >>
> > >> For less additive devices there are more general models (cLut -
> > >> multi-dimensional color Lookup Table), and they are non-trivial
> > >> to invert, so a profile contains both forward tables (device -> PCS
> > >> AKA A2B tables) and reverse tables (PCS -> device AKA B2A tables).
> > >>
> > >> Then there is intents. The most basic is Absolute Colorimetric
> > >> and Relative Colorimetric. The former relates the measured
> > >> values, while the latter one assumes that the observer is adapted
> > >> to the white point of the devices. Typically the difference is assumed
> > >> to be a simple chromatic adaptation transform that can be encoded
> > >> as the absolute white point or a 3x3 matrix. The default intent
> > >> is Relative Colorimetric because this is the transform of least
> > >> surprise.
> > >>
> > >> cLUT based profiles allow for two additional intents,
> > >> Perceptual where out of gamut colors are mapped to be within
> > >> gamut while retaining proportionality, and Saturation where
> > >> colors are expanded if possible to maximize colorfulness. These
> > >> two intents allow the profile creator considerable latitude in
> > >> how they achieve these aims, and they can only be encoded using
> > >> a cLUT model.
> > >
> > > Hi Graeme,
> > >
> > > thank you for taking the time to explain this, much appreciated.
> > >
> > > I'm still wondering, if an application uses an ICC profile for the
> > > content it provides and defines an intent with it, should a compositor
> > > apply that intent when converting from application color space to the
> > > blending color space in the compositor?
> >
> > I think the correct approach would be to first convert from
> > application color space to the output color space using the intent and
> > then to blending color space. That way all colors in the blending
> > color space will fit in the output color space.
> >
> > > Should the same application provided intent be used when converting the
> > > composition result of the window to the output color space?
> >
> > When all blending color sources are in the output color space so is
> > the resulting color. No intent required.
>
> Right, this is what I did not realize until Graeme explained it. Very
> good.
>
> > >> > What is the use for a device link profile?
> > >>
> > >> Device link profile use was a suggestion to overcome the previously
> > >> stated
> > >> impossibility of a client knowing which output a surface was mapped
> > >> to.
> > >> Since this no longer appears to be impossible (due to
> > >> wl_surface.enter/leave events
> > >> being available), device link profiles should be dropped from the
> > >> extension.
> > >> It is sufficient that a client can do its own color transformation
> > >> to the primary output if it chooses, while leaving the compositor to
> > >> perform
> > >> a fallback color transform for any portion that is mapped to a
> > >> secondary output,
> > >> or for any client that is color management unaware, or does not wish
> > >> to
> > >> implement its own color transforms.
> > >> This greatly reduces the implementation burden on the compositor.
> > >
> > > Btw. wl_surface.enter/leave is not unambigous, because they may
> > > indicate multiple outputs simultaneously. I did talk with you about
> > > adding an event to define the one output the app should be optimizing
> > > for, but so far neither protocol proposal has that.
> > >
> > > Niels, Sebastian, would you consider such event?
> >
> > My proposal has the zwp_color_space_feedback_v1 interface which is
> > trying to solve this issue by listing the color spaces a surface was
> > converted to in order of importance.
>
> Oh yes, indeed, sorry. We had a discussion going on about that.
>
> Either advertising the main output for the surface, or the compositor's
> preferred color space for the surface.
>
> I'm still a little confused here. If an application ICC profile
> describes how the content maps to PCS, and an output ICC profile
> describes how PCS map to the output space, and we always need both in
> the compositor to have a meaningful transformation of content to output
> space, what was the benefit of the app knowing the "primary 

Re: [RFC wayland-protocols v2 1/1] Add the color-management protocol

2019-03-04 Thread Pekka Paalanen
Hi Sebastian and Graeme

On Mon, 04 Mar 2019 13:37:06 +0100
Sebastian Wick  wrote:

> On 2019-03-04 12:27, Pekka Paalanen wrote:
> > On Mon, 4 Mar 2019 19:04:11 +1100
> > Graeme Gill  wrote:
> >   
> >> Pekka Paalanen wrote:
> >>   
> >> > My failing is that I haven't read about what ICC v4 definition actually
> >> > describes, does it characterise content or a device, or is it more
> >> > about defining a transformation from something to something without
> >> > saying what something is.  
> >> 
> >> The ICC format encompasses several related forms. The one
> >> that is pertinent to this discussion is ICC device profiles.
> >> 
> >> At a minimum an ICC device profile characterizes a devices color
> >> response by encoding a model of device values (i.e. RGB value 
> >> combinations)
> >> to device independent color values (i.e. values related to device
> >> independent CIE XYZ, called Profile Connection Space values in ICC
> >> terms). A simple model such as color primary values, white point
> >> and per channel responses is easily invertible to allow transformation
> >> both directions.
> >> 
> >> For less additive devices there are more general models (cLut -
> >> multi-dimensional color Lookup Table), and they are non-trivial
> >> to invert, so a profile contains both forward tables (device -> PCS
> >> AKA A2B tables) and reverse tables (PCS -> device AKA B2A tables).
> >> 
> >> Then there is intents. The most basic is Absolute Colorimetric
> >> and Relative Colorimetric. The former relates the measured
> >> values, while the latter one assumes that the observer is adapted
> >> to the white point of the devices. Typically the difference is assumed
> >> to be a simple chromatic adaptation transform that can be encoded
> >> as the absolute white point or a 3x3 matrix. The default intent
> >> is Relative Colorimetric because this is the transform of least
> >> surprise.
> >> 
> >> cLUT based profiles allow for two additional intents,
> >> Perceptual where out of gamut colors are mapped to be within
> >> gamut while retaining proportionality, and Saturation where
> >> colors are expanded if possible to maximize colorfulness. These
> >> two intents allow the profile creator considerable latitude in
> >> how they achieve these aims, and they can only be encoded using
> >> a cLUT model.  
> > 
> > Hi Graeme,
> > 
> > thank you for taking the time to explain this, much appreciated.
> > 
> > I'm still wondering, if an application uses an ICC profile for the
> > content it provides and defines an intent with it, should a compositor
> > apply that intent when converting from application color space to the
> > blending color space in the compositor?  
> 
> I think the correct approach would be to first convert from
> application color space to the output color space using the intent and
> then to blending color space. That way all colors in the blending
> color space will fit in the output color space.
> 
> > Should the same application provided intent be used when converting the
> > composition result of the window to the output color space?  
> 
> When all blending color sources are in the output color space so is
> the resulting color. No intent required.

Right, this is what I did not realize until Graeme explained it. Very
good.

> >> > What is the use for a device link profile?  
> >> 
> >> Device link profile use was a suggestion to overcome the previously 
> >> stated
> >> impossibility of a client knowing which output a surface was mapped 
> >> to.
> >> Since this no longer appears to be impossible (due to 
> >> wl_surface.enter/leave events
> >> being available), device link profiles should be dropped from the 
> >> extension.
> >> It is sufficient that a client can do its own color transformation
> >> to the primary output if it chooses, while leaving the compositor to 
> >> perform
> >> a fallback color transform for any portion that is mapped to a 
> >> secondary output,
> >> or for any client that is color management unaware, or does not wish 
> >> to
> >> implement its own color transforms.
> >> This greatly reduces the implementation burden on the compositor.  
> > 
> > Btw. wl_surface.enter/leave is not unambigous, because they may
> > indicate multiple outputs simultaneously. I did talk with you about
> > adding an event to define the one output the app should be optimizing
> > for, but so far neither protocol proposal has that.
> > 
> > Niels, Sebastian, would you consider such event?  
> 
> My proposal has the zwp_color_space_feedback_v1 interface which is
> trying to solve this issue by listing the color spaces a surface was
> converted to in order of importance.

Oh yes, indeed, sorry. We had a discussion going on about that.

Either advertising the main output for the surface, or the compositor's
preferred color space for the surface.

I'm still a little confused here. If an application ICC profile
describes how the content maps to PCS, and an output ICC profile
describes how PCS map to the 

Re: [RFC wayland-protocols v2 1/1] Add the color-management protocol

2019-03-04 Thread Pekka Paalanen
On Mon, 4 Mar 2019 23:09:59 +1100
Graeme Gill  wrote:

> Pekka Paalanen wrote:
> 
> Hi,
> 
> > another thought about a compositor implementation detail I would like
> > to ask you all is about the blending space.
> > 
> > If the compositor blending space was CIE XYZ with direct (linear)
> > encoding to IEEE754 32-bit float values in pixels, with the units of Y
> > chosen to match an absolute physical luminance value (or something that
> > corresponds with the HDR specifications), would that be sufficient for
> > all imaginable and realistic color reproduction purposes, HDR included?  
> 
> I don't think such a thing is necessary. There is no need to transform
> to some other primary basis such as XYZ, unless you were attempting
> to compose different colorspaces together, something that is
> highly undesirable at the compositor blending stage, due to the
> lack of input possible from the clients as to source colorspace
> encoding and gamut mapping/intent handling.
> 
> AFAIK, blending just has to be in a linear light space in
> a common set of primaries. If the surfaces that will be
> composed have already been converted to the output device colorspace,
> then all that is necessary for blending is that they be converted
> to a linear light version of the output device colorspace
> via per channel curves. Such curves do not have to be 100% accurate
> to get most of the benefits of linear light composition. If the
> per channel LUTs and compositing is done to sufficient
> resolution, this will leave the color management fidelity
> completely intact.

...

> > Meaning, that all client content gets converted according to the client
> > provided ICC profiles to CIE XYZ, composited/blended, and then
> > converted to output space according to the output ICC profile.  
> 
> See all my previous discussions. This approach has many problems
> when it comes to gamut and intent.

Hi Graeme,

ok, doing the composition and blending in the output color space, but in
linear light, sounds good. I'm glad my overkill suggestion was not
necessary.


Thanks,
pq



Re: HDR support in Wayland/Weston

2019-03-04 Thread Simon Ser
On Monday, March 4, 2019 10:50 AM, Carsten Haitzler  
wrote:
> > How should this look like? Disclaimer: I have no idea how these applications
> > work and I know nothing about color management.
> > I'm guessing this is a restriction of the "change the whole LUTs" API. Are
> > there any features the "blue light filter" app won't be able to implement
> > when switching to this API? Would the compositor part become complicated
> > (judging from [2] it seems different "blue light filter" apps may compute
> > LUTs differently)?
> > Since many compositors (GNOME, KDE, wlroots, maybe more) implement a way to
> > apply a "blue light filter", I think it's important to be able to notify
> > color management applications that they don't have exclusive access. Or
> > maybe this should just be handled internally by the compositor? (Display a
> > warning or something?)
>
> apps should not have exclusive access. we're re-doing the whole horrid
> "install colormap" thing from the X days of 256 color (or
> paletted/colormapped displays).

Yes, of course. I should've said "that the compositor won't display
accurate colors".

Re: [PATCH v8 2/6] tests: support waitpid

2019-03-04 Thread Pekka Paalanen
On Wed, 27 Feb 2019 21:13:09 +0200
Leonid Bobrov  wrote:

> From: Vadim Zhukov 
> 
> *BSD don't have waitid()

Hi Leonid,

many details to fix here. This patch breaks all tests that use the
runner on Linux.

> 
> Co-authored-by: Koop Mast 
> Signed-off-by: Leonid Bobrov 
> ---
>  configure.ac|  4 
>  tests/test-compositor.c | 25 +++--
>  tests/test-runner.c | 29 -
>  3 files changed, 55 insertions(+), 3 deletions(-)
> 
> diff --git a/configure.ac b/configure.ac
> index 495e1a6..c0f1c37 100644
> --- a/configure.ac
> +++ b/configure.ac
> @@ -65,6 +65,10 @@ AC_SUBST(GCC_CFLAGS)
>  AC_CHECK_HEADERS([sys/prctl.h])
>  AC_CHECK_FUNCS([accept4 mkostemp posix_fallocate prctl])
>  
> +# waitid() and signal.h are needed for the test suite.
> +AC_CHECK_FUNCS([waitid])

Should you not check for waitpid as well, to error out on systems where
neither is present rather than assuming that lack of waitid means
waitpid is present?
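One way to express that check in configure.ac (a sketch of the suggestion, untested against this tree):

```
AC_CHECK_FUNCS([waitid waitpid])
AS_IF([test "x$ac_cv_func_waitid" != xyes && test "x$ac_cv_func_waitpid" != xyes],
      [AC_MSG_ERROR([the test suite needs either waitid() or waitpid()])])
```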

> +AC_CHECK_HEADERS([signal.h])

The result for the signal.h check does not seem to be used at all.
Should it not?

> +
>  AC_ARG_ENABLE([libraries],
> [AC_HELP_STRING([--disable-libraries],
> [Disable compilation of wayland libraries])],
> diff --git a/tests/test-compositor.c b/tests/test-compositor.c
> index 72f6351..6e12630 100644
> --- a/tests/test-compositor.c
> +++ b/tests/test-compositor.c
> @@ -23,6 +23,8 @@
>   * SOFTWARE.
>   */
>  
> +#include "config.h"
> +
>  #include 
>  #include 
>  #include 
> @@ -86,8 +88,8 @@ get_socket_name(void)
>   static char retval[64];
>  
>   gettimeofday(&tv, NULL);
> - snprintf(retval, sizeof retval, "wayland-test-%d-%ld%ld",
> -  getpid(), tv.tv_sec, tv.tv_usec);
> + snprintf(retval, sizeof retval, "wayland-test-%d-%lld%lld",
> +  getpid(), (long long)tv.tv_sec, (long long)tv.tv_usec);

This hunk does not seem to have anything to do with waitid, hence does
not belong in this patch.

>  
>   return retval;
>  }
> @@ -97,10 +99,15 @@ handle_client_destroy(void *data)
>  {
>   struct client_info *ci = data;
>   struct display *d;
> +#ifdef HAVE_WAITID
>   siginfo_t status;
> +#else
> + int istatus;
> +#endif
>  
>   d = ci->display;
>  
> +#ifdef HAVE_WAITID
>   assert(waitid(P_PID, ci->pid, &status, WEXITED) != -1);

Would it not be easier to rewrite this code to use waitpid() instead?

>  
>   switch (status.si_code) {
> @@ -118,6 +125,20 @@ handle_client_destroy(void *data)
>   ci->exit_code = status.si_status;
>   break;
>   }
> +#else
> + assert(waitpid(ci->pid, &istatus, WNOHANG) != -1);

Why WNOHANG? We do want to wait for the child to exit if it hasn't
already.

> +
> + if (WIFSIGNALED(istatus)) {
> + fprintf(stderr, "Client '%s' was killed by signal %d\n",
> + ci->name, WTERMSIG(istatus));
> + ci->exit_code = WEXITSTATUS(istatus);
> + } else if (WIFEXITED(istatus)) {
> + if (WEXITSTATUS(istatus) != EXIT_SUCCESS)
> + fprintf(stderr, "Client '%s' exited with code %d\n",
> + ci->name, WEXITSTATUS(istatus));
> + ci->exit_code = WEXITSTATUS(istatus);
> + }
> +#endif

Btw. such big or multiple #ifdef blocks inside a function that also has
common stuff make understanding it more difficult. If such alternate
paths must exist, they would be better as a separate function whose
whole body is switched.

>  
>   ++d->clients_terminated_no;
>   if (d->clients_no == d->clients_terminated_no) {
> diff --git a/tests/test-runner.c b/tests/test-runner.c
> index 1487dc4..7fa72eb 100644
> --- a/tests/test-runner.c
> +++ b/tests/test-runner.c
> @@ -23,6 +23,8 @@
>   * SOFTWARE.
>   */
>  
> +#include "config.h"
> +
>  #define _GNU_SOURCE
>  
>  #include 
> @@ -32,6 +34,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -293,7 +296,11 @@ int main(int argc, char *argv[])
>   const struct test *t;
>   pid_t pid;
>   int total, pass;
> +#ifdef HAVE_WAITID
>   siginfo_t info;
> +#else
> + int status;
> +#endif
>  
>   if (isatty(fileno(stderr)))
>   is_atty = 1;
> @@ -336,7 +343,8 @@ int main(int argc, char *argv[])
>   if (pid == 0)
>   run_test(t); /* never returns */
>  
> - if (waitid(P_PID, pid, &info, WEXITED)) {
> +#ifdef HAVE_WAITID
> + if (waitid(P_PID, 0, &info, WEXITED)) {

Why change this?

Could this code not be rewritten to use waitpid() instead of waitid()?

>   stderr_set_color(RED);
>   fprintf(stderr, "waitid failed: %m\n");
>   stderr_reset_color();
> @@ -367,6 +375,25 @@ int main(int argc, char *argv[])
>  
>   break;
>   }
> +#else
> + if (waitpid(-1, &status, 0) == -1) {

Why not wait for the right pid?

Re: [RFC wayland-protocols v2 1/1] Add the color-management protocol

2019-03-04 Thread Sebastian Wick

On 2019-03-04 12:27, Pekka Paalanen wrote:

On Mon, 4 Mar 2019 19:04:11 +1100
Graeme Gill  wrote:


Pekka Paalanen wrote:

> My failing is that I haven't read about what ICC v4 definition actually
> describes, does it characterise content or a device, or is it more
> about defining a transformation from something to something without
> saying what something is.

The ICC format encompasses several related forms. The one
that is pertinent to this discussion is ICC device profiles.

At a minimum an ICC device profile characterizes a device's color
response by encoding a model of device values (i.e. RGB value combinations)
to device independent color values (i.e. values related to device
independent CIE XYZ, called Profile Connection Space values in ICC
terms). A simple model such as color primary values, white point
and per channel responses is easily invertible to allow transformation
in both directions.

For less additive devices there are more general models (cLut -
multi-dimensional color Lookup Table), and they are non-trivial
to invert, so a profile contains both forward tables (device -> PCS
AKA A2B tables) and reverse tables (PCS -> device AKA B2A tables).

Then there are intents. The most basic are Absolute Colorimetric
and Relative Colorimetric. The former relates the measured
values, while the latter assumes that the observer is adapted
to the white point of the device. Typically the difference is assumed
to be a simple chromatic adaptation transform that can be encoded
as the absolute white point or a 3x3 matrix. The default intent
is Relative Colorimetric because this is the transform of least
surprise.

cLUT based profiles allow for two additional intents,
Perceptual where out of gamut colors are mapped to be within
gamut while retaining proportionality, and Saturation where
colors are expanded if possible to maximize colorfulness. These
two intents allow the profile creator considerable latitude in
how they achieve these aims, and they can only be encoded using
a cLUT model.


Hi Graeme,

thank you for taking the time to explain this, much appreciated.

I'm still wondering, if an application uses an ICC profile for the
content it provides and defines an intent with it, should a compositor
apply that intent when converting from application color space to the
blending color space in the compositor?


I think the correct approach would be to first convert from
application color space to the output color space using the intent and
then to blending color space. That way all colors in the blending
color space will fit in the output color space.


Should the same application provided intent be used when converting the
composition result of the window to the output color space?


When all blending color sources are in the output color space, so is
the resulting color. No intent required.


What would be a reasonable way to do those conversions, using which
intents?


So in summary an ICC profile provides device characterization, as well
as facilitating fast and efficient transformation between different
devices, as well as a choice of intent handling that cannot typically
be computed on the fly. Naturally to do a device to device space
transform you need two device profiles, one for the source space and one
for the destination.


Do I understand correctly that an ICC profile can provide separate
(A2B and B2A) cLUT for each intent?


That's my understanding as well.


> What is the use for a device link profile?

Device link profile use was a suggestion to overcome the previously stated
impossibility of a client knowing which output a surface was mapped to.
Since this no longer appears to be impossible (due to
wl_surface.enter/leave events being available), device link profiles
should be dropped from the extension.

It is sufficient that a client can do its own color transformation
to the primary output if it chooses, while leaving the compositor to
perform a fallback color transform for any portion that is mapped to a
secondary output, or for any client that is color management unaware,
or does not wish to implement its own color transforms.
This greatly reduces the implementation burden on the compositor.


Btw. wl_surface.enter/leave is not unambiguous, because they may
indicate multiple outputs simultaneously. I did talk with you about
adding an event to define the one output the app should be optimizing
for, but so far neither protocol proposal has that.

Niels, Sebastian, would you consider such event?


My proposal has the zwp_color_space_feedback_v1 interface which is
trying to solve this issue by listing the color spaces a surface was
converted to in order of importance.



Thanks,
pq


Re: [RFC wayland-protocols 1/1] unstable: add color management protocol

2019-03-04 Thread Graeme Gill
Pekka Paalanen wrote:

> Is this sent for all color spaces added by clients as well
> (color_space_from_icc request)? If yes, I think that has potential to
> become denial-of-service attack vector. A malicious client adding a ton
> of color spaces could easily cause all new clients to disconnect when
> the compositor send buffer overflows on creating a zwp_color_manager_v1
> object. It is even more prone because the initial events will carry a
> fd per profile, and the number of fds is much more limited than
> messages.

I don't see a case for (source) color profiles uploaded by
a client being visible to other clients. Other clients
don't know what they represent, so can't really
make use of them. It would be a waste of time to even look
for them.

The idea of making standard profiles available to all clients
is a way of not duplicating well understood spaces. These
are usable by all clients because they have understood identifiers
and definitions, and are not going to change while the client is
using them. (Note however that it is possible for an output
profile to change, so ideally there should be a way of a client
being notified of this, so that they can take appropriate
action if they wish).

> Chris had some good comments on terminology. Is "color space" correct
> here? "Color image encoding"?

I suspect different people may mean different things by these.

By my definitions, a colorspace is something that comes
out of, or goes into a color capture or reproduction device,
or something that can be related to device independent color values.

Encoding on the other hand, is a way of representing those
color values in a digital form. So by my definition that
includes reversible transformations such as YCC like encodings,
different bit depths, float vs. integer, and ranges (i.e. video range
encoding), etc.

I'm not sure if there is a formal definition that can be pointed to.

> - zero/one or more outputs, can't the same color space apply to
>   multiple outputs? Most likely if the outputs are not actually
>   calibrated but use either the standard sRGB assumption or maybe an
>   ICC profile from the vendor rather than individually measured.

Yes, that's the most likely scenario. But all the profiles
(i.e. output profiles, standard profiles, client uploaded profiles)
should be able to be referred to by a handle or object identifier,
and that would be what is used to tag the output or surface.

Cheers,
Graeme Gill.

Re: HDR support in Wayland/Weston

2019-03-04 Thread Kai-Uwe
Am 04.03.19 um 09:32 schrieb Simon Ser:
> On Monday, March 4, 2019 8:13 AM, Graeme Gill  wrote:
>> And the current favorite is "blue light filter" effects, for which numerous
>> applications are currently available. They tweak the white point
>> of the display by arbitrarily modifying the hardware per channel LUTs.
>> (i.e. f.lux, Redshift, SunsetScreen, Iris, Night Shift, Twilight etc.)
>>
>> Such applications have their place for those who like the effect, but
>> ideally such usage would not simply blow color management away.
> FWIW wlroots has a protocol for this [1]. GNOME and KDE have this feature
> directly integrated in their compositor.

These tools work on the assumption that all device RGB is the same (sRGB).
That is, mildly put, not ideal.

The shortcomings of ignoring the output color space become evident
when one does multi-monitor white point adjustment. A BT.2020 and a
REC.709 or DCI-P3 display will simply disagree widely on a red light
modification when manipulating the 1D LUT without considering the device
specifics. The correct way is:

* convert R+G+B ramps to CIE*XYZ (usually by a ICC profile)
* do chromatic adaption in CIE*XYZ, monitor white point -> common
destination white point (D50, K2800...)
* convert back to per output device RGB [CIE*XYZ->deviceRGB]
* merge with ICC profile VCGT calibration ramp, and finally
* apply the 1D LUT for the correct output

All the above tools take no care of the output profile and apply the
same device-unspecific manipulation to all outputs, assuming sRGB.
(Assuming sRGB as the device primaries performs pretty badly in the face
of BT.2020 / HDR.)

>> In order of desirability it would be nice to:
>>
>> 3) Have the hardware LUTs restored after an application that uses them
>>    exits (i.e. like OS X handles it).
> Agreed. This is done in our protocol and there's no such issue when built
> into the compositor.
>
>> 2) Implement virtual per channel LUTs, with the compositor combining them
>>    together in some way, and have some means of the color management
>>    applications being aware when the display is being interfered with by
>>    another application, so that the user can be warned that the color
>>    management state is invalid.
> Is there a "good way" to combine multiple LUTs?

You mean a 1D LUT? Yes, in CIE XYZ or another linear blending space. See above.

>> 1) A color managed API that lets an application shift the display white point
>>using chromatic adaptation, so that such blue light filter applications
>>    can operate more predictably, as well as some means of the color
>>    management applications being aware of when this is happening.
> How should this look like? Disclaimer: I have no idea how these applications
> work and I know nothing about color management.
>
> I'm guessing this is a restriction of the "change the whole LUTs" API. Are
> there any features the "blue light filter" app won't be able to implement when
> switching to this API? Would the compositor part become complicated (judging
> from [2] it seems different "blue light filter" apps may compute LUTs
> differently)?

White point adaptation has been part of the ICC spec for a very long time.

> Since many compositors (GNOME, KDE, wlroots, maybe more) implement a way to
> apply a "blue light filter", I think it's important to be able to notify color
> management applications that they don't have exclusive access. Or maybe this
> should just be handled internally by the compositor? (Display a warning or
> something?)
A color management front end can take care of such things, but only if
it can know that a manipulation is in action. Be that manipulation
visual impairment simulation, white point altering, saturation altering,
a sepia effect, you name it.
> [1]: https://github.com/swaywm/wlr-protocols/blob/master/unstable/wlr-gamma-control-unstable-v1.xml
> [2]: https://github.com/jonls/redshift/blob/master/README-colorramp

Neither link talks about device specific LUT manipulation. They are
color management unaware and the outcome will vary from device to device.

That said, blue light reduction is a desirable feature. I enjoy it each
day/night using ICC style color management.

Kai-Uwe Behrmann
-- 
www.oyranos.org


Re: [RFC wayland-protocols v2 1/1] Add the color-management protocol

2019-03-04 Thread Graeme Gill

Pekka Paalanen wrote:

Hi,

> another thought about a compositor implementation detail I would like
> to ask you all is about the blending space.
> 
> If the compositor blending space was CIE XYZ with direct (linear)
> encoding to IEEE754 32-bit float values in pixels, with the units of Y
> chosen to match an absolute physical luminance value (or something that
> corresponds with the HDR specifications), would that be sufficient for
> all imaginable and realistic color reproduction purposes, HDR included?

I don't think such a thing is necessary. There is no need to transform
to some other primary basis such as XYZ, unless you were attempting
to compose different colorspaces together, something that is
highly undesirable at the compositor blending stage, due to the
lack of input possible from the clients as to source colorspace
encoding and gamut mapping/intent handling.

AFAIK, blending just has to be in a linear light space in
a common set of primaries. If the surfaces that will be
composed have already been converted to the output device colorspace,
then all that is necessary for blending is that they be converted
to a linear light version of the output device colorspace
via per channel curves. Such curves do not have to be 100% accurate
to get most of the benefits of linear light composition. If the
per channel LUTs and compositing is done to sufficient
resolution, this will leave the color management fidelity
completely intact.

> Or do I have false assumptions about HDR specifications and they do
> not define brightness in physical absolute units but somehow in
> relative units? I think I saw "nit" as the unit somewhere which is an
> absolute physical unit.

Unfortunately the HDR specifications that have been (hastily) adopted
by the video industry are specified in absolute units. I say unfortunately
because the mastering standards they are derived from have specified
viewing conditions and brightness, something that does not occur
in typical consumer viewing situations. So the consumer needs a "brightness"
control to adapt the imagery for their actual viewing environment and
display capability. The problem is that even if the user were to specify
somehow what absolute "brightness" they wanted, the HDR specs do not
specify what level the "typical" (or as I would call it, the diffuse
white value) of the program material should be, so there is no way to
calculate the brightness multiplier.

Why is this important? Because if you know the nominal diffuse white
(which is equivalent to the 100% white of SDR), then you know
how to process the HDR you get from the source to the HDR capabilities
of the display. You can map 100% diffuse white to the user's brightness
setting, and then compress the specular highlight/direct light source
values above this to match the display's maximum HDR brightness level.
(Of course proprietary dynamic mappings are all the rage too.)

Interestingly it seems that some systems are starting to simply
assume the HDR 48 or 100 cd/m^2 encoding level as the diffuse white level,
and go from there.

> It might be heavy to use, both storage wise and computationally, but I
> think Weston should start with a gold standard approach that we can
> verify to be correct, encode the behaviour into the test suite, and
> then look at possible optimizations by looking at e.g. other blending
> spaces or opportunistically skipping the blending space.

The HDR literature has a bunch of information on encoding precision
requirements, since they spend a lot of time trying to figure
out how to encode HDR efficiently. Encoded HDR typically uses
12 bits, and 16 bit half float encoding would work well if you
have hardware to handle it, but for integer encoding, I suspect
24 - 32 bits/channel might be needed.

> Meaning, that all client content gets converted according to the client
> provided ICC profiles to CIE XYZ, composited/blended, and then
> converted to output space according to the output ICC profile.

See all my previous discussions. This approach has many problems
when it comes to gamut and intent.

Cheers,
Graeme Gill.

Re: [PATCH v8 1/6] tests: use /dev/fd to count open fds

2019-03-04 Thread Pekka Paalanen
On Wed, 27 Feb 2019 21:13:08 +0200
Leonid Bobrov  wrote:

> *BSD don't have /proc/self/fd, they use /dev/fd instead.
> On Linux, /dev/fd is a symlink to /proc/self/fd
> 
> Signed-off-by: Leonid Bobrov 
> ---
>  tests/test-helpers.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/tests/test-helpers.c b/tests/test-helpers.c
> index b2189d8..03419fb 100644
> --- a/tests/test-helpers.c
> +++ b/tests/test-helpers.c
> @@ -47,8 +47,8 @@ count_open_fds(void)
>   struct dirent *ent;
>   int count = 0;
>  
> - dir = opendir("/proc/self/fd");
> - assert(dir && "opening /proc/self/fd failed.");
> + dir = opendir("/dev/fd");
> + assert(dir && "opening /dev/fd failed.");
>  
>   errno = 0;
>   while ((ent = readdir(dir))) {
> @@ -57,7 +57,7 @@ count_open_fds(void)
>   continue;
>   count++;
>   }
> - assert(errno == 0 && "reading /proc/self/fd failed.");
> + assert(errno == 0 && "reading /dev/fd failed.");
>  
>   closedir(dir);
>  

Hi,

this patch works for me, and is a candidate for landing without BSD CI.

Reviewed-by: Pekka Paalanen 

(Leonid, this kind of reply means that I'm ok with the patch, and will
merge it a day or two later unless someone disagrees.)


Thanks,
pq



Re: [RFC wayland-protocols v2 1/1] Add the color-management protocol

2019-03-04 Thread Pekka Paalanen
On Mon, 4 Mar 2019 22:07:24 +1100
Graeme Gill  wrote:

> Pekka Paalanen wrote:
> 
> Hi,
> 
> > Does that even make any difference if the output space was linear at
> > blending step, and gamma was applied after that?  
> 
> as mentioned earlier, I think talk of using device links is now a red herring.
> 
> If it is desirable to do blending in a linear light space (as is typically
> the case), then this can be implemented in a way that leverages color
> management, without interfering with it. In a color managed workflow
> all color values intended for a particular output will be converted to
> that device's output space either by the client or the compositor on
> the client's behalf. The installed output device profile then can
> provide the necessary blending information. Even if
> a device colorspace is not terribly additive for the purposes of
> accurate profile creation, RGB devices are generally additive enough to
> approximate linear light mixing with per channel lookup curves. It's
> pretty straightforward to use the ICC profile to create such curves
> (for each of the RGB channels in turn with the others at zero, lookup
>  the Relative Colorimetric XYZ value and take the dot product with the
>  100% channel value. Ensure the curves are monotonic, and normalize them
>  to map 0 and 1 unchanged.)
> Compositing in a linear light space has to occur to sufficiently high
> precision of course, so as not to introduce quantization errors.
> After composition the inverse curves would be applied to return to the
> output device space.

Excellent!

Thank you,
pq



Re: [RFC wayland-protocols v2 1/1] Add the color-management protocol

2019-03-04 Thread Pekka Paalanen
On Mon, 4 Mar 2019 19:04:11 +1100
Graeme Gill  wrote:

> Pekka Paalanen wrote:
> 
> > My failing is that I haven't read about what ICC v4 definition actually
> > describes, does it characterise content or a device, or is it more
> > about defining a transformation from something to something without
> > saying what something is.  
> 
> The ICC format encompasses several related forms. The one
> that is pertinent to this discussion is ICC device profiles.
> 
> At a minimum an ICC device profile characterizes a device's color
> response by encoding a model of device values (i.e. RGB value combinations)
> to device independent color values (i.e. values related to device
> independent CIE XYZ, called Profile Connection Space values in ICC
> terms). A simple model such as color primary values, white point
> and per channel responses is easily invertible to allow transformation
> in both directions.
> 
> For less additive devices there are more general models (cLut -
> multi-dimensional color Lookup Table), and they are non-trivial
> to invert, so a profile contains both forward tables (device -> PCS
> AKA A2B tables) and reverse tables (PCS -> device AKA B2A tables).
> 
> Then there are intents. The most basic are Absolute Colorimetric
> and Relative Colorimetric. The former relates the measured
> values, while the latter assumes that the observer is adapted
> to the white point of the device. Typically the difference is assumed
> to be a simple chromatic adaptation transform that can be encoded
> as the absolute white point or a 3x3 matrix. The default intent
> is Relative Colorimetric because this is the transform of least
> surprise.
> 
> cLUT based profiles allow for two additional intents,
> Perceptual where out of gamut colors are mapped to be within
> gamut while retaining proportionality, and Saturation where
> colors are expanded if possible to maximize colorfulness. These
> two intents allow the profile creator considerable latitude in
> how they achieve these aims, and they can only be encoded using
> a cLUT model.

Hi Graeme,

thank you for taking the time to explain this, much appreciated.

I'm still wondering, if an application uses an ICC profile for the
content it provides and defines an intent with it, should a compositor
apply that intent when converting from application color space to the
blending color space in the compositor?

Should the same application provided intent be used when converting the
composition result of the window to the output color space?

What would be a reasonable way to do those conversions, using which
intents?

> So in summary an ICC profile provides device characterization, as well
> as facilitating fast and efficient transformation between different
> devices, as well as a choice of intent handling that cannot typically
> be computed on the fly. Naturally to do a device to device space transform
> you need two device profiles, one for the source space and one
> for the destination.

Do I understand correctly that an ICC profile can provide separate
(A2B and B2A) cLUT for each intent?

> > What is the use for a device link profile?  
> 
> Device link profile use was a suggestion to overcome the previously stated
> impossibility of a client knowing which output a surface was mapped to.
> Since this no longer appears to be impossible (due to
> wl_surface.enter/leave events being available), device link profiles
> should be dropped from the extension.
> It is sufficient that a client can do its own color transformation
> to the primary output if it chooses, while leaving the compositor to
> perform a fallback color transform for any portion that is mapped to a
> secondary output, or for any client that is color management unaware,
> or does not wish to implement its own color transforms.
> This greatly reduces the implementation burden on the compositor.

Btw. wl_surface.enter/leave is not unambiguous, because they may
indicate multiple outputs simultaneously. I did talk with you about
adding an event to define the one output the app should be optimizing
for, but so far neither protocol proposal has that.

Niels, Sebastian, would you consider such event?


Thanks,
pq



Re: [RFC wayland-protocols v2 1/1] Add the color-management protocol

2019-03-04 Thread Graeme Gill
Pekka Paalanen wrote:

Hi,

> Does that even make any difference if the output space was linear at
> blending step, and gamma was applied after that?

as mentioned earlier, I think talk of using device links is now a red herring.

If it is desirable to do blending in a linear light space (as is typically
the case), then this can be implemented in a way that leverages color
management, without interfering with it. In a color managed workflow
all color values intended for a particular output will be converted to
that device's output space either by the client or the compositor on
the client's behalf. The installed output device profile then can
provide the necessary blending information. Even if
a device colorspace is not terribly additive for the purposes of
accurate profile creation, RGB devices are generally additive enough to
approximate linear light mixing with per channel lookup curves. It's
pretty straightforward to use the ICC profile to create such curves
(for each of the RGB channels in turn with the others at zero, lookup
 the Relative Colorimetric XYZ value and take the dot product with the
 100% channel value. Ensure the curves are monotonic, and normalize them
 to map 0 and 1 unchanged.)
Compositing in a linear light space has to occur to sufficiently high
precision of course, so as not to introduce quantization errors.
After composition the inverse curves would be applied to return to the
output device space.

Cheers,
Graeme Gill.

Re: HDR support in Wayland/Weston

2019-03-04 Thread Graeme Gill
Chris Murphy wrote:

Hi Chris,

> Well you need a client to do display calibration which necessarily
> means altering the video LUT (to linear) in order to do the
> measurements from which a correction curve is computed, and then that
> client needs to install that curve into the video LUT. Now, colord
> clearly has such capability, as it's applying vcgt tags in ICC
> profiles now. If colord can do it, then what prevents other clients
> from doing it?

my suggestion is not to make the profiling application deal in
these sorts of nuts and bolts. If there is an API to install
a profile for a particular output, then the Compositor can
take responsibility for implementing the ICC 'vcgt' tag.
It can choose to implement it in hardware (CRTC), or any other
way it wants, as long as it is implemented so that it
doesn't disadvantage the result compared to implementing it in hardware.

Cheers,
Graeme.

Re: HDR support in Wayland/Weston

2019-03-04 Thread Pekka Paalanen
On Fri, 1 Mar 2019 12:47:05 -0700
Chris Murphy  wrote:

> On Fri, Mar 1, 2019 at 3:10 AM Pekka Paalanen  wrote:
> >
> > On Thu, 28 Feb 2019 18:28:33 -0700
> > Chris Murphy  wrote:
> >  
> > > I'm curious how legacy applications including games used to manipulate
> > > actual hardware LUT in a video card, if the application asked the
> > > client to do it, in which case it still could do that?  
> >
> > Hi Chris,
> >
> > right now, in no case.  
> 
> I made a typo.
> s/client/kernel
> 
> Or has LUT manipulation only ever been done via X11?

Hi Chris,

X11 apps have (hopefully) done hardware LUT manipulation only through
X11. There was no other way AFAIK, unless the app started essentially
hacking the system via root privileges.

The current DRM KMS kernel design has always had a "lock" (the DRM
master concept) so that if one process (e.g. the X server) is in
control of the display, then no other process can touch the display
hardware at the same time.

Before DRM KMS, the video mode programming was in the Xorg video
drivers, so if an app wanted to bypass the X server, it would have had
to poke the hardware "directly" bypassing any drivers, which is even
more horrible than it might sound.

Non-X11 apps, such as fbdev, svgalib, DirectFB, etc., would be
something different that I'm not too familiar with. Fbdev was a
standard'ish kernel ABI, while the rest more or less poked the hardware
directly bypassing any kernel drivers, if not using fbdev under the
hood. These I would just ignore, they were running without a window
system to begin with.


> > > a. I've already done color management, I *really do* need deviceRGB
> > > b. display this, its color space is _.  
> >
> > Case b) is already in both of the protocol proposals.
> >
> > Case a) is in Niels' proposal, but I raised some issues with that. It is
> > a very problematic case to implement in general, too, because the
> > compositor is in some cases very likely to have to undo the color
> > transformations the application already did to go back to a common
> > blending space or to the spaces for other outputs.  
> 
> Case a) is a subset of the calibration/characterization application's
> requirement.
> 
> Even if it turns out the application tags its content with displayRGB,
> thereby in effect getting a null transform, (or a null transform with
> whatever quantization happens through 32bpc float intermediate color
> image encoding), that's functionally a do-not-color-manage deviceRGB
> path.

What is "displayRGB"? Does it essentially mean "write these pixel
values to any monitor as is"? What if the channel value's data type
does not match?

I suppose if a compositor receives content with "displayRGB" profile,
assuming my guess above is correct, it would have to apply the inverse
of the blending space to output space transform first, so that the
total result would be a null transform for pixels that were not blended
with anything else.
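Pekka's guess above can be shown with toy numbers: if the compositor's
final step converts blending space to output space, "displayRGB" content
must first pass through the inverse so the round trip is a null transform.
The pure-gamma transforms here are assumptions standing in for a real
blending-space/output-space conversion.

```python
# Toy model of the "undo" step: output_to_blend is the inverse of the
# compositor's final blend_to_output conversion, so unblended
# "displayRGB" pixels survive the pipeline unchanged.

def blend_to_output(v):        # compositor's final conversion
    return v ** (1 / 2.2)

def output_to_blend(v):        # inverse, applied to "displayRGB" input
    return v ** 2.2

pixel = 0.37                   # client-supplied device value
roundtrip = blend_to_output(output_to_blend(pixel))
# roundtrip differs from pixel only by floating-point error
```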

> > > Both types of applications exist. It might very well be reasonable to
> > > say, yeah we're not going to support use case a.) Such smarter
> > > applications are going to have to do their color management however
> > > they want internally, and transform to a normalized color space like
> > > P3 or Rec.2020 or opRGB and follow use case b.) where they tag all
> > > content with that normalized color space.  
> >
> > Right. We'll see. And measurement/calibration/characterisation
> > applications are a third category completely different to the two
> > above, by window management requirements if nothing else.  
> 
> It is fair to keep track of, and distinguish a display path with:
> 
> 1. no calibration and no/null transform;
> 2. calibration applied, but no/null transform;
> 3. calibration and transform applied.
> 
> The calibration application does need a means of ensuring explicitly
> getting each of those. 1, is needed to figure out the uncorrected
> state and hopefully give the user some guidance on knob settings via
> OSD, and then to take measurements to compute a corrective curve
> typically going in the video card LUT or equivalent wherever else that
> would go; 2, is needed to build an ICC profile for the display; and 3,
> is needed for verifying the path.
> 
> An application doing color management internally only really needs
> path 2. Nevertheless, that app's need is a subset of what's already
> needed by an application that does display calibration and profiling.

Yes, this is very much why I would prefer
measurement/calibration/characterization applications to use another
protocol extension that is explicitly designed for these needs instead
of or in addition to a generic "color management" extension that is
designed for all apps for general content delivery purposes.

I presume the measurement or calibration use case always involves
"owning" the whole monitor, and the very specific monitor at that. That
is, making the monitor temporarily exclusive to the app, so that
nothing else can interfere 

Re: HDR support in Wayland/Weston

2019-03-04 Thread The Rasterman
On Mon, 04 Mar 2019 08:32:45 + Simon Ser  said:

> On Monday, March 4, 2019 8:13 AM, Graeme Gill  wrote:
> > And the current favorite is "blue light filter" effects, for which numerous
> > applications are currently available. They tweak the white point
> > of the display by arbitrarily modifying the hardware per channel LUTs.
> > (i.e. f.lux, Redshift, SunsetScreen, Iris, Night Shift, Twilight etc.)
> >
> > Such applications have their place for those who like the effect, but
> > ideally such usage would not simply blow color management away.
> 
> FWIW wlroots has a protocol for this [1]. GNOME and KDE have this feature
> directly integrated in their compositor.
> 
> > In order of desirability it would be nice to:
> >
> > 3) Have the hardware Luts restored after an application that uses them
> >exits (i.e. like OS X handles it).
> 
> Agreed. This is done in our protocol and there's no such issue when built
> into the compositor.
> 
> > 2) Implement virtual per channel LUTs, with the compositor combining them
> >together in some way, and have some means of the color management
> > applications being aware when the display is being interfered with by
> > another application, so that the user can be warned that the color
> > management state is invalid.
> 
> Is there a "good way" to combine multiple LUTs?
> 
> > 1) A color managed API that lets an application shift the display white
> > point using chromatic adaptation, so that such blue light filter
> > applications can operate more predictably, as well as some means of the
> > color management applications being aware of when this is happening.
> 
> What should this look like? Disclaimer: I have no idea how these applications
> work and I know nothing about color management.
> 
> I'm guessing this is a restriction of the "change the whole LUTs" API. Are
> there any features the "blue light filter" app won't be able to implement when
> switching to this API? Would the compositor part become complicated (judging
> from [2] it seems different "blue light filter" apps may compute LUTs
> differently)?
> 
> Since many compositors (GNOME, KDE, wlroots, maybe more) implement a way to
> apply a "blue light filter", I think it's important to be able to notify color
> management applications that they don't have exclusive access. Or maybe this
> should just be handled internally by the compositor? (Display a warning or
> something?)

apps should not have exclusive access. we're re-doing the whole horrid "install
colormap" thing from the x days of 256 color (or paletted/colormapped displays).

> Thanks,
> 
> [1]:
> https://github.com/swaywm/wlr-protocols/blob/master/unstable/wlr-gamma-control-unstable-v1.xml
> [2]: https://github.com/jonls/redshift/blob/master/README-colorramp
> 
> --
> Simon Ser
> https://emersion.fr
> 

-- 
- Codito, ergo sum - "I code, therefore I am" --
Carsten Haitzler - ras...@rasterman.com


Re: HDR support in Wayland/Weston

2019-03-04 Thread Simon Ser
On Monday, March 4, 2019 8:13 AM, Graeme Gill  wrote:
> And the current favorite is "blue light filter" effects, for which numerous
> applications are currently available. They tweak the white point
> of the display by arbitrarily modifying the hardware per channel LUTs.
> (i.e. f.lux, Redshift, SunsetScreen, Iris, Night Shift, Twilight etc.)
>
> Such applications have their place for those who like the effect, but
> ideally such usage would not simply blow color management away.

FWIW wlroots has a protocol for this [1]. GNOME and KDE have this feature
directly integrated in their compositor.

> In order of desirability it would be nice to:
>
> 3) Have the hardware LUTs restored after an application that uses them
>    exits (i.e. like OS X handles it).

Agreed. This is done in our protocol and there's no such issue when built into
the compositor.

> 2) Implement virtual per channel LUTs, with the compositor combining them
>    together in some way, and have some means of the color management
>    applications being aware when the display is being interfered with by
>    another application, so that the user can be warned that the color
>    management state is invalid.

Is there a "good way" to combine multiple LUTs?

> 1) A color managed API that lets an application shift the display white point
>using chromatic adaptation, so that such blue light filter applications
>can operate more predictably, as well as some means of the color management
>applications being aware of when this is happening.

What should this look like? Disclaimer: I have no idea how these applications
work and I know nothing about color management.

I'm guessing this is a restriction of the "change the whole LUTs" API. Are there
any features the "blue light filter" app won't be able to implement when
switching to this API? Would the compositor part become complicated (judging
from [2] it seems different "blue light filter" apps may compute LUTs
differently)?

Since many compositors (GNOME, KDE, wlroots, maybe more) implement a way to
apply a "blue light filter", I think it's important to be able to notify color
management applications that they don't have exclusive access. Or maybe this
should just be handled internally by the compositor? (Display a warning or
something?)

Thanks,

[1]: 
https://github.com/swaywm/wlr-protocols/blob/master/unstable/wlr-gamma-control-unstable-v1.xml
[2]: https://github.com/jonls/redshift/blob/master/README-colorramp

--
Simon Ser
https://emersion.fr


Re: [RFC wayland-protocols v2 1/1] Add the color-management protocol

2019-03-04 Thread Graeme Gill
Pekka Paalanen wrote:

> My failing is that I haven't read about what ICC v4 definition actually
> describes, does it characterise content or a device, or is it more
> about defining a transformation from something to something without
> saying what something is.

The ICC format encompasses several related forms. The one
that is pertinent to this discussion is ICC device profiles.

At a minimum an ICC device profile characterizes a device's color
response by encoding a model from device values (i.e. RGB value combinations)
to device independent color values (i.e. values related to device
independent CIE XYZ, called Profile Connection Space values in ICC
terms). A simple model such as color primary values, white point
and per channel responses is easily invertible, allowing transformation
in both directions.
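That invertible shaper/matrix model can be sketched directly. The gamma
value and primaries below are assumptions (sRGB-like), standing in for the
tags a real ICC profile would carry.

```python
# Shaper/matrix device model: RGB -> per-channel curve -> 3x3 primary
# matrix -> XYZ, and the exact inverse path back.

GAMMA = 2.2
M = [(0.4124, 0.3576, 0.1805),
     (0.2126, 0.7152, 0.0722),
     (0.0193, 0.1192, 0.9505)]

def inv3(m):
    # Inverse of a 3x3 matrix via the adjugate.
    (a, b, c), (d, e, f), (g, h, i) = m
    det = (a * (e * i - f * h) - b * (d * i - f * g)
           + c * (d * h - e * g))
    adj = [(e * i - f * h, c * h - b * i, b * f - c * e),
           (f * g - d * i, a * i - c * g, c * d - a * f),
           (d * h - e * g, b * g - a * h, a * e - b * d)]
    return [tuple(x / det for x in row) for row in adj]

def mul(m, v):
    return tuple(sum(c * x for c, x in zip(row, v)) for row in m)

def rgb_to_xyz(rgb):
    return mul(M, [c ** GAMMA for c in rgb])

def xyz_to_rgb(xyz):
    # Clamp tiny negative residue before the fractional power.
    return tuple(max(c, 0.0) ** (1 / GAMMA) for c in mul(inv3(M), xyz))
```

The cLUT profiles described next exist precisely because this closed-form
inversion is not available once the model is a multi-dimensional table.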

For less additive devices there are more general models (cLut -
multi-dimensional color Lookup Table), and they are non-trivial
to invert, so a profile contains both forward tables (device -> PCS
AKA A2B tables) and reverse tables (PCS -> device AKA B2A tables).

Then there are intents. The most basic are Absolute Colorimetric
and Relative Colorimetric. The former relates the measured
values, while the latter assumes that the observer is adapted
to the white point of the device. Typically the difference is assumed
to be a simple chromatic adaptation transform that can be encoded
as the absolute white point or a 3x3 matrix. The default intent
is Relative Colorimetric, because this is the transform of least
surprise.
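The simplest such chromatic adaptation can be written down directly. This
is the naive "von Kries in XYZ" scaling, not the Bradford transform a real
CMM would typically use, but it shows the shape of the operation: a matrix
(here diagonal) keyed to the two white points.

```python
# Naive chromatic adaptation: scale XYZ by the ratio of destination to
# source white. White points are D65/D50 XYZ, normalized to Y = 1.

D50 = (0.9642, 1.0000, 0.8249)   # ICC Profile Connection Space white
D65 = (0.9505, 1.0000, 1.0891)

def adapt(xyz, src_white=D65, dst_white=D50):
    return tuple(c * d / s
                 for c, s, d in zip(xyz, src_white, dst_white))

# By construction the source white lands (to float precision) on the
# destination white; all other colors are scaled along with it.
adapted_white = adapt(D65)
```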

cLUT based profiles allow for two additional intents: Perceptual,
where out of gamut colors are mapped to be within gamut while
retaining proportionality, and Saturation, where colors are expanded
if possible to maximize colorfulness. These two intents allow the
profile creator considerable latitude in how they achieve these aims,
and they can only be encoded using a cLUT model.

So in summary an ICC profile provides device characterization,
facilitates fast and efficient transformation between different
devices, and offers a choice of intent handling that cannot typically
be computed on the fly. Naturally, to do a device-to-device space
transform you need two device profiles: one for the source space and
one for the destination.

> What is the use for a device link profile?

Device link profile use was a suggestion to overcome the previously stated
impossibility of a client knowing which output a surface was mapped to.
Since this no longer appears to be impossible (due to wl_surface.enter/leave
events being available), device link profiles should be dropped from the
extension. It is sufficient that a client can do its own color transformation
to the primary output if it chooses, while leaving the compositor to perform
a fallback color transform for any portion that is mapped to a secondary
output, or for any client that is color management unaware or does not wish
to implement its own color transforms.
This greatly reduces the implementation burden on the compositor.

> If there is only the fd, you can fstat() it to get a size, but that
> may not be the exact size of the payload. Does the ICC data header or
> such indicate the exact size of the payload?

Yes, the size of an ICC profile is stored in its header.
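For illustration, reading that size field: the first four bytes of the
128-byte ICC header are the total profile size as a big-endian uint32, and
the bytes at offset 36 carry the 'acsp' signature, so a compositor can
validate an fd's payload rather than trusting fstat() alone.

```python
# Extract and sanity-check the payload size from an ICC profile header.
import struct

def icc_payload_size(header: bytes) -> int:
    if len(header) < 128:
        raise ValueError("ICC header is 128 bytes")
    if header[36:40] != b"acsp":
        raise ValueError("not an ICC profile")
    return struct.unpack(">I", header[:4])[0]

# Minimal fake header: size field = 1024, 'acsp' signature in place.
fake = struct.pack(">I", 1024) + bytes(32) + b"acsp" + bytes(88)
size = icc_payload_size(fake)
```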

Cheers,
Graeme Gill.