Re: Remote display with 3D acceleration using Wayland/Weston

2016-12-14 Thread DRC
On 12/14/16 8:52 PM, Carsten Haitzler (The Rasterman) wrote:
> weston is not the only wayland compositor. it is the sample/test compositor.
> wayland does not mean sticking to just what weston does.
> 
> i suspect weston's rdp back-end forces a sw gl stack because it's easier to be
> driver agnostic and run everywhere and as you have to read-back pixel data for
> transmitting over rdp... why bother with the complexity of actual driver setup
> and hw device permissions etc...
> 
> what pekka is saying is that it's kind of YOUR job then to make a headless
> compositor (base it on weston code or write your own entirely from scratch
> etc.), and this headless compositor does return a hw egl context to clients.
> it can transport data to the other server via vnc, rdp or any other method
> you like. your headless compositor will get new drm buffers from clients when
> they display (having rendered using the local gpu) and then transfer to the
> other end. the other end can be a vnc or rdp viewer or a custom app you wrote
> for your protocol etc. ... but what you want is perfectly doable with
> wayland... but it's kind of your job to do it. that is what virtual-gl would
> be. a local headless wayland compositor (for wayland mode) with some kind of
> display front end on the other end.

Exactly what I needed to know.  Thanks.
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: Remote display with 3D acceleration using Wayland/Weston

2016-12-14 Thread The Rasterman
On Wed, 14 Dec 2016 11:42:54 -0600 DRC  said:

...snip...

> Again, not how it currently works when using Weston with the RDP back end.

weston is not the only wayland compositor. it is the sample/test compositor.
wayland does not mean sticking to just what weston does.

i suspect weston's rdp back-end forces a sw gl stack because it's easier to be
driver agnostic and run everywhere and as you have to read-back pixel data for
transmitting over rdp... why bother with the complexity of actual driver setup
and hw device permissions etc...

what pekka is saying is that it's kind of YOUR job then to make a headless
compositor (base it on weston code or write your own entirely from scratch
etc.), and this headless compositor does return a hw egl context to clients. it
can transport data to the other server via vnc, rdp or any other method
you like. your headless compositor will get new drm buffers from clients when
they display (having rendered using the local gpu) and then transfer to the
other end. the other end can be a vnc or rdp viewer or a custom app you wrote
for your protocol etc. ... but what you want is perfectly doable with
wayland... but it's kind of your job to do it. that is what virtual-gl would
be. a local headless wayland compositor (for wayland mode) with some kind of
display front end on the other end.
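The flow described above — a headless compositor that hands clients a hardware EGL context, then reads back finished buffers and ships them to a remote viewer — can be sketched roughly as follows. This is a conceptual toy, not real libweston/EGL/DRM API: every class and method name here is a hypothetical stand-in.

```python
# Conceptual sketch of the headless-compositor remoting loop described above.
# All names are hypothetical stand-ins for EGL/DRM/transport machinery.

class FakeGpuBuffer:
    """Stands in for a DRM buffer a client rendered into with the local GPU."""
    def __init__(self, width, height):
        self.width, self.height = width, height

    def read_back(self):
        # A real compositor would read back via glReadPixels or a dmabuf map.
        return bytes(self.width * self.height * 4)

class RecordingTransport:
    """Stands in for a VNC/RDP encoder or a custom wire protocol."""
    def __init__(self):
        self.frames = []

    def send_frame(self, w, h, pixels):
        self.frames.append((w, h, len(pixels)))

class HeadlessCompositor:
    """On each client commit: read back the new buffer, forward the pixels."""
    def __init__(self, transport):
        self.transport = transport

    def on_client_commit(self, buf):
        pixels = buf.read_back()
        self.transport.send_frame(buf.width, buf.height, pixels)

transport = RecordingTransport()
comp = HeadlessCompositor(transport)
comp.on_client_commit(FakeGpuBuffer(640, 480))
print(transport.frames[0])  # (640, 480, 1228800)
```

The point of the sketch is only the division of labour: clients render with the local GPU; the compositor's job is readback and transport, and the viewer at the other end just displays frames.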

-- 
- Codito, ergo sum - "I code, therefore I am" --
The Rasterman (Carsten Haitzler)    ras...@rasterman.com



Re: [RFC wayland-protocols] Color management protocol

2016-12-14 Thread The Rasterman
On Wed, 14 Dec 2016 23:23:59 +1100 Graeme Gill  said:

> Carsten Haitzler (The Rasterman) wrote:
> > On Tue, 13 Dec 2016 17:14:21 +1100 Graeme Gill  said:
> 
> > this == support for color correction protocol AND actually the support for
> > providing the real colorspace of the monitor, providing non-sRGB pixel data
> > by clients in another colorspace (eg adobe) and it MUST work or apps will
> > literally fall over.
> 
> That's not a scheme I'm recommending.

ok. that's fine. i'm happy with that.

...snip...

> > not a format. a colorspace. R numbers are still R, as are G and B. it's just
> > that they point to different "real life spectrum" colors and so they need
> > to be transformed from one colorspace (sRGB to adobe RGB or adobe to sRGB).
> 
> My point stands. I've not mentioned new colorspaces.

if it's RGB or YUV (YCbCr) it's the same thing, just vastly different color
mechanisms. color correction in RGB space is actually the same as in YUV: it's
different spectrum points in space that the primaries point to.

color management requires introducing such things: BT.601, BT.709, BT.2020.
the compositor MUST KNOW which colorspace the YUV data uses to get it correct.
i'm literally staring at datasheets of some hardware and you have to tell it
to use the BT.601 or 709 equations when dealing with YUV. otherwise the video
data will look wrong. colors will be off. in fact BT.709 shares its primaries
with sRGB.

now here comes the problem... each hardware plane the yuv may be assigned to
MAY have a different colorspace. they are also then affected by the color
reproduction of the screen at the other end.

you HAVE to provide the colorspace information so the compositor CAN assign you
to the correct hw plane OR configure the plane correctly OR configure the
yuv->rgb conversion hardware to be correct.
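The difference between the BT.601 and BT.709 equations mentioned above is concrete: the matrices are derived from different Kr/Kb luma coefficients, so the same Y'CbCr triple decodes to visibly different RGB. A minimal sketch (full-range, normalized values; the coefficient derivation is the standard one from Kr/Kb):

```python
# BT.601 vs BT.709 Y'CbCr -> R'G'B' conversion (full-range, normalized).
# Shows why the compositor must know which matrix the hardware should
# apply: identical input decodes to different RGB under the two standards.

def ycbcr_to_rgb(y, cb, cr, kr, kb):
    kg = 1.0 - kr - kb
    pb, pr = cb - 0.5, cr - 0.5          # centre chroma at zero
    r = y + 2.0 * (1.0 - kr) * pr
    b = y + 2.0 * (1.0 - kb) * pb
    g = y - (2.0 * kb * (1.0 - kb) / kg) * pb \
          - (2.0 * kr * (1.0 - kr) / kg) * pr
    return r, g, b

BT601 = dict(kr=0.299,  kb=0.114)
BT709 = dict(kr=0.2126, kb=0.0722)

pixel = (0.5, 0.7, 0.3)                  # some saturated colour
rgb601 = ycbcr_to_rgb(*pixel, **BT601)
rgb709 = ycbcr_to_rgb(*pixel, **BT709)
print(rgb601)  # approx (0.2196, 0.5740, 0.8544)
print(rgb709)  # approx (0.1850, 0.5562, 0.8711)
```

The red channel alone differs by more than 3% of full scale here, which is the kind of "colors will be off" error you get when hardware is programmed with the wrong equation.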

this is no different to RGB space but you don't tend to find hardware
specifically helping you out (yes gpu + shader can do a transform fast. i'm
talking hw layers or dedicated acceleration units that have existed for yuv
for a long time).

any list of colorspaces IMHO should also include the yuv colorspaces where/if
possible. if a colorspace is not supported by the compositor then the app just
needs to take a "best effort". the default colorspace today could be considered
BT.709/sRGB. you could also call it the "null transform" colorspace, i.e. you
know nothing so don't try to color-correct.

> > i really do not think this is needed. simply a list of available colorspaces
> > would be sufficient. applications then provide data in the colorspace of their
> > choice given what is supported by the compositor and the input data it
> > has...
> 
> And I think it is core, for all the reasons I've listed.
> 
> >>  enhanced: The compositor is provided with source colorspace
> >>   profiles by the application.
> > 
> > i again don't see why this is needed.
> 
> Hmm. How else does the compositor know how to transform from
> the given colorspace to the display colorspace ?

my point was i don't think it's needed to split this up.

compositor lists available colorspaces. a list of 1 sRGB or null-transform or
adobe-rgb (with transform matrix), wide-gamut, etc. means that that is the one
and only output supported.

> > again - we're just arguing who does the transform.
> 
> It's not "just", it's the point.

not as i see it. given a choice of output colorspaces the client can choose to
do its own conversion, OR if its colorspace of preference is supported by the
compositor then choose to pass the data in that colorspace to the compositor
and have the compositor do it.
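The negotiation being argued for can be condensed into a tiny capability exchange. This is purely illustrative — no such Wayland protocol exists yet, and every name below is made up:

```python
# Sketch of the colorspace negotiation described above: the compositor
# advertises the colorspaces it accepts; the client either hands over
# data in its preferred colorspace (compositor converts) or falls back
# to converting client-side into something the compositor lists.
# All names are hypothetical.

COMPOSITOR_COLORSPACES = ["sRGB", "adobe-rgb", "bt709-yuv"]

def choose_colorspace(client_preferred, compositor_list, fallback="sRGB"):
    if client_preferred in compositor_list:
        # Tag buffers with this colorspace; the compositor transforms.
        return client_preferred, "compositor-converts"
    # Unsupported: the client converts its own pixels, best effort.
    return fallback, "client-converts"

print(choose_colorspace("adobe-rgb", COMPOSITOR_COLORSPACES))
# -> ('adobe-rgb', 'compositor-converts')
print(choose_colorspace("prophoto-rgb", COMPOSITOR_COLORSPACES))
# -> ('sRGB', 'client-converts')
```

Either way the pixels reach the display correctly; the only thing negotiated is which side runs the transform.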

> > i don't see the point. the
> > compositor will have a list of colorspaces it can display (either A screen
> > can display this OR can be configured to display in this colorspace, ... OR
> > the compositor can software transform pixels in this colorspace to whatever
> > is necessary to display correctly).
> 
> I don't know how many different ways I can explain the same thing. The
> compositor can't know how to transform color in all the ways an application
> may need to transform color. I think it is unlikely for instance, that you are
> proposing that the compositor support full N-Color, device links, ICC-Max
> support, etc., or the infinity of ways that haven't been invented yet to
> transform between color spaces. So the core color management requirement
> is that the application be able to transform into the device colorspace
> itself.

*sigh* and THAT IS WHY i keep saying that the client can choose to do its own!
BUT this is not going to be perfect across multiple screens unless all screens
share the same colorspace/profile. let's say:

1 screen is a professional grade monitor with wide-gamut rgb output.
1 screen is a $50 thing i picked up from the bargain basement bin at walmart.

dumb compositor example:

compositor reports 2 colorspaces:
  null transform RGB
  BT.709 YUV

smart compositor:

compositor reports 5 colorspaces:
  null transform RGB
 

Re: Remote display with 3D acceleration using Wayland/Weston

2016-12-14 Thread DRC
On 12/14/16 3:27 AM, Pekka Paalanen wrote:
> could you be more specific on what you mean by "server-side", please?
> Are you referring to the machine where the X server runs, or the
> machine that is remote from a user perspective where the app runs?

Few people use remote X anymore in my industry, so the reality of most
VirtualGL deployments (and all of the commercial VGL deployments of
which I'm aware) is that the X servers and the GPU are all on the
application host, the machine where the applications are actually
executed.  Typically people allocate beefy server hardware with multiple
GPUs, hundreds of gigabytes of memory, and as many as 32-64 CPU cores to
act as VirtualGL servers for 50 or 100 users.  We use the terms "3D X
server" and "2D X server" to indicate where the 3D and 2D rendering is
actually occurring.  The 3D X server is located on the application host
and is usually headless, since it only needs to be used by VirtualGL for
obtaining Pbuffer contexts from the GPU-accelerated OpenGL
implementation (usually nVidia or AMD/ATI.)  There is typically one 3D X
server shared by all users of the machine (VirtualGL allows this
sharing, since it rewrites all of the GLX calls from applications and
automatically converts all of them for off-screen rendering), and the 3D
X server has a separate screen for each GPU.  The 2D X server is usually
an X proxy such as TurboVNC, and there are multiple instances of it (one
or more per user.)  These 2D X server instances are usually located on
the application host but don't necessarily have to be.  The client
machine simply runs a VNC viewer.

X proxies such as Xvnc do not support hardware-accelerated OpenGL,
because they are implemented on top of a virtual framebuffer stored in
main memory.  The only way to implement hardware-accelerated OpenGL in
that environment is to use "split rendering", which is what VirtualGL
does.  It splits off the 3D rendering to another X server that has a GPU
attached.
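The split-rendering scheme described above can be caricatured in a few lines. This is a toy model only — real VirtualGL interposes GLX/EGL calls (via LD_PRELOAD) and uses GPU Pbuffers on the 3D X server; the classes below are hypothetical stand-ins:

```python
# Toy model of VirtualGL-style split rendering: 3D rendering is redirected
# to an off-screen buffer on the GPU host ("3D X server"), then each
# finished frame is read back and handed to the X proxy ("2D X server").
# Nothing here is the real VirtualGL mechanism; names are illustrative.

class GpuPbuffer:
    """Stands in for an off-screen GPU surface on the 3D X server."""
    def render_frame(self, frame_no):
        # Pretend the GPU rendered something; return raw pixels.
        return f"frame-{frame_no}-pixels".encode()

class XProxy:
    """Stands in for e.g. a TurboVNC session doing the 2D compositing."""
    def __init__(self):
        self.displayed = []

    def put_image(self, pixels):
        self.displayed.append(pixels)

def run_app(frames, pbuffer, proxy):
    # The interposer's job: every buffer swap becomes
    # render (GPU host) + readback + blit to the 2D X server.
    for n in range(frames):
        pixels = pbuffer.render_frame(n)
        proxy.put_image(pixels)

proxy = XProxy()
run_app(3, GpuPbuffer(), proxy)
print(len(proxy.displayed))  # 3
```

The design point is that the application never talks to the proxy's software framebuffer for 3D: only finished, GPU-rendered frames cross the boundary.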


> Wayland apps handle all rendering themselves, there is nothing for
> sending rendering commands to another process like the Wayland
> compositor.
> 
> What a Wayland compositor needs to do is to advertise support for EGL
> Wayland platform for clients. That it does by using the
> EGL_WL_bind_wayland_display extension.
> 
> If you want all GL rendering to happen in the machine where the app
> runs, then you don't have to do much anything, it already works like
> that. You only need to make sure the compositor initializes EGL, which
> in Weston's case means using the gl-renderer. The renderer does not
> have to actually composite anything if you want to remote windows
> separately, but it is needed to gain access to the window contents. In
> Weston, only the renderer knows how to access the contents of all
> windows (wl_surfaces).
> 
> If OTOH you want to send GL rendering commands to the other machine
> than where the app is running, that will require a great deal of work,
> since you have to implement serialization and de-serialization of
> OpenGL (and EGL) yourself. (It has been done before, do ask me if you
> want details.)

But if you run OpenGL applications in Weston, as it is currently
implemented, then the OpenGL applications are either GPU-accelerated or
not, depending on the back end used.  If you run Weston nested in a
Wayland compositor that is already GPU-accelerated, then OpenGL
applications run in the Weston session will be GPU-accelerated as well.
If you run Weston with the RDP back end, then OpenGL applications run in
the Weston session will use Mesa llvmpipe instead.  I'm trying to
understand, quite simply, whether it's possible for unmodified Wayland
OpenGL applications-- such as the example OpenGL applications in the
Weston source-- to take advantage of OpenGL GPU acceleration when they
are running with the RDP back end.  (I'm assuming that whatever
restrictions there are on the RDP back end would exist for the TurboVNC
back end I intend to develop.)  My testing thus far indicates that this
is not currently possible, but I need to understand the source of the
limitation so I can understand how to work around it.  Instead, you seem
to be telling me that the limitation doesn't exist, but I can assure you
that it does.  Please test Weston with the RDP back end and confirm that
OpenGL applications run in that environment are not GPU-accelerated.


> I think you have an underlying assumption that EGL and GL would somehow
> automatically be carried over the network, and you need to undo it.
> That does not happen, as the display server always runs in the same
> machine as the application. The Wayland display is always local, it can
> never be remote simply because Wayland can never go over a network.

No I don't have that assumption at all, because that does not currently
occur with VirtualGL.  VirtualGL is designed precisely to avoid that
situation.  The problem is quite simply:  In Weston, as it is currently
implemented, OpenGL applications are not 

Re: [PATCH weston 01/68] libweston: Add pixel-format helpers

2016-12-14 Thread Pekka Paalanen
On Fri,  9 Dec 2016 19:57:16 +
Daniel Stone  wrote:

> Rather than duplicating knowledge of pixel formats across several
> components, create a custom central repository.
> 
> Signed-off-by: Daniel Stone 
> 
> Differential Revision: https://phabricator.freedesktop.org/D1511
> ---
>  libweston/pixel-formats.c | 398 
> ++
>  libweston/pixel-formats.h | 112 +
>  2 files changed, 510 insertions(+)
>  create mode 100644 libweston/pixel-formats.c
>  create mode 100644 libweston/pixel-formats.h
> 
> diff --git a/libweston/pixel-formats.c b/libweston/pixel-formats.c
> new file mode 100644
> index 000..9c70e73
> --- /dev/null
> +++ b/libweston/pixel-formats.c
> @@ -0,0 +1,398 @@
> +/*
> + * Copyright © 2016 Collabora, Ltd.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * Author: Daniel Stone 
> + */
> +
> +#include "config.h"
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "helpers.h"
> +#include "wayland-util.h"
> +#include "pixel-formats.h"
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "weston-egl-ext.h"
> +
> +/**
> + * Table of DRM formats supported by Weston; RGB, ARGB and YUV formats are
> + * supported. Indexed/greyscale formats, and formats not containing complete
> + * colour channels, are not supported.
> + */
> +static const struct pixel_format_info pixel_format_table[] = {
> + {
> + .format = DRM_FORMAT_XRGB4444,
> + },
> + {
> + .format = DRM_FORMAT_ARGB4444,
> + .opaque_substitute = DRM_FORMAT_XRGB4444,
> + },
> + {
> + .format = DRM_FORMAT_XBGR4444,
> + },
> + {
> + .format = DRM_FORMAT_ABGR4444,
> + .opaque_substitute = DRM_FORMAT_XBGR4444,
> + },
> + {
> + .format = DRM_FORMAT_RGBX4444,
> + .gl_format = GL_RGBA,
> + .gl_type = GL_UNSIGNED_SHORT_4_4_4_4,

Could there be any concern about sampling garbage as alpha?
Should we have a flag 'ignore_alpha'?

> + },
> + {
> + .format = DRM_FORMAT_RGBA4444,
> + .opaque_substitute = DRM_FORMAT_RGBX4444,
> + .gl_format = GL_RGBA,
> + .gl_type = GL_UNSIGNED_SHORT_4_4_4_4,
> + },
> + {
> + .format = DRM_FORMAT_BGRX4444,
> + .gl_format = GL_BGRA_EXT,
> + .gl_type = GL_UNSIGNED_SHORT_4_4_4_4,
> + },
> + {
> + .format = DRM_FORMAT_BGRA4444,
> + .opaque_substitute = DRM_FORMAT_BGRX4444,
> + .gl_format = GL_BGRA_EXT,
> + .gl_type = GL_UNSIGNED_SHORT_4_4_4_4,
> + },
> + {
> + .format = DRM_FORMAT_XRGB1555,
> + .depth = 15,
> + .bpp = 16,
> + },
> + {
> + .format = DRM_FORMAT_ARGB1555,
> + .opaque_substitute = DRM_FORMAT_XRGB1555,
> + },
> + {
> + .format = DRM_FORMAT_XBGR1555,
> + },
> + {
> + .format = DRM_FORMAT_ABGR1555,
> + .opaque_substitute = DRM_FORMAT_XBGR1555,
> + },
> + {
> + .format = DRM_FORMAT_RGBX5551,
> + .gl_format = GL_RGBA,
> + .gl_type = GL_UNSIGNED_SHORT_5_5_5_1,
> + },
> + {
> + .format = DRM_FORMAT_RGBA5551,
> + .opaque_substitute = DRM_FORMAT_RGBX5551,
> + .gl_format = GL_RGBA,
> + .gl_type = GL_UNSIGNED_SHORT_5_5_5_1,
> + },
> + {
> + .format = DRM_FORMAT_BGRX5551,
> + .gl_format = GL_BGRA_EXT,
> + .gl_type = GL_UNSIGNED_SHORT_5_5_5_1,
> + },
> + {
> + .format = DRM_FORMAT_BGRA5551,
> + .opaque_substitute = DRM_FORMAT_BGRX5551,
> + .gl_format = 

Re: [PATCH weston] compositor: Assign new views to the primary plane

2016-12-14 Thread Pekka Paalanen
On Wed, 14 Dec 2016 14:53:37 +
Daniel Stone  wrote:

> Hi Pekka,
> 
> On 14 December 2016 at 14:17, Pekka Paalanen
>  wrote:
> > On Fri, 9 Dec 2016 17:27:13 + Daniel Stone  
> > wrote:  
> >> However, this is undesirable for DRM. With multi-output, when
> >> assign_planes() is called, any view which wasn't on a plane for our
> >> putput was moved to the primary plane, thus causing damage, and an  
> >
> > Putput! \o/  
> 
> /o\
> 
> >> output repaint. Fixing this, to ignore views which do not touch our
> >> output at all, means that the view wouldn't have a plane assigned until
> >> the other output eventually ran through repaint.
> >>
> >> For large SHM buffers (weston_surface->keep_buffer == false), this means
> >> that by the time the other output assigned it to a plane, the buffer may
> >> have been discarded on the client side.  
> >
> > Oh, upload garbage, right. That's a very interesting side-effect of
> > weston_surface_damage().  
> 
> Yes, you noted it quite well in e508ce6a. ;)
> 
> > To me it makes perfect sense to assign views to the primary plane by
> > default, on its own.
> >
> > "definitely the wrong way to fix" what? The weston_surface_damage()
> > issue? I'd probably not even make the connection there.  
> 
> Well, it's not a complete fix for the entire surface damage system. It
> is, however, _a_ correct fix, in that by bypassing the issue, we
> completely prevent the problem recurring. More below.
> 
> > The
> > DRM output/plane assignment? Is there something that makes a decision
> > based on which plane a view is already on?
> >
> > Oh right, there has to be, because we don't support having the same
> > view on several planes, right? (Hmm, why was that... let me guess:
> > damage tracking?)  
> 
> Right, because it just can't work at the moment. compositor-drm uses
> weston_plane as a 'base class' of drm_plane (drm_sprite), so if
> multiple outputs were to both promote the same view to a plane, it
> would be inconsistent as each repaint cycle assigned it to a different
> plane; a nightmare for state tracking. If one output promotes the view
> to a plane and the other does not, then it just goes missing from the
> other output, since renderer repaint no longer shows it.
> 
> > I just couldn't spot where we actually check the plane assignment. Is
> > that something you are adding with atomic prep?  
> 
> No, nothing so obvious. It only trips with 'Ignore views on other
> outputs', when you have multiple outputs in Weston, and are using
> something which paints a SHM buffer once rather than continuously.
> 
> Simplified slightly, this is the behaviour in current master:
>   - views A and Z get created for outputs B and Y, respectively,
> initially with view->plane == NULL
>   - SHM buffers are attached to A and Z, and uploaded to the renderer
>   - output B repaint gets called
>   - assign_planes for output B observes that buffer for views A and Z
> can never be promoted to a plane as it is SHM, so sets keep_buffer ==
> false
>   - assign_planes assigns view A to the primary plane; as the initial
> plane was NULL, the comparison in weston_view_assign_to_plane does not
> trigger, and weston_surface_damage is called
>   - assign_planes also assigns view Z to the primary plane, with the same 
> effect
>   - after repaint, as there is no need for the buffer content to be
> kept, the buffers are released
>   - output Y repaint gets called
>   - assign_planes for output Y has no effect on keep_buffer here, as
> it is already false (i.e. this is a static/constant/deterministic
> calculation)
>   - assign_planes for output Y assigns view Z to the primary plane,
> which is a no-op as the plane was already set by output B repaint
>   - everyone lives happily ever after
> 
> The specific failure I saw with the 'Ignore views on other outputs'
> patch, but _without_ this patch, was:
>   - assign_planes for output B does _not_ assign view Z to the primary
> plane anymore
>   - assign_planes for output Y _does_ assign view Z to the primary
> plane, however this is _not_ a no-op as the previous plane was NULL
>   - weston_surface_damage is called, uploading content from ... somewhere
> 
> This patch fixes this specific breakage, by assigning views to the
> primary plane at time of creation. The only way
> weston_view_assign_to_plane can be called with something other than
> the primary plane, is if the view is assigned to a plane by the
> backend. If that ever happens, keep_buffer is guaranteed to not be
> false: we very conservatively set keep_buffer for _all_ views globally
> on every output repaint, for exactly this reason. Given all the above,
> I think this is a complete fix for the problem I've just described.
> 
> FWIW, the reason I wrote 'Ignore views on other outputs' is in the
> commit message, but elaborated, if you had multiple outputs with
> planes in use, you were in for a bad time. assign_planes would walk

Re: [PATCH weston] compositor: Assign new views to the primary plane

2016-12-14 Thread Daniel Stone
Hi Pekka,

On 14 December 2016 at 14:17, Pekka Paalanen
 wrote:
> On Fri, 9 Dec 2016 17:27:13 + Daniel Stone  wrote:
>> However, this is undesirable for DRM. With multi-output, when
>> assign_planes() is called, any view which wasn't on a plane for our
>> putput was moved to the primary plane, thus causing damage, and an
>
> Putput! \o/

/o\

>> output repaint. Fixing this, to ignore views which do not touch our
>> output at all, means that the view wouldn't have a plane assigned until
>> the other output eventually ran through repaint.
>>
>> For large SHM buffers (weston_surface->keep_buffer == false), this means
>> that by the time the other output assigned it to a plane, the buffer may
>> have been discarded on the client side.
>
> Oh, upload garbage, right. That's a very interesting side-effect of
> weston_surface_damage().

Yes, you noted it quite well in e508ce6a. ;)

> To me it makes perfect sense to assign views to the primary plane by
> default, on its own.
>
> "definitely the wrong way to fix" what? The weston_surface_damage()
> issue? I'd probably not even make the connection there.

Well, it's not a complete fix for the entire surface damage system. It
is, however, _a_ correct fix, in that by bypassing the issue, we
completely prevent the problem recurring. More below.

> The
> DRM output/plane assignment? Is there something that makes a decision
> based on which plane a view is already on?
>
> Oh right, there has to be, because we don't support having the same
> view on several planes, right? (Hmm, why was that... let me guess:
> damage tracking?)

Right, because it just can't work at the moment. compositor-drm uses
weston_plane as a 'base class' of drm_plane (drm_sprite), so if
multiple outputs were to both promote the same view to a plane, it
would be inconsistent as each repaint cycle assigned it to a different
plane; a nightmare for state tracking. If one output promotes the view
to a plane and the other does not, then it just goes missing from the
other output, since renderer repaint no longer shows it.

> I just couldn't spot where we actually check the plane assignment. Is
> that something you are adding with atomic prep?

No, nothing so obvious. It only trips with 'Ignore views on other
outputs', when you have multiple outputs in Weston, and are using
something which paints a SHM buffer once rather than continuously.

Simplified slightly, this is the behaviour in current master:
  - views A and Z get created for outputs B and Y, respectively,
initially with view->plane == NULL
  - SHM buffers are attached to A and Z, and uploaded to the renderer
  - output B repaint gets called
  - assign_planes for output B observes that buffer for views A and Z
can never be promoted to a plane as it is SHM, so sets keep_buffer ==
false
  - assign_planes assigns view A to the primary plane; as the initial
plane was NULL, the comparison in weston_view_assign_to_plane does not
trigger, and weston_surface_damage is called
  - assign_planes also assigns view Z to the primary plane, with the same effect
  - after repaint, as there is no need for the buffer content to be
kept, the buffers are released
  - output Y repaint gets called
  - assign_planes for output Y has no effect on keep_buffer here, as
it is already false (i.e. this is a static/constant/deterministic
calculation)
  - assign_planes for output Y assigns view Z to the primary plane,
which is a no-op as the plane was already set by output B repaint
  - everyone lives happily ever after

The specific failure I saw with the 'Ignore views on other outputs'
patch, but _without_ this patch, was:
  - assign_planes for output B does _not_ assign view Z to the primary
plane anymore
  - assign_planes for output Y _does_ assign view Z to the primary
plane, however this is _not_ a no-op as the previous plane was NULL
  - weston_surface_damage is called, uploading content from ... somewhere

This patch fixes this specific breakage, by assigning views to the
primary plane at time of creation. The only way
weston_view_assign_to_plane can be called with something other than
the primary plane, is if the view is assigned to a plane by the
backend. If that ever happens, keep_buffer is guaranteed to not be
false: we very conservatively set keep_buffer for _all_ views globally
on every output repaint, for exactly this reason. Given all the above,
I think this is a complete fix for the problem I've just described.
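The two scenarios Daniel walks through can be condensed into a toy model (hypothetical structures, not libweston's actual types) showing why defaulting new views to the primary plane prevents the stale upload:

```python
# Toy model of the bug: with 'ignore views on other outputs' but without
# default primary-plane assignment, the first plane assignment for a view
# happens only on the *other* output's repaint, after the SHM buffer was
# already released, so the damage-triggered flush uploads garbage.
# Names are illustrative, not libweston API.

class View:
    def __init__(self, assign_primary_on_create):
        self.plane = "primary" if assign_primary_on_create else None
        self.buffer_live = True        # SHM content still valid client-side
        self.uploaded_garbage = False

    def assign_to_plane(self, plane):
        if self.plane == plane:
            return                     # no-op: no damage, no flush
        self.plane = plane
        if not self.buffer_live:       # damage -> flush -> upload stale data
            self.uploaded_garbage = True

def repaint_cycle(view, on_this_output):
    # With 'ignore views on other outputs', off-output views are skipped.
    if on_this_output:
        view.assign_to_plane("primary")
    view.buffer_live = False           # keep_buffer == false: released after repaint

# Without the fix: output B skips view Z; output Y assigns it too late.
z = View(assign_primary_on_create=False)
repaint_cycle(z, on_this_output=False)   # output B repaint
repaint_cycle(z, on_this_output=True)    # output Y repaint
print(z.uploaded_garbage)  # True

# With the fix: the view starts on the primary plane, so output Y's
# assignment is a no-op and nothing stale is uploaded.
z2 = View(assign_primary_on_create=True)
repaint_cycle(z2, on_this_output=False)
repaint_cycle(z2, on_this_output=True)
print(z2.uploaded_garbage)  # False
```

The fix works precisely because `assign_to_plane` is a no-op when the plane is unchanged: starting every view on the primary plane makes the late assignment harmless.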

FWIW, the reason I wrote 'Ignore views on other outputs' is in the
commit message, but elaborated, if you had multiple outputs with
planes in use, you were in for a bad time. assign_planes would walk
the view list, placing views either on a drm_plane if they were active
on that output, or primary_plane otherwise. So for views on other
outputs which were promoted to planes, the other output's
assign_planes would assign the view to the primary plane, causing
damage and a repaint on the other output: 

Re: [PATCH weston] compositor: Assign new views to the primary plane

2016-12-14 Thread Pekka Paalanen
On Fri, 9 Dec 2016 17:27:13 +
Daniel Stone  wrote:

> Hi,
> 
> On 9 December 2016 at 16:35, Daniel Stone  wrote:
> > However, this is undesirable for DRM. In a multi-output situation,
> > we would see a view only visible on another output, reasonably decide we
> > didn't want it in a plane on our output, and move it to the primary
> > plane, causing damage, and an output repaint. The plane wouldn't be
> > assigned until the other output ran through repaint.
> >
> > For large SHM buffers (weston_surface->keep_buffer as false), this means
> > that the other output would assign it to a plane later, which caused
> > weston_surface_damage to be called - in the exact way the comment says
> > it shouldn't - which triggered a flush and buffer upload. By this stage,
> > the buffer content would be gone and we would upload garbage.  
> 
> Er, I can't English this afternoon. Imagine the above two paragraphs
> never happened, and mentally replace them with:
> 
> However, this is undesirable for DRM. With multi-output, when
> assign_planes() is called, any view which wasn't on a plane for our
> putput was moved to the primary plane, thus causing damage, and an

Putput! \o/

> output repaint. Fixing this, to ignore views which do not touch our
> output at all, means that the view wouldn't have a plane assigned until
> the other output eventually ran through repaint.
> 
> For large SHM buffers (weston_surface->keep_buffer == false), this means
> that by the time the other output assigned it to a plane, the buffer may
> have been discarded on the client side.

Oh, upload garbage, right. That's a very interesting side-effect of
weston_surface_damage().

To me it makes perfect sense to assign views to the primary plane by
default, on its own.

"definitely the wrong way to fix" what? The weston_surface_damage()
issue? I'd probably not even make the connection there. The
DRM output/plane assignment? Is there something that makes a decision
based on which plane a view is already on?

Oh right, there has to be, because we don't support having the same
view on several planes, right? (Hmm, why was that... let me guess:
damage tracking?)

I just couldn't spot where we actually check the plane assignment. Is
that something you are adding with atomic prep?

So, reading both versions of the commit message, I think I was able
reconstruct enough of the idea to see what's going on, but please have
a third try on explaining it. ;-)

Anyway, the change itself is:
Reviewed-by: Pekka Paalanen 


Thanks,
pq


Re: [PATCH weston v4] libweston-desktop: fix stale ping when a wl_shell_surface is destroyed

2016-12-14 Thread Pekka Paalanen
On Thu, 8 Dec 2016 16:58:36 +0100
Quentin Glidic  wrote:

> On 08/12/2016 16:52, Giulio Camuffo wrote:
> > 2016-12-08 16:46 GMT+01:00 Quentin Glidic 
> > :  
> >> On 08/12/2016 16:20, Giulio Camuffo wrote:  
> >>>
> >>> When sending a ping event to a surface using the wl_shell interface,
> >>> if that surface is destroyed before we receive the pong we will never
> >>> receive it, even if the client is actually responsive, since the
> >>> interface does not exist anymore. So when the surface if destroyed
> >>> pretend it's a pong and reset the ping state.
> >>>
> >>> Signed-off-by: Giulio Camuffo   
> >>
> >>
> >> I made a few renames to match the current code.
> >> I was going to push it, but I found a last issue, see below.
> >>
> >>  
> >>> ---
> >>>
> >>> v3: store the ping serial in the surface instead of the client wrapper
> >>> v4: removed leftover change
> >>>
> >>>  libweston-desktop/wl-shell.c | 22 +-
> >>>  1 file changed, 21 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/libweston-desktop/wl-shell.c b/libweston-desktop/wl-shell.c
> >>> index 399139c..f75b022 100644
> >>> --- a/libweston-desktop/wl-shell.c
> >>> +++ b/libweston-desktop/wl-shell.c
> >>> @@ -57,6 +57,7 @@ struct weston_desktop_wl_shell_surface {
> >>> struct weston_desktop_seat *popup_seat;
> >>> enum weston_desktop_wl_shell_surface_state state;
> >>> struct wl_listener wl_surface_resource_destroy_listener;
> >>> +   uint32_t ping_serial;
> >>>  };
> >>>
> >>>  static void
> >>> @@ -112,6 +113,7 @@ weston_desktop_wl_shell_surface_ping(struct
> >>> weston_desktop_surface *dsurface,
> >>>  {
> >>> struct weston_desktop_wl_shell_surface *surface = user_data;
> >>>
> >>> +   surface->ping_serial = serial;
> >>> wl_shell_surface_send_ping(surface->resource, serial);
> >>>  }
> >>>
> >>> @@ -181,11 +183,27 @@ weston_desktop_wl_shell_change_state(struct
> >>> weston_desktop_wl_shell_surface *sur
> >>>  }
> >>>
> >>>  static void
> >>> +pong_client(struct weston_desktop_wl_shell_surface *surface)  
> >>
> >>
> >> weston_desktop_wl_shell_surface_pong_client
> >>  
> >>> +{
> >>> +   struct weston_desktop_client *client =
> >>> +   weston_desktop_surface_get_client(surface->surface);
> >>> +
> >>> +   weston_desktop_client_pong(client, surface->ping_serial);
> >>> +   surface->ping_serial = 0;
> >>> +}
> >>> +
> >>> +static void
> >>>  weston_desktop_wl_shell_surface_destroy(struct weston_desktop_surface
> >>> *dsurface,
> >>> void *user_data)
> >>>  {
> >>> struct weston_desktop_wl_shell_surface *surface = user_data;
> >>>
> >>> +   /* If the surface being destroyed was the one that was pinged
> >>> before
> >>> +* we need to fake a pong here, because it cannot answer the ping
> >>> anymore,
> >>> +* even if the client is responsive. */
> >>> +   if (surface->ping_serial != 0)
> >>> +   pong_client(surface);
> >>> +
> >>>
> >>> wl_list_remove(&surface->wl_surface_resource_destroy_listener.link);
> >>>
> >>> weston_desktop_wl_shell_surface_maybe_ungrab(surface);
> >>> @@ -203,8 +221,10 @@ weston_desktop_wl_shell_surface_protocol_pong(struct
> >>> wl_client *wl_client,
> >>>   uint32_t serial)
> >>>  {
> >>> struct weston_desktop_surface *surface =
> >>> wl_resource_get_user_data(resource);  
> >>
> >>
> >> dsurface
> >>  
> >>> +   struct weston_desktop_wl_shell_surface *wls =
> >>> +   weston_desktop_surface_get_implementation_data(surface);  
> >>
> >>
> >> surface
> >>  
> >>> -   weston_desktop_client_pong(weston_desktop_surface_get_client(surface), serial);
> >>> +   pong_client(wls);  
> >>
> >>
> >> We should check that surface->ping_serial == serial somehow.
> >> Maybe it is safe to ignore serial for now, as it fixes something, then
> >> add a check in a follow-up commit.
> >
> > Well, but weston_desktop_client_pong() already checks that, so i don't
> > think we need to check it here too.  
> 
> It checks that the last ping matches the passed serial. This (passed)
> serial should be the one the *client* sent. Here, we just pass
> surface->ping_serial, which is guaranteed to be the one
> weston_desktop_client_pong() is waiting for. But nothing prevents the
> client from sending wl_shell_surface.pong(1337), and we wouldn’t catch these.

No asserts here. You cannot use asserts for validating external data.

The serial in pong could be from any pending ping. Can there be only
one ping in flight at a time per object?

The spec does not say what happens if the serial is not the expected
one, and wl_shell_surface does not define any error codes, so I don't
think we should enforce correct serials here.


Thanks,
pq


pgpNl7IXCMhLd.pgp
Description: OpenPGP digital signature

Re: [PATCH wayland] server: use a safer signal type for the wl_resource destruction signals

2016-12-14 Thread Pekka Paalanen
On Mon,  5 Dec 2016 16:20:22 +0100
Giulio Camuffo  wrote:

> wl_list_for_each_safe, which is used by wl_signal_emit, is not really
> safe. If a signal has two listeners, and the first one removes and
> re-inits the second one, it would enter an infinite loop, which was hit
> in weston on resource destruction, which emits a signal.
> This commit adds a new version of wl_signal, called wl_priv_signal,
> which is private in wayland-server.c and which does not have this problem.
> The old wl_signal cannot be improved without breaking backwards compatibility.
> ---
>  Makefile.am|   4 +
>  src/wayland-private.h  |  18 +++
>  src/wayland-server.c   | 110 
>  tests/newsignal-test.c | 337 
> +
>  4 files changed, 445 insertions(+), 24 deletions(-)
>  create mode 100644 tests/newsignal-test.c

Hi,

this patch fixes the following issue I have been having:

Start weston/x11 or weston/wayland, run kcachegrind (Qt4) via Xwayland,
hover pointer over any kcachegrind button that shows a tooltip, move
pointer away, repeat a couple of times. Sooner rather than later Weston
gets into a busyloop and freezes.

That no longer happens.

Tested-by: Pekka Paalanen 

Reviewed-by: Pekka Paalanen 


The new tests not only include the old signal tests, you also added
tests for the exact case we were tripping over: removing the next
callback from current callback while iterating the list of callbacks.
It is missing the case of a callback removing the callback added and
called just before the current one.

You also add tests for removing and adding a listener from a callback
to the same signal. Interesting behavioral guarantees. You test for
get() returning non-NULL in various cases too, though it would be much
better if it could test for the correct pointer rather than just
non-NULL.

Anyway, I find the tests adequate. You could also add yourself to the
copyright holders, this is significant new code IMO.


You missed one place that is still using the old wl_signal:
event-loop.c, struct wl_event_loop, destroy_signal. Luckily that seems
to be a completely private definition, and the implementation can be
switched in a follow-up patch.


> diff --git a/Makefile.am b/Makefile.am
> index d78a0ca..d0c8bd3 100644
> --- a/Makefile.am
> +++ b/Makefile.am
> @@ -159,6 +159,7 @@ built_test_programs = \
>   socket-test \
>   queue-test  \
>   signal-test \
> + newsignal-test  \
>   resources-test  \
>   message-test\
>   headers-test\
> @@ -226,6 +227,9 @@ queue_test_SOURCES = tests/queue-test.c
>  queue_test_LDADD = libtest-runner.la
>  signal_test_SOURCES = tests/signal-test.c
>  signal_test_LDADD = libtest-runner.la
> +# wayland-server.c is needed here to access wl_priv_* functions
> +newsignal_test_SOURCES = tests/newsignal-test.c src/wayland-server.c
> +newsignal_test_LDADD = libtest-runner.la

The other alternative would be to put wl_priv_*() into wayland-util.c.
That one gets built into a static helper lib, so the test programs
would get the definitions.

>  resources_test_SOURCES = tests/resources-test.c
>  resources_test_LDADD = libtest-runner.la
>  message_test_SOURCES = tests/message-test.c
> diff --git a/src/wayland-private.h b/src/wayland-private.h
> index 676b181..434cb04 100644
> --- a/src/wayland-private.h
> +++ b/src/wayland-private.h
> @@ -35,6 +35,7 @@
>  #define WL_HIDE_DEPRECATED 1
>  
>  #include "wayland-util.h"
> +#include "wayland-server-core.h"
>  
>  /* Invalid memory address */
>  #define WL_ARRAY_POISON_PTR (void *) 4
> @@ -233,4 +234,21 @@ zalloc(size_t s)
>   return calloc(1, s);
>  }
>  
> +struct wl_priv_signal {
> + struct wl_list listener_list;
> + struct wl_list emit_list;
> +};
> +
> +void
> +wl_priv_signal_init(struct wl_priv_signal *signal);
> +
> +void
> +wl_priv_signal_add(struct wl_priv_signal *signal, struct wl_listener 
> *listener);
> +
> +struct wl_listener *
> +wl_priv_signal_get(struct wl_priv_signal *signal, wl_notify_func_t notify);
> +
> +void
> +wl_priv_signal_emit(struct wl_priv_signal *signal, void *data);
> +
>  #endif
> diff --git a/src/wayland-server.c b/src/wayland-server.c
> index 9d7d9c1..dae3c1d 100644
> --- a/src/wayland-server.c
> +++ b/src/wayland-server.c
> @@ -78,10 +78,10 @@ struct wl_client {
>   uint32_t mask;
>   struct wl_list link;
>   struct wl_map objects;
> - struct wl_signal destroy_signal;
> + struct wl_priv_signal destroy_signal;
>   struct ucred ucred;
>   int error;
> - struct wl_signal resource_created_signal;
> + struct wl_priv_signal resource_created_signal;
>  };
>  
>  struct wl_display {
> @@ -97,8 +97,8 @@ struct 

Re: package configuration file is missing in wayland-ivi-extension

2016-12-14 Thread Pekka Paalanen
On Wed, 14 Dec 2016 10:44:30 +0530
Arun Kumar  wrote:

> Hi,
> 
> The package configuration file is missing in wayland-ivi-extension.
> 
> This file is a hard dependency when building wayland-ivi-extension along
> with other dependent packages [gstreamer].
> 
> An example file
> 
> 
> prefix=/usr
> exec_prefix=/usr
> libdir=/usr/lib
> includedir=/usr/include
> 
> Name: Wayland-ivi-extension
> Description: interface library layermanager
> Version: 1.9.1
> Libs: -L${libdir} -lilmClient -lilmCommon -lilmControl -lilmInput
> Cflags: -I${includedir}
> 

Hi,

this does not sound related to anything we care about in Wayland/Weston
upstream. We have nothing that would depend on or provide any of the
ilm* libraries.


Thanks,
pq


pgpc6C3nDS1tI.pgp
Description: OpenPGP digital signature


Re: Remote display with 3D acceleration using Wayland/Weston

2016-12-14 Thread Pekka Paalanen
On Tue, 13 Dec 2016 14:39:31 -0600
DRC  wrote:

> Greetings.  I am the founder and principal developer for The VirtualGL
> Project, which has (since 2004) produced a GLX interposer (VirtualGL)
> and a high-speed X proxy (TurboVNC) that are widely used for running
> Linux/Unix OpenGL applications remotely with hardware-accelerated
> server-side 3D rendering.  For those who aren't familiar with VirtualGL,
> it basically works by:

Hi,

could you be more specific on what you mean by "server-side", please?
Are you referring to the machine where the X server runs, or the
machine that is remote from a user perspective where the app runs?

My confusion is caused by the difference in the X11 vs. Wayland models.
The display server the app connects to is not on the same side in one
model as in the other model.


With X11 (traditional indirect rendering with X11 over network):

Machine A                 | Machine B
                          |
App -> libs (X11, GLX) -----> X server -> display
                          |            -> GPU B



With Wayland apps remoted:

Machine A                     |  Machine B
                              |
App                           |
  -> EGL and GL libs -> GPU A |
  --(wayland)--> Weston --(VNC/RDP)--> VNC/RDP viewer -> window system -> display


Wayland apps handle all rendering themselves, there is nothing for
sending rendering commands to another process like the Wayland
compositor.

What a Wayland compositor needs to do is to advertise support for EGL
Wayland platform for clients. That it does by using the
EGL_WL_bind_wayland_display extension.

If you want all GL rendering to happen in the machine where the app
runs, then you don't have to do much anything, it already works like
that. You only need to make sure the compositor initializes EGL, which
in Weston's case means using the gl-renderer. The renderer does not
have to actually composite anything if you want to remote windows
separately, but it is needed to gain access to the window contents. In
Weston, only the renderer knows how to access the contents of all
windows (wl_surfaces).

If OTOH you want to send GL rendering commands to the other machine
than where the app is running, that will require a great deal of work,
since you have to implement serialization and de-serialization of
OpenGL (and EGL) yourself. (It has been done before, do ask me if you
want details.)

> -- Interposing (via LD_PRELOAD) GLX calls from the OpenGL application
> -- Rewriting the GLX calls such that OpenGL contexts are created in
> Pbuffers instead of windows
> -- Redirecting the GLX calls to the server's local display (usually :0,
> which presumably has a GPU attached) rather than the remote display or
> the X proxy
> -- Reading back the rendered 3D images from the server's local display
> and transferring them to the remote display or X proxy when the
> application swaps buffers or performs other "triggers" (such as calling
> glFinish() when rendering to the front buffer)
> 
> There is more complexity to it than that, but that's at least the
> general idea.

Ok, so that sounds like you want the GL execution to happen in the
app-side machine. That's the easy case. :-)

> At the moment, I'm investigating how best to accomplish a similar feat
> in a Wayland/Weston environment.  I'm given to understand that building
> a VNC server on top of Weston is straightforward and has already been
> done as a proof of concept, so really my main question is how to do the
> OpenGL stuff.  At the moment, my (very limited) understanding of the
> architecture seems to suggest that I have two options:

Weston has the RDP backend already, indeed.

> (1) Implement an interposer similar in concept to VirtualGL, except that
> this interposer would rewrite EGL calls to redirect them from the
> Wayland display to a low-level EGL device that supports off-screen
> rendering (such as the devices provided through the
> EGL_PLATFORM_DEVICE_EXT extension, which is currently supported by
> nVidia's drivers.)  How to get the images from that low-level device
> into the Weston compositor when it is using a remote display back-end is
> an open question, but I assume I'd have to ask the compositor for a
> surface (which presumably would be allocated from main memory) and
> handle the transfer of the pixels from the GPU to that surface.  That is
> similar in concept to how VirtualGL currently works, vis-a-vis using
> glReadPixels to transfer the rendered OpenGL pixels into an MIT-SHM image.

I think you have an underlying assumption that EGL and GL would somehow
automatically be carried over the network, and you need to undo it.
That does not happen, as the display server always runs in the same
machine as the application. The Wayland display is always local; it can
never be remote, simply because Wayland can never go over a network.

Furthermore, all GL rendering is always 

Re: [PATCH libinput 1/3] touchpad: convert two functions to use the device->phys helpers

2016-12-14 Thread Hans de Goede

Hi,

On 14-12-16 08:36, Peter Hutterer wrote:

Signed-off-by: Peter Hutterer 


Series looks good to me:

Reviewed-by: Hans de Goede 

Regards,

Hans



---
 src/evdev-mt-touchpad.c | 24 +---
 src/evdev.h | 27 +++
 2 files changed, 40 insertions(+), 11 deletions(-)

diff --git a/src/evdev-mt-touchpad.c b/src/evdev-mt-touchpad.c
index 7bac8ec..26b65de 100644
--- a/src/evdev-mt-touchpad.c
+++ b/src/evdev-mt-touchpad.c
@@ -478,18 +478,19 @@ tp_process_key(struct tp_dispatch *tp,
 static void
 tp_unpin_finger(const struct tp_dispatch *tp, struct tp_touch *t)
 {
-   double xdist, ydist;
+   struct phys_coords mm;
+   struct device_coords delta;

if (!t->pinned.is_pinned)
return;

-   xdist = abs(t->point.x - t->pinned.center.x);
-   xdist *= tp->buttons.motion_dist.x_scale_coeff;
-   ydist = abs(t->point.y - t->pinned.center.y);
-   ydist *= tp->buttons.motion_dist.y_scale_coeff;
+   delta.x = abs(t->point.x - t->pinned.center.x);
+   delta.y = abs(t->point.y - t->pinned.center.y);
+
+   mm = evdev_device_unit_delta_to_mm(tp->device, &delta);

/* 1.5mm movement -> unpin */
-   if (hypot(xdist, ydist) >= 1.5) {
+   if (hypot(mm.x, mm.y) >= 1.5) {
t->pinned.is_pinned = false;
return;
}
@@ -962,8 +963,8 @@ tp_need_motion_history_reset(struct tp_dispatch *tp)
 static bool
 tp_detect_jumps(const struct tp_dispatch *tp, struct tp_touch *t)
 {
-   struct device_coords *last;
-   double dx, dy;
+   struct device_coords *last, delta;
+   struct phys_coords mm;
const int JUMP_THRESHOLD_MM = 20;

/* We haven't seen pointer jumps on Wacom tablets yet, so exclude
@@ -978,10 +979,11 @@ tp_detect_jumps(const struct tp_dispatch *tp, struct tp_touch *t)
/* called before tp_motion_history_push, so offset 0 is the most
 * recent coordinate */
last = tp_motion_history_offset(t, 0);
-   dx = 1.0 * abs(t->point.x - last->x) / tp->device->abs.absinfo_x->resolution;
-   dy = 1.0 * abs(t->point.y - last->y) / tp->device->abs.absinfo_y->resolution;
+   delta.x = abs(t->point.x - last->x);
+   delta.y = abs(t->point.y - last->y);
+   mm = evdev_device_unit_delta_to_mm(tp->device, &delta);

-   return hypot(dx, dy) > JUMP_THRESHOLD_MM;
+   return hypot(mm.x, mm.y) > JUMP_THRESHOLD_MM;
 }

 static void
diff --git a/src/evdev.h b/src/evdev.h
index 071b9ec..c07b09f 100644
--- a/src/evdev.h
+++ b/src/evdev.h
@@ -596,6 +596,33 @@ evdev_libinput_context(const struct evdev_device *device)
 }

 /**
+ * Convert the pair of delta coordinates in device space to mm.
+ */
+static inline struct phys_coords
+evdev_device_unit_delta_to_mm(const struct evdev_device* device,
+ const struct device_coords *units)
+{
+   struct phys_coords mm = { 0,  0 };
+   const struct input_absinfo *absx, *absy;
+
+   if (device->abs.absinfo_x == NULL ||
+   device->abs.absinfo_y == NULL) {
+   log_bug_libinput(evdev_libinput_context(device),
+"%s: is not an abs device\n",
+device->devname);
+   return mm;
+   }
+
+   absx = device->abs.absinfo_x;
+   absy = device->abs.absinfo_y;
+
+   mm.x = 1.0 * units->x/absx->resolution;
+   mm.y = 1.0 * units->y/absy->resolution;
+
+   return mm;
+}
+
+/**
  * Convert the pair of coordinates in device space to mm. This takes the
  * axis min into account, i.e. a unit of min is equivalent to 0 mm.
  */




Re: [RFC wayland-protocols] Color management protocol

2016-12-14 Thread The Rasterman
On Wed, 14 Dec 2016 18:49:14 +1100 Graeme Gill  said:

> Carsten Haitzler (The Rasterman) wrote:
> > On Mon, 12 Dec 2016 17:57:08 +1100 Graeme Gill  said:
> 
> >> Right. So a protocol for querying the profile of each output for its
> >> surface is a base requirement.
> > 
> > i totally disagree. the compositor should simply provide available
> > colorspaces (and generally only provide those that hardware can do). what
> > screen they apply to is unimportant.
> 
> Please read my earlier posts. No (sane) compositor can implement CMM
> capabilities to a color critical applications requirements,
> so color management without any participation of a compositor
> is a core requirement.

of course it can. client provides 30bit (10bit per rgb) buffers for example and
the compositor can remap from the provided colorspace for that buffer to the
real display colorspace.

> > if the colorspace is native to that display or possible, the compositor
> > will do NO CONVERSION of your pixel data and display directly (and instead
> > convert sRGB data into that colorspace).
> 
> Relying on an artificial side effect (the so called "null color transform")
> to implement the ability to directly control what is displayed, is a poor
> approach, as I've explained at length previously.

but that is EXACTLY what you have detailed to rely on for color managed
applications. for core color management you say that the client knows the
colorspace/profile/mappings of the monitor and renders appropriately and
expects its pixel values to be presented 1:1 without remapping on the screen
because it knows the colorspace...

> > if your surface spans 2 screens the compositor may
> > convert some to the colorspace of a monitor if it does not support that
> > colorspace. choose the colorspace (as a client) that matches your data best.
> > compositor will do a "best effort".
> 
> No compositor should be involved for core support. The application
> should be able to render appropriately to each portion of the span.

then no need for any extension. :) compositor HAS to be involved to at least
tell you the colorspace of the monitor... as the screen is its resource.

> > this way client doesnt need to know about outputs, which outputs it spans
> > etc. and compositor will pick up the pieces. let me give some more complex
> > examples:
> 
> That only works if the client doesn't care about color management very much -
> i.e. it's not a color critical application. I'd hope that the intended use of
> Wayland is wider in scope than that.

how does it NOT work? let me give a really simple version of this.

you have a YUV buffer. some screens can display yuv, some cannot. you want to
know which screens support yuv and know where your surface is mapped to which
screens so you can render some of your buffer (some regions) in yuv and some
in rgb (i'm assuming packed YUVxYUVxYUVx and RGBxRGBxRGBx layout here for
example)... you wish to move all color correct rendering, clipping that correct
(yuv vs rgb) rendering client-side and have the compositor just not care.

this leads to the artifacts i was mentioning. just this one will be a LOT more
obvious.

> > compositor has a mirroring mode where it can mirror a window across multiple
> > screens.
> 
> Sure, and in that case the user has a choice about which screen is
> properly color managed. Nothing new there - the same currently
> applies on X11, OS X, MSWin. Anyone doing color critical work
> will not run in such modes, or will just use the color managed screen.

the point of wayland is to be "every frame is perfect". this breaks that.

> > some screens can or cannot do color management.
> 
> Nothing to do with screens - core color management is up to
> the application, and all it needs is to know the display profile.

i mean they are able to display wider gammut of color beyond the limited sRGB
range that is common.

> > what happens when the colorspace changes on the fly (you recalbrate
> > the screen or output driving hardware). you expect applications to directly
> > control this and have to respond to this and redraw content all the time?
> 
> Yep, same as any other sort of re-rendering event (i.e. exactly what happens
> with current systems - nothing new here.)

and this leads to imperfect frames.

> > this can be far simpler:
> > 
> > 1. list of supported colorspaces (bonus points if flags say if its able to
> > be native or is emulated).
> > 2. colorspace attached to buffer by client.
> > 
> > that's it.
> 
> If you don't care so much about color, yes. i.e. this is
> what I call "Enhanced" color management, rather than core.
> It doesn't have to be as flexible or as accurate, but it has
> the benefit of being easy to use for applications that don't care
> as much, or currently aren't color managed at all.

how not? a colorspace/profile can be a full transform with r/g/b points in
space... not just a simple enum with only fixed values (well thats how i'm

Re: [RFC wayland-protocols] Color management protocol

2016-12-14 Thread The Rasterman
On Wed, 14 Dec 2016 18:27:13 +1100 Graeme Gill  said:

> Carsten Haitzler (The Rasterman) wrote:
> > On Mon, 12 Dec 2016 18:18:21 +1100 Graeme Gill  said:
> 
> >> The correct approach to avoiding such issues is simply
> >> to make both aspects Wayland (extension) protocols, so
> >> that Wayland color management and color sensitive applications
> >> have the potential to work across all Wayland systems,
> >> rather than being at best balkanized, or at worst, not
> >> supported.
> > 
> > "not supported" == sRGB (gamma). 
> 
> No, not supported = native device response = not color managed.

and for most displays that is sRGB.

> > render appropriately.
> > most displays are not
> > capable of wide gammuts so you'll HAVE to handle this case no matter what.
> 
> I've no idea what you mean.

most displays have "horrible color response". they are sRGB. they cannot
display a wider part of the color spectrum. some professional monitors/displays
can do this. e.g. 98% of Adobe RGB.

either way monitors tend to have slightly different color reproduction and most
are "not that good" so basically sRGB. the compositor then is effectively
saying "unmanaged == sRGB, but it may really be anything so don't be fussy".

> > either compositor will fake it and reduce your colors down to sRGB, or your
> > apps produce sRGB by default and have code paths for extended colorspace
> > support *IF* it exists AND different colorspaces are natively supported by
> > the display hardware.
> 
> No compositor is involved. If the application doesn't know
> the output display profile, then it can't do color management.

it can assume sRGB.

-- 
- Codito, ergo sum - "I code, therefore I am" --
The Rasterman (Carsten Haitzler)    ras...@rasterman.com
