Re: [RFC wayland-protocols] Color management protocol

2016-12-13 Thread Graeme Gill
Carsten Haitzler (The Rasterman) wrote:
> On Mon, 12 Dec 2016 17:57:08 +1100 Graeme Gill  said:

>> Right. So a protocol for querying the profile of each output for its surface
>> is a base requirement.
> 
> i totally disagree. the compositor should simply provide available colorspaces
> (and generally only provide those that hardware can do). what screen they
> apply to is unimportant.

Please read my earlier posts. No (sane) compositor can implement CMM
capabilities to a color-critical application's requirements,
so color management without any compositor participation
is a core requirement.

> if the colorspace is native to that display or possible, the compositor will
> do NO CONVERSION of your pixel data and display directly (and instead convert
> sRGB data into that colorspace).

Relying on an artificial side effect (the so-called "null color transform")
to provide direct control over what is displayed is a poor
approach, as I've explained at length previously.

> if your surface spans 2 screens the compositor may
> convert some to the colorspace of a monitor if it does not support that
> colorspace. choose the colorspace (as a client) that matches your data best.
> compositor will do a "best effort".

No compositor should be involved for core support. The application
should be able to render appropriately to each portion of the span.
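
To illustrate what core support needs: a client can already learn which
outputs its surface intersects via wl_surface's enter/leave events, so the
missing piece is only a per-output profile query. A minimal sketch, in which
only the profile lookup is hypothetical:

#include <wayland-client.h>

static void
surface_enter(void *data, struct wl_surface *surface,
              struct wl_output *output)
{
        /* surface now intersects this output: fetch its profile
         * (hypothetical request) and re-render the overlapping part */
        /* profile = get_output_profile(output); */
}

static void
surface_leave(void *data, struct wl_surface *surface,
              struct wl_output *output)
{
        /* drop any cached profile for this output */
}

static const struct wl_surface_listener surface_listener = {
        surface_enter,
        surface_leave,
};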

> this way the client doesn't need to know about outputs, which outputs it
> spans etc., and the compositor will pick up the pieces. let me give some
> more complex examples:

That only works if the client doesn't care about color management very much -
i.e. it's not a color critical application. I'd hope that the intended use of
Wayland is wider in scope than that.

> compositor has a mirroring mode where it can mirror a window across multiple
> screens.

Sure, and in that case the user has a choice about which screen is
properly color managed. Nothing new there - the same currently
applies on X11, OS X, MSWin. Anyone doing color critical work
will not run in such modes, or will just use the color managed screen.

> some screens can or cannot do color management.

Nothing to do with screens - core color management is up to
the application, and all it needs is to know the display profile.

> what happens when the colorspace changes on the fly (you recalibrate
> the screen or output driving hardware)? you expect applications to directly
> control this and have to respond to this and redraw content all the time?

Yep, same as any other sort of re-rendering event (i.e. exactly what happens
with current systems - nothing new here.)

> this can be far simpler:
> 
> 1. list of supported colorspaces (bonus points if flags say if it's able to
> be native or is emulated).
> 2. colorspace attached to buffer by client.
> 
> that's it.

If you don't care so much about color, yes. i.e. this is
what I call "Enhanced" color management, rather than core.
It doesn't have to be as flexible or as accurate, but it has
the benefit of being easy to use for applications that don't care
as much, or currently aren't color managed at all.
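
For concreteness, the client side of that two-step model might look roughly
like the sketch below. The zcolorspace_v1 interface and its requests are
invented purely for illustration (no such protocol exists); only
wl_surface_attach/commit are real:

/* 1. compositor advertises the colorspaces it supports */
static void
colorspace_advertised(void *data, struct zcolorspace_v1 *mgr,
                      const char *name, uint32_t is_native)
{
        /* e.g. name = "sRGB"; is_native says native vs. emulated */
}

/* 2. client tags its buffer with the colorspace of its pixel data */
static void
submit_frame(struct zcolorspace_v1 *mgr, struct wl_surface *surface,
             struct wl_buffer *buffer)
{
        zcolorspace_v1_set_buffer_colorspace(mgr, buffer, "AdobeRGB");
        wl_surface_attach(surface, buffer, 0, 0);
        wl_surface_commit(surface);
}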

Graeme Gill.




[PATCH libinput 1/3] touchpad: convert two functions to use the device->phys helpers

2016-12-13 Thread Peter Hutterer
Signed-off-by: Peter Hutterer 
---
 src/evdev-mt-touchpad.c | 24 +++++++++++++-----------
 src/evdev.h             | 27 +++++++++++++++++++++++++++
 2 files changed, 40 insertions(+), 11 deletions(-)

diff --git a/src/evdev-mt-touchpad.c b/src/evdev-mt-touchpad.c
index 7bac8ec..26b65de 100644
--- a/src/evdev-mt-touchpad.c
+++ b/src/evdev-mt-touchpad.c
@@ -478,18 +478,19 @@ tp_process_key(struct tp_dispatch *tp,
 static void
 tp_unpin_finger(const struct tp_dispatch *tp, struct tp_touch *t)
 {
-   double xdist, ydist;
+   struct phys_coords mm;
+   struct device_coords delta;
 
if (!t->pinned.is_pinned)
return;
 
-   xdist = abs(t->point.x - t->pinned.center.x);
-   xdist *= tp->buttons.motion_dist.x_scale_coeff;
-   ydist = abs(t->point.y - t->pinned.center.y);
-   ydist *= tp->buttons.motion_dist.y_scale_coeff;
+   delta.x = abs(t->point.x - t->pinned.center.x);
+   delta.y = abs(t->point.y - t->pinned.center.y);
+
+   mm = evdev_device_unit_delta_to_mm(tp->device, &delta);
 
/* 1.5mm movement -> unpin */
-   if (hypot(xdist, ydist) >= 1.5) {
+   if (hypot(mm.x, mm.y) >= 1.5) {
t->pinned.is_pinned = false;
return;
}
@@ -962,8 +963,8 @@ tp_need_motion_history_reset(struct tp_dispatch *tp)
 static bool
 tp_detect_jumps(const struct tp_dispatch *tp, struct tp_touch *t)
 {
-   struct device_coords *last;
-   double dx, dy;
+   struct device_coords *last, delta;
+   struct phys_coords mm;
const int JUMP_THRESHOLD_MM = 20;
 
/* We haven't seen pointer jumps on Wacom tablets yet, so exclude
@@ -978,10 +979,11 @@ tp_detect_jumps(const struct tp_dispatch *tp, struct tp_touch *t)
/* called before tp_motion_history_push, so offset 0 is the most
 * recent coordinate */
last = tp_motion_history_offset(t, 0);
-   dx = 1.0 * abs(t->point.x - last->x) / tp->device->abs.absinfo_x->resolution;
-   dy = 1.0 * abs(t->point.y - last->y) / tp->device->abs.absinfo_y->resolution;
+   delta.x = abs(t->point.x - last->x);
+   delta.y = abs(t->point.y - last->y);
+   mm = evdev_device_unit_delta_to_mm(tp->device, &delta);
 
-   return hypot(dx, dy) > JUMP_THRESHOLD_MM;
+   return hypot(mm.x, mm.y) > JUMP_THRESHOLD_MM;
 }
 
 static void
diff --git a/src/evdev.h b/src/evdev.h
index 071b9ec..c07b09f 100644
--- a/src/evdev.h
+++ b/src/evdev.h
@@ -596,6 +596,33 @@ evdev_libinput_context(const struct evdev_device *device)
 }
 
 /**
+ * Convert the pair of delta coordinates in device space to mm.
+ */
+static inline struct phys_coords
+evdev_device_unit_delta_to_mm(const struct evdev_device* device,
+ const struct device_coords *units)
+{
+   struct phys_coords mm = { 0,  0 };
+   const struct input_absinfo *absx, *absy;
+
+   if (device->abs.absinfo_x == NULL ||
+   device->abs.absinfo_y == NULL) {
+   log_bug_libinput(evdev_libinput_context(device),
+"%s: is not an abs device\n",
+device->devname);
+   return mm;
+   }
+
+   absx = device->abs.absinfo_x;
+   absy = device->abs.absinfo_y;
+
+   mm.x = 1.0 * units->x/absx->resolution;
+   mm.y = 1.0 * units->y/absy->resolution;
+
+   return mm;
+}
+
+/**
  * Convert the pair of coordinates in device space to mm. This takes the
  * axis min into account, i.e. a unit of min is equivalent to 0 mm.
  */
-- 
2.9.3
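
For anyone following along: the new helper's arithmetic is just device units
divided by the axis resolution, which evdev reports in units per mm. A worked
example of the 1.5mm unpin threshold above, with invented numbers:

/* invented values: both axes report 12 units/mm, and the finger has
 * drifted 20 units in x and 9 units in y since being pinned */
double mm_x = 20 / 12.0;        /* 1.67 mm */
double mm_y =  9 / 12.0;        /* 0.75 mm */
/* hypot(1.67, 0.75) = 1.83 mm >= 1.5 mm, so the finger unpins */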



[PATCH libinput 3/3] filter: add a comment for how we calculate velocity

2016-12-13 Thread Peter Hutterer
Signed-off-by: Peter Hutterer 
---
 src/filter.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/src/filter.c b/src/filter.c
index a7cb545..77652a2 100644
--- a/src/filter.c
+++ b/src/filter.c
@@ -228,6 +228,13 @@ calculate_velocity_after_timeout(struct pointer_tracker *tracker)
  tracker->time + MOTION_TIMEOUT);
 }
 
+/**
+ * Calculate the velocity based on the tracker data. Velocity is averaged
+ * across multiple historical values, provided those values aren't "too
+ * different" to our current one. That includes either being too far in the
+ * past, moving into a different direction or having too much of a velocity
+ * change between events.
+ */
 static double
 calculate_velocity(struct pointer_accelerator *accel, uint64_t time)
 {
-- 
2.9.3
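
The three cutoffs the comment describes, reduced to a self-contained sketch
(not the actual libinput code; the sample type and both thresholds are
invented for illustration):

#include <math.h>
#include <stdint.h>

struct sample { uint64_t time; double dx, dy; }; /* newest first */

#define TIMEOUT_US 300000 /* invented: max age of a usable sample */
#define MAX_VDIFF  1.0    /* invented: max deviation from the average */

static double
velocity_sketch(const struct sample *s, int n, uint64_t now)
{
        double sum = 0.0;
        int used = 0;

        for (int i = 0; i < n; i++) {
                if (s[i].time >= now)
                        break; /* time ran backwards (cf. patch 2/3) */
                if (now - s[i].time > TIMEOUT_US)
                        break; /* too far in the past */

                double v = hypot(s[i].dx, s[i].dy) / (now - s[i].time);
                int reversed = s[i].dx * s[0].dx + s[i].dy * s[0].dy < 0;

                if (used && (reversed || fabs(v - sum / used) > MAX_VDIFF))
                        break; /* "too different" from the average so far */

                sum += v;
                used++;
        }

        return used ? sum / used : 0.0;
}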



[PATCH libinput 2/3] filter: split a condition up so we can mark it as bug

2016-12-13 Thread Peter Hutterer
Signed-off-by: Peter Hutterer 
---
 src/filter.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/filter.c b/src/filter.c
index 0bb066c..a7cb545 100644
--- a/src/filter.c
+++ b/src/filter.c
@@ -245,9 +245,12 @@ calculate_velocity(struct pointer_accelerator *accel, uint64_t time)
for (offset = 1; offset < NUM_POINTER_TRACKERS; offset++) {
tracker = tracker_by_offset(accel, offset);
 
+   /* Bug: time running backwards */
+   if (tracker->time > time)
+   break;
+
/* Stop if too far away in time */
-   if (time - tracker->time > MOTION_TIMEOUT ||
-   tracker->time > time) {
+   if (time - tracker->time > MOTION_TIMEOUT) {
if (offset == 1)
result = calculate_velocity_after_timeout(tracker);
break;
-- 
2.9.3



Re: [RFC wayland-protocols] Color management protocol

2016-12-13 Thread Graeme Gill
Carsten Haitzler (The Rasterman) wrote:
> On Mon, 12 Dec 2016 18:18:21 +1100 Graeme Gill  said:

>> The correct approach to avoiding such issues is simply
>> to make both aspects Wayland (extension) protocols, so
>> that Wayland color management and color sensitive applications
>> have the potential to work across all Wayland systems,
>> rather than being at best balkanized, or at worst, not
>> supported.
> 
> "not supported" == sRGB (gamma). 

No, not supported = native device response = not color managed.

> render appropriately. most displays are not capable of wide gamuts so
> you'll HAVE to handle this case no matter what.

I've no idea what you mean.

> either compositor will fake it and reduce your colors down to sRGB, or your
> apps produce sRGB by default and have code paths for extended colorspace
> support *IF* it exists AND different colorspaces are natively supported by the
> display hardware.

No compositor is involved. If the application doesn't know
the output display profile, then it can't do color management.

Graeme Gill.



Re: [RFC wayland-protocols] Color management protocol

2016-12-13 Thread The Rasterman
On Tue, 13 Dec 2016 17:14:21 +1100 Graeme Gill  said:

> Carsten Haitzler (The Rasterman) wrote:
> > On Fri, 9 Dec 2016 15:30:40 +1100 Graeme Gill  said:
> 
> >> I thought I'd explained this in the previous post ? - perhaps
> >> I'm simply not understanding where you are coming from on this.
> > 
> > you didn't explain before if the point is for this to be mandatory or
> > optional.
> 
> I'm not sure exactly what you mean by "this".

this == support for a color correction protocol AND actually the support for
providing the real colorspace of the monitor, providing non-sRGB pixel data by
clients in another colorspace (e.g. Adobe RGB), and it MUST work or apps will
literally fall over.

at the end of the day there will be some extension and then some form of
guidance to developers, e.g. "you can guarantee this will work" or "this may
or may not work. deal with it".

> I've suggested two color management extensions, one building on
> the other. By being an extension, I assume that neither one is
> mandatory, although "core" would be a dependency of "enhanced".

well there are extensions that are managed outside of wayland as a project.
these obviously are not mandatory. the more core it becomes the more it is
likely "mandatory".

> > above you basically say it has to be mandatory as applications will then
> > fail to run (at least some set of them might).
> 
> That's not at all what I said. I said that applications have the
> option of falling back to more basic approaches :-
> enhance back to core, core back to none.

yes. my bad. i misread the replies and quotes. i thought you said "not at all"
to the "clients should fall back" path.

> > when you add extra features like new colorspace support, people writing apps
> > and toolkits need to know what the expectation is. can they just assume it
> > will work, or do they need to have a fallback path on their side.
> 
> I've not mentioned any new colorspace support, so I'm not
> sure what you are talking about.

not a format. a colorspace. R numbers are still R, as are G and B. it's just
that they point to different "real life spectrum" colors and so they need to
be transformed from one colorspace to another (sRGB to Adobe RGB, or Adobe RGB
to sRGB).

> > compositor writers need to know too. if color management is mandatory then
> > their compositor is basically broken until they add it.
> 
> Naturally a compositor that supports an extension would
> have to implement it!
> 
> > i don't see a difference between enhanced and core.
> 
> Hmm. They are quite distinct.
> 
>  core: The application either implements its own CMM (lcms or
>   ArgyllCMS icclib/imdi etc.), or uses a system provided CMM
>   (i.e. ColorSync, WCS, AdobeCMM etc.).
>  enhanced: The compositor implements some level of CMM itself,
>   using one of the above libraries, GPU etc.
> 
>  core: The application requires information from the graphics
>   system (via Wayland in this particular discussion), namely
>   the profile for the display corresponding to each
>   pixel region.

i really do not think this is needed. simply a list of available colorspaces
would be sufficient. applications then provide data in the colorspace of their
choice, given what is supported by the compositor and the input data they have...

>  enhanced: The compositor is provided with source colorspace
>   profiles by the application.

i again don't see why this is needed.

>  core: The application uses its CMM to transform source colorspaces
>   to the display colorspaces, and sends the pixels to the graphics system.
> 
>  enhanced: The compositor uses its CMM to transform the pixels provided
>   by the application in the provided source colorspaces to the display
>   colorspaces.

again - we're just arguing who does the transform. i don't see the point. the
compositor will have a list of colorspaces it can display (either A screen can
display this OR can be configured to display in this colorspace, ... OR the
compositor can software transform pixels in this colorspace to whatever is
necessary to display correctly).

the client simply chooses what colorspace to provide buffers in. it chooses the
one that is best for it.

> > color management means:
> > 
> > 1. being able to report what colorspace is "default" on the display right
> > now (and what may be able to be enabled, possibly on request).
> 
> I'm not sure what you mean by "enabled". A display colorspace is just
> information that is needed by the CMM, so it is either known and available to
> what needs it, or not known or not available to what needs it.

a colorspace that is enabled is when the display output for that screen maps
RGB values directly to that given colorspace. i.e. the common default is sRGB.
the display may also be able to switch to adobe rgb at the flip of a switch or
by request from the host system (via data lines). it may be fixed, with only
one colorspace ever able to be displayed.

> > 2. being able to report what colorspaces 

Re: [RFC wayland-protocols] Color management protocol

2016-12-13 Thread The Rasterman
On Tue, 13 Dec 2016 17:46:25 +1100 Graeme Gill  said:

> Carsten Haitzler (The Rasterman) wrote:
> 
> > wouldn't it be best not to explicitly ask for an output colorspace and just
> > provide the colorspace of your buffer and let the compositor decide? e.g. if
> > your window is on top, or it's the largest one, or it's focused, then the
> > compositor MAY switch the colorspace of that monitor to match your surface's
> > buffer colorspace, and if it goes into the background or whatever, switch
> > back? it can (and likely should) emulate other colorspaces then.
> 
> That doesn't seem like color management. Ultimately you arrive
> at the native display space, so if things are to look as intended,
> something (application or compositor) should transform from
> a non-native spaces into the native space.
> 
> At a practical level, if it is expected that the compositor
> deals with transparency (which I assume it does), then I'd
> suggest something simple - compositing in output device space
> (Isn't that what current Wayland compositors are doing ?),

a display may not have a single native colorspace. it may be able to switch.
embedded devices can do this as the display panel may have extra control lines
for switching to a different display gamut/profile. it may be done at the gfx
card output level too... so it can change on the fly.

yes. compositors right now work in display colorspace. they do no conversions.
eventually they SHOULD, to display correctly. to do so they need a color
profile for the display.

it may be that a window spans 8 different screens all with different profiles.
then what? currently the image looks a bit different on each display. with a
proper color correcting compositor it can make them all look the same. if you
want apps to be able to provide "raw in screen colorspace pixels" this is going
to be horrible, especially as windows span multiple screens. if i move the
window around, the client has drawn different parts of its buffer with
different colorspaces/profiles in mind and then has to keep redrawing to adjust
as it moves. you'll be able to see "trails" of incorrect coloring around the
boundaries of the screens until the client catches up.

the compositor SHOULD do any color correction needed at this point. if you want
PROPER color correction the compositor at a MINIMUM needs to be able to report
the color profile of a screen even if it does no correcting. yes you may have
multiple screens. i really dislike the above scenario of incorrect pixel tails
because this goes against the whole philosophy of "every frame is perfect". you
cannot do this given your proposal. it can only be done if the compositor
handles the color correction and the clients just provide the colorspace being
used for their pixel data.

> or as a refinement, compositing in a per-channel light
> linearised space, that is reversible at the bit level.
> 
> Bottom line is that a color critical application won't
> use compositor transparency for anything it cares about.

i'm totally ignoring the case of having alpha. yes. blending in gamma space is
"wrong". but it's fast. :)

> > e.g. if buffer is adobe rgb, then switch display to work in adobe rgb but
> > re-render everything else that is sRGB into adobe rgb space... there might
> > be a slight "flicker" so to speak as maybe some banding appears in some
> > gradients of sRGB windows or colors are ever so slightly off, but the
> > compositor is optimizing for the surface it thinks is most important.
> 
> Sounds cumbersome. It's certainly not how existing systems work.
> 
> > i really don't like the
> > idea of applications explicitly controlling screen colorspace.
> 
> I'm not sure what you mean by that. Traditionally applications render
> to the display colorspace. Changing the display setup (i.e. switching
> display colorspace emulation) is a user action, complicated only by the
> need to make the corresponding change to the display profile, and re-rendering
> anything that depends on the display profile.

being able to modify what the screen colorspace is in any way is what i
dislike. only the compositor should affect this, based on its own decisions.

-- 
- Codito, ergo sum - "I code, therefore I am" --
The Rasterman (Carsten Haitzler)    ras...@rasterman.com



package configuration file is missing in wayland-ivi-extension

2016-12-13 Thread Arun Kumar
Hi,

The package configuration (.pc) file is missing in wayland-ivi-extension.

This file is a hard dependency when building wayland-ivi-extension along
with other packages that depend on it (e.g. gstreamer).

An example file


prefix=/usr
exec_prefix=/usr
libdir=/usr/lib
includedir=/usr/include

Name: Wayland-ivi-extension
Description: interface library layermanager
Version: 1.9.1
Libs: -L${libdir} -lilmClient -lilmCommon -lilmControl -lilmInput
Cflags: -I${includedir}
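
With the file installed (assuming it is named wayland-ivi-extension.pc and is
on PKG_CONFIG_PATH), dependent builds can then pull the flags with e.g.
"pkg-config --cflags --libs wayland-ivi-extension" instead of hard-coding the
ilm library names.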



Thanks and Regards,
B.Arun kumar.


Re: Remote display with 3D acceleration using Wayland/Weston

2016-12-13 Thread Christian Stroetmann

On 13.Dec.2016 21:39, DRC wrote:

I thought about this on the 14th of March 2014 (see also [1]).
Have you looked at https://github.com/waltham/waltham ?



Regards
Christian Stroetmann

[1] OntoGraphics (www.ontolinux.com/technology/ontographics/ontographics.htm)



Greetings.  I am the founder and principal developer for The VirtualGL
Project, which has (since 2004) produced a GLX interposer (VirtualGL)
and a high-speed X proxy (TurboVNC) that are widely used for running
Linux/Unix OpenGL applications remotely with hardware-accelerated
server-side 3D rendering.  For those who aren't familiar with VirtualGL,
it basically works by:

-- Interposing (via LD_PRELOAD) GLX calls from the OpenGL application
-- Rewriting the GLX calls such that OpenGL contexts are created in
Pbuffers instead of windows
-- Redirecting the GLX calls to the server's local display (usually :0,
which presumably has a GPU attached) rather than the remote display or
the X proxy
-- Reading back the rendered 3D images from the server's local display
and transferring them to the remote display or X proxy when the
application swaps buffers or performs other "triggers" (such as calling
glFinish() when rendering to the front buffer)

There is more complexity to it than that, but that's at least the
general idea.
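
For readers unfamiliar with the mechanism, the core of such an interposer is
quite small. A minimal sketch (not VirtualGL's actual code), built as a
shared library and loaded via LD_PRELOAD:

/* gcc -shared -fPIC -o libglxspy.so glxspy.c -ldl   (name is made up) */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <GL/glx.h>

void
glXSwapBuffers(Display *dpy, GLXDrawable drawable)
{
        static void (*real_swap)(Display *, GLXDrawable);

        if (!real_swap)
                real_swap = (void (*)(Display *, GLXDrawable))
                            dlsym(RTLD_NEXT, "glXSwapBuffers");

        /* this is the "trigger" point: read back the rendered frame
         * here and ship it to the remote display or X proxy */

        real_swap(dpy, drawable);
}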

At the moment, I'm investigating how best to accomplish a similar feat
in a Wayland/Weston environment.  I'm given to understand that building
a VNC server on top of Weston is straightforward and has already been
done as a proof of concept, so really my main question is how to do the
OpenGL stuff.  At the moment, my (very limited) understanding of the
architecture seems to suggest that I have two options:

(1) Implement an interposer similar in concept to VirtualGL, except that
this interposer would rewrite EGL calls to redirect them from the
Wayland display to a low-level EGL device that supports off-screen
rendering (such as the devices provided through the
EGL_PLATFORM_DEVICE_EXT extension, which is currently supported by
nVidia's drivers.)  How to get the images from that low-level device
into the Weston compositor when it is using a remote display back-end is
an open question, but I assume I'd have to ask the compositor for a
surface (which presumably would be allocated from main memory) and
handle the transfer of the pixels from the GPU to that surface.  That is
similar in concept to how VirtualGL currently works, vis-a-vis using
glReadPixels to transfer the rendered OpenGL pixels into an MIT-SHM image.

(2) Figure out some way of redirecting the OpenGL rendering within
Weston itself, rather than using an interposer.  This is where I'm fuzzy
on the details.  Is this even possible with a remote display back-end?
Maybe it's as straightforward as writing a back-end that allows Weston
to use the aforementioned low-level EGL device to obtain all of the
rendering surfaces that it passes to applications, but I don't have a
good enough understanding of the architecture to know whether or not
that idea is nonsense.  I know that X proxies, such as Xvnc, allocate a
"virtual framebuffer" that is used by the X.org code for performing X11
rendering.  Because this virtual framebuffer is located in main memory,
you can't do hardware-accelerated OpenGL with it unless you use a
solution like VirtualGL.  It would be impractical to allocate the X
proxy's virtual framebuffer in GPU memory because of the fine-grained
nature of X11, but since Wayland is all image-based, perhaps that is no
longer a limitation.

Any advice is greatly appreciated.  Thanks for your time.

DRC




Re: Problem With libwayland-cursor.so

2016-12-13 Thread Pekka Paalanen
On Sat, 10 Dec 2016 22:09:29 +0330
Ali  wrote:

> Dear Wayland,
> 
> After I read the “Building Weston” doc, I cross-compiled wayland and then I
> tried to run Weston by:
> start-weston 
> 
> It complained:
> 
> [00:01:00.878] weston 1.0.3
>http://wayland.freedesktop.org/
>Bug reports to: https://bugs.freedesktop.org/enter_bug.cgi?product=weston
>Build:
> [00:01:00.878] OS: Linux, 3.0.35, #3 SMP PREEMPT Tue Nov 15 09:36:14 MST 2016, armv7l
> [00:01:00.883] Loading module '/usr/lib/weston/gal2d-backend.so'
> [00:01:01.031] input device unknown, /dev/input/mice ignored: unsupported device type
> [00:01:01.191] Loading module '/usr/lib/weston/desktop-shell.so'
> [00:01:01.199] libwayland: using socket /tmp_weston/wayland-0
> [00:01:01.200] launching '/usr/libexec/weston-desktop-shell'
> /usr/libexec/weston-desktop-shell: symbol lookup error: /usr/lib/libwayland-cursor.so.0: undefined symbol: wl_proxy_marshal_constructor


Hi,

this happens when you have built libwayland-cursor with a
wayland-scanner version that is higher than the libwayland-client you
built or are using at runtime.

It is recommended to always have wayland-scanner exactly the same
version as the libwayland you are building and running. It is ok to use
an older wayland-scanner than the libwayland you are running, but the
opposite is not guaranteed to work so far.
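
(A quick way to check, assuming both pkg-config files are installed: compare
the output of "pkg-config --modversion wayland-scanner" with that of
"pkg-config --modversion wayland-client".)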


Thanks,
pq

