Re: [PATCH weston 1/2] toytoolkit: avoid unnecessary redraws when focus changes

2014-02-17 Thread Emilio Pozuelo Monfort
On 12/02/14 15:55, Jasper St. Pierre wrote:
 What reschedules the frame being drawn when focus is gained / lost, then?

I'm not sure what reschedules it, but it does happen: twice when the window is
focused, twice when it is unfocused (maybe something to optimize: why are we
redrawing twice?).

Emilio


Re: [RFC v2] Wayland presentation extension (video protocol)

2014-02-17 Thread Pekka Paalanen
On Mon, 17 Feb 2014 03:23:40 +
Zhang, Xiong Y xiong.y.zh...@intel.com wrote:

 On Thu, 2014-01-30 at 17:35 +0200, Pekka Paalanen wrote:
  Hi,
  
  it's time for a take two on the Wayland presentation extension.
  
  
  1. Introduction
  
  The v1 proposal is here:
  http://lists.freedesktop.org/archives/wayland-devel/2013-October/011496.html
  
  In v2 the basic idea is the same: you can queue frames with a
  target presentation time, and you can get accurate presentation
  feedback. All the details are new, though. The re-design started
  from the wish to handle resizing better, preferably without
  clearing the buffer queue.
  
  All the changed details are probably too much to describe here,
  so it is maybe better to look at this as a new proposal. It
  still does build on Frederic's work, and everyone who commented
  on it. Special thanks to Axel Davy for his counter-proposal and
  fighting with me on IRC. :-)
  
  Some highlights:
  
  - Accurate presentation feedback is possible also without
queueing.
  
  - You can queue also EGL-based rendering, and get presentation
feedback if you want. Also EGL can do this internally, too, as
long as EGL and the app do not try to use queueing at the same time.
  
  - More detailed presentation feedback to better allow predicting
future display refreshes.
  
  - If wl_viewport is used, neither video resolution changes nor
surface (window) size changes alone require clearing the queue.
Video can continue playing even during resizes.
 Sorry, I can't understand this. Could you explain this more?
 What's the current problem with resizing a window? How will you resolve it
 in the presentation extension?

Hi,

the presentation extension adds a buffer queue to wl_surfaces. The
compositor autonomously takes buffers from that queue, and applies them
to the surface when it deems the time is right. The client is notified
about this only after the fact.

Without wl_viewport, the problem with resizing is that for the client
to do a guaranteed glitch-free resize, it has to:
1. Discard_queue, so that the compositor stops processing the queue.
2. Wait for the feedback to arrive, so that the client knows exactly
   which buffer (frame) is on screen.
3. Re-draw the frame in the new size.
4. Start queueing frames in the new size again.

As you can see, that algorithm requires the client and the server to
synchronize by flushing out the whole queue and re-queueing everything,
because the surface size is determined from the buffer dimensions. Step
2 is also important to avoid going backwards in the video, that is,
accidentally showing a frame from an earlier time than the one the
compositor already put on screen.

If the client wants to change either the surface size or the buffer
size (the same thing really, without wl_viewport), it has to do this
synchronization dance, which carries a high risk of causing a jerk in
the video playback.

Using a sub-surface with the proper sub-surface resizing algorithm, and
re-queueing in reverse chronological order, would mitigate the problem,
but would still carry a considerable risk of producing a jerk in playback.

The solution I propose, which comes from using wl_viewport, is that a
client can resize the surface, or resize the buffers independently,
without having to discard_queue. If the client needs to synchronize
surface state updates to a particular buffer (frame), then it still
needs to do the synchronization dance.

However, video content is usually (right?) of a fixed size and then gets
scaled to the right size for display. Video players would use wl_viewport
for that anyway. The added benefit explicitly enabled by the split into
surface vs. buffer state in the presentation protocol is that you can
do surface (window) resizing without any synchronization wrt. the
buffer queue. The compositor will always scale the buffer to the right
size for the window. You do not need to send discard_queue or know which
buffer is currently presented. And you don't need to re-draw the
buffers when the window size changes either.

OTOH, if the window size is supposed to stay the same while the video
resolution changes (for instance, QoS with a live video stream), the
client still does not need to synchronize: just keep on queueing
buffers like it always does, and the compositor takes care of the
appropriate scaling.

(When the compositor takes care of scaling with wl_viewport, it may also
be able to use overlay hardware to do superior scaling quality with
little cost.)
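
To illustrate, with wl_viewport the window resize reduces to a pure
surface-state update on the client side (a sketch only;
set_destination is the proposed split of wl_viewport.set, not an
existing request):

static void
resize_window_with_viewport(struct wl_viewport *viewport,
			    struct wl_surface *surf,
			    int32_t win_width, int32_t win_height)
{
	/* Only surface state changes; the queued fixed-size video
	 * buffers are untouched and keep being presented. */
	wl_viewport_set_destination(viewport, win_width, win_height);
	wl_surface_commit(surf);
	/* No discard_queue, no waiting for feedback, no re-rendering. */
}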

Therefore, I designed the presentation extension to have this, at first
perhaps rather surprising, explicit split between surface and buffer
state to specifically enable this co-operation with wl_viewport.

Note that this extension now re-specifies what happens when a normal
wl_surface.commit is not preceded by a wl_surface.attach!

I think I really need to emphasize that in a follow-up email.


Thanks,
pq

Core protocol change; [RFC v2] Wayland presentation extension

2014-02-17 Thread Pekka Paalanen
Hi,

there is one important thing in the below spec I really need to
highlight! See further below.


On Thu, 30 Jan 2014 17:35:17 +0200
Pekka Paalanen ppaala...@gmail.com wrote:

 Hi,
 
 it's time for a take two on the Wayland presentation extension.
 
 
   1. Introduction
 
 The v1 proposal is here:
 http://lists.freedesktop.org/archives/wayland-devel/2013-October/011496.html
 
 In v2 the basic idea is the same: you can queue frames with a
 target presentation time, and you can get accurate presentation
 feedback. All the details are new, though. The re-design started
 from the wish to handle resizing better, preferably without
 clearing the buffer queue.
 
 All the changed details are probably too much to describe here,
 so it is maybe better to look at this as a new proposal. It
 still does build on Frederic's work, and everyone who commented
 on it. Special thanks to Axel Davy for his counter-proposal and
 fighting with me on IRC. :-)
 
 Some highlights:
 
 - Accurate presentation feedback is possible also without
   queueing.
 
 - You can queue also EGL-based rendering, and get presentation
   feedback if you want. Also EGL can do this internally, too, as
   long as EGL and the app do not try to use queueing at the same time.
 
 - More detailed presentation feedback to better allow predicting
   future display refreshes.
 
 - If wl_viewport is used, neither video resolution changes nor
   surface (window) size changes alone require clearing the queue.
   Video can continue playing even during resizes.
 
 The protocol interfaces are arranged as
 
   global.method(wl_surface, ...)
 
 just for brevity. We could as well do the factory approach:
 
   o = global.get_presentation(wl_surface)
   o.method(...)
 
 Or if we wanted to make it a mandatory part of the Wayland core
 protocol, we could just extend wl_surface itself:
 
   wl_surface.method(...)
 
 and put the clock_id event in wl_compositor. That all is still
 open and fairly uninteresting, so let's concentrate on the other
 details.
 
 The proposal refers to wl_viewport.set_source and
 wl_viewport.destination requests, which do not yet exist in the
 scaler protocol extension. These are just the wl_viewport.set
 arguments split into separate src and dst requests.
 
 Here is the new proposal, some design rationale follows. Please,
 do ask why something is designed like it is if it puzzles you. I
 have a load of notes I couldn't clean up for this email. This
 does not even intend to completely solve all XWayland needs, but
 for everything native on Wayland I hope it is sufficient.
 
 
   2. The protocol specification
 
 <?xml version="1.0" encoding="UTF-8"?>
 <protocol name="presentation_timing">
 
   <copyright>
 Copyright © 2013-2014 Collabora, Ltd.
 
 Permission to use, copy, modify, distribute, and sell this
 software and its documentation for any purpose is hereby granted
 without fee, provided that the above copyright notice appear in
 all copies and that both that copyright notice and this permission
 notice appear in supporting documentation, and that the name of
 the copyright holders not be used in advertising or publicity
 pertaining to distribution of the software without specific,
 written prior permission.  The copyright holders make no
 representations about the suitability of this software for any
 purpose.  It is provided "as is" without express or implied
 warranty.
 
 THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS
 SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
 FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY
 SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN
 AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION,
 ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF
 THIS SOFTWARE.
   </copyright>
 
   <interface name="presentation" version="1">
     <description summary="timed presentation related wl_surface requests">
   The main features of this interface are accurate presentation
   timing feedback, and queued wl_surface content updates to ensure
   smooth video playback while maintaining audio/video
   synchronization. Some features use the concept of a presentation
   clock, which is defined in presentation.clock_id event.
 
   Requests 'feedback' and 'queue' can be regarded as additional
   wl_surface methods. They are part of the double-buffered
   surface state update mechanism, where other requests first set
   up the state and then wl_surface.commit atomically applies the
   state into use. In other words, wl_surface.commit submits a
   content update.
 
   Interface wl_surface has requests to set surface related state
   and buffer related state, because there is no separate interface
   for buffer state alone. Queueing requires 

Re: [RFC v2] Wayland presentation extension (video protocol)

2014-02-17 Thread Pekka Paalanen
On Mon, 17 Feb 2014 01:25:07 +0100
Mario Kleiner mario.kleiner...@gmail.com wrote:

 Hello Pekka,
 
 i'm not yet subscribed to wayland-devel, and a bit short on time atm., 
 so i'll take a shortcut via direct e-mail for some quick feedback for 
 your Wayland presentation extension v2.

Hi Mario,

I'm very happy to hear from you! I have seen your work fly by on the
dri-devel@ (IIRC) mailing list, and when I was writing the RFCv2 email,
I was wondering whether I should personally CC you. Sorry I didn't. I
will definitely include you on v3.

I hope you don't mind me adding wayland-devel@ to CC; your feedback is
much appreciated and backs up my design nicely. ;-)

 I'm the main developer of a FOSS toolkit called Psychtoolbox-3 
 (www.psychtoolbox.org) which is used a lot by neuro-scientists for 
 presentation of visual and auditory stimuli, mostly for basic and 
 clinical brain research. As you can imagine, very good audio-video sync 
 and precisely timed and time stamped visual stimulus presentation is 
 very important to us. We currently use X11 + GLX + OML_sync_control 
 extension to present OpenGL rendered frames precisely on the classic 
 X-Server + DRI2 + DRM/KMS. We need frame accurate presentation and 
 sub-millisecond accurate presentation timestamps. For this reason i was 
 quite a bit involved in development and testing of DRI2/DRM/KMS 
 functionality related to page-flipping and time stamping.
 
 One of the next steps for me will be to implement backends into 
 Psychtoolbox for DRI3/Present and for Wayland, so i was very happy when 
 i stumbled over your RFC for the Wayland presentation extension. Core 
 Wayland protocol seems to only support millisecond accurate 32-Bit 
 timestamps, which are not good enough for many neuro-science 
 applications. So first off, thank you for your work on this!

Right, and those 32-bit timestamps are not even guaranteed to be
related to any particular output refresh cycle wrt. what you have
presented before.

 Anyway, i read the mailing list thread and had some look over the 
 proposed patches in your git branch, and this mostly looks very good to 
 me :). I haven't had time to test any of this, or so far to play even 
 with the basic Wayland/Weston, but i thought i give you some quick 
 feedback from the perspective of a future power-user who will certainly 
 use this extension heavily for very timing sensitive applications.

Excellent!

 1. Wrt. an additional preroll_feedback request 
 http://lists.freedesktop.org/archives/wayland-devel/2014-January/013014.html,
  
 essentially the equivalent of glXGetSyncValuesOML(), that would be very 
 valuable to us.
 
 To answer the question you had on #dri-devel on this: DRM/KMS implements 
 an optional driver hook that, if implemented by a KMS driver, allows to 
 get the vblank count and timestamps at any time with almost microsecond 
 precision, even if vblank irq's were disabled for extended periods of 
 time, without the need to wait for the next vblank irq. This is 
 implemented on intel-kms and radeon-kms for all Intel/AMD/ATI gpu's 
 since around October 2010 or Linux 2.6.35 iirc. It will be supported for 
 nouveau / NVidia desktop starting with the upcoming Linux 3.14. I have 
 verified with external high precision measurement equipment that those 
 timestamps for vblank and kms-pageflip completion are accurate down to 
 better than 20 usecs on those drivers.
 
 If a kms driver doesn't implement the hook, as is currently the case for 
 afaik all embedded gpu drivers, then the vblank count will be reported 
 instantaneously, but the vblank timestamp will be reported as zero until 
 the first vblank irq has happened if irq's were turned off.
 
 For reference:
 
 http://lxr.free-electrons.com/source/drivers/gpu/drm/drm_irq.c#L883
 

Indeed, the preroll_feedback request was modeled to match
glXGetSyncValuesOML.
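
For reference, this is roughly what that GLX query looks like on the
client side (assuming GLX_OML_sync_control is advertised; error
handling trimmed):

#include <GL/glx.h>
#include <GL/glxext.h>
#include <inttypes.h>
#include <stdio.h>

static void
query_sync_values(Display *dpy, GLXDrawable drawable)
{
	PFNGLXGETSYNCVALUESOMLPROC get_sync_values =
		(PFNGLXGETSYNCVALUESOMLPROC)
		glXGetProcAddressARB((const GLubyte *)"glXGetSyncValuesOML");
	int64_t ust, msc, sbc;

	/* UST: timestamp of the last vblank, MSC: vblank counter,
	 * SBC: swap buffer counter. */
	if (get_sync_values && get_sync_values(dpy, drawable, &ust, &msc, &sbc))
		printf("ust %" PRId64 " msc %" PRId64 " sbc %" PRId64 "\n",
		       ust, msc, sbc);
}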

Do you need to be able to call GetSyncValues at any time and have it
return ASAP? Do you call it continuously, and even between frames?

Or would it be enough for you to present a dummy frame, and just wait
for the presentation feedback as usual? Since you are asking, I guess
this is not enough.

We could take it even further if you need to monitor the values
continuously. We could add a protocol interface, where you could
subscribe to an event, per-surface or maybe per-output, whenever the
vblank counter increases. If the compositor is not in a continuous
repaint loop, it could use a timer to approximately sample the values
(ask DRM), or whatever. The benefit would be that we would avoid a
roundtrip for each query, as the compositor is streaming the events to
your app. Do you need the values so often that this would be worth it?

For special applications this would be ok, as in those cases power
consumption is not an issue.

 2. As far as the decision boundary for your presentation target 
 timestamps. Fwiw, the only case i know of NV_present_video extension, 
 and the way i expose it to user code in my toolkit 

Re: Core protocol change; [RFC v2] Wayland presentation extension

2014-02-17 Thread Jason Ekstrand
On Feb 17, 2014 2:35 AM, Pekka Paalanen ppaala...@gmail.com wrote:

 Hi,

 there is one important thing in the below spec I really need to
 highlight! See further below.


 On Thu, 30 Jan 2014 17:35:17 +0200
 Pekka Paalanen ppaala...@gmail.com wrote:

  Hi,
 
  it's time for a take two on the Wayland presentation extension.
 
 
1. Introduction
 
  The v1 proposal is here:
 
http://lists.freedesktop.org/archives/wayland-devel/2013-October/011496.html
 
  In v2 the basic idea is the same: you can queue frames with a
  target presentation time, and you can get accurate presentation
  feedback. All the details are new, though. The re-design started
  from the wish to handle resizing better, preferably without
  clearing the buffer queue.
 
  All the changed details are probably too much to describe here,
  so it is maybe better to look at this as a new proposal. It
  still does build on Frederic's work, and everyone who commented
  on it. Special thanks to Axel Davy for his counter-proposal and
  fighting with me on IRC. :-)
 
  Some highlights:
 
  - Accurate presentation feedback is possible also without
queueing.
 
  - You can queue also EGL-based rendering, and get presentation
feedback if you want. Also EGL can do this internally, too, as
long as EGL and the app do not try to use queueing at the same time.
 
  - More detailed presentation feedback to better allow predicting
future display refreshes.
 
  - If wl_viewport is used, neither video resolution changes nor
surface (window) size changes alone require clearing the queue.
Video can continue playing even during resizes.
 
  The protocol interfaces are arranged as
 
global.method(wl_surface, ...)
 
  just for brevity. We could as well do the factory approach:
 
o = global.get_presentation(wl_surface)
o.method(...)
 
  Or if we wanted to make it a mandatory part of the Wayland core
  protocol, we could just extend wl_surface itself:
 
wl_surface.method(...)
 
  and put the clock_id event in wl_compositor. That all is still
  open and fairly uninteresting, so let's concentrate on the other
  details.
 
  The proposal refers to wl_viewport.set_source and
  wl_viewport.destination requests, which do not yet exist in the
  scaler protocol extension. These are just the wl_viewport.set
  arguments split into separate src and dst requests.
 
  Here is the new proposal, some design rationale follows. Please,
  do ask why something is designed like it is if it puzzles you. I
  have a load of notes I couldn't clean up for this email. This
  does not even intend to completely solve all XWayland needs, but
  for everything native on Wayland I hope it is sufficient.
 
 
2. The protocol specification
 
  <?xml version="1.0" encoding="UTF-8"?>
  <protocol name="presentation_timing">
 
  <copyright>
  Copyright © 2013-2014 Collabora, Ltd.
 
  Permission to use, copy, modify, distribute, and sell this
  software and its documentation for any purpose is hereby granted
  without fee, provided that the above copyright notice appear in
  all copies and that both that copyright notice and this permission
  notice appear in supporting documentation, and that the name of
  the copyright holders not be used in advertising or publicity
  pertaining to distribution of the software without specific,
  written prior permission.  The copyright holders make no
  representations about the suitability of this software for any
  purpose.  It is provided "as is" without express or implied
  warranty.
 
  THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS
  SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
  FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY
  SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
  WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN
  AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION,
  ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF
  THIS SOFTWARE.
</copyright>
 
<interface name="presentation" version="1">
  <description summary="timed presentation related wl_surface requests">
The main features of this interface are accurate presentation
timing feedback, and queued wl_surface content updates to ensure
smooth video playback while maintaining audio/video
synchronization. Some features use the concept of a presentation
clock, which is defined in presentation.clock_id event.
 
Requests 'feedback' and 'queue' can be regarded as additional
wl_surface methods. They are part of the double-buffered
surface state update mechanism, where other requests first set
up the state and then wl_surface.commit atomically applies the
state into use. In other words, wl_surface.commit submits a
content update.
 
Interface wl_surface has 

[PATCH libinput] evdev: fix device_transform_ functions

2014-02-17 Thread Benjamin Tissoires
X and Y are li_fixed_t, which is a 24.8 fixed-point real number.
li_fixed_t's maximum is thus ~8388607.

On a touchscreen with a range of 32767 values (like a 3M sensor),
mapped to a monitor with a resolution of 1920x1080, we currently have:
(x - li_fixed_from_int(device->abs.min_x)) * width == 62912640

which is 7 times bigger than the li_fixed_t maximum.

To keep the precision of the sensor, first compute the normalized
coordinate (in the range 0..1.0) of the touch point, then multiply it
by the screen dimension, and convert it back to a li_fixed_t.

Signed-off-by: Benjamin Tissoires benjamin.tissoi...@gmail.com
---

Hi,

I have hit this problem by playing with a touchscreen reporting 4096 values, on
xf86-input-libinput. xf86-input-libinput does not use the real screen size, but
0x instead. This means that even a touchscreen with a range of 128 values
appears to work properly :(

I went through the multitouch device database, and took one example of a more
realistic use case (xf86-input-libinput is still at an early stage).

Cheers,
Benjamin

 src/evdev.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/evdev.c b/src/evdev.c
index d8dff65..0d033b8 100644
--- a/src/evdev.c
+++ b/src/evdev.c
@@ -91,8 +91,9 @@ evdev_device_transform_x(struct evdev_device *device,
 			 li_fixed_t x,
 			 uint32_t width)
 {
-	return (x - li_fixed_from_int(device->abs.min_x)) * width /
+	double x_scaled = (li_fixed_to_double(x) - device->abs.min_x) /
 		(device->abs.max_x - device->abs.min_x + 1);
+	return li_fixed_from_double(x_scaled * width);
 }
 
 li_fixed_t
@@ -100,8 +101,9 @@ evdev_device_transform_y(struct evdev_device *device,
 			 li_fixed_t y,
 			 uint32_t height)
 {
-	return (y - li_fixed_from_int(device->abs.min_y)) * height /
+	double y_scaled = (li_fixed_to_double(y) - device->abs.min_y) /
 		(device->abs.max_y - device->abs.min_y + 1);
+	return li_fixed_from_double(y_scaled * height);
 }
 
 static void
-- 
1.8.5.3
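
For reference, a standalone sketch of the arithmetic above (not part of
the patch; the li_fixed_t helpers are assumed to match libinput's 24.8
definitions):

#include <stdint.h>
#include <stdio.h>

typedef int32_t li_fixed_t;                      /* 24.8 fixed point */
#define li_fixed_from_int(i)    ((li_fixed_t)((i) * 256))
#define li_fixed_to_double(f)   ((f) / 256.0)
#define li_fixed_from_double(d) ((li_fixed_t)((d) * 256.0))

int main(void)
{
	const int min_x = 0, max_x = 32767;      /* e.g. a 3M sensor */
	const uint32_t width = 1920;
	li_fixed_t x = li_fixed_from_int(max_x); /* touch at the right edge */

	/* Old expression: the intermediate product is far beyond what a
	 * 24.8 li_fixed_t can hold, so the 32-bit math wraps around. */
	int64_t product = (int64_t)(x - li_fixed_from_int(min_x)) * width;
	printf("intermediate product: %lld\n", (long long)product);

	/* New expression: normalize to 0..1 first, then scale. */
	double x_scaled = (li_fixed_to_double(x) - min_x) / (max_x - min_x + 1);
	printf("scaled x: %d (24.8 fixed)\n",
	       (int)li_fixed_from_double(x_scaled * width));
	return 0;
}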



Re: Inter-client surface embedding

2014-02-17 Thread Jasper St. Pierre
GtkPlug and GtkSocket are really implemented in terms of XEmbed. As we've
found, XEmbed has a surprising number of problems in real-world use cases,
so it's considered deprecated.

Building something special-case for panels seems much better than trying to
implement something generic like WaylandEmbed.


On Mon, Feb 17, 2014 at 5:59 PM, Mark Thomas
mark-wayland-de...@efaref.net wrote:

 On Mon, 17 Feb 2014, Pekka Paalanen wrote:

  On Mon, 17 Feb 2014 00:04:19 + (GMT)
 Mark Thomas mark-wayland-de...@efaref.net wrote:

 - The subsurface has separate focus from the main window surface.  For
 the usual use cases of embedding like this, you'd prefer the parent
 surface to remain focused (or at least, appear focused) while the
 embedded
 surface is being interacted with.   Not sure if this is a general feature
 of subsurfaces, nor what could be done about it.


 set an empty input region, and the input events will fall through the
 surface. So you intend that the embedded client never gets input for
 the embedded surface directly?


 I think that the embedded client should still get input as normal, however
 the surface that it's embedded within should still appear to be focussed. I
 guess this is a shell interface question, and will probably need a shell
 interface extension.  I'll think about this more later on when I come round
 to writing the shell plugin that I'll undoubtedly need.


  Did you know about the earlier attempts on this? I think you should be
 able to find some discussion or a proposal by searching the
 wayland-devel@ archives for foreign surface protocol, IIRC. At that
 time, the conclusion was to use a nested mini-compositor approach
 instead, which is demoed in weston clients as weston-nested.


 I did not.  That's quite frustrating, I could have saved myself some time.
 I went back and looked and none of the posts mentioned embed or
 plug/socket so that's why I didn't find them. :(

 Do you know if any code came about from the foreign surface proposals?

 The nested mini-compositor example doesn't build for me as I don't have
 working EGL, so I never even noticed it!  Reading about it the approach
 seems to be more suited to nested application situations, e.g. a web
 browser embedding a document viewer.

 For the panel use case it seems like the wrong approach, as the embedded
 panel objects are merely fastened to the panel like badges, rather than
 part of the panel itself.  It seems a shame to reimplement a compositor in
 the panel when we've already got a perfectly good compositor to use.


  I see your protocol definition lacks all documentation on how it is
 supposed to be used and implemented. A verbal description would be nice,
 giving an overview.


 I did try to give a quick overview in the email, but it was late last
 night and I may not have been clear.

 I've pushed some doc updates to the protocol.xml file in my git repo.  But
 in terms of Jonas Ådahl's proposal, my protocol works the other way round:

   A creates a main surface
   A creates a hole on that surface and sets its position and size
   A gets the uid (handle) from the server
   A passes that uid to B via IPC
   B creates a surface
   B creates a plug on that surface with the uid it got from A
   B receives a configure event from the server with the size of the hole
   B creates a buffer of the correct size and renders its image to the
 surface


  How do you handle glitch-free resizing? Sub-surfaces handle glitch-free
 resizing by temporarily changing the sub-surface into synchronized
 mode, assuring the sub-surface has new content in the correct new size,
 and then atomically commits the whole tree of sub-surfaces with a
 commit to the root wl_surface. Do you have any synchronization
 guarantees like that? With separate processes cooperating to create a
 single window it will be even more important than with the
 existing sub-surfaces case, and you will need more IPC between the two
 clients. Using client1->client2 IPC would be more efficient than
 client1->server->client2.


 I don't.  Sorting out glitch-free interactive resizing is delegated to the
 clients, although you can get pretty good glitchy resizing by B repainting
 whenever it receives the configure event.

 My anticipated use case is applets inside panels, which aren't typically
 resized, so this implementation should be sufficient.


  Have you considered if your use case could be better served by
 moving some functionality into a special DE-specific client (e.g.
 weston-desktop-shell) and having separate protocol (an alternative
 shell interface) for panel clients to tell their wl_surface needs to
 be embedded into the panel, rather than implementing a generic
 mechanism where you need to solve all corner-cases in a generic way?
 If the protocol extension was designed particularly for panels, you
 might have an easy way out by defining special resize behaviour which
 would avoid most client-client negotiation.


 My plan was to patch Gtk3 to 

Re: Inter-client surface embedding

2014-02-17 Thread Bill Spitzak

Mark Thomas wrote:


I've pushed some doc updates to the protocol.xml file in my git repo.  But
in terms of Jonas Ådahl's proposal, my protocol works the other way round:

  A creates a main surface
  A creates a hole on that surface and sets its position and size
  A gets the uid (handle) from the server
  A passes that uid to B via IPC
  B creates a surface
  B creates a plug on that surface with the uid it got from A
  B receives a configure event from the server with the size of the hole
  B creates a buffer of the correct size and renders its image to the
surface


I do believe users are looking for something more like this than for
implementing a subcompositor. The subcompositor approach really worries me,
as it relies on the buffers being passed through the intermediate client as
fast as possible (i.e. without copying), and on the information that clients
use to figure out how to allocate their buffers being passed in the opposite
direction. I can't believe every client is going to get this right, and it
would seem to make it impossible for a sub-client to take advantage of
any new Wayland API until the parent client is updated.


I think the above description can be greatly simplified by removing the 
hole and plug objects and just using a subsurface:


  A creates a main surface
  A creates a subsurface for the hole
  A gets the uid of the subsurface from the compositor
  A passes that uid to B via IPC
  B uses this uid to get access to the subsurface
  B can now attach buffers and do other actions to the subsurface

I proposed this before but I think I am failing to communicate what I
wanted. The term uid seems pretty good and maybe makes it clearer.
Its purpose is so that B can't just guess at objects and get access to them,
and to be a simple piece of data (probably a big number) that can be
passed through IPC, in particular as part of the argv sent to exec the
child.


As far as I can tell Wayland will have no difficulties with two clients 
both owning a subsurface and trying to update it, as it will serialize 
all the requests and perform them in whatever order they appear. Either 
client can destroy the surface and the other will cleanly hear about it.



Re: [PATCH libinput] evdev: fix device_transform_ functions

2014-02-17 Thread Peter Hutterer
On Mon, Feb 17, 2014 at 01:42:52PM -0500, Benjamin Tissoires wrote:
 X and Y are li_fixed_t, which is 24.8 fixed point real number.
 li_fixed_t max is thus ~8388607.
 
 On a touchscreen with a range of 32767 values (like a 3M sensor), and
 mapped on monitor with a resolution of 1920x1080, we currently have:
 (x - li_fixed_from_int(device->abs.min_x)) * width == 62912640
 
 which is 7 times bigger than li_fixed_t max.
 
 To keep the precision of the sensor, first compute the uniformized
 coordinate (in range 0 .. 1.0) of the touch point, then multiply it
 by the screen dimension, and revert it to a li_fixed_t.
 
 Signed-off-by: Benjamin Tissoires benjamin.tissoi...@gmail.com
 ---
 
 Hi,
 
 I have hit this problem by playing with a touchscreen reporting 4096 values, 
 on
 xf86-input-libinput. xf86-input-libinput does not use the real screen size, 
 but
 0x instead. This allows to report a touchscreen with a range of 128 values
 to work properly :(
 
 I went through the multitouch device database, and took one example of a more
 real use-case (xf86-input-libinput is still in an early shape).

FWIW, this is exactly the type of use case where it would be simple and
worth it to knock up a test for a single device and make sure that the
coordinates are correct. That gives us a nice reproducer and prevents us
from errors like this in the future.

  src/evdev.c | 6 --
  1 file changed, 4 insertions(+), 2 deletions(-)
 
 diff --git a/src/evdev.c b/src/evdev.c
 index d8dff65..0d033b8 100644
 --- a/src/evdev.c
 +++ b/src/evdev.c
 @@ -91,8 +91,9 @@ evdev_device_transform_x(struct evdev_device *device,
li_fixed_t x,
uint32_t width)
  {
 -	return (x - li_fixed_from_int(device->abs.min_x)) * width /
 +	double x_scaled = (li_fixed_to_double(x) - device->abs.min_x) /
 		(device->abs.max_x - device->abs.min_x + 1);
 +	return li_fixed_from_double(x_scaled * width);

A simple 1L *  should suffice, right?

Cheers,
   Peter

  }
  
  li_fixed_t
 @@ -100,8 +101,9 @@ evdev_device_transform_y(struct evdev_device *device,
li_fixed_t y,
uint32_t height)
  {
 -	return (y - li_fixed_from_int(device->abs.min_y)) * height /
 +	double y_scaled = (li_fixed_to_double(y) - device->abs.min_y) /
 		(device->abs.max_y - device->abs.min_y + 1);
 +	return li_fixed_from_double(y_scaled * height);
  }
  
  static void
 -- 
 1.8.5.3
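
Peter's "1L *" remark presumably means widening the intermediate product
instead of going through double; a sketch of that reading (my assumption,
with the li_fixed_t helpers stubbed out, not an actual patch):

#include <stdint.h>

typedef int32_t li_fixed_t;              /* 24.8 fixed point, as in libinput */
#define li_fixed_from_int(i) ((li_fixed_t)((i) * 256))

/* Same formula as the original code, but the product is computed in
 * 64 bits so it can no longer overflow before the division. */
static li_fixed_t
transform_x_widened(li_fixed_t x, int32_t min_x, int32_t max_x, uint32_t width)
{
	return (li_fixed_t)((int64_t)(x - li_fixed_from_int(min_x)) * width /
			    (max_x - min_x + 1));
}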
 


Re: [RFC v4] Fullscreen shell protocol

2014-02-17 Thread Jason Ekstrand
I just added an implementation of RFCv4 which can be found here:

https://github.com/jekstrand/weston/tree/fullscreen-shell-RFCv4

Thanks,
--Jason Ekstrand


On Sat, Feb 15, 2014 at 1:12 AM, Pekka Paalanen ppaala...@gmail.com wrote:

 On Fri, 14 Feb 2014 11:11:33 -0600
 Jason Ekstrand ja...@jlekstrand.net wrote:

  Hi Pekka!  Thanks for the review.  Comments follow.
 
 
  On Fri, Feb 14, 2014 at 1:14 AM, Pekka Paalanen
  ppaala...@gmail.com wrote:
 
   Hi Jason
  
   On Thu, 13 Feb 2014 22:37:53 -0600
   Jason Ekstrand ja...@jlekstrand.net wrote:
  
The following is yet another take on the fullscreen shell
protocol. Previous versions more-or-less followed the
approach taken in wl_shell. This version completely reworks
the concept.  In particular, the protocol is split into two
use-cases.  The first is that of a simple client that wants
to present a surface or set of surfaces possibly with some
scaling. This happens through the present_surface request
which looks similar to that of wl_shell only without the
modesetting.
   
The second use-case is of a client that wants more control
over the outputs.  In this case, the client uses the
present_surface_for_mode request to present the surface at a
particular output mode.  This request provides a more-or-less
atomic modeset operation.  If the compositor can satisfy the
requested mode, then the mode is changed and the new surface
   is
presented.  Otherwise, the compositor harmlessly falls back
to the previously presented surface and the client is
informed that the switch failed.  This way, the surface is
either displayed correctly or not at
   all.
Of course, a client is free to call present_surface_for_mode
with the currently presented surface and hope for the best.
However, this may result in strange behavior and there is no
reliable fallback if the mode switch fails.
   
In particular, I would like feedback on the modesetting
portion of this protocol.  This is particularly targetted at
compositors that want to run inside weston or some other
fullscreen compositor.  In the next week or
   so,
I will attempt to implement all this in weston and see how
well it works. However, I would also like to know how well
this will work for other compositors such as KWin or Hawaii.
   
Thanks for your feedback,
--Jason Ekstrand
   
= Protocol follows: =
   
<protocol name="fullscreen_shell">
  <interface name="wl_fullscreen_shell" version="1">
  
   This interface should have a destructor request IMO. It's not
   strictly required, but I think it would be consistent (I think
   all global interfaces need an explicit destructor request) and
   more future-proof.
  
 
  Thanks for reminding me.  I'll get one added.
 
 
  
<description summary="Displays a single surface per output">
  Displays a single surface per output.
   
  This interface provides a mechanism for a single client
to display simple full-screen surfaces.  While there
technically may be
   multiple
  clients bound to this interface, only one of those
clients should
   be
  shown at a time.
   
  To present a surface, the client uses either the
present_surface or present_surface_for_mode requests.
Presenting a surface takes
   effect
  on the next wl_surface.commit.  See the individual
requests for details about scaling and mode switches.
   
  The client can have at most one surface per output at
any time. Requesting a surface be presented on an output that
already has a surface replaces the previously presented
surface.  Presenting a
   null
  surface removes its content and effectively disables
the output. Exactly what happens when an output is disabled
is compositor-specific.  The same surface may be presented
multiple outputs simultaneously.
  
   If the same surface is presented on multiple outputs, should
   the client have a way to say which output is to be considered
   the surface's main output, where e.g. presentation feedback is
   synced to?
  
 
  That's a good question.  Simple clients probably don't care.
  More complex clients such as compositors probably will.  However,
  I'd expect them to have one surface per output most of the time
  anyway.  I'll give that some thought.
 
 
   Maybe also note explicitly, that once a surface has been
   presented on an output, it stays on that output until
   explicitly removed, or output is unplugged? So that simple
   attach+damage+commit can be used to update the content, if that
   is the intention.
  
 
  Yes, that's a good point.  And I do intend to provide that
  guarantee.
 
 
  
</description>
   
<enum name="present_method">
  <description summary="different method to set the surface fullscreen">
  Hints to indicate to the compositor how to deal with a
conflict between 

[PATCH libinput 1/2] Hook up libevdev as backend

2014-02-17 Thread Peter Hutterer
libevdev wraps the various peculiarities of the evdev kernel API into a
type-safe API. It also buffers the device so checking for specific features at
a later time is easier than re-issuing the ioctls. Plus, it gives us almost
free support for SYN_DROPPED events (in the following patch).

This patch switches all the bit checks over to libevdev and leaves the event
processing as-is. Makes it easier to review.

Signed-off-by: Peter Hutterer peter.hutte...@who-t.net
---
 configure.ac |  7 ++---
 src/Makefile.am  |  2 ++
 src/evdev-touchpad.c | 25 ++-
 src/evdev.c  | 87 +---
 src/evdev.h  | 14 ++---
 5 files changed, 52 insertions(+), 83 deletions(-)

diff --git a/configure.ac b/configure.ac
index 44729a9..68e1d35 100644
--- a/configure.ac
+++ b/configure.ac
@@ -44,6 +44,7 @@ AC_CHECK_DECL(CLOCK_MONOTONIC,[],
 PKG_PROG_PKG_CONFIG()
 PKG_CHECK_MODULES(MTDEV, [mtdev >= 1.1.0])
 PKG_CHECK_MODULES(LIBUDEV, [libudev])
+PKG_CHECK_MODULES(LIBEVDEV, [libevdev >= 0.4])
 
 if test x$GCC = xyes; then
GCC_CFLAGS=-Wall -Wextra -Wno-unused-parameter -g -Wstrict-prototypes 
-Wmissing-prototypes -fvisibility=hidden
@@ -64,20 +65,16 @@ AC_ARG_ENABLE(tests,
  [build_tests=$enableval],
  [build_tests=auto])
 
-PKG_CHECK_MODULES(LIBEVDEV, [libevdev >= 0.4], [HAVE_LIBEVDEV=yes], [HAVE_LIBEVDEV=no])
 PKG_CHECK_MODULES(CHECK, [check >= 0.9.9], [HAVE_CHECK=yes], [HAVE_CHECK=no])
 
 if test x$build_tests = xauto; then
-   if test x$HAVE_CHECK = xyes -a x$HAVE_LIBEVDEV = xyes; then
+   if test x$HAVE_CHECK = xyes; then
build_tests=yes
fi
 fi
 if test x$build_tests = xyes -a x$HAVE_CHECK = xno; then
AC_MSG_ERROR([Cannot build tests, check is missing])
 fi
-if test x$build_tests = xyes -a x$HAVE_LIBEVDEV = xno; then
-   AC_MSG_ERROR([Cannot build tests, libevdev is missing])
-fi
 
 AM_CONDITIONAL(BUILD_TESTS, [test x$build_tests = xyes])
 
diff --git a/src/Makefile.am b/src/Makefile.am
index 6e27b3b..f544ccd 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -20,9 +20,11 @@ libinput_la_SOURCES =\
 
 libinput_la_LIBADD = $(MTDEV_LIBS) \
 $(LIBUDEV_LIBS) \
+$(LIBEVDEV_LIBS) \
 -lm
 libinput_la_CFLAGS = $(MTDEV_CFLAGS)   \
 $(LIBUDEV_CFLAGS)  \
+$(LIBEVDEV_CFLAGS) \
 $(GCC_CFLAGS)
 
 pkgconfigdir = $(libdir)/pkgconfig
diff --git a/src/evdev-touchpad.c b/src/evdev-touchpad.c
index d65ebb2..8185bf2 100644
--- a/src/evdev-touchpad.c
+++ b/src/evdev-touchpad.c
@@ -170,16 +170,16 @@ struct touchpad_dispatch {
 static enum touchpad_model
 get_touchpad_model(struct evdev_device *device)
 {
-   struct input_id id;
+   int vendor, product;
unsigned int i;
 
-	if (ioctl(device->fd, EVIOCGID, &id) < 0)
-		return TOUCHPAD_MODEL_UNKNOWN;
+	vendor = libevdev_get_id_vendor(device->evdev);
+	product = libevdev_get_id_product(device->evdev);
 
 	for (i = 0; i < ARRAY_LENGTH(touchpad_spec_table); i++)
-		if (touchpad_spec_table[i].vendor == id.vendor &&
+		if (touchpad_spec_table[i].vendor == vendor &&
 		    (!touchpad_spec_table[i].product ||
-		     touchpad_spec_table[i].product == id.product))
+		     touchpad_spec_table[i].product == product))
return touchpad_spec_table[i].model;
 
return TOUCHPAD_MODEL_UNKNOWN;
@@ -730,9 +730,7 @@ touchpad_init(struct touchpad_dispatch *touchpad,
 {
struct motion_filter *accel;
 
-   unsigned long prop_bits[INPUT_PROP_MAX];
-   struct input_absinfo absinfo;
-   unsigned long abs_bits[NBITS(ABS_MAX)];
+   const struct input_absinfo *absinfo;
 
bool has_buttonpad;
 
@@ -746,16 +744,13 @@ touchpad_init(struct touchpad_dispatch *touchpad,
/* Detect model */
touchpad-model = get_touchpad_model(device);
 
-	ioctl(device->fd, EVIOCGPROP(sizeof(prop_bits)), prop_bits);
-	has_buttonpad = TEST_BIT(prop_bits, INPUT_PROP_BUTTONPAD);
+	has_buttonpad = libevdev_has_property(device->evdev,
+					      INPUT_PROP_BUTTONPAD);
 
 	/* Configure pressure */
-	ioctl(device->fd, EVIOCGBIT(EV_ABS, sizeof(abs_bits)), abs_bits);
-	if (TEST_BIT(abs_bits, ABS_PRESSURE)) {
-		ioctl(device->fd, EVIOCGABS(ABS_PRESSURE), &absinfo);
+	if ((absinfo = libevdev_get_abs_info(device->evdev, ABS_PRESSURE))) {
 		configure_touchpad_pressure(touchpad,
-					    absinfo.minimum,
-					    absinfo.maximum);
+					    absinfo->minimum,
+					    absinfo->maximum);
}
 
/* Configure acceleration factor */
diff --git a/src/evdev.c b/src/evdev.c
index d8dff65..ba28fc6 100644
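
For reference, a standalone sketch of the libevdev calls this patch
switches to (not part of the patch; the device path is arbitrary):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <libevdev/libevdev.h>

int main(void)
{
	struct libevdev *dev = NULL;
	int fd = open("/dev/input/event0", O_RDONLY | O_NONBLOCK);

	if (fd < 0 || libevdev_new_from_fd(fd, &dev) < 0) {
		fprintf(stderr, "failed to open device\n");
		return 1;
	}

	/* The ID and property ioctls, via libevdev. */
	printf("vendor 0x%x product 0x%x\n",
	       libevdev_get_id_vendor(dev),
	       libevdev_get_id_product(dev));
	if (libevdev_has_property(dev, INPUT_PROP_BUTTONPAD))
		printf("device is a buttonpad\n");

	/* EVIOCGABS replacement: absinfo is cached inside libevdev. */
	const struct input_absinfo *abs =
		libevdev_get_abs_info(dev, ABS_PRESSURE);
	if (abs)
		printf("pressure range %d..%d\n", abs->minimum, abs->maximum);

	libevdev_free(dev);
	close(fd);
	return 0;
}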

Re: Inter-client surface embedding

2014-02-17 Thread Pekka Paalanen
On Mon, 17 Feb 2014 22:59:11 + (GMT)
Mark Thomas mark-wayland-de...@efaref.net wrote:

 On Mon, 17 Feb 2014, Pekka Paalanen wrote:
 
  On Mon, 17 Feb 2014 00:04:19 + (GMT)
  Mark Thomas mark-wayland-de...@efaref.net wrote:
 
 - The subsurface has separate focus from the main window surface.  For
  the usual use cases of embedding like this, you'd prefer the parent
  surface to remain focused (or at least, appear focused) while the embedded
  surface is being interacted with.   Not sure if this is a general feature
  of subsurfaces, nor what could be done about it.
 
  set an empty input region, and the input events will fall through the
  surface. So you intend that the embedded client never gets input for
  the embedded surface directly?
 
 I think that the embedded client should still get input as normal, however 
 the surface that it's embedded within should still appear to be focussed. 
 I guess this is a shell interface question, and will probably need a shell 
 interface extension.  I'll think about this more later on when I come 
 round to writing the shell plugin that I'll undoubtedly need.
 
  Did you know about the earlier attempts on this? I think you should be
  able to find some discussion or a proposal by searching the
  wayland-devel@ archives for foreign surface protocol, IIRC. At that
  time, the conclusion was to use a nested mini-compositor approach
  instead, which is demoed in weston clients as weston-nested.
 
 I did not.  That's quite frustrating, I could have saved myself some time. 
 I went back and looked and none of the posts mentioned embed or 
 plug/socket so that's why I didn't find them. :(
 
 Do you know if any code came about from the foreign surface proposals?

I don't recall if it did. I assume you found some email discussions,
you could ask directly from the people proposing them.

 The nested mini-compositor example doesn't build for me as I don't have 
 working EGL, so I never even noticed it!  Reading about it the approach 
 seems to be more suited to nested application situations, e.g. a web 
 browser embedding a document viewer.
 
 For the panel use case it seems like the wrong approach, as the embedded 
 panel objects are merely fastened to the panel like badges, rather than 
 part of the panel itself.  It seems a shame to reimplement a compositor in 
 the panel when we've already got a perfectly good compositor to use.
 
  I see your protocol definition lacks all documentation on how it is
  supposed to be used and implemented. A verbal description would be nice,
  giving an overview.
 
 I did try to give a quick overview in the email, but it was late last 
 night and I may not have been clear.
 
 I've pushed some doc updates to the protocol.xml file in my git repo.  But
 in terms of Jonas Ådahl's proposal, my protocol works the other way round:
 
A creates a main surface
A creates a hole on that surface and sets its position and size
A gets the uid (handle) from the server
A passes that uid to B via IPC
B creates a surface
B creates a plug on that surface with the uid it got from A
B receives a configure event from the server with the size of the hole
B creates a buffer of the correct size and renders its image to the
  surface
 
  How do you handle glitch-free resizing? Sub-surfaces handle glitch-free
  resizing by temporarily changing the sub-surface into synchronized
  mode, assuring the sub-surface has new content in the correct new size,
  and then atomically commits the whole tree of sub-surfaces with a
  commit to the root wl_surface. Do you have any synchronization
  guarantees like that? With separate processes cooperating to create a
  single window it will be even more important than with the
  existing sub-surfaces case, and you will need more IPC between the two
  clients. Using client1->client2 IPC would be more efficient than
  client1->server->client2.
 
 I don't.  Sorting out glitch-free interactive resizing is delegated to the 
 clients, although you can get pretty good glitchy resizing by B repainting 
 whenever it receives the configure event.

Except that if you do not define a way to push *all* new state across the
multiple clients to the server in an atomic way, nothing the clients
can do will guarantee glitch-free operation. You cannot rely on e.g.
two wl_surface.commits happening back-to-back; there is always a
possibility that the server will repaint between them, which means the
server uses only half of the new state, leading to a visual glitch.

It is very easy to design a protocol that works glitch-free most of the
time, and it may be hard to even purposefully trigger the potential
glitches, but the race can still be there.

 My anticipated use case is applets inside panels, which aren't typically 
 resized, so this implementation should be sufficient.

Right, but if you build a generic protocol for embedding, you really
should try hard to solve all these ugly, nasty corner cases. Experience
has