On 08/03/2024 15:23, Pekka Paalanen wrote:
On Fri, 8 Mar 2024 14:50:30 +0000
Terry Barnaby <ter...@beam.ltd.uk> wrote:

On 05/03/2024 12:26, Pekka Paalanen wrote:
On Mon, 4 Mar 2024 17:59:25 +0000
Terry Barnaby <ter...@beam.ltd.uk> wrote:
...

I would have thought it better/more useful to have a Wayland API call
like "stopCommiting" so that an application can sort things out for this
and other things, providing more application control. But I really have
only very limited knowledge of the Wayland system. I just keep hitting
its restrictions.
Right, Wayland does not work that way. Wayland sees any client as a
single entity, regardless of its internal composition of libraries and
others.

When Wayland delivers any event that causes the client to want to
resize a window, whether it is an explicit resize event or an input
event (or the client just spontaneously decides to resize), it is then
up to the client itself to make sure it resizes everything it needs to
and keeps everything atomic, so that the end user does not see glitches
on screen.

Sub-surfaces' synchronous mode was needed to let clients batch the
updates of multiple surfaces into a single atomic commit. It is the
desync mode that was a non-mandatory add-on. The synchronous mode was
needed because there was no other way to guarantee that multiple
wl_surface.commit requests apply simultaneously. Without it, if you
commit surface A and then surface B, nothing would prevent the
compositor from momentarily showing A updated while B is not.
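
For illustration, a rough libwayland-client sketch of that batching; the
wl_subcompositor, surfaces and buffers are assumed to already exist, and
the names are made up:

  #include <wayland-client.h>

  /* Minimal sketch: batch a child and a parent update so they reach the
   * screen atomically. All handles and buffers are assumed to exist. */
  static void commit_atomically(struct wl_subcompositor *subcompositor,
                                struct wl_surface *parent, struct wl_buffer *parent_buf,
                                struct wl_surface *child, struct wl_buffer *child_buf,
                                int32_t w, int32_t h)
  {
      struct wl_subsurface *sub =
          wl_subcompositor_get_subsurface(subcompositor, child, parent);
      wl_subsurface_set_sync(sub);   /* sync is the default mode anyway */

      /* This commit only lands in the sub-surface cache... */
      wl_surface_attach(child, child_buf, 0, 0);
      wl_surface_damage(child, 0, 0, w, h);
      wl_surface_commit(child);

      /* ...and is applied together with the parent's commit, so the
       * compositor can never show one surface updated without the other. */
      wl_surface_attach(parent, parent_buf, 0, 0);
      wl_surface_damage(parent, 0, 0, w, h);
      wl_surface_commit(parent);
  }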

Wayland intentionally did not include any mechanism in its design
intended for communication between a single client's internal
components. Why use a display server as an IPC middle-man for something
that should be process-internal communication? After all, Wayland is
primarily a protocol for inter-process communication.
Well as you say it is up to the client to perform all of the surface
resize work. So it seems to me that if the client had an issue with
pixel-perfect resizing it could always set any of its desynced surfaces
to sync mode, or just stop updating them, while it resizes. I don't see
why Wayland needs to ignore the client's request to set a subsurface
desynced further down the tree.
You're right, but it's in the spec now. I've gained a bit more
experience in the decade after writing the sub-surface spec.

You can still work around it by setting all sub-surfaces always desync.
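
For illustration only, and assuming you actually hold the wl_subsurface
handle for every level (which a toolkit may not expose), the workaround
amounts to something like:

  /* Sketch: with every sub-surface in the chain set desync, a commit on
   * the leaf applies immediately regardless of its parents. Both handles
   * here are hypothetical. */
  wl_subsurface_set_desync(intermediate_subsurface);
  wl_subsurface_set_desync(video_subsurface);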

Oh you wrote it, thanks for the work!

So maybe time for version n+1 then :)

Actually, allowing sub-subsurfaces to work desynced should not break any existing clients, as they cannot use it yet. Obviously new clients written for it would not work on older Wayland servers, though.

It's difficult to desync all of the higher surfaces in a Qt (or probably any other widget set) application: they are controlled by Qt, and Qt does not give you access to the subsurfaces it has created. It would have been better to have a wl_surface_set_desync(wl_surface*) rather than a wl_subsurface_set_desync(wl_subsurface*).

With clients using lots of libraries/subsystems it is better not to touch their internal workings unless you have to. Normally you try to work at the lowest common denominator, in this case the Wayland display system, as that is the shared module they both use (at least when driving a Wayland display server). This is why it is nice to have a surface that is almost totally independent of the others and is simply shown/not shown and stacked above/below other surfaces, like an X window. Wayland surfaces are mostly this as far as I can see, apart from this desync mode, although maybe there are other exceptions.

I have asked in the Qt forums if they could provide some sort of API to allow setting desync up the tree, but this may not happen, and it might be difficult for them as it could mess up their applications' rendering. It also does not match the other display system APIs that they support. The higher-level QWidgets ideally need synced surfaces; it's just the video surfaces that need desync. Really I think this is the Wayland server's job.



In fact, does it return an error to the client when the Wayland server
ignores this command?
There is no "return error" in Wayland. Either a request succeeds, or
the client is disconnected with an error. It's all asynchronous, too.

Any possibility for graceful failure must be designed into protocol
extensions at one step higher level. If there is room for a graceful
failure, it will be mentioned in the XML spec with explicit messages to
communicate it.
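
Roughly speaking, the only client-side check is after the fact, once the
connection has already failed; a sketch with libwayland-client (names
assumed):

  #include <errno.h>
  #include <stdio.h>
  #include <wayland-client.h>

  /* Sketch: detect that the compositor has already disconnected us with
   * a fatal protocol error. There is no per-request error return. */
  static void check_fatal_error(struct wl_display *display)
  {
      if (wl_display_dispatch(display) == -1) {
          int err = wl_display_get_error(display);
          if (err == EPROTO) {
              const struct wl_interface *iface = NULL;
              uint32_t id = 0;
              uint32_t code = wl_display_get_protocol_error(display, &iface, &id);
              fprintf(stderr, "fatal protocol error %u on %s#%u\n",
                      code, iface ? iface->name : "unknown", id);
          }
      }
  }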

Which command do you mean?

I meant the wl_subsurface_set_desync() API call on a sub-subsurface, which doesn't work. As no errors were returned, it took a long time to find out why things weren't working; some lower-level threads just locked up.

Personally I think these sorts of occasional, performance-irrelevant methods/requests/commands should be synchronous (maybe layered on an asynchronous comms system) and return an error. That makes developing clients much easier and allows them to recover from issues, or at least tell the user there is a problem, at the point of the call. As part of my previous work I have designed/written and use an Object Access protocol API that I guess is similar to the lower-level Wayland protocol. It too has an IDL, but it supports both asynchronous and synchronous RPCs.



There is no "ignore" with either wl_surface or wl_subsurface.
wl_surface.commit is always acted upon, but the sub-surface sync mode
determines whether the state update goes to the screen or to a cache.
No state update is ignored unless you destroy your objects. The frame
callbacks that seem to go unanswered are not ignored, they are just
sitting in the cache waiting to apply when the parent surface actually
updates on screen.
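
A rough sketch of why a thread waiting on such a callback can appear to
hang; 'video_surface' here is an assumed handle for the sub-surface that
is effectively still synchronized:

  #include <stddef.h>
  #include <wayland-client.h>

  /* Sketch: the frame callback requested below is part of the cached
   * state, so on a synchronized sub-surface it cannot fire until the
   * parent surface (and its parents) commit and the update reaches the
   * screen. A renderer blocking on frame_done() will appear to hang. */
  static void frame_done(void *data, struct wl_callback *cb, uint32_t time)
  {
      wl_callback_destroy(cb);
      /* safe to draw the next video frame now */
  }

  static const struct wl_callback_listener frame_listener = { frame_done };

  static void request_frame(struct wl_surface *video_surface)
  {
      struct wl_callback *cb = wl_surface_frame(video_surface);
      wl_callback_add_listener(cb, &frame_listener, NULL);
      wl_surface_commit(video_surface);
  }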

Well, I consider it an ignore when I have told a subsurface to go into desync mode, it doesn't, and no error is returned. Unless there is a way to determine that this did not work?




Thanks,
pq

Anyway, I have just finished debugging our new IMX8mp board and have to tidy up the Linux port now. There is a kernel "crash" at the moment, which could be related to the way I am trying to get the video stream to a surface and is triggering some NXP 2D/3D hardware driver issue (probably a mutex issue).

Thanks for the work on Wayland.

