Hi,

Wow, thanks for the thorough email. I'm still trying to digest it (it's quite a read, and a big wall of text). There are just a couple of things that I'd like to point out:

On 08/10/2020 13:26, Shawn Rutledge wrote:
> Qt Quick is basically trying to detect gestures in a distributed fashion: if you press and release a touchpoint on a button, that’s a click gesture.  If you press, drag and release, it’s a drag gesture, but it’s up to an item or handler to detect that’s what’s going on.  Items and handlers need to have enough autonomy to decide for themselves what the user is trying to do, in that geometric neighborhood where they live, while other users or other fingers might be doing something else in different neighborhoods at the same time.  That idea is very much in conflict with the idea that stopping propagation of events by accepting them could ever be OK.

I think there's some "truth untold" here. The fact that events propagate, and how they propagate, has always been event-specific (and possibly -- unfortunately -- UI-stack-specific, but such differences should never have existed in the first place).

Specifically: mouse events were designed for '90s (?) widgets; they come with assumptions that clearly show this, like the ones you mentioned:

* there is only one mouse cursor;
* a mouse event lands first and foremost on the widget below the cursor;
* if the widget accepts it, it becomes the only grabber;
* if the widget ignores it, it propagates to the parent (why the parent? why not the widget below it in the Z-stack, which might be a sibling? because widgets are meant to be laid out, not to stack on top of each other).

And so on.

Does the same set of assumptions hold in 2020? And does it hold for non-widget UIs? (For instance in Qt Quick, where completely different hierarchies of items are allowed to visually overlap.)

It should be pretty clear by now that no, they don't, and therefore mouse and touch handling needs to be rethought, maybe from scratch.



> I think that when a mouse button or a touchpoint is pressed, we need to let that event propagate all the way down, until every item and handler that could possibly be interested has had a chance to see it.  In Qt 5.10 or so, we already added the passive grab concept: a pointer handler can subscribe for updates without having to say “the buck stops here”.  But in order to have a chance to do that, it needs to see the press event in the first place.  Any item that is on top by z-order can deny it the opportunity though, by stopping propagation.  So eventually it seems like we need to end up with an agreement that individual items don’t stop propagation anymore: they need to be humble and cooperative, not arrogant and unilateral.  That opens up the opportunity for some handlers to “do their thing” non-exclusively: e.g. PointHandler can act upon the movements of an individual touchpoint without ever taking sole responsibility.  So it can be used to show feedback as an independent aspect of the UI, regardless what else is going on at the same time; and it does it without the older event filtering mechanisms.

> But the exclusive grab still means what grabMouse() always did: the item or handler is pretty sure that it has responsibility for the sequence of QEventPoint movements.  It has recognized a gesture and is acting upon it.  In pointer handlers, usually active() becomes true and the handler takes the exclusive grab of the point(s) within its bounds at the same time.  But it does not preempt passive grabs of other handlers: they still get to watch the movements too.  Conflict can arise when one handler (probably one that had a passive grab until now) decides to steal the exclusive grab from something else, and then we have a whole negotiation mechanism in place to deal with that: both handlers have to agree that it’s OK.

> But how can all that keep working when we still have so many old-fashioned item subclasses in use?  Well it’s tricky… we keep fixing those cases one at a time.  So far there always seems to be a way.  The sooner we can deprecate some of the older techniques, the better, IMO.  But one of the nagging questions is still what should QEvent::accept() really mean?  Is it OK to have it mean something different in Qt Quick now?  We are not allowed to change what it means in widgets.  I wish we had time to refactor event delivery to be consistent everywhere, but probably we will never find time for that.

"How to get there" => I have no idea -- I haven't done any research in that regard, compared the competing APIs, and so on. But something is pretty clear to me here, and it's causing a knee-jerk reaction: whatever the new system is, it needs to:

1) be independent of the current one, which should be marked as "legacy" and somehow kept working and otherwise left alone; as you write, trying to shoehorn a more complex system on top of the existing one will just cause regressions all over the place;

2) live in Qt Gui, and therefore serve Qt Quick, Qt Widgets and Qt 3D at the same time;

3) be made 100% of public APIs, available in both C++ and QML. A user should be able to write a Flickable in Widgets using nothing but public and supported APIs. And it should just work™ when combined with a Flickable coming from Qt.

The point is, widgets also need to handle multiple cursors. Widgets also need kinetic flicking while reacting to presses (think of a QPushButton in a QScrollArea). Widgets also need multitouch / gesture support. Leaving them alone as "done" or "tier-2" sends an awful message to those users. And let's not open the Pandora's box of QQuickWidget.


Thanks,
--
Giuseppe D'Angelo | giuseppe.dang...@kdab.com | Senior Software Engineer
KDAB (France) S.A.S., a KDAB Group company
Tel. France +33 (0)4 90 84 08 53, http://www.kdab.com
KDAB - The Qt, C++ and OpenGL Experts

_______________________________________________
Development mailing list
Development@qt-project.org
https://lists.qt-project.org/listinfo/development
