Hi Pavel,

Your example of 'child over child' is an interesting case which raises some 
design aspects of the desired picking algorithm:
1. Which node to pick when one node has a 'strict containership' over the touch 
center and the other node only has a fuzzy containership (the position falls in 
the fuzzy area).
2. Accounting for z-order for extended capture zone area.
3. Accounting for parent-child relationship.

Referring to your 'child over child' example:
http://i.imgur.com/e92qEJA.jpg

The conflict would arise when the touch point center falls in the capture 
zone area of child2 but also clearly falls within the strict bounds of 
child1.
Generally, when two control nodes compete for the same touch event (e.g. 
child1 & child2 in Daniel's diagram), it seems that we would like to give 
priority to "strict containership" over "fuzzy containership".
But in your case that's probably not the desired behavior.

Also note that in the general case there almost always exists some 
container/background node that strictly contains the touch point, but it 
would probably be an ancestor of the child node, so the usual parent-child 
relationship order will give preference to the child.

One way out is to honor the usual z-order for the extended area of child2, 
so that when a touch center hits the fuzzy area of child2, child2 is 
picked.

But this is not ideal for Daniel's example:
http://i.imgur.com/ELWamYp.png

where the two nodes don't strictly overlap, but their capture zones do. 
Preferring one child by z-order (which matches the order of the children 
in the parent) is not natural here, and we might do better to choose the 
node that is "closer" to the touch point.
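
To make "closer" concrete, here is one possible metric (just a sketch, 
the names are mine): the Euclidean distance from the touch point to the 
nearest point of the node's bounds, which is 0 whenever the point is 
strictly inside.

```java
// One possible "closeness" metric (illustrative only): the distance from
// a touch point to the nearest point of an axis-aligned bounds rectangle.
public class NodeDistance {

    /**
     * Returns 0 if (px, py) lies inside [minX..maxX] x [minY..maxY],
     * otherwise the Euclidean distance to the nearest edge or corner.
     */
    static double distanceToBounds(double px, double py,
                                   double minX, double minY,
                                   double maxX, double maxY) {
        // How far the point sticks out of the rectangle on each axis.
        double dx = Math.max(Math.max(minX - px, 0), px - maxX);
        double dy = Math.max(Math.max(minY - py, 0), py - maxY);
        return Math.hypot(dx, dy);
    }
}
```

A strict hit always yields distance 0, so this metric also gives us the 
"strict beats fuzzy" preference for free.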

So to summarize, I suggest this rough picking algorithm:
1. Choose all uppermost nodes which are not transparent to mouse events 
and contain the touch point center, either strictly or by their capture 
zone.
2. Remove every node that is strictly overlapped by another node and is 
below that node in z-order.
3. Out of those left, choose the "closest" node (the concept of "closest" 
requires some calculation which might not be trivial in the general case).
4. Once a node has been picked, we follow the usual node chain list for 
event processing.
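
A rough sketch of steps 1-3 in code (all the names and the PickNode type 
are mine for illustration, not JavaFX API; the ancestor rule encodes the 
parent-child preference I mentioned above):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the suggested picking steps, not JavaFX API.
public class FuzzyPicker {

    static class PickNode {
        final String name;
        final PickNode parent;               // null for the root
        final double minX, minY, maxX, maxY; // strict bounds
        final double captureRadius;          // fuzzy margin around the bounds
        final int zOrder;                    // higher = painted on top

        PickNode(String name, PickNode parent,
                 double minX, double minY, double maxX, double maxY,
                 double captureRadius, int zOrder) {
            this.name = name; this.parent = parent;
            this.minX = minX; this.minY = minY;
            this.maxX = maxX; this.maxY = maxY;
            this.captureRadius = captureRadius; this.zOrder = zOrder;
        }

        boolean containsStrictly(double x, double y) {
            return x >= minX && x <= maxX && y >= minY && y <= maxY;
        }

        /** 0 inside the bounds, otherwise distance to the nearest edge. */
        double distanceTo(double x, double y) {
            double dx = Math.max(Math.max(minX - x, 0), x - maxX);
            double dy = Math.max(Math.max(minY - y, 0), y - maxY);
            return Math.hypot(dx, dy);
        }

        boolean isAncestorOf(PickNode other) {
            for (PickNode p = other.parent; p != null; p = p.parent)
                if (p == this) return true;
            return false;
        }
    }

    static PickNode pick(List<PickNode> nodes, double x, double y) {
        // Step 1: keep nodes containing the point strictly or by capture
        // zone (a strict hit has distance 0, so it always qualifies).
        List<PickNode> hits = new ArrayList<>();
        for (PickNode n : nodes)
            if (n.distanceTo(x, y) <= n.captureRadius) hits.add(n);

        // Step 2: if two hits strictly contain the point, the lower one
        // by z-order is covered and drops out.
        List<PickNode> strict = new ArrayList<>(hits);
        hits.removeIf(n -> n.containsStrictly(x, y) && strict.stream().anyMatch(
                m -> m != n && m.containsStrictly(x, y) && m.zOrder > n.zOrder));

        // Parent-child preference: an ancestor yields to any surviving child.
        List<PickNode> survivors = new ArrayList<>(hits);
        hits.removeIf(n -> survivors.stream().anyMatch(n::isAncestorOf));

        // Step 3: of those left, pick the "closest" node; a strict hit has
        // distance 0, so strict containership beats fuzzy containership.
        PickNode best = null;
        for (PickNode n : hits)
            if (best == null || n.distanceTo(x, y) < best.distanceTo(x, y))
                best = n;
        return best; // step 4 would dispatch along the usual chain from here
    }
}
```

With this sketch, a touch just outside a child but inside its capture zone 
picks the child rather than the strictly-containing background parent, 
while a touch outside every capture zone still falls through to the parent.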

Care must be taken that we do not break the current model for event 
processing. For example, if a node is picked by its capture zone, the 
position does not fall within the boundaries of the node, so existing 
event handling code that relies on that would break. So I think the 
capture zone feature should be selectively enabled for certain types of 
nodes, such as buttons or other classic controls.
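
For example, the picker could consult a hypothetical per-node opt-in flag, 
so only nodes that explicitly enabled the feature are ever picked outside 
their strict bounds (names are mine, just a sketch):

```java
// Hypothetical opt-in check: a node is picked outside its strict bounds
// only if it explicitly enabled its capture zone.
public class CaptureZoneOptIn {
    static boolean isPicked(double distanceToBounds, double captureRadius,
                            boolean captureZoneEnabled) {
        if (distanceToBounds == 0) return true; // strict hit: always picked
        // Fuzzy hit: only honored for nodes that opted in.
        return captureZoneEnabled && distanceToBounds <= captureRadius;
    }
}
```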

Regards,
Seeon





-----Original Message-----
From: Pavel Safrata 
Sent: Tuesday, November 12, 2013 1:11 PM
To: Daniel Blaukopf
Cc: OpenJFX
Subject: Re: discussion about touch events

(Now my answer using external link)

Hello Daniel,
this is quite similar to my idea described earlier. The major difference is the 
"fair division of capture zones" among siblings. It's an interesting idea, 
let's explore it. What pops first is that children can also overlap. So I think 
it would behave like this (green capture zones
omitted):

Child in parent vs. Child over child: http://i.imgur.com/e92qEJA.jpg

..wouldn't it? From user's point of view this seems confusing, both cases look 
the same but behave differently. Note that in the case on the right, the parent 
may be still the same, developer only adds a fancy background as a new child 
and suddenly the red child can't be hit that easily. What do you think? Is it 
an issue? Or would it not behave this way?

Regards,
Pavel

On 12.11.2013 12:06, Daniel Blaukopf wrote:
> (My original message didn't get through to openjfx-dev because I used 
> inline images. I've replaced those images with external links)
>
> On Nov 11, 2013, at 11:30 PM, Pavel Safrata <pavel.safr...@oracle.com 
> <mailto:pavel.safr...@oracle.com>> wrote:
>
>> On 11.11.2013 17:49, Tomas Mikula wrote:
>>> On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler 
>>> <phdoerf...@gmail.com <mailto:phdoerf...@gmail.com>> wrote:
>>>> I see the need to be aware of the area that is covered by fingers 
>>>> rather than just considering that area's center point.
>>>> I'd guess that this adds a new layer of complexity, though. For
>>>> instance:
>>>> Say we have a button on some background and both the background and 
>>>> the button do have an onClick listener attached. If you tap the 
>>>> button in a way that the touched area's center point is outside of 
>>>> the buttons boundaries - what event will be fired? Will both the 
>>>> background and the button receive a click event? Or just either the 
>>>> background or the button exclusively? Will there be a new event 
>>>> type which gets fired in case of such area-based taps?
>>>>
>>>> My suggestion would therefore be to have an additional area tap 
>>>> event which gives precise information about diameter and center of 
>>>> the tap. Besides that there should be some kind of "priority" for 
>>>> choosing which node's onClick will be called.
>>> What about picking the one that is closest to the center of the touch?
>>>
>>
>> There is always something directly on the center of the touch 
>> (possibly the scene background, but it can have event handlers too).
>> That's what we pick right now.
>> Pavel
>
> What Seeon, Assaf and I discussed earlier was building some fuzziness 
> into the node picker so that instead of each node capturing only 
> events directly on top of it:
>
> Non-fuzzy picker: http://i.imgur.com/uszql8V.png
>
> ..nodes at each level of the hierarchy would capture events beyond 
> their borders as well:
>
> Fuzzy picker: http://i.imgur.com/ELWamYp.png
>
> In the above, "Parent" would capture touch events within a certain 
> radius around it, as would its children "Child 1" and "Child 2". Since 
> "Child 1" and "Child 2" are peers, they would have a sharp division 
> between them, a watershed on either side of which events would go to 
> one child node or the other. This would also apply if the peer nodes 
> were further apart; they would divide the no-man's land between them.
> Of course this no-man's land would be part of "Parent" and could 
> be touch-sensitive - but we won't consider "Parent" as an event target 
> until we have ruled out using one of its children's extended capture 
> zones.
>
> The capture radius could either be a styleable property on the nodes, 
> or could be determined by the X and Y size of a touch point as 
> reported by the touch screen. We'd still be reporting a touch point, 
> not a touch area. The touch target would be, as now, a single node.
>
> This would get us more reliable touch capture at leaf nodes of the 
> node hierarchy at the expense of it being harder to tap the 
> background. This is likely to be a good trade-off.
>
> Daniel
>
>
>
>>
>>> Tomas
>>>
>>>> Maybe the draw order / order in the scene graph / z buffer value 
>>>> might be sufficient to model what would happen in the real, 
>>>> physical world.
>>>> Am 11.11.2013 13:05 schrieb "Assaf Yavnai" <assaf.yav...@oracle.com
>>>> <mailto:assaf.yav...@oracle.com>>:
>>>>
>>>>> The ASCII sketch looked fine on my screen before I sent the mail 
>>>>> :( I hope the idea is clear from the text (now in the reply dialog 
>>>>> it also looks good)
>>>>>
>>>>> Assaf
>>>>> On 11/11/2013 12:51 PM, Assaf Yavnai wrote:
>>>>>
>>>>>> Hi Guys,
>>>>>>
>>>>>> I hope that I'm right about this, but it seems that touch events 
>>>>>> in glass are translated (and reported) as single-point events 
>>>>>> (x & y) without an area, like pointer events.
>>>>>> AFAIK, the controls respond to touch events the same as to mouse 
>>>>>> events (using the same pickers) and as a result a button press, 
>>>>>> for example, will only be triggered if the x & y of the touch 
>>>>>> event is within the control area.
>>>>>>
>>>>>> This means that small controls, or even quite large controls 
>>>>>> (like buttons with text) will often get missed because of the 
>>>>>> 'strict' node picking,
>>>>>> although from a UX point of view it is strange as the user 
>>>>>> clearly pressed on a node (the finger was clearly above it) but 
>>>>>> nothing happens...
>>>>>>
>>>>>> With the current implementation it's hard to use small features 
>>>>>> in controls, like scrollbars in lists, and it is almost impossible 
>>>>>> to implement something like a 'screen navigator' (the series of 
>>>>>> small dots at the bottom of a smart phone's screen which allows 
>>>>>> you to jump directly to a 'far away'
>>>>>> screen)
>>>>>>
>>>>>> To illustrate it, consider the below low-resolution sketch, where 
>>>>>> the "+" is the actual x,y reported, the ellipse is the finger 
>>>>>> touch area and the rectangle is the node.
>>>>>> With the current implementation this type of tap will not trigger 
>>>>>> the node's handlers
>>>>>>
>>>>>>                ___
>>>>>>               /   \
>>>>>>              /  +  \       in this scenario the 'button' will not
>>>>>>         ____/_______\____  get pressed (the '+' falls outside the
>>>>>>        |    \       /    | node)
>>>>>>        |     \_____/     |
>>>>>>        |_________________|
>>>>>>
>>>>>> If your smart phone supports it, turn on the touch debugging 
>>>>>> options in settings and see that each point translates to a quite 
>>>>>> large circle, and whatever falls in it, or reasonably close to 
>>>>>> it, gets picked.
>>>>>>
>>>>>> I want to start a discussion to understand if my perspective is 
>>>>>> accurate and to understand what can be done, if any, for the 
>>>>>> coming release or the next one.
>>>>>>
>>>>>> We might use the recently opened RT-34136 
>>>>>> <https://javafx-jira.kenai.com/browse/RT-34136> for logging this, 
>>>>>> or open a new JIRA for it
>>>>>>
>>>>>> Thanks,
>>>>>> Assaf
>
