Hi Pavel,

I agree with Assaf that it's a bit preliminary to discuss technical details of the implementation. However, I think that the general idea of using large-enough rectangular areas for registering Nodes dynamically and picking the "right" one to deliver a touch event to might still be useful.

I'd still like to clarify a few points that you've raised:

On 12/13/2013 12:30 AM, Pavel Safrata wrote:
let me throw a few random problems without really thinking this through.
First of all, the whole thing relies on HashMaps, but we can't really
use nodes in hash-based collections as they are not immutable. To

What do you mean by this exactly? Note that the Node class doesn't override the hashCode() method, so whether it's mutable or not doesn't really matter because its hash code never changes. Therefore, I don't see a reason not to store Nodes in hash-based collections.
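
Purely for illustration, here's a minimal sketch (class name is mine) of why mutating a Node doesn't break hash-based lookup, given that hashCode()/equals() are not overridden and thus identity-based:

import javafx.scene.Node;
import javafx.scene.shape.Rectangle;
import java.util.HashSet;
import java.util.Set;

public class NodeHashDemo {
    public static void main(String[] args) {
        Set<Node> registered = new HashSet<>();
        Rectangle r = new Rectangle(100, 50);
        registered.add(r);

        // Mutating the node does not affect its identity-based hash code...
        r.setWidth(500);
        r.relocate(30, 40);

        // ...so it can still be found in the set.
        System.out.println(registered.contains(r)); // true
    }
}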


maintain the registrations with the rectangles, we would need to do some
computations (bounds) eagerly.

Yes. But we only need to compute the bounds relative to the stage (or the scene), which should be fast enough for any reasonably deep node hierarchy since we don't need to call any native methods for this.
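
That computation is a pure scene-graph transform, something along the lines of (helper name is mine):

import javafx.geometry.Bounds;
import javafx.scene.Node;

final class SceneBoundsUtil {
    // Transforms a node's local bounds into scene coordinates; no native calls involved.
    static Bounds boundsInScene(Node node) {
        return node.localToScene(node.getBoundsInLocal());
    }
}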

There is again the problem of restricting
to touch-sensitive nodes only.

A matter of checking a boolean flag/property before adding a node to a hashtable? I don't think this is a real problem.

Finally, even if your proposal turns out
to work like a charm, it reduces the number of examined nodes, but the
most important and difficult part - how do we choose the right one -
needs to be specified.

For this latter part I don't have a solution, unfortunately. I only wanted to suggest a way to resolve the performance issue that would arise if we decided to iterate over all the nodes in a scene or to check intersections pixel by pixel.

However, I could speculate on this topic. Suppose we're using the above idea, and hence reduce the number of nodes to be checked on each touch event to a reasonably small number. I'd guess we would have to deal with about five nodes at a time, or maybe a dozen at most; I can hardly imagine the number getting any higher than that (provided the UI is touch-aware and uses nodes of reasonably large dimensions, of course). Now, I think it's not a problem at all to compute the areas of intersection between the "touch oval" and all of the selected nodes (of which, remember, there are only a few). The largest intersection would then determine the node to send the touch event to, I guess.
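
As a rough sketch of that "largest intersection wins" rule (for simplicity it approximates both the touch oval and each candidate node by axis-aligned bounding boxes; the class and method names are made up):

import javafx.geometry.BoundingBox;
import javafx.geometry.Bounds;
import javafx.scene.Node;
import java.util.Collection;

final class TouchTargetChooser {

    // Picks the candidate whose scene bounds overlap the touch area the most.
    static Node chooseTarget(Collection<Node> candidates,
                             double centerX, double centerY,
                             double radiusX, double radiusY) {
        Bounds touch = new BoundingBox(centerX - radiusX, centerY - radiusY,
                                       2 * radiusX, 2 * radiusY);
        Node best = null;
        double bestArea = 0;
        for (Node n : candidates) {
            Bounds b = n.localToScene(n.getBoundsInLocal());
            double area = overlapArea(touch, b);
            if (area > bestArea) {
                bestArea = area;
                best = n;
            }
        }
        return best; // null means no candidate intersects the touch area
    }

    private static double overlapArea(Bounds a, Bounds b) {
        double w = Math.min(a.getMaxX(), b.getMaxX()) - Math.max(a.getMinX(), b.getMinX());
        double h = Math.min(a.getMaxY(), b.getMaxY()) - Math.max(a.getMinY(), b.getMinY());
        return (w > 0 && h > 0) ? w * h : 0;
    }
}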

--
best regards,
Anthony

Regards,
Pavel

On 20.11.2013 10:02, Anthony Petrov wrote:
How about you divide the top-level window surface into equal rectangles
of size, say, 0.5"x0.5" (the size in pixels will depend on the screen
DPI). Each rectangle is basically a hashmap of nodes.

The nodes that are touch sensitive register and update themselves with
the rectangles when their layout information changes (when they're
moved and/or resized). This should work reasonably fast given O(1)
complexity for hashmap operations.
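
Purely as an illustration of the registration side (the class, cell size and map layout are all assumptions of mine, not a proposal for the actual code):

import javafx.geometry.Bounds;
import javafx.scene.Node;
import java.util.*;

final class TouchGrid {
    static final double CELL_SIZE = 48; // roughly 0.5" on a 96 DPI screen

    // Each cell maps to the set of touch-sensitive nodes whose scene bounds intersect it.
    private final Map<Long, Set<Node>> cells = new HashMap<>();

    void register(Node node) {
        Bounds b = node.localToScene(node.getBoundsInLocal());
        for (long cx = cell(b.getMinX()); cx <= cell(b.getMaxX()); cx++) {
            for (long cy = cell(b.getMinY()); cy <= cell(b.getMaxY()); cy++) {
                cells.computeIfAbsent(key(cx, cy), k -> new HashSet<>()).add(node);
            }
        }
        // Would be called again (after unregistering) whenever the node moves or resizes.
    }

    Set<Node> nodesIn(long cx, long cy) {
        return cells.getOrDefault(key(cx, cy), Collections.emptySet());
    }

    static long cell(double coordinate) {
        return (long) Math.floor(coordinate / CELL_SIZE);
    }

    private static long key(long cx, long cy) {
        return (cx << 32) ^ (cy & 0xffffffffL); // simple packing, good enough for a sketch
    }
}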

When processing a touch event, given the low-resolution of the
rectangle mesh, we can quickly identify a set of rectangles that a
touch area (an oval) intersects. There usually will be no more than
four such rectangles. After that we construct a union of all nodes
lying in those rectangles, and simply iterate over the union to
choose the node lying closest to the touch area (or having the largest
intersection with it, whatever). Again, this should work fast because
fetching nodes from the hashmaps is fast, and the number of nodes that
need to be examined should be relatively small.
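
And the lookup side, reusing the hypothetical TouchGrid sketched above: collect the few cells the touch oval's bounding box overlaps, union their nodes, and hand the small candidate set to whatever selection rule we end up with.

import javafx.scene.Node;
import java.util.HashSet;
import java.util.Set;

final class TouchGridLookup {
    static Set<Node> candidates(TouchGrid grid,
                                double centerX, double centerY,
                                double radiusX, double radiusY) {
        Set<Node> result = new HashSet<>();
        long minCx = TouchGrid.cell(centerX - radiusX);
        long maxCx = TouchGrid.cell(centerX + radiusX);
        long minCy = TouchGrid.cell(centerY - radiusY);
        long maxCy = TouchGrid.cell(centerY + radiusY);
        for (long cx = minCx; cx <= maxCx; cx++) {
            for (long cy = minCy; cy <= maxCy; cy++) {
                result.addAll(grid.nodesIn(cx, cy));   // typically at most four cells
            }
        }
        return result;
    }
}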

--
best regards,
Anthony

On 11/19/2013 08:18 PM, Daniel Blaukopf wrote:

On Nov 19, 2013, at 4:34 PM, Pavel Safrata <pavel.safr...@oracle.com>
wrote:

Hello Daniel,

On 17.11.2013 15:09, Daniel Blaukopf wrote:
Hi Pavel,

I think we do use CSS to configure feel as well as look - and
this is feel, not look - but I also don’t feel strongly about
whether this needs to be in CSS.

This was exactly my point.
Yes, we agree. I should have phrased the above differently to show that.



I like your idea of simply picking the closest touch sensitive node
that is within range. That puts the burden on the touch event to
describe what region it covers. On the touch screens we are
currently looking at, a region would be defined as an oval - a
combination of centre point, X diameter and Y diameter. However the
touch region can be any shape, so might need to be represented as a
Path.

I don't think we need to be precise about the shape; all of this is
about fixing imprecise touches where the shape is rather
accidental. I think even a circle with an average diameter would be
sufficient to achieve the goal.
For now it certainly is sufficient. However, these touch devices can
describe more complex shapes for the touch contact area and it is
possible we will want to take advantage of that in the future. We
should certainly optimize for a circle or oval shape.
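
For the circle/oval case the containment test is trivial, assuming the touch area is reported as a center plus X/Y radii (names are mine):

final class TouchOval {
    // True if (px, py) lies inside the ellipse with the given center and radii.
    static boolean contains(double centerX, double centerY,
                            double radiusX, double radiusY,
                            double px, double py) {
        double nx = (px - centerX) / radiusX;
        double ny = (py - centerY) / radiusY;
        return nx * nx + ny * ny <= 1.0;
    }
}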


Iterating over pixels just isn’t going to work though. If we have a
300dpi display the touch region could be 150 pixels across and have
an area of nearly 18000 pixels. Instead we’d want a way to ask a
parent node, “In your node hierarchy, which of your nodes’ borders
is closest to this region”. So we’d need to come up with an
efficient algorithm to answer this question. We’d only ask this
question for nodes with extended capture zone.

The nodes with extended capture zone may be fully or partially
hidden behind nodes without it and this needs to be taken into
account. How would that be done if the question were limited to
nodes with extended capture zone? And even if it wasn't, what would
the position of the border tell us about the area covered by the
node? It can be a line... Also, what I wanted to achieve is to pick the
node which has visible pixels in the picking area even if its border
is hidden behind another node or is somewhere far away. This wouldn't
be possible, would it?


We could reasonably limit the algorithm to dealing with convex shapes.

Can we? What about paths, polygons etc?
I realize that it is possible to describe touch sensitive concave
shapes, but I am not sure they matter for this. If developers are
going to go to the trouble of defining a concave shape that they want
to be touch sensitive within its area but not in all of its bounding
box, are they really then going to want that area to be extended? I’d
consider a concave touch shape with extended capture zone to be
sufficiently unlikely that we could treat it as convex. Which, I
realize, is not quite what my proposed algorithm does.


Then we can consider an imaginary line L from the node center point
to the touch center point. The intersection of L with the node
perimeter is the closest point of contact.

It isn't. Imagine a very wide and flat (small height) rectangle and
the touch point directly above its upper-left corner. The closest
point is the corner, not the point on L which is close to
the rectangle's center and may be many times farther.
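
For an axis-aligned rectangle the actual closest point is found by clamping, which in the wide-and-flat case picks the corner rather than the intersection of line L with the perimeter - a small sketch of that, assuming scene-space bounds:

import javafx.geometry.Bounds;

final class ClosestPoint {
    // Distance from (px, py) to the nearest point of the rectangle b (0 if inside).
    static double distanceToBounds(Bounds b, double px, double py) {
        double cx = Math.max(b.getMinX(), Math.min(px, b.getMaxX()));
        double cy = Math.max(b.getMinY(), Math.min(py, b.getMaxY()));
        return Math.hypot(px - cx, py - cy);
    }
}
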
You are correct, my algorithm won’t find the closest node. Back to
the drawing board.

Thanks,
Daniel


If this point is also within the touch area then we have a
potential match. We iterate over all nearby nodes with extended
capture zone in order to find the best match.

This will then be O(n) in both time and space for n nearby nodes,
given constant time to find the intersection of L with the node
perimeter. This assumption will be true for rectangular, oval and
rounded rectangle nodes.

So in summary, if I understand this algorithm correctly I don't
think it's going to work. On the other hand, I admit that computing
18000 pixels is probably not viable. Right now I don't have any
solution; I'll continue thinking...

Thanks,
Pavel


Thanks,
Daniel


On Nov 15, 2013, at 11:09 PM, Pavel Safrata
<pavel.safr...@oracle.com> wrote:

Hello,
let me start with a few comments.

"changing behavior based on which nodes have listeners on them" -
absolutely not. We have capturing, bubbling, hierarchical event
types, so we can't decide which nodes listen (in the extreme case,
scene can handle Event.ANY and perform actions on the target node
based on the event type).

"position does not fall in the boundaries of the node" - I don't
think it will be very harmful. Of course it's possible for users
to write handlers that will be affected, but I don't think it
happens often; it seems quite hard to invent such a handler. The
delivery mechanism should be absolutely fine with it; we have
other cases like that (for instance, dragging can be delivered to
a node completely outside of the mouse position). Of course, picking a 3D
node in its capture zone would mean a useless PickResult (texture
coordinates etc.).

CSS-accessible vs. property-only - I don't have a strong opinion.
I agree it's rather "feel" than "look", on the other hand I think
there are such things already (scrollbar policy for instance).


Now I'll bring another problem to the concept. Take the situation
from Daniel's original picture with two siblings competing for the
capture zones:
http://i.imgur.com/ELWamYp.png
Put each of the red children into its own group - they are no longer
siblings, but the competition should still work.

The following may be a little wild, but anyway - have one of the
siblings with capture zone and the other one without it, the one
without it partly covering the one with it. Wouldn't it be great
if the capture zone was present around the visible part of the
node (reaching over the edge of the upper node)? I think it would
be really intuitive (fuzzy picking of what you see), but it's
getting pretty complicated.

From now on, I'll call a node with an enabled capture zone "touch
sensitive".

The only algorithm I can think of that would provide great results
is:
- Pick normally at the center. If the picked node is touch
sensitive, return it.
- Otherwise, run picking for each pixel in the touch area, find
the closest one belonging to a touch sensitive node and return
that node (if there is none, then of course return the node at the
center).

Obviously we can hardly do so many picking rounds. But it can be
significantly optimized:
- Perform the area picking in one pass, filling an array -
representing pixels - with the nodes picked on them
- Descend only when bounds intersect with the picking area
- Don't look farther from the center than the already found best
match
- Don't look at pixels whose node has already been picked
- For many nodes (rectangular, circular, with pickOnBounds etc.),
instead of testing containment many times, we can quickly tell the
intersection with the picking area
- Perhaps also checking every nth pixel would be sufficient (see the
sketch below)
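
A very rough sketch of that per-pixel pass with two of the optimizations above (skip pixels farther than the best match found so far, check only every other pixel). pickNodeAt() and isTouchSensitive() are hypothetical placeholders for whatever the real picking code would provide:

import javafx.scene.Node;
import java.util.function.Predicate;

final class AreaPickSketch {
    interface PixelPicker { Node pickNodeAt(double x, double y); }

    static Node pickInArea(PixelPicker picker, Predicate<Node> isTouchSensitive,
                           double centerX, double centerY, double radius) {
        Node center = picker.pickNodeAt(centerX, centerY);
        if (center != null && isTouchSensitive.test(center)) {
            return center;               // the normal pick already hit a touch-sensitive node
        }
        Node best = null;
        double bestDist = Double.MAX_VALUE;
        int r = (int) Math.ceil(radius);
        for (int dy = -r; dy <= r; dy += 2) {        // "every nth pixel" with n = 2
            for (int dx = -r; dx <= r; dx += 2) {
                double dist = Math.hypot(dx, dy);
                if (dist > radius || dist >= bestDist) {
                    continue;            // outside the circle, or no better than current best
                }
                Node n = picker.pickNodeAt(centerX + dx, centerY + dy);
                if (n != null && isTouchSensitive.test(n)) {
                    best = n;
                    bestDist = dist;
                }
            }
        }
        return best != null ? best : center;
    }
}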

This algorithm should be reasonably easy to code and very robust
(not suffering from various node-arrangement corner-cases), but
I'm still not sure about the performance (depends mostly on the
capture zone size - 30-pixel zones may result in calling
contains() nearly a thousand times, which might kill it). But perhaps
(hopefully) it can be perfected. Right now I can't see any other
algorithm that would work well and would result in a more efficient
implementation (the search for overlapping nodes and closest
borders etc. is going to be pretty complicated as well, if it's
even possible to make it work).

What do you think? Any better ideas?

Pavel


On 13.11.2013 22:09, Daniel Blaukopf wrote:
Hi Seeon,

Summarizing our face to face talk today:

I see that the case described by Pavel is indeed a problem and
agree with you that not every node needs to be a participant in
the competition for which node grabs the touch input. However, I’m not keen
on the idea of changing behavior based on which nodes have
listeners on them. CSS seems like the place to do this (as I
think Pavel suggested earlier). In Pavel’s case, either:
  - the upper child node has the CSS tag saying “enable extended
capture zone” and the lower child doesn’t: then the upper child’s
capture zone will extend over the lower child
  - or both will have the CSS tag, in which case the upper
child’s capture zone would be competing with the lower child’s
capture zone. As in any other competition
between capture zones, the nearest node should win. The effect
would be the same as if the regular matching rules were applied
on the upper child. It would also be the same
as if only the lower child had an extended capture zone. However,
I’d consider this case to be bad UI programming.

We agreed that “in a competition between capture zones, pick the
node whose border is nearest the touch point” was a reasonable
way to resolve things.

Thanks,
Daniel

On Nov 13, 2013, at 12:31 PM, Seeon Birger
<seeon.bir...@oracle.com> wrote:

Hi Pavel,

Your example of 'child over child' is an interesting case which
raises some design aspects of the desired picking algorithm:
1. Which node to pick when one node has a 'strict containership'
over the touch center and the other node only has a fuzzy
containership (the position falls in the fuzzy area).
2. Accounting for z-order for extended capture zone area.
3. Accounting for parent-child relationship.

Referring to your 'child over child' example:
http://i.imgur.com/e92qEJA.jpg

The conflict would arise where the touch point center position falls
in the capture zone area of child2 but also clearly falls in the
strict bounds of child1.
Generally, when two control nodes compete for the same touch event
(e.g. child1 & child2 in Daniel's diagram), it seems that we
would like to give priority to "strict containership" over
"fuzzy containership".
But in your case it's probably not the desired behavior.

Also note that in the general case there almost always exists
some container/background node that strictly contains the touch
point, but it would probably be an ancestor of the child node,
so the usual parent-child relationship order will give
preference to the child.

One way out is to honor the usual z-order for the extended
area of child2, so when a touch center hits the fuzzy area of
child2, then child2 would be picked.

But it is not ideal for Daniel's example:
http://i.imgur.com/ELWamYp.png

where the 2 nodes don't strictly overlap, but their capture
zones do. Preferring one child by z-order (which matches the
order of children in the parent) is not natural here. And we
might do better to choose the node which is "closer"
to the touch point.

So to summarize, I suggest this rough picking algorithm:
1. Choose all uppermost nodes which are not transparent to mouse
events and contain the touch point center either strictly or by
their capture zone.
2. Remove all nodes that are strictly overlapped by another node
and are below that node by z-order.
3. Out of those left, choose the "closest" node (the concept of
"closest" should employ some calculation which might not be
trivial in the general case).
4. Once a node has been picked, we follow the usual node chain
list for event processing. (A rough sketch of steps 2-3 follows
below.)
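
A compact sketch of steps 2-3, with "closest" reduced to the distance from the touch point to the candidate's scene bounds. The candidate list and isCoveredByHigherNode() stand in for steps 1-2, which need real picking and z-order information and are only stubs here:

import javafx.geometry.Bounds;
import javafx.scene.Node;
import java.util.Comparator;
import java.util.List;

final class RoughPicking {
    static Node pick(List<Node> candidates, double touchX, double touchY) {
        return candidates.stream()
                .filter(n -> !isCoveredByHigherNode(n))                            // step 2
                .min(Comparator.comparingDouble(n -> distance(n, touchX, touchY))) // step 3
                .orElse(null);   // step 4 (event chain) happens in the dispatcher
    }

    // Distance from the touch point to the node's scene bounds (0 if strictly inside).
    static double distance(Node n, double px, double py) {
        Bounds b = n.localToScene(n.getBoundsInLocal());
        double cx = Math.max(b.getMinX(), Math.min(px, b.getMaxX()));
        double cy = Math.max(b.getMinY(), Math.min(py, b.getMaxY()));
        return Math.hypot(px - cx, py - cy);
    }

    static boolean isCoveredByHigherNode(Node n) {
        return false; // placeholder: needs z-order and overlap information
    }
}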

Care must be taken so we do not break the current model for event
processing. For example, if a node is picked by its capture
zone, it means that the position does not fall in the boundaries
of the node, so existing event handling code that relies on that
would break. So I think the capture zone feature should be
selectively enabled for certain types of nodes, such as buttons or
other classic controls.

Regards,
Seeon





-----Original Message-----
From: Pavel Safrata
Sent: Tuesday, November 12, 2013 1:11 PM
To: Daniel Blaukopf
Cc: OpenJFX
Subject: Re: discussion about touch events

(Now my answer using external link)

Hello Daniel,
this is quite similar to my idea described earlier. The major
difference is the "fair division of capture zones" among
siblings. It's an interesting idea, let's explore it. What pops
first is that children can also overlap. So I think it would
behave like this (green capture zones
omitted):

Child in parent vs. Child over child:
http://i.imgur.com/e92qEJA.jpg

..wouldn't it? From the user's point of view this seems confusing:
both cases look the same but behave differently. Note that in
the case on the right, the parent may still be the same; the
developer only adds a fancy background as a new child and
suddenly the red child can't be hit that easily. What do you
think? Is it an issue? Or would it not behave this way?

Regards,
Pavel

On 12.11.2013 12:06, Daniel Blaukopf wrote:
(My original message didn't get through to openjfx-dev because
I used
inline images. I've replaced those images with external links)

On Nov 11, 2013, at 11:30 PM, Pavel Safrata
<pavel.safr...@oracle.com
<mailto:pavel.safr...@oracle.com>> wrote:

On 11.11.2013 17:49, Tomas Mikula wrote:
On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler
<phdoerf...@gmail.com <mailto:phdoerf...@gmail.com>> wrote:
I see the need to be aware of the area that is covered by
fingers
rather than just considering that area's center point.
I'd guess that this adds a new layer of complexity, though. For
instance:
Say we have a button on some background and both the
background and
the button do have an onClick listener attached. If you tap the
button in a way that the touched area's center point is
outside of
the button's boundaries - what event will be fired? Will both
the
background and the button receive a click event? Or just
either the
background or the button exclusively? Will there be a new event
type which gets fired in case of such area-based taps?

My suggestion would therefore be to have an additional area tap
event which gives precise information about diameter and
center of
the tap. Besides that there should be some kind of
"priority" for
choosing which node's onClick will be called.
What about picking the one that is closest to the center of
the touch?


There is always something directly on the center of the touch
(possibly the scene background, but it can have event handlers
too).
That's what we pick right now.
Pavel

What Seeon, Assaf and I discussed earlier was building some
fuzziness
into the node picker so that instead of each node capturing only
events directly on top of it:

Non-fuzzy picker: http://i.imgur.com/uszql8V.png

..nodes at each level of the hierarchy would capture events beyond
their borders as well:

Fuzzy picker: http://i.imgur.com/ELWamYp.png

In the above, "Parent" would capture touch events within a certain
radius around it, as would its children "Child 1" and "Child
2". Since
"Child 1" and "Child 2" are peers, they would have a sharp
division
between them, a watershed on either side of which events would
go to
one child node or the other. This would also apply if the peer
nodes
were further apart; they would divide the no-man's land between
them.
Of course this no-man's land would be part of "Parent" and
could
be touch-sensitive - but we won't consider "Parent" as an event
target
until we have ruled out using one of its children's extended
capture
zones.

The capture radius could either be a styleable property on the
nodes,
or could be determined by the X and Y size of a touch point as
reported by the touch screen. We'd still be reporting a touch
point,
not a touch area. The touch target would be, as now, a single
node.

This would get us more reliable touch capture at leaf nodes of the
node hierarchy at the expense of it being harder to tap the
background. This is likely to be a good trade-off.

Daniel




Tomas

Maybe the draw order / order in the scene graph / z buffer
value
might be sufficient to model what would happen in the real,
physical world.
Am 11.11.2013 13:05 schrieb "Assaf Yavnai"
<assaf.yav...@oracle.com
<mailto:assaf.yav...@oracle.com>>:

The ascii sketch looked fine on my screen before I sent the
mail
:( I hope the idea is clear from the text (now in the reply
dialog
it also looks good)

Assaf
On 11/11/2013 12:51 PM, Assaf Yavnai wrote:

Hi Guys,

I hope that I'm right about this, but it seems that touch
events
in glass are translated (and reported) as a single point
events
(x & y) without an area, like pointer events.
AFAIK, the controls respond to touch events the same as to mouse
events (using the same pickers) and as a result a button
press,
for example, will only be triggered if the x & y of the touch
event
is within the control area.

This means that small controls, or even quite large controls
(like buttons with text) will often get missed because of the
'strict'
node picking,
although from a UX point of view it is strange as the user
clearly pressed on a node (the finger was clearly above
it) but
nothing happens...

With the current implementation it's hard to use small features in
controls, like scrollbars in lists, and it's almost
impossible to
implement something like 'screen navigator' (the series of
small
dots at the bottom of a smart phone's screen which allows
you to
jump directly to a 'far away'
screen)

To illustrate it, consider the below low-resolution
sketch, where
the "+"
is the actual x,y reported, the ellipse is the finger
touch area
and the rectangle is the node.
With the current implementation this type of tap will not
trigger the
node handlers

               ___
             /     \
            /   +   \
       ____/_________\____     in this scenario the 'button'
      |    \         /    |    will not get pressed
      |_____\_______/_____|
             \_____/

If your smart phone supports it, turn on the touch debugging
options in settings and see that each point translates to a
quite
large circle and whatever falls in it, or reasonably close
to it,
gets picked.

I want to start a discussion to understand if my
perspective is
accurate and to understand what, if anything, can be done for the
coming release or the next one.

We might use the recently opened RT-34136
<https://javafx-jira.kenai.com/browse/RT-34136> for logging this,
or open a new JIRA for it.

Thanks,
Assaf





