discussion about touch events

2013-11-11 Thread Assaf Yavnai

Hi Guys,

I hope that I'm right about this, but it seems that touch events in 
glass are translated (and reported) as single-point events (x & y) 
without an area, like pointer events.
AFAIK, the controls respond to touch events the same way as to mouse 
events (using the same pickers), and as a result a button press, for 
example, will only be triggered if the x & y of the touch event fall 
within the control's area.


This means that small controls, or even quite large controls (like 
buttons with text), will often get missed because of the 'strict' node 
picking, although from a UX point of view it is strange, as the user 
clearly pressed on a node (the finger was clearly above it) but nothing 
happens...


With the current implementation it's hard to use small features in 
controls, like scrollbars in lists, and it's almost impossible to 
implement something like a 'screen navigator' (the series of small dots 
at the bottom of a smartphone's screen which allows you to jump directly 
to a 'far away' screen).


To illustrate, consider the low-resolution sketch below, where the 
"+" is the actual x,y reported, the ellipse is the finger's touch area 
and the rectangle is the node.
With the current implementation this type of tap will not trigger the 
node's handlers:


         _____
        /     \
       /   +   \
   ___/_________\___      in this scenario the 'button' will not get pressed
  |   \         /   |
  |____\_______/____|
        \_____/

If your smartphone supports it, turn on the touch debugging options in 
settings and see that each point translates to a quite large circle, and 
whatever falls in it, or reasonably close to it, gets picked.
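To make the failure mode concrete, here is a minimal sketch of strict point-in-bounds picking in plain Java (all names invented for illustration; this is not actual Glass/JavaFX code):

```java
// Strict point-based picking: the node is hit only if the single
// reported (x, y) falls inside its bounds, regardless of how much of
// the finger's contact area overlaps the node.
public class StrictPickDemo {
    static final class Bounds {
        final double minX, minY, width, height;
        Bounds(double minX, double minY, double width, double height) {
            this.minX = minX; this.minY = minY;
            this.width = width; this.height = height;
        }
        boolean contains(double x, double y) {
            return x >= minX && x <= minX + width
                && y >= minY && y <= minY + height;
        }
    }

    public static void main(String[] args) {
        // A 40x20 'button' at (100, 100).
        Bounds button = new Bounds(100, 100, 40, 20);
        // The finger's contact ellipse overlaps the button, but the
        // single reported point lands just above its top edge.
        System.out.println(button.contains(120, 95));  // false: the tap is missed
        System.out.println(button.contains(120, 105)); // true: only a direct hit picks
    }
}
```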


I want to start a discussion to understand whether my perspective is 
accurate and what can be done, if anything, for the coming release or 
the next one.


We might use the recently opened RT-34136 for logging this, or open a 
new JIRA for it.


Thanks,
Assaf


Re: discussion about touch events

2013-11-11 Thread Assaf Yavnai
The ascii sketch looked fine on my screen before I sent the mail :( I 
hope the idea is clear from the text.

(now in the reply dialog it also looks good)

Assaf

Re: discussion about touch events

2013-11-11 Thread Pavel Safrata

Hi Assaf,
this was discussed during the multi-touch API design phase. I 
completely understand what you are saying, yet it has no 
straightforward/automatic solution of the kind you may have imagined. 
First, we can't pick "whatever falls into the circle" - imagine there 
are two buttons next to each other: do we want to activate both of 
them? No, each event has to have a single target. Moreover, everybody 
can easily comprehend the idea of an imprecise touch on a button, but 
from the FX point of view there is a bunch of nodes (not every 
application uses just controls) and each of the application's nodes 
needs to be pickable.


At that time, if I remember correctly, there was a preliminary 
agreement that the way around this might be to give control to the 
user's hands in the form of an API that would, for instance, make a 
node pickable at a certain distance from it. So a user could tell a 
button "enlarge your bounds by 10 pixels in each direction and pick on 
those bounds", which would then behave as if the button had a 
transparent border around it (perhaps applying only to touch events / 
synthesized mouse events). Of course controls could use this API to 
implement a reasonable default behavior, but the user still needs to 
have the option to create an unobtrusive ImageView background with an 
event-greedy ImageView button on it.
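As a rough sketch of what such an API could look like (the method name and the 10-pixel margin below are illustrative only, not a real or proposed JavaFX API):

```java
// Hit-testing against bounds enlarged by a per-node "pick margin",
// as if the node had a transparent border around it.
public class MarginPickDemo {
    // True if (x, y) falls within the bounds grown by 'margin' on every side.
    static boolean pickWithMargin(double minX, double minY,
                                  double w, double h,
                                  double margin,
                                  double x, double y) {
        return x >= minX - margin && x <= minX + w + margin
            && y >= minY - margin && y <= minY + h + margin;
    }

    public static void main(String[] args) {
        // A 40x20 button at (100, 100); touch reported at (120, 95),
        // 5px above the strict bounds.
        System.out.println(pickWithMargin(100, 100, 40, 20, 0, 120, 95));  // false: strict miss
        System.out.println(pickWithMargin(100, 100, 40, 20, 10, 120, 95)); // true: margin catches it
    }
}
```

The margin could apply only when the event is a touch or a synthesized mouse event, leaving precise mouse picking unchanged.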


Regards,
Pavel





Re: discussion about touch events

2013-11-11 Thread Philipp Dörfler
I see the need to be aware of the area that is covered by fingers rather
than just considering that area's center point.
I'd guess that this adds a new layer of complexity, though. For instance:
say we have a button on some background, and both the background and the
button have an onClick listener attached. If you tap the button in a way
that the touched area's center point is outside of the button's boundaries -
what event will be fired? Will both the background and the button receive a
click event? Or just either the background or the button exclusively? Will
there be a new event type which gets fired in case of such area-based taps?

My suggestion would therefore be to have an additional area tap event
which gives precise information about the diameter and center of the tap.
Besides that, there should be some kind of "priority" for choosing which
node's onClick will be called. Maybe the draw order / order in the scene
graph / z-buffer value might be sufficient to model what would happen in
the real, physical world.
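A minimal sketch of that idea (all names invented, not a proposed API): the candidates are the nodes the touch circle intersects, and draw order decides which one wins:

```java
import java.util.List;

// Area-based tap: the event carries a center and diameter; among all
// nodes the touch circle overlaps, the topmost in draw order wins.
public class AreaTapDemo {
    static final class Node {
        final String name;
        final double minX, minY, w, h;
        Node(String name, double minX, double minY, double w, double h) {
            this.name = name; this.minX = minX; this.minY = minY;
            this.w = w; this.h = h;
        }
        // True if the circle (center cx,cy, radius r) overlaps the bounds.
        boolean intersectsCircle(double cx, double cy, double r) {
            double dx = Math.max(0, Math.max(minX - cx, cx - (minX + w)));
            double dy = Math.max(0, Math.max(minY - cy, cy - (minY + h)));
            return dx * dx + dy * dy <= r * r;
        }
    }

    // Nodes are listed in draw order: later entries are drawn on top and win.
    static String pickTopmost(List<Node> drawOrder,
                              double cx, double cy, double diameter) {
        double r = diameter / 2;
        for (int i = drawOrder.size() - 1; i >= 0; i--) {
            if (drawOrder.get(i).intersectsCircle(cx, cy, r)) {
                return drawOrder.get(i).name;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Node> scene = List.of(
            new Node("background", 0, 0, 300, 300),
            new Node("button", 100, 100, 40, 20));
        // A touch circle centered just above the button still reaches it,
        // and the button, drawn on top of the background, wins.
        System.out.println(pickTopmost(scene, 120, 95, 20)); // button
    }
}
```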


Re: discussion about touch events

2013-11-11 Thread Tomas Mikula
On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler wrote:
> I see the need to be aware of the area that is covered by fingers rather
> than just considering that area's center point.
> I'd guess that this adds a new layer of complexity, though. For instance:
> Say we have a button on some background and both the background and the
> button do have an onClick listener attached. If you tap the button in a way
> that the touched area's center point is outside of the buttons boundaries -
> what event will be fired? Will both the background and the button receive a
> click event? Or just either the background or the button exclusively? Will
> there be a new event type which gets fired in case of such area-based taps?
>
> My suggestion would therefore be to have an additional area tap event which
> gives precise information about diameter and center of the tap. Besides
> that there should be some kind of "priority" for choosing which node's
> onClick will be called.

What about picking the one that is closest to the center of the touch?

Tomas



Re: discussion about touch events

2013-11-11 Thread Pavel Safrata

On 11.11.2013 17:49, Tomas Mikula wrote:

On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler wrote:

I see the need to be aware of the area that is covered by fingers rather
than just considering that area's center point.
I'd guess that this adds a new layer of complexity, though. For instance:
Say we have a button on some background and both the background and the
button do have an onClick listener attached. If you tap the button in a way
that the touched area's center point is outside of the buttons boundaries -
what event will be fired? Will both the background and the button receive a
click event? Or just either the background or the button exclusively? Will
there be a new event type which gets fired in case of such area-based taps?

My suggestion would therefore be to have an additional area tap event which
gives precise information about diameter and center of the tap. Besides
that there should be some kind of "priority" for choosing which node's
onClick will be called.

What about picking the one that is closest to the center of the touch?



There is always something directly on the center of the touch (possibly 
the scene background, but it can have event handlers too). That's what 
we pick right now.

Pavel




Re: discussion about touch events

2013-11-12 Thread Pavel Safrata

Hello Daniel,
this is quite similar to my idea described earlier. The major 
difference is the "fair division of capture zones" among siblings. It's 
an interesting idea, let's explore it. What pops up first is that 
children can also overlap. So I think it would behave like this (green 
capture zones omitted):


see attachment

..wouldn't it? From the user's point of view this seems confusing: 
both cases look the same but behave differently. Note that in the case 
on the right, the parent may still be the same; the developer only adds 
a fancy background as a new child and suddenly the red child can't be 
hit that easily. What do you think? Is it an issue? Or would it not 
behave this way?


Regards,
Pavel


Re: discussion about touch events

2013-11-12 Thread Pavel Safrata

The image seems to have been stripped somewhere, trying to attach once more.
Pavel


Re: discussion about touch events

2013-11-12 Thread Daniel Blaukopf
(My original message didn't get through to openjfx-dev because I used 
inline images. I've replaced those images with external links)




What Seeon, Assaf and I discussed earlier was building some fuzziness 
into the node picker so that instead of each node capturing only events 
directly on top of it:


Non-fuzzy picker: http://i.imgur.com/uszql8V.png

..nodes at each level of the hierarchy would capture events beyond their 
borders as well:


Fuzzy picker: http://i.imgur.com/ELWamYp.png

In the above, “Parent” would capture touch events within a certain 
radius around it, as would its children “Child 1” and “Child 2”. Since 
“Child 1” and “Child 2” are peers, they would have a sharp division 
between them, a watershed on either side of which events would go to one 
child node or the other. This would also apply if the peer nodes were 
further apart; they would divide the no-man’s land between them. Of 
course this no-man’s land would be part of “Parent” and could be 
touch-sensitive - but we won’t consider “Parent” as an event target 
until we have ruled out using one of its children’s extended capture zones.


The capture radius could either be a styleable property on the nodes, or 
could be determined by the X and Y size of a touch point as reported by 
the touch screen. We’d still be reporting a touch point, not a touch 
area. The touch target would be, as now, a single node.
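A possible reading of this scheme in code (all names invented; a sketch under the stated assumptions, not the discussed implementation): each child captures touches within a capture radius of its bounds, the nearest child wins when several are in range (the "watershed"), and the parent is considered only after its children have been ruled out:

```java
import java.util.List;

// Fuzzy picker sketch: children extend their capture beyond their
// bounds by 'captureRadius'; ties between peers go to the nearest one;
// the parent is a fallback, never preferred over a child in range.
public class FuzzyPickDemo {
    static final class Node {
        final String name;
        final double minX, minY, w, h;
        Node(String name, double minX, double minY, double w, double h) {
            this.name = name; this.minX = minX; this.minY = minY;
            this.w = w; this.h = h;
        }
        // Distance from (x, y) to this node's bounds; 0 if inside.
        double distance(double x, double y) {
            double dx = Math.max(0, Math.max(minX - x, x - (minX + w)));
            double dy = Math.max(0, Math.max(minY - y, y - (minY + h)));
            return Math.hypot(dx, dy);
        }
    }

    static String pick(Node parent, List<Node> children,
                       double captureRadius, double x, double y) {
        Node best = null;
        double bestDist = Double.MAX_VALUE;
        for (Node child : children) {
            double d = child.distance(x, y);
            if (d <= captureRadius && d < bestDist) {
                best = child;
                bestDist = d;
            }
        }
        if (best != null) return best.name;
        // No child in range: fall back to the parent if hit at all.
        return parent.distance(x, y) <= captureRadius ? parent.name : null;
    }

    public static void main(String[] args) {
        Node parent = new Node("Parent", 0, 0, 200, 100);
        List<Node> children = List.of(
            new Node("Child 1", 10, 10, 50, 30),
            new Node("Child 2", 120, 10, 50, 30));
        // A touch in the no-man's land between the children: the
        // watershed assigns it to the nearest child, not the parent.
        System.out.println(pick(parent, children, 15, 75, 25)); // Child 1
    }
}
```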


This would get us more reliable touch capture at leaf nodes of the node 
hierarchy at the expense of it being harder to tap the background. This 
is likely to be a good trade-off.


Daniel






Tomas


Maybe the draw order / order in the scene graph / z
buffer value might be sufficient to model what would happen in the real,
physical world.
Am 11.11.2013 13:05 schrieb "Assaf Yavnai" >:


The ascii sketch looked fine on my screen before I sent the mail :( 
I hope

the idea is clear from the text
(now in the reply dialog its also look good)

Assaf
On 11/11/2013 12:51 PM, Assaf Yavnai wrote:


Hi Guys,

I hope that I'm right about this, but it seems that touch events 
in glass
are translated (and reported) as a single point events (x & y) 
without an

area, like pointer events.
AFAIK, the controls response for touch events same as mouse events 
(using
the same pickers) and as a result a button press, for example, 
will only

triggered if the x & y of the touch event is within the control area.

This means that small controls, or even quite large controls (like
buttons with text) will often get missed because the 'strict' node 
picking,
although from a UX point of view it is strange as the user clearly 
pressed

on a node (the finger was clearly above it) but nothing happens...

With current implementation its hard to use small features in 
controls,
like scrollbars in lists, and it almost impossible to implement 
something
like 'screen navigator' (the series of small dots in the bottom of 
a smart

phones screen which allow you to jump directly to a 'far away' screen)

To illustrate it consider the bellow low resolution sketch, where 
the "+"
is the actual x,y reported, the ellipse is the finger touch area 
and the

rectangle is the node.
With current implementation this type of tap will not trigger the node
handlers

__
  / \
 /   \
   ___/ __+_ \___in this scenario the 'button' will not get
pressed
   |\ /|
   |___\ ___ / __ |
  \___/

If your smart phone support it, turn on the touch debugging options in
settings and see that each point translate to a quite large circle and 
what ever fall in it, or reasonably close to it, get picked.

Re: discussion about touch events

2013-11-12 Thread Pavel Safrata

(Now my answer using external link)

Hello Daniel,
this is quite similar to my idea described earlier. The major difference 
is the "fair division of capture zones" among siblings. It's an 
interesting idea; let's explore it. What pops first is that children can 
also overlap. So I think it would behave like this (green capture zones 
omitted):


Child in parent vs. Child over child: http://i.imgur.com/e92qEJA.jpg

..wouldn't it? From the user's point of view this seems confusing: both 
cases look the same but behave differently. Note that in the case on the 
right, the parent may still be the same; the developer only adds a fancy 
background as a new child and suddenly the red child can't be hit that 
easily. What do you think? Is it an issue? Or would it not behave this way?


Regards,
Pavel

On 12.11.2013 12:06, Daniel Blaukopf wrote:
(My original message didn't get through to openjfx-dev because I used 
inline images. I've replaced those images with external links)


On Nov 11, 2013, at 11:30 PM, Pavel Safrata > wrote:



On 11.11.2013 17:49, Tomas Mikula wrote:
On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler 
mailto:phdoerf...@gmail.com>> wrote:
I see the need to be aware of the area that is covered by fingers 
rather

than just considering that area's center point.
I'd guess that this adds a new layer of complexity, though. For 
instance:

Say we have a button on some background and both the background and the
button do have an onClick listener attached. If you tap the button 
in a way
that the touched area's center point is outside of the buttons 
boundaries -
what event will be fired? Will both the background and the button 
receive a
click event? Or just either the background or the button 
exclusively? Will
there be a new event type which gets fired in case of such 
area-based taps?


My suggestion would therefore be to have an additional area tap 
event which

gives precise information about diameter and center of the tap. Besides
that there should be some kind of "priority" for choosing which node's
onClick will be called.

What about picking the one that is closest to the center of the touch?



There is always something directly on the center of the touch 
(possibly the scene background, but it can have event handlers too). 
That's what we pick right now.

Pavel


What Seeon, Assaf and I discussed earlier was building some fuzziness 
into the node picker so that instead of each node capturing only 
events directly on top of it:


Non-fuzzy picker: http://i.imgur.com/uszql8V.png

..nodes at each level of the hierarchy would capture events beyond 
their borders as well:


Fuzzy picker: http://i.imgur.com/ELWamYp.png

In the above, “Parent” would capture touch events within a certain 
radius around it, as would its children “Child 1” and “Child 2”. Since 
“Child 1” and “Child 2” are peers, they would have a sharp division 
between them, a watershed on either side of which events would go to 
one child node or the other. This would also apply if the peer nodes 
were further apart; they would divide the no-man’s land between them. 
Of course this no-man’s land would be part of “Parent” and could 
be touch-sensitive - but we won’t consider “Parent” as an event target 
until we have ruled out using one of its children’s extended capture 
zones.


The capture radius could either be a styleable property on the nodes, 
or could be determined by the X and Y size of a touch point as 
reported by the touch screen. We’d still be reporting a touch point, 
not a touch area. The touch target would be, as now, a single node.


This would get us more reliable touch capture at leaf nodes of the 
node hierarchy at the expense of it being harder to tap the 
background. This is likely to be a good trade-off.


Daniel






Tomas


Maybe the draw order / order in the scene graph / z
buffer value might be sufficient to model what would happen in the 
real,

physical world.
On 11.11.2013 13:05, "Assaf Yavnai" > wrote:


The ascii sketch looked fine on my screen before I sent the mail 
:( I hope

the idea is clear from the text
(now in the reply dialog its also look good)

Assaf
On 11/11/2013 12:51 PM, Assaf Yavnai wrote:


Hi Guys,

I hope that I'm right about this, but it seems that touch events 
in glass
are translated (and reported) as a single point events (x & y) 
without an

area, like pointer events.
AFAIK, the controls response for touch events same as mouse 
events (using
the same pickers) and as a result a button press, for example, 
will only

triggered if the x & y of the touch event is within the control area.

This means that small controls, or even quite large controls (like
buttons with text) will often get missed because the 'strict' 
node picking,
although from a UX point of view it is strange as the user 
clearly pressed

on a node (the finger was clearly above it) but nothing happens...

With current implementation its hard to use small features in controls, 
like scrollbars in lists, and it almost impossible to implement 
something like 'screen navigator'.

Re: discussion about touch events

2013-11-12 Thread Assaf Yavnai

Daniel, Pavel ,
I tend to agree with both of you, and would also add some ideas and comments:

1) I think the touch picker and the mouse picker should be separate 
implementations, selected according to the origin of the event. This means 
that if an application is written to listen only to mouse events and runs 
on a touch screen, touch will still be 'usable'; if an application listens 
to touch events, they should perform well (as expected). This is opposed 
to what we have now, where there is no reason to listen to simple touch 
events as they behave the same as mouse events. (This is also a comment on 
the earlier mail from sebastian.rheinnec...@yworks.com - Button and TouchEvents)


2) The capture zone should be configurable - I don't know at which 
level: build time (bare minimum) or a -D property (my preference). It 
doesn't seem that runtime configuration is a must, but it would surely be 
a nice-to-have feature for different layout implementations. For example, 
take the screen-lock application where there is a 3x3 matrix of dots and 
the user needs to set and follow a pattern to unlock the screen. The dots 
are relatively small and far apart, so the capture zone can be much bigger 
in this scenario. If you have ever used this type of application, you 
probably noticed the loose matching (it feels impossible to miss). On the 
other hand, it can be nice to set a tighter value when nodes are close 
together (as in the VK or the 'screen slider' scenario).


3) What do you think about also supplying hints together with the touch 
event or the action event? For example, an action listener on a button 
could get called on a press with a hint such as: MATCH_EXACT (the center 
point falls inside the node), MATCH_CLOSE (the node is picked through the 
capture zone), or MATCH_NEARBY (not a press per se, but a press made near 
the node; this of course should only be sent if no other node with a 
better match has consumed the event).
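To make points (2) and (3) concrete, here is a hypothetical sketch (none of these names exist in JavaFX; the property key and the enum are invented for illustration): the capture radius read from a -D system property, and a hint classifying how a pick matched:

```java
/** Hypothetical sketch of a -D-configurable capture zone plus match hints. */
public class TouchPickHints {

    /** How a node was matched: exact hit, via capture zone, or merely nearby. */
    public enum MatchHint { MATCH_EXACT, MATCH_CLOSE, MATCH_NEARBY }

    /** Capture radius in pixels, e.g. -Djavafx.touch.captureRadius=12
     *  (an invented key; defaults to 10 here). */
    public static double captureRadius() {
        return Double.parseDouble(
                System.getProperty("javafx.touch.captureRadius", "10"));
    }

    /** Classify a pick by the touch point's distance to the picked node's border. */
    public static MatchHint classify(double borderDistance) {
        if (borderDistance == 0) return MatchHint.MATCH_EXACT;   // center inside node
        if (borderDistance <= captureRadius()) return MatchHint.MATCH_CLOSE;
        return MatchHint.MATCH_NEARBY; // only delivered if nobody else consumed it
    }
}
```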


Assaf

On 11/12/2013 01:11 PM, Pavel Safrata wrote:

(Now my answer using external link)

Hello Daniel,
this is quite similar to my idea described earlier. The major 
difference is the "fair division of capture zones" among siblings. 
It's an interesting idea, let's explore it. What pops first is that 
children can also overlap. So I think it would behave like this (green 
capture zones omitted):


Child in parent vs. Child over child: http://i.imgur.com/e92qEJA.jpg

..wouldn't it? From user's point of view this seems confusing, both 
cases look the same but behave differently. Note that in the case on 
the right, the parent may be still the same, developer only adds a 
fancy background as a new child and suddenly the red child can't be 
hit that easily. What do you think? Is it an issue? Or would it not 
behave this way?


Regards,
Pavel

On 12.11.2013 12:06, Daniel Blaukopf wrote:
(My original message didn't get through to openjfx-dev because I used 
inline images. I've replaced those images with external links)


On Nov 11, 2013, at 11:30 PM, Pavel Safrata > wrote:



On 11.11.2013 17:49, Tomas Mikula wrote:
On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler 
mailto:phdoerf...@gmail.com>> wrote:
I see the need to be aware of the area that is covered by fingers 
rather

than just considering that area's center point.
I'd guess that this adds a new layer of complexity, though. For 
instance:
Say we have a button on some background and both the background 
and the
button do have an onClick listener attached. If you tap the button 
in a way
that the touched area's center point is outside of the buttons 
boundaries -
what event will be fired? Will both the background and the button 
receive a
click event? Or just either the background or the button 
exclusively? Will
there be a new event type which gets fired in case of such 
area-based taps?


My suggestion would therefore be to have an additional area tap 
event which
gives precise information about diameter and center of the tap. 
Besides
that there should be some kind of "priority" for choosing which 
node's

onClick will be called.

What about picking the one that is closest to the center of the touch?



There is always something directly on the center of the touch 
(possibly the scene background, but it can have event handlers too). 
That's what we pick right now.

Pavel


What Seeon, Assaf and I discussed earlier was building some fuzziness 
into the node picker so that instead of each node capturing only 
events directly on top of it:


Non-fuzzy picker: http://i.imgur.com/uszql8V.png

..nodes at each level of the hierarchy would capture events beyond 
their borders as well:


Fuzzy picker: http://i.imgur.com/ELWamYp.png

In the above, “Parent” would capture touch events within a certain 
radius around it, as would its children “Child 1” and “Child 2”. 
Since “Child 1” and “Child 2” are peers, they would have a sharp 
division between them, a watershed on either side of which events 
would go to 

RE: discussion about touch events

2013-11-13 Thread Seeon Birger
Hi Pavel,

Your example of 'child over child' is an interesting case which raises some 
design aspects of the desired picking algorithm:
1. Which node to pick when one node has a 'strict containership' over the touch 
center and the other node only has a fuzzy containership (the position falls in 
the fuzzy area).
2. Accounting for z-order for extended capture zone area.
3. Accounting for parent-child relationship.

Referring to your 'child over child' example:
http://i.imgur.com/e92qEJA.jpg

The conflict would arise where the touch point center falls in the capture 
zone of child2 but also clearly falls within the strict bounds of child1.
Generally, when two control nodes compete for the same touch event (e.g. child1 & 
child2 in Daniel's diagram), it seems we would like to give priority to 
"strict containership" over "fuzzy containership".
But in your case that is probably not the desired behavior.

Also note that in the general case there almost always exists some 
container/background node that strictly contains the touch point, but it would 
probably be an ancestor of the child node, so the usual parent-child 
ordering will give preference to the child.

One way out is to honor the usual z-order for the extended area of child2, 
so that when a touch center hits the fuzzy area of child2, child2 is picked.

But this is not ideal for Daniel's example:
http://i.imgur.com/ELWamYp.png

where the two nodes don't strictly overlap, but their capture zones do. 
Preferring one child by z-order (which matches the order of children in the 
parent) is not natural here; we might do better to choose the node which is 
"closer" to the touch point.

So to summarize, I suggest this rough picking algorithm:
1. Choose all uppermost nodes which are not transparent to mouse events and 
contain the touch point center either strictly or by their capture zone.
2. Remove every node that is strictly overlapped by another node and is below 
that node in z-order.
3. Out of those left, choose the "closest" node. (The concept of "closest" 
requires some calculation which might not be trivial in the general case.)
4. Once a node has been picked, we follow the usual node chain for event 
processing.
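The steps above could be sketched like this (a toy model, not JavaFX code: axis-aligned bounds, a list ordered back-to-front standing in for z-order, and "closest" taken as smallest border distance; all names are invented):

```java
import java.util.ArrayList;
import java.util.List;

/** Toy sketch of the suggested picking steps. */
public class ZOrderPicker {

    public record Node(String name, double minX, double minY,
                       double maxX, double maxY) {
        boolean strictlyContains(double x, double y) {
            return x >= minX && x <= maxX && y >= minY && y <= maxY;
        }
        double borderDistance(double x, double y) {
            double dx = Math.max(Math.max(minX - x, 0), x - maxX);
            double dy = Math.max(Math.max(minY - y, 0), y - maxY);
            return Math.hypot(dx, dy);
        }
    }

    /** nodes are ordered back-to-front (later in the list = higher z-order). */
    public static String pick(List<Node> nodes, double x, double y, double radius) {
        // Step 1: keep nodes containing the point strictly or via capture zone.
        List<Node> candidates = new ArrayList<>();
        for (Node n : nodes) {
            if (n.borderDistance(x, y) <= radius) candidates.add(n);
        }
        // Step 2: drop a candidate if a higher-z candidate strictly contains the point.
        List<Node> filtered = new ArrayList<>();
        for (int i = 0; i < candidates.size(); i++) {
            boolean covered = false;
            for (int j = i + 1; j < candidates.size(); j++) {
                if (candidates.get(j).strictlyContains(x, y)) covered = true;
            }
            if (!covered) filtered.add(candidates.get(i));
        }
        // Step 3: of the rest, the "closest" node (smallest border distance) wins.
        Node best = null;
        for (Node n : filtered) {
            if (best == null || n.borderDistance(x, y) < best.borderDistance(x, y)) {
                best = n;
            }
        }
        // Step 4 (dispatch along the usual node chain) is outside this sketch.
        return best == null ? null : best.name();
    }
}
```

In the "child over child" case, a point strictly inside the upper child removes the lower one in step 2; for two separated peers, step 3 resolves the gap between their capture zones.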

Care must be taken that we do not break the current model for event 
processing. For example, if a node is picked via its capture zone, the 
position does not fall within the boundaries of the node, so existing event 
handling code that relies on that would break. So I think the capture zone 
feature should be selectively enabled for certain types of nodes, such as 
buttons or other classic controls.

Regards,
Seeon





-Original Message-
From: Pavel Safrata 
Sent: Tuesday, November 12, 2013 1:11 PM
To: Daniel Blaukopf
Cc: OpenJFX
Subject: Re: discussion about touch events

(Now my answer using external link)

Hello Daniel,
this is quite similar to my idea described earlier. The major difference is the 
"fair division of capture zones" among siblings. It's an interesting idea, 
let's explore it. What pops first is that children can also overlap. So I think 
it would behave like this (green capture zones
omitted):

Child in parent vs. Child over child: http://i.imgur.com/e92qEJA.jpg

..wouldn't it? From user's point of view this seems confusing, both cases look 
the same but behave differently. Note that in the case on the right, the parent 
may be still the same, developer only adds a fancy background as a new child 
and suddenly the red child can't be hit that easily. What do you think? Is it 
an issue? Or would it not behave this way?

Regards,
Pavel

On 12.11.2013 12:06, Daniel Blaukopf wrote:
> (My original message didn't get through to openjfx-dev because I used 
> inline images. I've replaced those images with external links)
>
> On Nov 11, 2013, at 11:30 PM, Pavel Safrata  <mailto:pavel.safr...@oracle.com>> wrote:
>
>> On 11.11.2013 17:49, Tomas Mikula wrote:
>>> On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler 
>>> mailto:phdoerf...@gmail.com>> wrote:
>>>> I see the need to be aware of the area that is covered by fingers 
>>>> rather than just considering that area's center point.
>>>> I'd guess that this adds a new layer of complexity, though. For
>>>> instance:
>>>> Say we have a button on some background and both the background and 
>>>> the button do have an onClick listener attached. If you tap the 
>>>> button in a way that the touched area's center point is outside of 
>>>> the buttons boundaries - what event will be fired? Will both the 
>>>> background and the button receive a click event? Or just either the 
>>>> background or the button exclusively

Re: discussion about touch events

2013-11-13 Thread Daniel Blaukopf
Hi Seeon,

Summarizing our face to face talk today:

I see that the case described by Pavel is indeed a problem and agree with you 
that not every node needs to be a participant in the competition for which node 
grabs touch input. However I’m not keen on the idea of changing behavior based 
on which nodes have listeners on them. CSS seems like the place to do this (as 
I think Pavel suggested earlier). In Pavel’s case, either:
 - the upper child node has the CSS tag saying “enable extended capture zone” 
and the lower child doesn’t: then the upper child’s capture zone will extend 
over the lower child
 - or both will have the CSS tag, in which case the upper child’s capture zone 
would be competing with the lower child’s capture zone. As in any other 
competition between capture zones the nearest node should win. The effect would 
be the same as if the regular matching rules were applied on the upper child. 
It would also be the same as if only the lower child had an extended capture 
zone. However, I’d consider this case to be bad UI programming.

We agreed that “in a competition between capture zones, pick the node whose 
border is nearest the touch point” was a reasonable way to resolve things.

Thanks,
Daniel

On Nov 13, 2013, at 12:31 PM, Seeon Birger  wrote:

> Hi Pavel,
> 
> Your example of 'child over child' is an interesting case which raises some 
> design aspects of the desired picking algorithm:
> 1. Which node to pick when one node has a 'strict containership' over the 
> touch center and the other node only has a fuzzy containership (the position 
> falls in the fuzzy area).
> 2. Accounting for z-order for extended capture zone area.
> 3. Accounting for parent-child relationship.
> 
> Referring to your 'child over child' example:
> http://i.imgur.com/e92qEJA.jpg
> 
> The conflict would arise were touch point center position falls in the 
> capture zone area of child2 but also clearly falls in the strict bounds of 
> child1.
> Generally, when two control nodes compete on same touch event (e.g. child1 & 
> child2 in Daniel's diagram), it seems that we would like to give priority to 
> "strict containership" over "fuzzy containership".
> But in your case it's probably not the desired behavior.
> 
> Also note that in the general case there's almost always exists come 
> container/background node that strictly contains the touch point, but it 
> would probably be an ancestor of the child node, so the usual parent-child 
> relationship order will give preference to the child.
> 
> One way out it is to honor the usual z-order for the extended area of child2, 
> so when a touch center hits the fuzzy area of child2, then child2 would be 
> picked.
> 
> But is not ideal for Daniel's example:
> http://i.imgur.com/ELWamYp.png
> 
> where the 2 nodes don't strictly overlap, but their capture zones do. 
> Preferring one child by z-order (which matches the order of children in the 
> parent) is not natural here. And we might better choose the node which is 
> "closer"
> To the touch point.
> 
> So to summarize I suggest this rough picking algorithm:
> 1. Choose all uppermost nodes which are not transparent to mouse events and 
> contain the touch point center either strictly or by their capture zone.
> 2. Remove all nodes that is strictly overlapped by another node and is below 
> that node by z-order.
> 3. Out of those left choose the "closest" node. (the concept of "closet" 
> should employ some calculation which might not be trivial in the general 
> case).
> 4. Once a node has been picked, we follow the usual node chain list for event 
> processing.
> 
> Care must be taken so we not break the current model for event processing. 
> For example, if a node is picked by its capture zone, it means that the 
> position does not fall in the boundaries of the node, so existing event 
> handling code that relies on that would break. So I think the capture zone 
> feature should be selectively enabled for certain type of nodes such buttons 
> or other classic controls.
> 
> Regards,
> Seeon
> 
> 
> 
> 
> 
> -Original Message-
> From: Pavel Safrata 
> Sent: Tuesday, November 12, 2013 1:11 PM
> To: Daniel Blaukopf
> Cc: OpenJFX
> Subject: Re: discussion about touch events
> 
> (Now my answer using external link)
> 
> Hello Daniel,
> this is quite similar to my idea described earlier. The major difference is 
> the "fair division of capture zones" among siblings. It's an interesting 
> idea, let's explore it. What pops first is that children can also overlap. So 
> I think it would behave like this (green capture zones
> omitted):
> 
> Child in parent vs. Child over child: http://i.imgur.com/e92qEJA.jpg

Re: discussion about touch events

2013-11-13 Thread Anthony Petrov
I'm not sure if CSS is the best place to tag nodes with this attribute. 
CSS is supposed to describe styles (i.e. the way a node is represented 
on the screen, "the look"), while extending the capture zone doesn't 
affect the visual representation of a node, but instead is related to 
events handling (i.e. "the feel") and thus, IMO, should be handled in 
the code (e.g. as a property) rather than in CSS.
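As a sketch of the property-based alternative (hypothetical: JavaFX Node has no such property today, and this plain-Java stand-in merely mimics the shape such an API could take):

```java
/** Hypothetical stand-in for a node carrying a capture-radius property
 *  ("the feel"), set in code rather than in CSS. Not the real Node class. */
public class TouchableNode {
    private double captureRadius = 0; // 0 = strict picking, today's behavior

    public double getCaptureRadius() {
        return captureRadius;
    }

    public void setCaptureRadius(double radius) {
        if (radius < 0) {
            throw new IllegalArgumentException("radius must be >= 0");
        }
        this.captureRadius = radius;
    }

    /** The node competes for a touch point whose distance to its border
     *  is within the configured radius. */
    public boolean capturesPointAt(double borderDistance) {
        return borderDistance <= captureRadius;
    }
}
```

With the default radius of 0 the node behaves exactly as strict picking does now; enlarging the radius opts the node into fuzzy capture.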


--
best regards,
Anthony

On 11/14/2013 01:09 AM, Daniel Blaukopf wrote:

Hi Seeon,

Summarizing our face to face talk today:

I see that the case described by Pavel is indeed a problem and agree with you 
that not every node needs to be a participant in the competition for which 
grabs touch input. However I’m not keen on the idea of changing behavior based 
on which nodes have listeners on them. CSS seems like the place to do this (as 
I think Pavel suggested earlier). In Pavel’s case, either:
  - the upper child node has the CSS tag saying “enable extended capture zone” 
and the lower child doesn’t: then the upper child’s capture zone will extend 
over the lower child
  - or both will have the CSS tag, in which case the upper child’s capture zone 
would be competing with the lower child’s capture zone. As in any other 
competition between capture zones the nearest node should win. The effect would 
be the same as if the regular matching rules were applied on the upper child. 
It would also be the same as if only the lower child had an extended capture 
zone. However, I’d consider this case to be bad UI programming.

We agreed that “in a competition between capture zones, pick the node whose 
border is nearest the touch point” was a reasonable way to resolve things.

Thanks,
Daniel

On Nov 13, 2013, at 12:31 PM, Seeon Birger  wrote:


Hi Pavel,

Your example of 'child over child' is an interesting case which raises some 
design aspects of the desired picking algorithm:
1. Which node to pick when one node has a 'strict containership' over the touch 
center and the other node only has a fuzzy containership (the position falls in 
the fuzzy area).
2. Accounting for z-order for extended capture zone area.
3. Accounting for parent-child relationship.

Referring to your 'child over child' example:
http://i.imgur.com/e92qEJA.jpg

The conflict would arise were touch point center position falls in the capture 
zone area of child2 but also clearly falls in the strict bounds of child1.
Generally, when two control nodes compete on same touch event (e.g. child1 & child2 in Daniel's 
diagram), it seems that we would like to give priority to "strict containership" over 
"fuzzy containership".
But in your case it's probably not the desired behavior.

Also note that in the general case there's almost always exists come 
container/background node that strictly contains the touch point, but it would 
probably be an ancestor of the child node, so the usual parent-child 
relationship order will give preference to the child.

One way out it is to honor the usual z-order for the extended area of child2, 
so when a touch center hits the fuzzy area of child2, then child2 would be 
picked.

But is not ideal for Daniel's example:
http://i.imgur.com/ELWamYp.png

where the 2 nodes don't strictly overlap, but their capture zones do. Preferring one 
child by z-order (which matches the order of children in the parent) is not natural here. 
And we might better choose the node which is "closer"
To the touch point.

So to summarize I suggest this rough picking algorithm:
1. Choose all uppermost nodes which are not transparent to mouse events and 
contain the touch point center either strictly or by their capture zone.
2. Remove all nodes that is strictly overlapped by another node and is below 
that node by z-order.
3. Out of those left choose the "closest" node. (the concept of "closet" should 
employ some calculation which might not be trivial in the general case).
4. Once a node has been picked, we follow the usual node chain list for event 
processing.

Care must be taken so we not break the current model for event processing. For 
example, if a node is picked by its capture zone, it means that the position 
does not fall in the boundaries of the node, so existing event handling code 
that relies on that would break. So I think the capture zone feature should be 
selectively enabled for certain type of nodes such buttons or other classic 
controls.

Regards,
Seeon





-Original Message-----
From: Pavel Safrata
Sent: Tuesday, November 12, 2013 1:11 PM
To: Daniel Blaukopf
Cc: OpenJFX
Subject: Re: discussion about touch events

(Now my answer using external link)

Hello Daniel,
this is quite similar to my idea described earlier. The major difference is the 
"fair division of capture zones" among siblings. It's an interesting idea, 
let's explore it. What pops first is that children can also overlap.

Re: discussion about touch events

2013-11-13 Thread Richard Bair
 and is 
>>> below that node by z-order.
>>> 3. Out of those left choose the "closest" node. (the concept of "closet" 
>>> should employ some calculation which might not be trivial in the general 
>>> case).
>>> 4. Once a node has been picked, we follow the usual node chain list for 
>>> event processing.
>>> 
>>> Care must be taken so we not break the current model for event processing. 
>>> For example, if a node is picked by its capture zone, it means that the 
>>> position does not fall in the boundaries of the node, so existing event 
>>> handling code that relies on that would break. So I think the capture zone 
>>> feature should be selectively enabled for certain type of nodes such 
>>> buttons or other classic controls.
>>> 
>>> Regards,
>>> Seeon
>>> 
>>> 
>>> 
>>> 
>>> 
>>> -Original Message-
>>> From: Pavel Safrata
>>> Sent: Tuesday, November 12, 2013 1:11 PM
>>> To: Daniel Blaukopf
>>> Cc: OpenJFX
>>> Subject: Re: discussion about touch events
>>> 
>>> (Now my answer using external link)
>>> 
>>> Hello Daniel,
>>> this is quite similar to my idea described earlier. The major difference is 
>>> the "fair division of capture zones" among siblings. It's an interesting 
>>> idea, let's explore it. What pops first is that children can also overlap. 
>>> So I think it would behave like this (green capture zones
>>> omitted):
>>> 
>>> Child in parent vs. Child over child: http://i.imgur.com/e92qEJA.jpg
>>> 
>>> ..wouldn't it? From user's point of view this seems confusing, both cases 
>>> look the same but behave differently. Note that in the case on the right, 
>>> the parent may be still the same, developer only adds a fancy background as 
>>> a new child and suddenly the red child can't be hit that easily. What do 
>>> you think? Is it an issue? Or would it not behave this way?
>>> 
>>> Regards,
>>> Pavel
>>> 
>>>> On 12.11.2013 12:06, Daniel Blaukopf wrote:
>>>> (My original message didn't get through to openjfx-dev because I used
>>>> inline images. I've replaced those images with external links)
>>>> 
>>>> On Nov 11, 2013, at 11:30 PM, Pavel Safrata >>> <mailto:pavel.safr...@oracle.com>> wrote:
>>>> 
>>>>>> On 11.11.2013 17:49, Tomas Mikula wrote:
>>>>>> On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler
>>>>>> mailto:phdoerf...@gmail.com>> wrote:
>>>>>>> I see the need to be aware of the area that is covered by fingers
>>>>>>> rather than just considering that area's center point.
>>>>>>> I'd guess that this adds a new layer of complexity, though. For
>>>>>>> instance:
>>>>>>> Say we have a button on some background and both the background and
>>>>>>> the button do have an onClick listener attached. If you tap the
>>>>>>> button in a way that the touched area's center point is outside of
>>>>>>> the buttons boundaries - what event will be fired? Will both the
>>>>>>> background and the button receive a click event? Or just either the
>>>>>>> background or the button exclusively? Will there be a new event
>>>>>>> type which gets fired in case of such area-based taps?
>>>>>>> 
>>>>>>> My suggestion would therefore be to have an additional area tap
>>>>>>> event which gives precise information about diameter and center of
>>>>>>> the tap. Besides that there should be some kind of "priority" for
>>>>>>> choosing which node's onClick will be called.
>>>>>> What about picking the one that is closest to the center of the touch?
>>>>> 
>>>>> There is always something directly on the center of the touch
>>>>> (possibly the scene background, but it can have event handlers too).
>>>>> That's what we pick right now.
>>>>> Pavel
>>>> 
>>>> What Seeon, Assaf and I discussed earlier was building some fuzziness
>>>> into the node picker so that instead of each node capturing only
>>>> events directly on top of it:
>>>> 
>>>> Non-fuzzy picker: http://i.imgur.com/uszql8V.png

Re: discussion about touch events

2013-11-14 Thread Anthony Petrov
such buttons or other classic 
controls.

Regards,
Seeon





-----Original Message-
From: Pavel Safrata
Sent: Tuesday, November 12, 2013 1:11 PM
To: Daniel Blaukopf
Cc: OpenJFX
Subject: Re: discussion about touch events

(Now my answer using external link)

Hello Daniel,
this is quite similar to my idea described earlier. The major difference is the 
"fair division of capture zones" among siblings. It's an interesting idea, 
let's explore it. What pops first is that children can also overlap. So I think it would 
behave like this (green capture zones
omitted):

Child in parent vs. Child over child: http://i.imgur.com/e92qEJA.jpg

..wouldn't it? From user's point of view this seems confusing, both cases look 
the same but behave differently. Note that in the case on the right, the parent 
may be still the same, developer only adds a fancy background as a new child 
and suddenly the red child can't be hit that easily. What do you think? Is it 
an issue? Or would it not behave this way?

Regards,
Pavel


On 12.11.2013 12:06, Daniel Blaukopf wrote:
(My original message didn't get through to openjfx-dev because I used
inline images. I've replaced those images with external links)

On Nov 11, 2013, at 11:30 PM, Pavel Safrata mailto:pavel.safr...@oracle.com>> wrote:


On 11.11.2013 17:49, Tomas Mikula wrote:
On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler
mailto:phdoerf...@gmail.com>> wrote:

I see the need to be aware of the area that is covered by fingers
rather than just considering that area's center point.
I'd guess that this adds a new layer of complexity, though. For
instance:
Say we have a button on some background and both the background and
the button do have an onClick listener attached. If you tap the
button in a way that the touched area's center point is outside of
the buttons boundaries - what event will be fired? Will both the
background and the button receive a click event? Or just either the
background or the button exclusively? Will there be a new event
type which gets fired in case of such area-based taps?

My suggestion would therefore be to have an additional area tap
event which gives precise information about diameter and center of
the tap. Besides that there should be some kind of "priority" for
choosing which node's onClick will be called.

What about picking the one that is closest to the center of the touch?


There is always something directly on the center of the touch
(possibly the scene background, but it can have event handlers too).
That's what we pick right now.
Pavel


What Seeon, Assaf and I discussed earlier was building some fuzziness
into the node picker so that instead of each node capturing only
events directly on top of it:

Non-fuzzy picker: http://i.imgur.com/uszql8V.png

..nodes at each level of the hierarchy would capture events beyond
their borders as well:

Fuzzy picker: http://i.imgur.com/ELWamYp.png

In the above, "Parent" would capture touch events within a certain
radius around it, as would its children "Child 1" and "Child 2". Since
"Child 1" and "Child 2" are peers, they would have a sharp division
between them, a watershed on either side of which events would go to
one child node or the other. This would also apply if the peer nodes
were further apart; they would divide the no-man's land between them.
Of course this no-man's land would be part of "Parent" and could
be touch-sensitive - but we won't consider "Parent" as an event target
until we have ruled out using one of its children's extended capture
zones.

The capture radius could either be a styleable property on the nodes,
or could be determined by the X and Y size of a touch point as
reported by the touch screen. We'd still be reporting a touch point,
not a touch area. The touch target would be, as now, a single node.

This would get us more reliable touch capture at leaf nodes of the
node hierarchy at the expense of it being harder to tap the
background. This is likely to be a good trade-off.

Daniel
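To make the watershed idea concrete, here is a minimal, hedged sketch. This is not JavaFX API: `Rect`, `pick` and the 20-pixel `CAPTURE_RADIUS` are invented for illustration. Each sibling captures touches within a fixed radius around its bounds, the no-man's land between siblings goes to the one with the nearest border, and the parent only wins once no child's extended capture zone applies.

```java
// Illustrative sketch of the "fuzzy picker" described above; all names are
// assumptions, not JavaFX classes.
public class FuzzyPickSketch {
    static final double CAPTURE_RADIUS = 20.0; // assumed touch slop, in pixels

    // Minimal stand-in for a node's bounds (not javafx.geometry.Bounds).
    record Rect(String name, double x, double y, double w, double h) {
        // Distance from (px,py) to this rectangle's border; 0 if inside.
        double distance(double px, double py) {
            double dx = Math.max(Math.max(x - px, 0), px - (x + w));
            double dy = Math.max(Math.max(y - py, 0), py - (y + h));
            return Math.hypot(dx, dy);
        }
    }

    /** Pick the child whose border is nearest the touch point, if within the
     *  capture radius; otherwise fall back to the parent. */
    static String pick(Rect parent, Rect[] children, double px, double py) {
        Rect best = null;
        double bestDist = Double.MAX_VALUE;
        for (Rect c : children) {
            double d = c.distance(px, py);
            if (d <= CAPTURE_RADIUS && d < bestDist) { // watershed: nearest border wins
                bestDist = d;
                best = c;
            }
        }
        return best != null ? best.name() : parent.name();
    }

    public static void main(String[] args) {
        Rect parent = new Rect("Parent", 0, 0, 300, 100);
        Rect[] children = {
            new Rect("Child 1", 10, 10, 80, 80),
            new Rect("Child 2", 210, 10, 80, 80)
        };
        // Touch just right of Child 1: inside its capture zone, so it wins.
        System.out.println(pick(parent, children, 95, 50));   // Child 1
        // Touch in the middle of the no-man's land: too far from both children.
        System.out.println(pick(parent, children, 150, 50));  // Parent
    }
}
```

Swapping the fixed radius for the touch point's reported X/Y size would give the second variant mentioned above.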






Tomas


Maybe the draw order / order in the scene graph / z buffer value
might be sufficient to model what would happen in the real,
physical world.
On 11.11.2013 13:05, "Assaf Yavnai" <assaf.yav...@oracle.com> wrote:


The ascii sketch looked fine on my screen before I sent the mail
:( I hope the idea is clear from the text (now in the reply dialog
its also look good)

Assaf

On 11/11/2013 12:51 PM, Assaf Yavnai wrote:

Hi Guys,

I hope that I'm right about this, but it seems that touch events
in glass are translated (and reported) as single-point events
(x & y) without an area, like pointer events.
AFAIK, the controls respond to touch events the same as to mouse
events (using the same pickers), and as a result a button press,
for example, will only be triggered if the x &

Re: discussion about touch events

2013-11-15 Thread Pavel Safrata



Hi Pavel,

Your example of 'child over child' is an interesting case which 
raises some design aspects of the desired picking algorithm:
1. Which node to pick when one node has a 'strict containership' over 
the touch center and the other node only has a fuzzy containership 
(the position falls in the fuzzy area).

2. Accounting for z-order for extended capture zone area.
3. Accounting for parent-child relationship.

Referring to your 'child over child' example:
http://i.imgur.com/e92qEJA.jpg

The conflict would arise where the touch point center position falls in 
the capture zone area of child2 but also clearly falls in the strict 
bounds of child1.
Generally, when two control nodes compete for the same touch event (e.g. 
child1 & child2 in Daniel's diagram), it seems that we would like to 
give priority to "strict containership" over "fuzzy containership".

But in your case it's probably not the desired behavior.

Also note that in the general case there almost always exists some 
container/background node that strictly contains the touch point, but 
it would probably be an ancestor of the child node, so the usual 
parent-child relationship order will give preference to the child.


One way out is to honor the usual z-order for the extended area of 
child2, so when a touch center hits the fuzzy area of child2, then 
child2 would be picked.


But this is not ideal for Daniel's example:
http://i.imgur.com/ELWamYp.png

where the 2 nodes don't strictly overlap, but their capture zones do. 
Preferring one child by z-order (which matches the order of children 
in the parent) is not natural here. And we might better choose the 
node which is "closer" to the touch point.

So to summarize I suggest this rough picking algorithm:
1. Choose all uppermost nodes which are not transparent to mouse 
events and contain the touch point center either strictly or by their 
capture zone.
2. Remove all nodes that are strictly overlapped by another node and 
are below that node by z-order.
3. Out of those left choose the "closest" node (the concept of 
"closest" should employ some calculation which might not be trivial in 
the general case).
4. Once a node has been picked, we follow the usual node chain list 
for event processing.


Care must be taken so we do not break the current model for event 
processing. For example, if a node is picked by its capture zone, it 
means that the position does not fall within the boundaries of the 
node, so existing event handling code that relies on that would break. 
So I think the capture zone feature should be selectively enabled for 
certain types of nodes, such as buttons or other classic controls.


Regards,
Seeon





-Original Message-
From: Pavel Safrata
Sent: Tuesday, November 12, 2013 1:11 PM
To: Daniel Blaukopf
Cc: OpenJFX
Subject: Re: discussion about touch events

(Now my answer using external link)

Hello Daniel,
this is quite similar to my idea described earlier. The major 
difference is the "fair division of capture zones" among siblings. 
It's an interesting idea, let's explore it. What pops up first is that 
children can also overlap. So I think it would behave like this 
(green capture zones omitted):

Child in parent vs. Child over child: http://i.imgur.com/e92qEJA.jpg

..wouldn't it? From the user's point of view this seems confusing: both 
cases look the same but behave differently. Note that in the case on 
the right, the parent may still be the same; the developer only adds a 
fancy background as a new child, and suddenly the red child can't be 
hit that easily. What do you think? Is it an issue? Or would it not 
behave this way?


Regards,
Pavel


Re: discussion about touch events

2013-11-17 Thread Daniel Blaukopf
> This algorithm should be reasonably easy to code and very robust (not 
> suffering from various node-arrangement corner-cases), but I'm still not 
> sure about the performance (depends mostly on the capture zone size - 
> 30-pixel zones may result in calling contains() nearly a thousand times which 
> might kill it). But perhaps (hopefully) it can be perfected. Right now I 
> can't see any other algorithm that would work well and would result in more 
> efficient implementation (the search for overlapping nodes and closest 
> borders etc. is going to be pretty complicated as well, if it's even possible 
> to make it work).
> 
> What do you think? Any better ideas?
> 
> Pavel
> 
> 
> On 13.11.2013 22:09, Daniel Blaukopf wrote:
>> Hi Seeon,
>> 
>> Summarizing our face to face talk today:
>> 
>> I see that the case described by Pavel is indeed a problem and agree with 
>> you that not every node needs to be a participant in the competition for 
>> which node grabs touch input. However I’m not keen on the idea of changing 
>> behavior based on which nodes have listeners on them. CSS seems like the 
>> place to do this (as I think Pavel suggested earlier). In Pavel’s case, 
>> either:
>>  - the upper child node has the CSS tag saying “enable extended capture 
>> zone” and the lower child doesn’t: then the upper child’s capture zone will 
>> extend over the lower child
>>  - or both will have the CSS tag, in which case the upper child’s capture 
>> zone would be competing with the lower child’s capture zone. As in any other 
>> competition between capture zones the nearest node should win. The effect 
>> would be the same as if the regular matching rules were applied on the upper 
>> child. It would also be the same as if only the lower child had an extended 
>> capture zone. However, I’d consider this case to be bad UI programming.
>> 
>> We agreed that “in a competition between capture zones, pick the node whose 
>> border is nearest the touch point” was a reasonable way to resolve things.
>> 
>> Thanks,
>> Daniel
>> 

Re: discussion about touch events

2013-11-18 Thread Assaf Yavnai
containment many times, we can quickly tell the intersection with 
the picking area
- Perhaps also checking every nth pixel would be sufficient

This algorithm should be reasonably easy to code and very robust (not suffering 
from various node-arrangement corner-cases), but I'm still not sure about 
the performance (depends mostly on the capture zone size - 30-pixel zones may 
result in calling contains() nearly a thousand times which might kill it). But 
perhaps (hopefully) it can be perfected. Right now I can't see any other 
algorithm that would work well and would result in more efficient 
implementation (the search for overlapping nodes and closest borders etc. is 
going to be pretty complicated as well, if it's even possible to make it work).

What do you think? Any better ideas?

Pavel
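The contains()-sampling idea above could look roughly like this. It is a sketch under assumptions: `nearestHit` and the grid step are invented, and the `contains` predicate merely stands in for a node's containment test, not the real JavaFX `Node.contains` call.

```java
// Approximate a node's intersection with the circular picking area by calling
// contains() on a grid of sample points every nth pixel around the touch point.
import java.util.function.BiPredicate;

public class CaptureZoneSampling {
    /** Distance from the touch point to the nearest contained sample, or
     *  Double.MAX_VALUE if no sample within the radius hits the node. */
    static double nearestHit(BiPredicate<Double, Double> contains,
                             double cx, double cy, int radius, int step) {
        double best = Double.MAX_VALUE;
        for (int dx = -radius; dx <= radius; dx += step) {
            for (int dy = -radius; dy <= radius; dy += step) {
                if (dx * dx + dy * dy > radius * radius) continue; // circular zone
                if (contains.test(cx + dx, cy + dy)) {
                    best = Math.min(best, Math.hypot(dx, dy));
                }
            }
        }
        // With radius 30 and step 1 this is roughly the thousand-plus
        // contains() calls worried about above; a larger step cuts the count
        // quadratically at the cost of a coarser intersection test.
        return best;
    }

    public static void main(String[] args) {
        // A 50x50 box whose left edge is 12px to the right of the touch point.
        BiPredicate<Double, Double> box =
            (x, y) -> x >= 112 && x <= 162 && y >= 80 && y <= 130;
        double d = nearestHit(box, 100, 100, 30, 5);
        System.out.println(d <= 15); // a sample inside the box was found nearby
    }
}
```

A coarser step trades accuracy of the intersection test for fewer contains() calls, which is the performance knob discussed above.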



Re: discussion about touch events

2013-11-19 Thread Pavel Safrata

Re: discussion about touch events

2013-11-19 Thread Pavel Safrata

Re: discussion about touch events

2013-11-19 Thread Daniel Blaukopf

Re: discussion about touch events

2013-11-20 Thread Anthony Petrov
hild. It would also be the same as if only the lower 
child had an extended capture zone. However, I’d consider this case to be bad 
UI programming.

We agreed that “in a competition between capture zones, pick the node whose 
border is nearest the touch point” was a reasonable way to resolve things.

Thanks,
Daniel

On Nov 13, 2013, at 12:31 PM, Seeon Birger  wrote:


Hi Pavel,

Your example of 'child over child' is an interesting case which raises some 
design aspects of the desired picking algorithm:
1. Which node to pick when one node has a 'strict containership' over the touch 
center and the other node only has a fuzzy containership (the position falls in 
the fuzzy area).
2. Accounting for z-order for extended capture zone area.
3. Accounting for parent-child relationship.

Referring to your 'child over child' example:
http://i.imgur.com/e92qEJA.jpg

The conflict would arise where the touch point center position falls in the capture 
zone area of child2 but also clearly falls in the strict bounds of child1.
Generally, when two control nodes compete for the same touch event (e.g. child1 & child2 in Daniel's 
diagram), it seems that we would like to give priority to "strict containership" over 
"fuzzy containership".
But in your case it's probably not the desired behavior.

Also note that in the general case there almost always exists some 
container/background node that strictly contains the touch point, but it would 
probably be an ancestor of the child node, so the usual parent-child 
relationship order will give preference to the child.

One way out is to honor the usual z-order for the extended area of child2, 
so when a touch center hits the fuzzy area of child2, then child2 would be 
picked.

But this is not ideal for Daniel's example:
http://i.imgur.com/ELWamYp.png

where the 2 nodes don't strictly overlap, but their capture zones do. Preferring one 
child by z-order (which matches the order of children in the parent) is not natural here. 
And we might better choose the node which is "closer" to the touch point.

So to summarize I suggest this rough picking algorithm:
1. Choose all uppermost nodes which are not transparent to mouse events and 
contain the touch point center either strictly or by their capture zone.
2. Remove all nodes that are strictly overlapped by another node and are below 
that node by z-order.
3. Out of those left choose the "closest" node (the concept of "closest" should 
employ some calculation which might not be trivial in the general case).
4. Once a node has been picked, we follow the usual node chain list for event 
processing.

Care must be taken so we do not break the current model for event processing. For 
example, if a node is picked by its capture zone, the position does not fall 
within the boundaries of the node, so existing event-handling code that relies 
on that would break. So I think the capture zone feature should be selectively 
enabled for certain types of nodes, such as buttons or other classic controls.
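The four steps above could be sketched roughly as follows (Python used here for brevity; the Node type and its fields are purely illustrative and not JavaFX API):

```python
from dataclasses import dataclass

@dataclass
class Node:
    # Illustrative node: axis-aligned bounds, z-order and a capture-zone margin.
    x: float
    y: float
    w: float
    h: float
    z: int = 0            # higher z is drawn on top
    capture: float = 0.0  # extra "fuzzy" margin around the strict bounds

    def contains(self, px, py):
        # Strict containership.
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

    def fuzzy_contains(self, px, py):
        # Containership extended by the capture zone.
        return (self.x - self.capture <= px <= self.x + self.w + self.capture
                and self.y - self.capture <= py <= self.y + self.h + self.capture)

    def border_distance(self, px, py):
        # Distance from the point to the node's border (0 when inside).
        dx = max(self.x - px, 0.0, px - (self.x + self.w))
        dy = max(self.y - py, 0.0, py - (self.y + self.h))
        return (dx * dx + dy * dy) ** 0.5

def pick(nodes, px, py):
    # Step 1: nodes containing the touch center strictly or by capture zone.
    cands = [n for n in nodes if n.fuzzy_contains(px, py)]
    # Step 2: drop a candidate when a higher z-order candidate strictly
    # contains the point (z-order decides among strict hits).
    cands = [n for n in cands
             if not any(m.z > n.z and m.contains(px, py) for m in cands)]
    # Step 3: of the remaining nodes, the "closest" one wins; strict hits
    # have distance 0 and therefore beat fuzzy-only hits.
    return min(cands, key=lambda n: n.border_distance(px, py), default=None)
```

Note that this naive sketch inherits the problem mentioned above: a background node that strictly contains the point would beat a sibling hit only through its capture zone, so the parent-child relationship or an explicit opt-in would still be needed on top of it.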

Regards,
Seeon





-Original Message-
From: Pavel Safrata
Sent: Tuesday, November 12, 2013 1:11 PM
To: Daniel Blaukopf
Cc: OpenJFX
Subject: Re: discussion about touch events

(Now my answer using external link)

Hello Daniel,
this is quite similar to my idea described earlier. The major difference is the 
"fair division of capture zones" among siblings. It's an interesting idea, 
let's explore it. What pops first is that children can also overlap. So I think it would 
behave like this (green capture zones
omitted):

Child in parent vs. Child over child: http://i.imgur.com/e92qEJA.jpg

...wouldn't it? From the user's point of view this seems confusing: both cases look 
the same but behave differently. Note that in the case on the right, the parent 
may still be the same; the developer only adds a fancy background as a new child 
and suddenly the red child can't be hit that easily. What do you think? Is it 
an issue? Or would it not behave this way?

Regards,
Pavel

On 12.11.2013 12:06, Daniel Blaukopf wrote:

(My original message didn't get through to openjfx-dev because I used
inline images. I've replaced those images with external links)

On Nov 11, 2013, at 11:30 PM, Pavel Safrata <pavel.safr...@oracle.com> wrote:

On 11.11.2013 17:49, Tomas Mikula wrote:

On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler <phdoerf...@gmail.com> wrote:

I see the need to be aware of the area that is covered by fingers
rather than just considering that area's center point.
I'd guess that this adds a new layer of complexity, though. For
instance:
Say we have a button on some background and both the background and
the button have an onClick listener attached. If you tap the
button in a way that the touched area's center point is outside of
the button's boundaries - what event will be fired? Will both the
background and the button receive a click event? Or just either the
background or the button exclusively? Will there be a new event
type which gets fired in case of such area-based taps?

My suggestion would therefore be to have an additional area tap
event which gives precise information about diameter and center of
the tap. Besides that there should be some kind of "priority" for
choosing which node's onClick will be called.

Re: discussion about touch events

2013-11-20 Thread Assaf Yavnai
On Nov 11, 2013, at 11:30 PM, Pavel Safrata <pavel.safr...@oracle.com> wrote:

On 11.11.2013 17:49, Tomas Mikula wrote:

On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler <phdoerf...@gmail.com> wrote:
I see the need to be aware of the area that is covered by 
fingers

rather than just considering that area's center point.
I'd guess that this adds a new layer of complexity, though. For
instance:
Say we have a button on some background and both the 
background and

the button do have an onClick listener attached. If you tap the
button in a way that the touched area's center point is 
outside of

the buttons boundaries - what event will be fired? Will both the
background and the button receive a click event? Or just 
either the

background or the button exclusively? Will there be a new event
type which gets fired in case of such area-based taps?

My suggestion would therefore be to have an additional area tap
event which gives precise information about diameter and 
center of
the tap. Besides that there should be some kind of "priority" 
for

choosing which node's onClick will be called.
What about picking the one that is closest to the center of 
the touch?



There is always something directly on the center of the touch
(possibly the scene background, but it can have event handlers 
too).

That's what we pick right now.
Pavel

Re: discussion about touch events

2013-11-20 Thread Jim Graham

I'm only occasionally skimming this thread so I hope I don't disrupt 
discussions by throwing in a few observations now and then...

On 11/20/2013 7:30 AM, Assaf Yavnai wrote:

Pavel,

I think this is a very good example of why touch events should be processed 
separately from mouse events.
For example, if you press a button with a touch, it will remain in the "hover" state 
although you have released the finger from the screen. This happens because the "hover" 
state listens to the mouse coordinates, which are invalid for touch.
Touch events don't have the concept of move, only drag.


Some screens actually have a hover mode (the Galaxy Note series can detect both 
finger and stylus hover, but they use a Wacom-like sensor rather than the 
typical touch sensor).  In those cases, one could consider a hover mode to be 
similar to a mouse move event.

Even screens without hover detection have pressure detection, and a light pressure below a 
configurable "touch" threshold could probably be treated as hover as well?

Even with a mouse you have the concept of "the mouse has left the building" - consider if the mouse 
was moved to another screen, for instance. The loss of finger/stylus hover should probably be treated 
similarly: the location of the current "finger/mouse/pointer thing" becomes "nowhere near 
anything" when the touch implement leaves detection range...?

On the subject of detection of proximity of points to shapes, that is actually an area 
that I've tried to come up with good algorithms for the AA shaders and not had much great 
success (I've come up with various approximations, but they often tend to suffer from the 
points raised earlier about very wide and thin rectangles).  The CAG code of the 
java.awt.geom package (cloned into the FX source base) could potentially detect 
intersections between an oval and an arbitrary shape, but I'm not sure of the 
performance.  I know that I once found the AWT team using the Area code to do picking 
(with just rectangle shapes, though) until I explained to them that there are probably 
some more specific algorithms that would be an order of magnitude faster for the common 
case of rectangle-rectilinear math.  Still, they managed to use it just fine for a bit 
without noticing any performance issues.  The code base also has some fairly optimized 
"rectangle intersects arbitrary shape" code that would be much faster than an 
Area.intersect(Shape) operation.  It might be generalizable to "oval 
intersects arbitrary shape" with some work, or perhaps "set of rectangles or polygons 
that approximate an oval intersects arbitrary shape".  I don't have time right now to look 
into that, but I'm throwing the suggestions out in case someone else is inspired...
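The last suggestion, approximating the oval by rectangles and reusing a fast "rectangle intersects arbitrary shape" predicate, might look like this sketch (the predicate is passed in as a callback; inscribed strips avoid false positives at the cost of some false negatives near the oval's edge):

```python
import math

def oval_intersects(cx, cy, rx, ry, rect_intersects, strips=8):
    """Approximate 'oval intersects shape' by slicing the oval into horizontal
    strips and testing each one with rect_intersects(x, y, w, h) -> bool."""
    for i in range(strips):
        # Vertical extent of this strip, relative to the oval's center.
        y0 = -ry + 2.0 * ry * i / strips
        y1 = -ry + 2.0 * ry * (i + 1) / strips
        # Use the strip edge farthest from the center so the rectangle is
        # inscribed in the ellipse (no false positives).
        ym = max(abs(y0), abs(y1))
        half_w = rx * math.sqrt(max(0.0, 1.0 - (ym / ry) ** 2))
        if half_w <= 0.0:
            continue
        if rect_intersects(cx - half_w, cy + y0, 2.0 * half_w, y1 - y0):
            return True
    return False
```

Raising `strips` tightens the approximation; a real implementation would also special-case the trivial bounding-box reject before slicing.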

...jim


Re: discussion about touch events

2013-12-12 Thread Pavel Safrata

Hello all,
I'm sorry for the long delay on my side.

Daniel, the only remaining point of our discussion that I feel deserves  
a comment:




We could reasonably limit the algorithm to dealing with convex shapes.


Can we? What about paths, polygons etc?
I realize that it is possible to describe touch sensitive concave 
shapes, but I am not sure they matter for this. If developers are 
going to go to the trouble of defining a concave shape that they want 
to be touch sensitive within its area but not in all of its bounding 
box, are they really then going to want that area to be extended? I’d 
consider a concave touch shape with an extended capture zone to be 
sufficiently unlikely that we could treat it as convex. Which, I 
realize, is not quite what my proposed algorithm does.


I can imagine, for instance, an application showing a graph - a set of 
vertices connected by edges. The edges are not straight lines, but are 
e.g. QuadCurves. I can touch an edge and drag it to change its shape 
(the control point). Now to be able to touch the curve, I certainly want 
it to have the extended capture zone (it's thin). But if an edge circles 
around the graph, I don't want it to be picked everywhere in between. 
I'm not sure if this is compelling enough, but to me it sounds like a 
reasonable use-case that needs concave extended capture zones.
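For such curve edges, a capture zone that follows the stroke can be obtained by measuring distance to sampled curve points rather than extending any bounding shape. A rough sketch (the sampling density and tolerance are arbitrary choices here, not a proposed API):

```python
def quad_point(p0, p1, p2, t):
    # Evaluate a quadratic Bezier curve (like a QuadCurve) at parameter t.
    mt = 1.0 - t
    return (mt * mt * p0[0] + 2.0 * mt * t * p1[0] + t * t * p2[0],
            mt * mt * p0[1] + 2.0 * mt * t * p1[1] + t * t * p2[1])

def curve_capture_hit(p0, p1, p2, px, py, zone, samples=64):
    """True when (px, py) lies within `zone` of the sampled curve.
    The resulting pick region hugs the stroke, so a curve looping around
    other content is not picked in the empty space it encloses."""
    for i in range(samples + 1):
        x, y = quad_point(p0, p1, p2, i / samples)
        if (x - px) ** 2 + (y - py) ** 2 <= zone * zone:
            return True
    return False
```

The capture region is naturally "concave" because it is a union of small disks along the stroke rather than an inflated bounding shape.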


Pavel


Re: discussion about touch events

2013-12-12 Thread Pavel Safrata

Re: discussion about touch events

2013-12-12 Thread Pavel Safrata
What do you think? Any better ideas?

Pavel


On 13.11.2013 22:09, Daniel Blaukopf wrote:

Hi Seeon,

Summarizing our face to face talk today:

I see that the case described by Pavel is indeed a problem and 
agree with you that not every node needs to be a participant in 
the competition for which node grabs touch input. However I’m not keen 
on the idea of changing behavior based on which nodes have 
listeners on them. CSS seems like the place to do this (as I 
think Pavel suggested earlier). In Pavel’s case, either:
  - the upper child node has the CSS tag saying “enable extended 
capture zone” and the lower child doesn’t: then the upper child’s 
capture zone will extend over the lower child
  - or both will have the CSS tag, in which case the upper 
child’s capture zone would be competing with the lower child’s 
capture zone. As in any other competition between capture zones 
the nearest node should win. The effect would be the same as if 
the regular matching rules were applied on the upper child. It 
would also be the same as if only the lower child had an extended 
capture zone. However, I’d consider this case to be bad UI 
programming.


We agreed that “in a competition between capture zones, pick the 
node whose border is nearest the touch point” was a reasonable 
way to resolve things.
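That resolution rule combined with the CSS opt-in could be sketched like this (the `extended` flag and `zone` value stand in for the hypothetical "enable extended capture zone" CSS tag; none of this is existing JavaFX API):

```python
from dataclasses import dataclass

@dataclass
class Ctl:
    x: float
    y: float
    w: float
    h: float
    extended: bool = False  # stands in for the "enable extended capture zone" CSS tag
    zone: float = 8.0       # capture-zone width when the tag is present

    def border_distance(self, px, py):
        # Distance from the point to the control's border (0 when inside).
        dx = max(self.x - px, 0.0, px - (self.x + self.w))
        dy = max(self.y - py, 0.0, py - (self.y + self.h))
        return (dx * dx + dy * dy) ** 0.5

def resolve(controls, px, py):
    # A control competes if the point is inside it, or if it has opted in
    # via the CSS tag and the point falls in its capture zone.
    cands = [c for c in controls
             if c.border_distance(px, py) == 0.0
             or (c.extended and c.border_distance(px, py) <= c.zone)]
    # "Pick the node whose border is nearest the touch point."
    return min(cands, key=lambda c: c.border_distance(px, py), default=None)
```

A control that has not opted in simply never competes through its capture zone, which matches the idea that fuzzy picking should be enabled selectively for classic controls.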


Thanks,
Daniel

On Nov 13, 2013, at 12:31 PM, Seeon Birger wrote:




Re: discussion about touch events

2013-12-15 Thread Assaf Yavnai
There is always something directly on the center of the touch
(possibly the scene background, but it can have event 
handlers too).

That's what we pick right now.
Pavel

Re: discussion about touch events

2013-12-16 Thread Assaf Yavnai

Re: discussion about touch events

2013-12-18 Thread Anthony Petrov
that is strictly overlapped by another node
and is below that node by z-order.
3. Out of those left choose the "closest" node. (the concept of
"closet" should employ some calculation which might not be
trivial in the general case).
4. Once a node has been picked, we follow the usual node chain
list for event processing.

Care must be taken so we not break the current model for event
processing. For example, if a node is picked by its capture
zone, it means that the position does not fall in the boundaries
of the node, so existing event handling code that relies on that
would break. So I think the capture zone feature should be
selectively enabled for certain type of nodes such buttons or
other classic controls.

Regards,
Seeon





-Original Message-
From: Pavel Safrata
Sent: Tuesday, November 12, 2013 1:11 PM
To: Daniel Blaukopf
Cc: OpenJFX
Subject: Re: discussion about touch events

(Now my answer using external link)

Hello Daniel,
this is quite similar to my idea described earlier. The major
difference is the "fair division of capture zones" among
siblings. It's an interesting idea, let's explore it. What pops
first is that children can also overlap. So I think it would
behave like this (green capture zones
omitted):

Child in parent vs. Child over child:
http://i.imgur.com/e92qEJA.jpg

..wouldn't it? From the user's point of view this seems confusing:
both cases look the same but behave differently. Note that in
the case on the right, the parent may still be the same; the
developer only adds a fancy background as a new child and
suddenly the red child can't be hit that easily. What do you
think? Is it an issue? Or would it not behave this way?

Regards,
Pavel

On 12.11.2013 12:06, Daniel Blaukopf wrote:

(My original message didn't get through to openjfx-dev because
I used
inline images. I've replaced those images with external links)

On Nov 11, 2013, at 11:30 PM, Pavel Safrata
<pavel.safr...@oracle.com> wrote:


On 11.11.2013 17:49, Tomas Mikula wrote:

On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler
<phdoerf...@gmail.com> wrote:

I see the need to be aware of the area that is covered by
fingers
rather than just considering that area's center point.
I'd guess that this adds a new layer of complexity, though. For
instance:
Say we have a button on some background and both the background
and the button have an onClick listener attached. If you tap the
button in a way that the touched area's center point is outside
of the button's boundaries - what event will be fired? Will both
the background and the button receive a click event? Or just
either the background or the button exclusively? Will there be a
new event type which gets fired in case of such area-based taps?

My suggestion would therefore be to have an additional area tap
event which gives precise information about diameter and
center of
the tap. Besides that there should be some kind of
"priority" for
choosing which node's onClick will be called.

What about picking the one that is closest to the center of
the touch?



There is always something directly on the center of the touch
(possibly the scene background, but it can have event handlers
too).
That's what we pick right now.
Pavel


What Seeon, Assaf and I discussed earlier was building some
fuzziness
into the node picker so that instead of each node capturing only
events directly on top of it:

Non-fuzzy picker: http://i.imgur.com/uszql8V.png

..nodes at each level of the hierarchy would capture events beyond
their borders as well:

Fuzzy picker: http://i.imgur.com/ELWamYp.png

In the above, "Parent" would capture touch events within a certain
radius around it, as would its children "Child 1" and "Child
2". Since
"Child 1" and "Child 2" are peers, they would have a sharp
division
between them, a watershed on either side of which events would
go to
one child node or the other. This would also apply if the peer
nodes
were further apart; they would divide the no-man's land between
them.
Of course this no-man's land would be part of "Parent" and could
be touch-sensitive - but we won't consider "Parent" as an event
be touch-sensitive - but we won't consider "Parent" as an event
target
until we have ruled out using one of its children's extended
capture
zones.

The capture radius could either be a styleable property on the
nodes,
or could be determined by the X and Y size of a touch point as
reported by the touch screen. We'd still be reporting a touch
point,
not a touch area. The touch target would be, as now, a single
node.

This would get us more reliable touch capture at leaf nodes of the
node hierarchy at the expense of it being harder to tap the
background. This is likely to be a good trade-off.
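A rough sketch of this scheme, assuming the capture radius is a plain per-node property (these classes are illustrative, not the real JavaFX API): children's extended capture zones are tried first, the watershed between peers is resolved by picking the nearest child, and the parent is only considered once no child captures the touch:

```java
import java.util.List;

public class FuzzyPicker {
    static class Node {
        final String name;
        final double x, y, w, h;      // axis-aligned bounds
        final double captureRadius;   // how far beyond its bounds it captures
        final List<Node> children;
        Node(String name, double x, double y, double w, double h,
             double captureRadius, List<Node> children) {
            this.name = name; this.x = x; this.y = y; this.w = w; this.h = h;
            this.captureRadius = captureRadius; this.children = children;
        }
        // Distance from the point to the node's bounds (0 when inside).
        double dist(double px, double py) {
            double dx = Math.max(Math.max(x - px, px - (x + w)), 0);
            double dy = Math.max(Math.max(y - py, py - (y + h)), 0);
            return Math.hypot(dx, dy);
        }
    }

    // Children first: among peers that capture the touch, the nearest one
    // wins (the "watershed"). Only when every child's extended zone has been
    // ruled out do we fall back to the parent itself.
    static Node pick(Node root, double px, double py) {
        Node best = null;
        double bestDist = Double.POSITIVE_INFINITY;
        for (Node child : root.children) {
            Node hit = pick(child, px, py);   // recurse: deepest nodes first
            double d = child.dist(px, py);
            if (hit != null && d < bestDist) { best = hit; bestDist = d; }
        }
        if (best != null) return best;
        double d = root.dist(px, py);
        return d <= root.captureRadius ? root : null;
    }

    public static void main(String[] args) {
        Node child1 = new Node("child1", 20, 40, 30, 20, 15, List.of());
        Node child2 = new Node("child2", 70, 40, 30, 20, 15, List.of());
        Node parent = new Node("parent", 0, 0, 200, 100, 0,
                               List.of(child1, child2));
        // Touch in the no-man's land between the peers: the nearer one wins.
        System.out.println(pick(parent, 56, 50).name);  // prints "child1"
        // Touch outside both capture zones: the parent takes it.
        System.out.println(pick(parent, 150, 90).name); // prints "parent"
    }
}
```

With the alternative mentioned above, `captureRadius` would instead be derived per event from the X and Y size of the reported touch point rather than stored on the node.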

Daniel






Tomas


Maybe the draw order / order in the scene graph / z buffer
value
might be sufficient to model what would happen in t