Re: mouse vs. touch events on touch systems like iOS/Android/Dukepad

2013-10-23 Thread Assaf Yavnai

Matthias,

Please look inline
On 10/22/2013 07:33 PM, Richard Bair wrote:

Hi Matthias, I think Assaf, one of the embedded engineers, is now on the 
mailing list and can help answer these questions.

Thanks
Richard

On Oct 21, 2013, at 1:58 AM, Matthias Hänel  wrote:


Hi,


I believe my conceptual question on touch/mouse events has been missed because 
of the other questions
in the "JAVAFX on ANDROID" thread. That's why I would like to start a new 
discussion about touch events.


1. The main question is: how are touch and internal mouse events handled? JavaFX 
controls seem to rely on mouse events, so I assume there must be some kind of 
emulation layer. Are these emulated in Prism, Glass (Java Glass) or even lower? 
Where are the mouse events supposed to be emulated?

What I've seen so far is that the iOS-native Glass does the mouse emulation by 
itself in GlassViewDelegate.m: touch events and mouse events are both sent from 
the lowest layer. On Android, only touch events are passed to the Lens 
implementation. The udev implementation, which I assume is the one used for the 
DukePad, also passes only touch events. udev and Android are Lens 
implementations, so they use the same Java classes, which do a kind of mouse 
emulation for touch events. But it's not exactly the same as what the iOS code 
does.

iOS:
sends Touch, Mouse-Enter and Mouse-Down

Lens (Android/Dukepad):
sends Mouse-Enter and Touch


The major differences in calling order and the missing mouse-down lead me to 
the assumption that the events are actually missing.
Basically, the Glass port is responsible for simulating mouse events from touch 
events. On Windows, for example, it's done automatically by the OS; in other 
implementations it's emulated in the native layer. In Lens it is currently done 
in several places and layers (the input driver, the window manager and the Java 
Glass code).
There are currently several issues with the touch support in the Lens 
implementation that affect all the ports using it.
You can track the work through the master bug RT-32802.

I'm currently working on solving those issues, and one of the fixes is to make 
the mouse simulation more unified.
Please feel free to open more bugs for specific scenarios and link them to the 
master bug, so we can track them and solve them.
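
To make "mouse simulation more unified" concrete, here is a hypothetical sketch 
of the kind of shared translation step that could live in the Java Glass/Lens 
layer, so that every port synthesizes the same mouse sequence for a touch press 
and release. The names (MouseSimulationSketch, MouseSink, postMouseEvent and the 
event constants) are illustrative only, not the actual Glass code.

    // Hypothetical, unified touch-to-mouse translation step (illustration only).
    final class MouseSimulationSketch {

        interface MouseSink {
            void postMouseEvent(int type, int x, int y); // ENTER, DOWN, MOVE, UP, EXIT
        }

        static final int ENTER = 0, DOWN = 1, MOVE = 2, UP = 3, EXIT = 4;

        private boolean pressed;

        /** Called for every raw touch report coming from the input driver. */
        void onTouch(boolean fingerDown, int x, int y, MouseSink sink) {
            if (fingerDown && !pressed) {
                // First contact: emulate the pointer arriving and the button going
                // down, in the same order on every port (compare the iOS vs. Lens
                // lists above).
                sink.postMouseEvent(ENTER, x, y);
                sink.postMouseEvent(DOWN, x, y);
                pressed = true;
            } else if (fingerDown) {
                sink.postMouseEvent(MOVE, x, y); // finger drag -> mouse drag/move
            } else if (pressed) {
                sink.postMouseEvent(UP, x, y);
                sink.postMouseEvent(EXIT, x, y); // no hover once the finger is lifted
                pressed = false;
            }
        }
    }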




2. Is that mouse emulation supposed to be eliminated due to the latest 
lensWindow changes?
  I believe that must be handled in higher layers, not in the input layer itself.
What do you mean by 'latest lensWindow changes'? Please clarify the question.



3. What is the input layer for the Dukepad? I think it's the udev 
implementation, and this does pretty much the same as the current Android 
implementation. I just want to have a "stable" reference to look at ;)
The input driver, as we call it, depends on the system you are running, not on 
the application you are running. From the application's perspective it's all 
the same. So on embedded devices, like the Raspberry Pi, it will be udevInput; 
on Windows it will be Windows' Glass implementation (not Lens based); on 
Android it will be the Android port + Lens; and so on.



4. Has anyone with a Dukepad had the opportunity to test the ListView example? 
For me on Android, it doesn't scroll at all with any touches.
With the automatic scrolling (from Richard's sources) I get around 30fps on the 
Samsung Galaxy Tab 10.1.

I didn't.
Again, if you find a problem or a bug, please open a JIRA against it so we will 
be able to track and fix it.


Thanks,
Assaf




regards
Matthias





Did anyone encounter this?

2013-11-06 Thread Assaf Yavnai

:apps:experiments:3DViewer:compileJava FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':apps:experiments:3DViewer:compileJava'.
> java.lang.ClassFormatError: Invalid Constant Pool entry Type 18

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or 
--debug option to get more log output.


BUILD FAILED

Total time: 39.292 secs
assaf@assaf-Latitude-E6410:~/ws/udev-refactoring/rt$ java -version
java version "1.8.0-ea"

Java(TM) SE Runtime Environment (build 1.8.0-ea-b113)
Java HotSpot(TM) Server VM (build 25.0-b55, mixed mode)

Thanks,
Assaf


discussion about touch events

2013-11-11 Thread Assaf Yavnai

Hi Guys,

I hope that I'm right about this, but it seems that touch events in glass are 
translated (and reported) as single-point events (x & y) without an area, like 
pointer events.
AFAIK, the controls respond to touch events the same way as to mouse events 
(using the same pickers), and as a result a button press, for example, will 
only be triggered if the x & y of the touch event is within the control area.
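
To make the 'strict' check concrete, this is essentially what point picking 
boils down to today: the single reported (x, y) must land inside the node, or 
the node is not picked at all. This is an illustrative check using the public 
Node API, not the actual picking code.

    import javafx.geometry.Point2D;
    import javafx.scene.Node;

    final class StrictPickSketch {
        /** Illustration only: a node is hit only if the exact point is inside it. */
        static boolean strictPick(Node node, double sceneX, double sceneY) {
            Point2D local = node.sceneToLocal(sceneX, sceneY);
            return node.contains(local); // a few pixels outside and the tap is lost
        }
    }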


This means that small controls, or even quite large controls (like buttons 
with text), will often get missed because of the 'strict' node picking. From a 
UX point of view this is strange, as the user clearly pressed on a node (the 
finger was clearly above it) but nothing happens...


With the current implementation it's hard to use small features in controls, 
like scrollbars in lists, and it is almost impossible to implement something 
like a 'screen navigator' (the series of small dots at the bottom of a 
smartphone screen which allows you to jump directly to a 'far away' screen).


To illustrate it, consider the low-resolution sketch below, where the "+" is 
the actual x,y reported, the ellipse is the finger touch area and the rectangle 
is the node.
With the current implementation this type of tap will not trigger the node 
handlers:


            ___
           /   \
          /  +  \       <- finger touch area; '+' = reported x,y
       __/_______\__
      |  \       /  |   <- the node: in this scenario the 'button' will not get pressed
      |___\_____/___|
           \___/

If your smartphone supports it, turn on the touch debugging options in the 
settings and see that each point translates to a quite large circle, and 
whatever falls in it, or reasonably close to it, gets picked.


I want to start a discussion to understand whether my perspective is accurate 
and what can be done about it, if anything, for the coming release or the next 
one.


We might use the recently opened RT-34136 for logging this, or open a new JIRA 
for it.


Thanks,
Assaf


Re: discussion about touch events

2013-11-11 Thread Assaf Yavnai
The ASCII sketch looked fine on my screen before I sent the mail :( I hope the 
idea is clear from the text.

(now in the reply dialog it also looks good)

Assaf
On 11/11/2013 12:51 PM, Assaf Yavnai wrote:

Hi Guys,

I hope that I'm right about this, but it seems that touch events in glass are 
translated (and reported) as single-point events (x & y) without an area, like 
pointer events.
AFAIK, the controls respond to touch events the same way as to mouse events 
(using the same pickers), and as a result a button press, for example, will 
only be triggered if the x & y of the touch event is within the control area.


This means that small controls, or even quite large controls (like buttons 
with text), will often get missed because of the 'strict' node picking. From a 
UX point of view this is strange, as the user clearly pressed on a node (the 
finger was clearly above it) but nothing happens...


With the current implementation it's hard to use small features in controls, 
like scrollbars in lists, and it is almost impossible to implement something 
like a 'screen navigator' (the series of small dots at the bottom of a 
smartphone screen which allows you to jump directly to a 'far away' screen).


To illustrate it, consider the low-resolution sketch below, where the "+" is 
the actual x,y reported, the ellipse is the finger touch area and the rectangle 
is the node.
With the current implementation this type of tap will not trigger the node 
handlers:


            ___
           /   \
          /  +  \       <- finger touch area; '+' = reported x,y
       __/_______\__
      |  \       /  |   <- the node: in this scenario the 'button' will not get pressed
      |___\_____/___|
           \___/

If your smartphone supports it, turn on the touch debugging options in the 
settings and see that each point translates to a quite large circle, and 
whatever falls in it, or reasonably close to it, gets picked.


I want to start a discussion to understand whether my perspective is accurate 
and what can be done about it, if anything, for the coming release or the next 
one.


We might use the recently opened RT-34136 
<https://javafx-jira.kenai.com/browse/RT-34136> for logging this, or open a new 
JIRA for it.


Thanks,
Assaf




review request - RT-34191 Lens: [touch] wrong logic for drag starting outside a window

2013-11-12 Thread Assaf Yavnai
JIRA - RT-34191 (webrev 
attached)


Summary:
Two problems were identified:
1) the logic issue described above
2) native mouse drag detection was relying on an obsolete variable

Fixing item 2 also solved RT-34137 



Unit tests were also updated to reflect the change of logic

Attaching fix patches

Fix was tested using helloworld.HelloSanity, BrickBreaker, 
FXML-LoginDemo, Calculator and LinuxInputTest


Re: discussion about touch events

2013-11-12 Thread Assaf Yavnai
” and “Child 2” are peers, they would have a sharp 
division between them, a watershed on either side of which events 
would go to one child node or the other. This would also apply if the 
peer nodes were further apart; they would divide the no-man’s land 
between them. Of course this no-man’s land would be part of “Parent” 
and could be touch-sensitive - but we won’t consider “Parent” 
as an event target until we have ruled out using one of its 
children’s extended capture zones.


The capture radius could either be a styleable property on the nodes, 
or could be determined by the X and Y size of a touch point as 
reported by the touch screen. We’d still be reporting a touch point, 
not a touch area. The touch target would be, as now, a single node.


This would get us more reliable touch capture at leaf nodes of the 
node hierarchy at the expense of it being harder to tap the 
background. This is likely to be a good trade-off.


Daniel






Tomas


Maybe the draw order / order in the scene graph / z buffer value might be 
sufficient to model what would happen in the real, physical world.
On 11.11.2013 13:05, "Assaf Yavnai" <assaf.yav...@oracle.com> wrote:


The ASCII sketch looked fine on my screen before I sent the mail :( I hope the 
idea is clear from the text.
(now in the reply dialog it also looks good)

Assaf
On 11/11/2013 12:51 PM, Assaf Yavnai wrote:


Hi Guys,

I hope that I'm right about this, but it seems that touch events in glass are 
translated (and reported) as single-point events (x & y) without an area, like 
pointer events.
AFAIK, the controls respond to touch events the same way as to mouse events 
(using the same pickers), and as a result a button press, for example, will 
only be triggered if the x & y of the touch event is within the control area.


This means that small controls, or even quite large controls (like buttons 
with text), will often get missed because of the 'strict' node picking. From a 
UX point of view this is strange, as the user clearly pressed on a node (the 
finger was clearly above it) but nothing happens...

With the current implementation it's hard to use small features in controls, 
like scrollbars in lists, and it is almost impossible to implement something 
like a 'screen navigator' (the series of small dots at the bottom of a 
smartphone screen which allows you to jump directly to a 'far away' screen).


To illustrate it, consider the low-resolution sketch below, where the "+" is 
the actual x,y reported, the ellipse is the finger touch area and the rectangle 
is the node.
With the current implementation this type of tap will not trigger the node 
handlers:

            ___
           /   \
          /  +  \       <- finger touch area; '+' = reported x,y
       __/_______\__
      |  \       /  |   <- the node: in this scenario the 'button' will not get pressed
      |___\_____/___|
           \___/

If your smartphone supports it, turn on the touch debugging options in the 
settings and see that each point translates to a quite large circle, and 
whatever falls in it, or reasonably close to it, gets picked.

I want to start a discussion to understand whether my perspective is accurate 
and what can be done about it, if anything, for the coming release or the next 
one.

We might use the recently opened RT-34136 
<https://javafx-jira.kenai.com/browse/RT-34136> for logging this, or open a new 
JIRA for it.

Thanks,
Assaf








review request - [RT-34192] Lens:[TOUCH] HelloSanity:controls:pane - when enabled, cursor doesn't always change shape from move to pointer

2013-11-17 Thread Assaf Yavnai

Dave,

Please take a look at RT-34192 


Webrev attached

Thanks,
Assaf


Re: discussion about touch events

2013-11-18 Thread Assaf Yavnai
13 1:11 PM
To: Daniel Blaukopf
Cc: OpenJFX
Subject: Re: discussion about touch events

(Now my answer, using an external link)

Hello Daniel,
this is quite similar to my idea described earlier. The major difference is the 
"fair division of capture zones" among siblings. It's an interesting idea, 
let's explore it. What pops up first is that children can also overlap. So I 
think it would behave like this (green capture zones omitted):

Child in parent vs. Child over child: http://i.imgur.com/e92qEJA.jpg

..wouldn't it? From the user's point of view this seems confusing: both cases 
look the same but behave differently. Note that in the case on the right, the 
parent may still be the same; the developer only adds a fancy background as a 
new child and suddenly the red child can't be hit that easily. What do you 
think? Is it an issue? Or would it not behave this way?

Regards,
Pavel

On 12.11.2013 12:06, Daniel Blaukopf wrote:

(My original message didn't get through to openjfx-dev because I used
inline images. I've replaced those images with external links)

On Nov 11, 2013, at 11:30 PM, Pavel Safrata <pavel.safr...@oracle.com> wrote:


On 11.11.2013 17:49, Tomas Mikula wrote:

On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler <phdoerf...@gmail.com> wrote:

I see the need to be aware of the area that is covered by fingers
rather than just considering that area's center point.
I'd guess that this adds a new layer of complexity, though. For
instance:
Say we have a button on some background and both the background and the button 
have an onClick listener attached. If you tap the button in a way that the 
touched area's center point is outside of the button's boundaries - what event 
will be fired? Will both the background and the button receive a click event? 
Or just either the background or the button exclusively? Will there be a new 
event type which gets fired in case of such area-based taps?

My suggestion would therefore be to have an additional area tap
event which gives precise information about diameter and center of
the tap. Besides that there should be some kind of "priority" for
choosing which node's onClick will be called.

What about picking the one that is closest to the center of the touch?


There is always something directly on the center of the touch
(possibly the scene background, but it can have event handlers too).
That's what we pick right now.
Pavel

What Seeon, Assaf and I discussed earlier was building some fuzziness
into the node picker so that instead of each node capturing only
events directly on top of it:

Non-fuzzy picker: http://i.imgur.com/uszql8V.png

..nodes at each level of the hierarchy would capture events beyond
their borders as well:

Fuzzy picker: http://i.imgur.com/ELWamYp.png

In the above, "Parent" would capture touch events within a certain
radius around it, as would its children "Child 1" and "Child 2". Since
"Child 1" and "Child 2" are peers, they would have a sharp division
between them, a watershed on either side of which events would go to
one child node or the other. This would also apply if the peer nodes
were further apart; they would divide the no-man's land between them.
Of course this no-man's land would be part of "Parent" and could
be touch-sensitive - but we won't consider "Parent" as an event target
until we have ruled out using one of its children's extended capture
zones.

The capture radius could either be a styleable property on the nodes,
or could be determined by the X and Y size of a touch point as
reported by the touch screen. We'd still be reporting a touch point,
not a touch area. The touch target would be, as now, a single node.

This would get us more reliable touch capture at leaf nodes of the
node hierarchy at the expense of it being harder to tap the
background. This is likely to be a good trade-off.

Daniel




Tomas


Maybe the draw order / order in the scene graph / z buffer value
might be sufficient to model what would happen in the real,
physical world.
On 11.11.2013 13:05, "Assaf Yavnai" <assaf.yav...@oracle.com> wrote:


The ASCII sketch looked fine on my screen before I sent the mail :( I hope the 
idea is clear from the text (now in the reply dialog it also looks good)

Assaf
On 11/11/2013 12:51 PM, Assaf Yavnai wrote:


Hi Guys,

I hope that I'm right about this, but it seems that touch events in glass are 
translated (and reported) as single-point events (x & y) without an area, like 
pointer events.
AFAIK, the controls respond to touch events the same way as to mouse events 
(using the same pickers), and as a result a button press, for example, will 
only be triggered if the x & y of the touch event is within the control area.

This means that small controls, or even quite large controls
(like buttons with text) will of

Re: discussion about touch events

2013-11-20 Thread Assaf Yavnai

Pavel,

I think that this is a very good example of why touch events should be 
processed separately from mouse events.
For example, if you press a button with a touch, it will remain in the "hover" 
state even though you have released the finger from the screen. This happens 
because the "hover" state listens to the mouse coordinates, which are invalid 
for touch.

Touch events don't have the concept of move, only drag.

As I mentioned before, from my point of view the goal of this thread is to 
understand and map the differences and expected behavior between touch events 
and mouse events, and to have separate behavior depending on the type of event 
(at the Node, Control and application levels). One of them is the picking 
mechanism; the hover state is another. I think the discussion of how to 
implement it is currently lower priority, as it is only a technical detail.


Also, I think that synthesized mouse events should only be used to make 
applications that don't listen to touch events usable (in most cases), but we 
can't expect them to work 1:1 (touch using a finger is a very different device 
than a mouse or stylus). Touch applications are supposed to be 'tailor made' 
for touch; this includes a different UI layout, different UX and of course 
different logic (because the events and their meaning are different than for 
the mouse). Currently there is no reason for an application to listen to touch 
events unless you need to support multi-touch, as the events are duplicated.


As the mouse events are synthesized in the native layer, I think it's 
important to know the origin of the event. Maybe handlers should check whether 
the event is synthetic and treat it as a touch event in that case. We could 
also double-check by testing whether a touch device is currently connected, for 
example.
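
A small illustration of that check using existing public API: 
MouseEvent.isSynthesized() is true when the mouse event was synthesized from a 
touch screen, and ConditionalFeature.INPUT_MULTITOUCH is used below as an 
approximation of "a touch device is present". Treating such events as 
touch-like is the idea under discussion here, not something JavaFX does for you.

    import javafx.application.ConditionalFeature;
    import javafx.application.Platform;
    import javafx.scene.control.Button;

    final class SyntheticMouseCheck {

        static void install(Button button) {
            button.setOnMouseClicked(event -> {
                // Primary signal: was this mouse event synthesized from touch?
                boolean fromTouch = event.isSynthesized();
                // Supplementary signal: does the platform report touch input at all?
                boolean touchCapable =
                        Platform.isSupported(ConditionalFeature.INPUT_MULTITOUCH);
                if (fromTouch && touchCapable) {
                    System.out.println("treat as a touch tap (e.g. skip hover-only logic)");
                } else {
                    System.out.println("treat as a genuine mouse click");
                }
            });
        }
    }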


Assaf


On 11/19/2013 04:34 PM, Pavel Safrata wrote:

Hello Assaf,
there is more to it than just listeners. For instance, every node has 
its "hover" and "pressed" states that are maintained based on the 
picking results and used by CSS. So I believe we can't ignore anything 
during picking.


By the way, I suppose this fuzzy picking should happen also for 
synthesized mouse events so that all the classic apps and standard 
controls can benefit. If I'm right, we definitely can't restrict it to 
touch-listening nodes.


Pavel

On 18.11.2013 14:10, Assaf Yavnai wrote:

I have a question,

Would it be possible to search only the nodes that have registered for touch 
event notifications instead of the entire tree (if that's not already done)?
Of course, if one chooses to register the root node as the listener, we will 
have to go over all the nodes, but that seems like bad practice and I think 
it's OK to take a performance hit in that case.


Assaf
On 11/17/2013 04:09 PM, Daniel Blaukopf wrote:

Hi Pavel,

I think we do use CSS to configure feel as well as look - and this is feel, 
not look - but I also don’t feel strongly about whether this needs to be in 
CSS.


I like your idea of simply picking the closest touch sensitive node 
that is within range. That puts the burden on the touch event to 
describe what region it covers. On the touch screens we are 
currently looking at, a region would be defined as an oval - a 
combination of centre point, X diameter and Y diameter. However the 
touch region can be any shape, so might need to be represented as a 
Path.
Iterating over pixels just isn’t going to work though. If we have a 
300dpi display the touch region could be 150 pixels across and have 
an area of nearly 18000 pixels. Instead we’d want a way to ask a 
parent node, “In your node hierarchy, which of your nodes’ borders 
is closest to this region”. So we’d need to come up with an 
efficient algorithm to answer this question. We’d only ask this 
question for nodes with extended capture zone.


We could reasonably limit the algorithm to dealing with convex 
shapes. Then we can consider an imaginary line L from the node 
center point to the touch center point. The intersection of L with 
the node perimeter is the closest point of contact. If this point is 
also within the touch area then we have a potential match. We 
iterate over all nearby nodes with extended capture zone in order to 
find the best match.


This will then be O(n) in both time and space for n nearby nodes, 
given constant time to find the intersection of L with the node 
perimeter. This assumption will be true for rectangular, oval and 
rounded rectangle nodes.
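
To make this concrete, here is a minimal sketch of the scheme under the stated 
assumptions (axis-aligned rectangular node bounds and an elliptical touch 
region given by a centre and X/Y radii). It reuses the public javafx.geometry 
types for convenience; it is an illustration of the idea, not the actual 
picking code.

    import java.util.List;

    import javafx.geometry.Bounds;
    import javafx.geometry.Point2D;

    final class FuzzyPickSketch {

        /** True if (x, y) lies inside the elliptical touch region. */
        static boolean inTouchArea(Point2D touchCenter, double rx, double ry, Point2D p) {
            double dx = (p.getX() - touchCenter.getX()) / rx;
            double dy = (p.getY() - touchCenter.getY()) / ry;
            return dx * dx + dy * dy <= 1.0;
        }

        /**
         * Intersection of the line L (node center -> touch center) with the node's
         * rectangular perimeter: the "closest point of contact" described above.
         */
        static Point2D contactPoint(Bounds node, Point2D touchCenter) {
            double cx = (node.getMinX() + node.getMaxX()) / 2;
            double cy = (node.getMinY() + node.getMaxY()) / 2;
            double dx = touchCenter.getX() - cx;
            double dy = touchCenter.getY() - cy;
            if (dx == 0 && dy == 0) {
                return touchCenter; // touch is centered exactly on the node
            }
            double halfW = node.getWidth() / 2;
            double halfH = node.getHeight() / 2;
            // Scale the direction vector so it just reaches the rectangle edge.
            double scale = 1.0 / Math.max(Math.abs(dx) / halfW, Math.abs(dy) / halfH);
            return new Point2D(cx + dx * scale, cy + dy * scale);
        }

        /** O(n) scan: pick the nearby node whose contact point is nearest the touch center. */
        static Bounds pick(List<Bounds> nearbyNodes, Point2D touchCenter, double rx, double ry) {
            Bounds best = null;
            double bestDistance = Double.MAX_VALUE;
            for (Bounds node : nearbyNodes) {
                Point2D contact = contactPoint(node, touchCenter);
                if (!inTouchArea(touchCenter, rx, ry, contact)) {
                    continue; // this node's edge is not under the finger at all
                }
                double distance = contact.distance(touchCenter);
                if (distance < bestDistance) {
                    bestDistance = distance;
                    best = node;
                }
            }
            return best;
        }
    }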


Thanks,
Daniel


On Nov 15, 2013, at 11:09 PM, Pavel Safrata wrote:



Hello,
let me start with a few comments.

"changing behavior based on which nodes have listeners on them" - 
absolutely not. We have capturing, bubbling, hierarchical event 
types, so we can't decide which nodes listen (in the extreme case, 
scene can handle Event.ANY and perform actions on the target 
node based on the event type).


"

review request - RT-34481 [Lens] [touch] Saving of pending points may result in a bad state

2013-11-25 Thread Assaf Yavnai

Hi Dave,
Would you please review the webrev attached to RT-34481 



Thanks,
Assaf


review request: [RT-34477] [Lens] Command-line option to track input device raw events

2013-11-27 Thread Assaf Yavnai

Hi Daniel,

Would you please review the patch for RT-34477 (attached to the JIRA)?


Thanks,
Assaf


Re: discussion about touch events

2013-12-15 Thread Assaf Yavnai

Pavel,

I will summarize my answers here, not inline; I hope I will touch on all your 
points.


Let me start my answer with a different perspective.
I think that it is correct to try to make mouse-oriented applications work on 
touch screens, but that's not the same as looking at how a UI should behave 
with touch screens.
To explain myself, here is an example tested on the iPhone. It doesn't mean 
that we need to do the same, but rather it is a pointer for checking how a UI 
can behave under touch.
On the iPhone, when you press a control, say a Send button, and you start 
dragging away from it, the button will remain 'armed' for quite a large 
distance, even if you drag it on top of other controls. Only after a large 
distance is passed is the button deactivated and the operation canceled; this 
is true across the platform.


What I'm trying to say is that we first need to define how we would like touch 
to behave from a UI/UX point of view, and only then try to resolve the 
technical issues.


Having said that, it's very important to note that JFX is cross-platform while 
also targeting device-specific platforms such as smartphones. So I believe 
that:

- mouse events should act as mouse events (regardless of touch)
- touch events should act as touch events (regardless of mouse)
and
- mouse applications that run on a touch device should have 80/20 functionality 
working, i.e. usable but not perfect (again, the way the application behaves 
and was designed cannot overlap 100%, e.g. small UI elements)
- it is an open question how we want touch applications to work on a mouse 
platform (migrating an embedded application to the desktop, for example)

But
a UI should behave differently on touch platforms and mouse platforms, or more 
accurately depending on whether the events derive from a touch device or a 
pointer device. And this (mainly) is currently not supported.


I would like to suggest the following, hoping it's feasible for the next 
release (8u20 or 9):

1) define UI/UX requirements for touch devices
2) check overlaps and unique behavior between mouse and touch behaviors
3) suggest 3 UI paths: 1) mouse-based UI 2) touch-based UI 3) common behavior
4) discuss and define a technical approach

We might end up with a solution very similar to what we have now or, as you 
said, something completely new. The challenge is to come to it with 'empty 
minds' (that's why it would be best if a UX engineer defined it and not us).


Furthermore, I think that solutions like "Glass generates a MOUSE_EXITED event 
any time all touches are released" should be implemented in the shared code, at 
a non-platform-specific point, for the obvious reasons this example provides.


I apologize if it was hinted that the technical challenges are not important 
or not challenging; I meant that it isn't the time to tackle them yet (a 
top-down approach vs. bottom-up).


I believe that we both want to deliver a top-notch platform and not one that 
works most of the time; the difference between us, I think, is that I focus on 
the differences and you on the commonalities.


Maybe it would be best if we assembled a team to discuss these issues by phone 
instead of by mail. What do you think?


Hope it's clearer.

Assaf
On 12/12/2013 10:30 PM, Pavel Safrata wrote:

Hi Assaf,
please see my comments inline.

On 20.11.2013 16:30, Assaf Yavnai wrote:

Pavel,

I think that this is a very good example of why touch events should be 
processed separately from mouse events.
For example, if you press a button with a touch, it will remain in the "hover" 
state even though you have released the finger from the screen. This happens 
because the "hover" state listens to the mouse coordinates, which are invalid 
for touch.

Touch events don't have the concept of move, only drag.


My initial feeling would be for the synthesized mouse events to behave 
similarly to touch events - when touching the screen you want the application 
to respond in a certain way regardless of the events used. Do we agree?


Specifically, the hover problem can be solved quite easily. On iOS, 
Glass generates a MOUSE_EXITED event any time all touches are 
released, which clears the hover state of everything. I suppose all 
platforms can do that (for synthesized mouse events).




As I mentioned before, from my point of view the goal of this thread is to 
understand and map the differences and expected behavior between touch events 
and mouse events, and to have separate behavior depending on the type of event 
(at the Node, Control and application levels). One of them is the picking 
mechanism; the hover state is another. I think the discussion of how to 
implement it is currently lower priority, as it is only a technical detail.


OK. Regarding picking, I believe the logical algorithm "pick each pixel and 
find the touch-sensitive one closest to the center, if there is any" is the 
most natural behavior (just let me note that the implementation is not really a 
"technical detail", because right now I don't see

Re: discussion about touch events

2013-12-16 Thread Assaf Yavnai

Pavel,

See RT-34945 <https://javafx-jira.kenai.com/browse/RT-34945> for a good 
example of a case where touch and mouse events should behave differently in 
controls.


Assaf
On 12/15/2013 05:43 PM, Assaf Yavnai wrote:

Pavel,

I will summarize my answers here, not inline; I hope I will touch on all your 
points.


Let me start my answer with a different perspective.
I think that it is correct to try to make mouse-oriented applications work on 
touch screens, but that's not the same as looking at how a UI should behave 
with touch screens.
To explain myself, here is an example tested on the iPhone. It doesn't mean 
that we need to do the same, but rather it is a pointer for checking how a UI 
can behave under touch.
On the iPhone, when you press a control, say a Send button, and you start 
dragging away from it, the button will remain 'armed' for quite a large 
distance, even if you drag it on top of other controls. Only after a large 
distance is passed is the button deactivated and the operation canceled; this 
is true across the platform.


What I'm trying to say is that we first need to define how we would like touch 
to behave from a UI/UX point of view, and only then try to resolve the 
technical issues.


Having said that, it's very important to note that JFX is cross-platform while 
also targeting device-specific platforms such as smartphones. So I believe 
that:

- mouse events should act as mouse events (regardless of touch)
- touch events should act as touch events (regardless of mouse)
and
- mouse applications that run on a touch device should have 80/20 functionality 
working, i.e. usable but not perfect (again, the way the application behaves 
and was designed cannot overlap 100%, e.g. small UI elements)
- it is an open question how we want touch applications to work on a mouse 
platform (migrating an embedded application to the desktop, for example)

But
a UI should behave differently on touch platforms and mouse platforms, or more 
accurately depending on whether the events derive from a touch device or a 
pointer device. And this (mainly) is currently not supported.


I would like to suggest the following, hoping it's feasible for the next 
release (8u20 or 9):

1) define UI/UX requirements for touch devices
2) check overlaps and unique behavior between mouse and touch behaviors
3) suggest 3 UI paths: 1) mouse-based UI 2) touch-based UI 3) common behavior
4) discuss and define a technical approach

We might end up with a solution very similar to what we have now or, as you 
said, something completely new. The challenge is to come to it with 'empty 
minds' (that's why it would be best if a UX engineer defined it and not us).


Furthermore, I think that solutions like "Glass generates a MOUSE_EXITED event 
any time all touches are released" should be implemented in the shared code, at 
a non-platform-specific point, for the obvious reasons this example provides.


I apologize if it was hinted that the technical challenges are not important 
or not challenging; I meant that it isn't the time to tackle them yet (a 
top-down approach vs. bottom-up).


I believe that we both want to deliver a top-notch platform and not one that 
works most of the time; the difference between us, I think, is that I focus on 
the differences and you on the commonalities.


Maybe it would be best if we assembled a team to discuss these issues by phone 
instead of by mail. What do you think?


Hope it's clearer.

Assaf
On 12/12/2013 10:30 PM, Pavel Safrata wrote:

Hi Assaf,
please see my comments inline.

On 20.11.2013 16:30, Assaf Yavnai wrote:

Pavel,

I think that this is a very good example of why touch events should be 
processed separately from mouse events.
For example, if you press a button with a touch, it will remain in the "hover" 
state even though you have released the finger from the screen. This happens 
because the "hover" state listens to the mouse coordinates, which are invalid 
for touch.

Touch events don't have the concept of move, only drag.


My initial feeling would be for the synthesized mouse events to behave 
similarly to touch events - when touching the screen you want the application 
to respond in a certain way regardless of the events used. Do we agree?


Specifically, the hover problem can be solved quite easily. On iOS, 
Glass generates a MOUSE_EXITED event any time all touches are 
released, which clears the hover state of everything. I suppose all 
platforms can do that (for synthesized mouse events).




As I mentioned before, from my point of view the goal of this thread is to 
understand and map the differences and expected behavior between touch events 
and mouse events, and to have separate behavior depending on the type of event 
(at the Node, Control and application levels). One of them is the picking 
mechanism; the hover state is another. I think the discussion of how to 
implement it is currently lower priority, as it is only a technical detail.


OK. Regarding picking, I believe the logical algorithm "pick each 
pixel an

Re: Programmatic (Java) access to style / layout ?

2013-12-18 Thread Assaf Yavnai

I agree that different people would like different things;
here is mine ;)
button.border().hover().setColor(red);

This style is widely used in scripting languages; jQuery is a good example. 
It's done by always returning an object from an operation: in this example, 
setColor() would return the same object that was returned from hover(), 
allowing further manipulation in the same line of code. For example:
button.border().hover().setColor(red).setStrokeWidth(10).setBlink(false).setHandler(()->{});
If readability is an issue it can be formatted as:


button.border().hover()
.setColor(red)
.setStrokeWidth(10)
.setBlink(false)
.setHandler(()->{});
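
A minimal sketch of how this chaining can be achieved in plain Java: each 
setter applies its change and returns the same wrapper object. FluentBorder 
and its setters are hypothetical helpers written here for illustration, not 
existing API.

    import javafx.scene.layout.Border;
    import javafx.scene.layout.BorderStroke;
    import javafx.scene.layout.BorderStrokeStyle;
    import javafx.scene.layout.BorderWidths;
    import javafx.scene.layout.CornerRadii;
    import javafx.scene.layout.Region;
    import javafx.scene.paint.Color;

    final class FluentBorder {
        private final Region target;
        private Color color = Color.BLACK;
        private double strokeWidth = 1;

        FluentBorder(Region target) { this.target = target; }

        FluentBorder setColor(Color color) { this.color = color; return apply(); }
        FluentBorder setStrokeWidth(double width) { this.strokeWidth = width; return apply(); }

        // Re-applies the accumulated state and returns 'this' so calls can be chained,
        // e.g. new FluentBorder(button).setColor(Color.RED).setStrokeWidth(10);
        private FluentBorder apply() {
            target.setBorder(new Border(new BorderStroke(
                    color, BorderStrokeStyle.SOLID, CornerRadii.EMPTY,
                    new BorderWidths(strokeWidth))));
            return this;
        }
    }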

And this reminds me of another issue, which may be a fork of this discussion, 
but in my point of view it originates from the same approach: users manipulate 
internal data structures directly to do operations.
I didn't understand the design decision to expose internal structures in FX in 
order to do operations and manipulate them directly. In this case it's 
properties; another example is root.getChildren().add(). This not only exposes 
the internal data structure, but also requires more coding for a simple, 
trivial operation. More than 90% of users don't need/care about the internal 
data structures, but 100% want to write less and simpler code. Usually the user 
just wants to do some trivial thing - add, remove, traverse... - and for those 
common cases an alias can be used that encapsulates the data structure and the 
language quirks. In this example root.add() would do the trick. Other 
operations could look similar, like root.clear(), root.remove(...) and 
root.traverse() (which would return an iterator). For those edge cases which 
require manipulating the data structure directly, the API can expose it, for 
example as root.getChildrenAsList() or root.getChildren()
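
A sketch of the kind of alias described above: a tiny Group subclass whose 
add()/remove()/clear() hide getChildren() and return the group itself so calls 
can be chained. FluentGroup is a hypothetical helper for illustration only, not 
existing or proposed API.

    import javafx.scene.Group;
    import javafx.scene.Node;

    final class FluentGroup extends Group {

        FluentGroup add(Node child) {
            getChildren().add(child);   // the list stays an implementation detail
            return this;
        }

        FluentGroup remove(Node child) {
            getChildren().remove(child);
            return this;
        }

        FluentGroup clear() {
            getChildren().clear();
            return this;
        }
    }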



Combining the two approaches with the ability to retrieve the nodes related to 
the operations at two levels, say through .base() and .parent(), would yield 
the results that Oliver suggested (as I understood them).


(I hope the layout doesn't get scrambled upon sending)

Group root = new Group()
    .add(new Button()
             .border()
                 .hover()
                     .setColor(red)
                     .setStrokeWidth(10)
                     .setBlink(false)
                     .setHandler(() -> {})
                 .parent()              // parent() returns the border
                     .setColor(blue)
                     .setBlink(true)
             .base().caption()          // base() returns the button
                 .setText("button1")
                 .setFontSize(30)
             .base(),
         new Button("button 2").caption()
                 .setSize(25)
                 .setHandler(() -> {})
             .base()
    )                                   // end of add()
    .some().other().operation.onGroup();


my 0.002$

Assaf
On 12/17/2013 11:18 PM, David Grieve wrote:

It is possible, but probably not in the way that 
https://javafx-jira.kenai.com/browse/RT-17293 would provide. By that I mean, it 
would be nice to be able to programmatically say "the border color for this 
button when hovered is red." This can be done now by adding an invalidation 
listener to the hoverProperty and setting the border on the Region, but I don't 
think this is what you are after.
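
For reference, a concrete version of that workaround using only public JavaFX 
8 API - a listener on hoverProperty plus Region.setBorder - is sketched below; 
it is the "can be done now" path, not the RT-17293 API itself.

    import javafx.scene.control.Button;
    import javafx.scene.layout.Border;
    import javafx.scene.layout.BorderStroke;
    import javafx.scene.layout.BorderStrokeStyle;
    import javafx.scene.layout.BorderWidths;
    import javafx.scene.layout.CornerRadii;
    import javafx.scene.paint.Color;

    final class HoverBorderExample {

        /** Turns the button's border red while it is hovered, from pure Java. */
        static void redBorderOnHover(Button button) {
            Border red = new Border(new BorderStroke(
                    Color.RED, BorderStrokeStyle.SOLID, CornerRadii.EMPTY,
                    BorderWidths.DEFAULT));
            button.hoverProperty().addListener((obs, wasHovered, isHovered) ->
                    button.setBorder(isHovered ? red : Border.EMPTY));
        }
    }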

I think RT-17293 means different things to different people. I'd be interested 
to know what such an API would look like to you.


On Dec 17, 2013, at 3:39 PM, Oliver Doepner  wrote:


Hello,

This might have been asked before but I haven't found it:
Is it possible to style JavaFX components using pure Java (i.e. without
CSS)?

If this is being worked on, does anyone know what the status is?

I think this is one of the areas where JavaFX can actually be better than
HTML5/CSS because a nicely typed Java representation of CSS could provide
great tooling and features that others had to invent LESS or SASS for.

I have seen this (from Oct 2012):
http://stackoverflow.com/questions/12943501/style-javafx-components-without-css

I have also seen this: https://javafx-jira.kenai.com/browse/RT-17293
Last update: Brian Beck added a comment - Mar 12, 2012 09:01 PM: "It is now 
past the point where this feature can be added for 2.1. Will need to be 
considered for a future release"

Thanks
Oliver

--
Oliver Doepner
http://doepner.net/