Revamping touch input on Windows

2013-04-18 Thread Jim Mathies
We have quite a few issues with touch-enabled sites on Windows. [1] Our 
support for touch stretches back to when we first implemented MozTouch 
events, and over time it has morphed into a weird combination of W3C touch and 
simple gesture support. It is rather messy to fix, but I'd like to get this 
cleaned up now such that we produce a reliable stream of events on all 
Windows platforms we support touch on. (This includes Win7, Win8, and 
Metro.)


We are constrained by limitations in the way Windows handles touch input on 
desktop and by our own implementation. For the desktop browser, there are two 
different Windows event sets we can work with, and they are mutually 
exclusive - we can receive one type or the other, but not both. The two event 
sets are Gesture and Touch. The switch we use to decide which to process is 
based on a call to nsIWidget's RegisterTouchWindow.
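
To make the mutual exclusivity concrete, here is a minimal sketch (not the 
actual nsWindow code) of the Win32-level switch: registering a window for 
touch makes Windows deliver raw WM_TOUCH messages to it and stop synthesizing 
WM_GESTURE messages.

#include <windows.h>

// Minimal sketch, assuming a helper like this sits behind
// nsIWidget::RegisterTouchWindow; the real widget code is organized
// differently.
void ConfigureTouchInput(HWND hwnd, bool domWantsTouchEvents)
{
  if (domWantsTouchEvents) {
    // Touch path: ask Windows for raw touch input (WM_TOUCH).
    ::RegisterTouchWindow(hwnd, 0);
  } else {
    // Gesture path: leave the window unregistered so Windows runs its own
    // gesture recognizer and sends WM_GESTURE (pan, zoom, rotate, ...).
    ::UnregisterTouchWindow(hwnd);
  }
}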


If RegisterTouchWindow has not been called we consume only Windows Gesture 
events, which generate nsIDOMSimpleGestureEvents (rotate, magnify, swipe) and 
pixel scroll events. For the specific case of panning content, widget queries 
the event state manager's DecideGestureEvent to see if the underlying element 
wants pixel scroll / pan feedback. [2] Based on the returned panDirection we 
request certain Gesture events from Windows and send pixel scroll 
accordingly. If the underlying element can't be panned in the direction of 
the input, we opt out of receiving Gesture events and fall back on sending 
simple mouse input. (This is why you'll commonly get selection when dragging 
your finger horizontally across a page.)
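
As a rough illustration of this Gesture path, here is a hedged sketch: the 
PanDirection enum is a stand-in for what DecideGestureEvent reports, while 
SetGestureConfig and the GID_PAN flags are the real Win32 API used to request 
pan gestures or opt out of them.

#include <windows.h>

// Sketch only; the real nsWindow gesture code differs.
enum class PanDirection { None, Vertical, Horizontal, Both };

void ConfigurePanFeedback(HWND hwnd, PanDirection panDirection)
{
  GESTURECONFIG cfg = { GID_PAN, 0, 0 };

  if (panDirection == PanDirection::None) {
    // The element under the touch point can't pan: block pan gestures so the
    // input falls back to simple mouse events (hence text selection when
    // dragging a finger horizontally across a page).
    cfg.dwBlock = GC_PAN;
  } else {
    cfg.dwWant = GC_PAN;
    if (panDirection != PanDirection::Horizontal)
      cfg.dwWant |= GC_PAN_WITH_SINGLE_FINGER_VERTICALLY;
    if (panDirection != PanDirection::Vertical)
      cfg.dwWant |= GC_PAN_WITH_SINGLE_FINGER_HORIZONTALLY;
  }

  ::SetGestureConfig(hwnd, 0, 1, &cfg, sizeof(GESTURECONFIG));
}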


On the flip side, if the DOM communicates via RegisterTouchWindow that the 
window supports touch input, we bypass all Gesture events and instead 
request Touch events from Windows. In this case we do not fire 
nsIDOMSimpleGestureEvents, mouse, or pixel scroll events; instead we fire 
W3C-compliant touch events. We do not call DecideGestureEvent, and we do not 
generate pan feedback on the window. You can see this behavior using a good 
W3C touch demo. [3]
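
For reference, the Touch path at the Win32 level looks roughly like the 
following hedged sketch; DispatchDomTouchPoint is a hypothetical stub standing 
in for wherever the W3C touch event actually gets built.

#include <windows.h>
#include <vector>

// Hypothetical stand-in for the code that builds a W3C touch point and
// dispatches it into the DOM.
static void DispatchDomTouchPoint(DWORD id, LONG xPx, LONG yPx, DWORD flags)
{
  (void)id; (void)xPx; (void)yPx; (void)flags;
}

void OnWmTouch(WPARAM wParam, LPARAM lParam)
{
  UINT count = LOWORD(wParam);
  std::vector<TOUCHINPUT> inputs(count);
  HTOUCHINPUT handle = reinterpret_cast<HTOUCHINPUT>(lParam);

  if (::GetTouchInputInfo(handle, count, inputs.data(), sizeof(TOUCHINPUT))) {
    for (const TOUCHINPUT& ti : inputs) {
      // Coordinates arrive in hundredths of a pixel, in screen space.
      DispatchDomTouchPoint(ti.dwID,
                            TOUCH_COORD_TO_PIXEL(ti.x),
                            TOUCH_COORD_TO_PIXEL(ti.y),
                            ti.dwFlags); // TOUCHEVENTF_DOWN / MOVE / UP
    }
  }
  ::CloseTouchInputHandle(handle);
}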


One of the concerns here is that since we do not differentiate the metro and 
desktop browsers via UA, the two should emulate each other closely. The 
browser would appear completely broken to content if the same UA sent two 
different event streams. So we need to take into account how metrofx works 
as well.


With metrofx we can differentiate between mouse and touch input when we 
receive input, so we split the two up and fire appropriate events for each. 
When receiving mouse input, we fire standard mouse events. When receiving 
touch input, we fire W3C touch events and nsIDOMSimpleGestureEvents. We also 
fire mouse down/mouse up (click) events from touch so taps on the screen 
emulate clicking the mouse. Metrofx ignores RegisterTouchWindow, never 
queries DecideGestureEvent, and does not fire pixel scroll events. Panning 
of web pages is currently handled in the front end via js in response to W3C 
touch events, which I might note is not as performant as desktop's pixel 
scroll handling. In time this front end handling will hopefully be replaced 
by async pan zoom, which lives down in the layers backend.
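
For comparison, the metro split boils down to something like this hedged 
sketch; the device-type enum mirrors the WinRT PointerDeviceType values the 
backend reads, and the dispatch helpers are stubs rather than Gecko's real 
functions.

enum class PointerDeviceType { Touch, Pen, Mouse };

struct PointerInput {
  PointerDeviceType deviceType;
  float x, y;
};

static void DispatchMouseEvent(const PointerInput&) { /* standard mouse event */ }
static void DispatchDomTouchEvent(const PointerInput&) { /* W3C touch event */ }
static void SynthesizeClickFromTap(const PointerInput&) { /* mousedown + mouseup */ }

void OnPointerInput(const PointerInput& input)
{
  if (input.deviceType == PointerDeviceType::Mouse) {
    // Real mouse input: fire standard mouse events only.
    DispatchMouseEvent(input);
  } else {
    // Touch (or pen): fire W3C touch events, plus a synthesized click so a
    // tap behaves like a mouse click. No pixel scroll events are sent here;
    // panning is driven by the js front end from the touch events.
    DispatchDomTouchEvent(input);
    SynthesizeClickFromTap(input);
  }
}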


Note that the metrofx front end makes very little use of 
nsIDOMSimpleGestureEvents; the only events we use are the left/right swipe 
events for navigation. If we chose to, we could ignore these and not generate 
nsIDOMSimpleGestureEvents at all. [4]


To clean this up, I'd like to propose the following:

1) abandon generating nsIDOMSimpleGestureEvents on Windows for both backends 
when processing touch input from touch input displays.*


This would mean that if the desktop front end wants to do something with 
pinch or zoom, it would have to process W3C touch events instead. Note that 
we could still fire simple gestures from devices like track pads. But for 
touch input displays, we would not support these events.


* There's one exception to this in metro, we would continue to fire 
MozEdgeUIGesture. [5]
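
To make 1) concrete: "process W3C touch events instead" means the front end 
would derive pinch state from two touch points itself. The arithmetic is the 
same whether it lives in front end js or in shared code; it is sketched here 
in C++ with made-up names.

#include <cmath>

// Illustrative only; names and structure are not from the actual front end.
struct TouchPoint { double x; double y; };

static double Distance(const TouchPoint& a, const TouchPoint& b)
{
  return std::hypot(a.x - b.x, a.y - b.y);
}

// Zoom factor implied by moving two fingers from their touchstart positions
// to their current positions during touchmove.
double PinchScale(const TouchPoint start[2], const TouchPoint current[2])
{
  double startDist = Distance(start[0], start[1]);
  return startDist > 0.0 ? Distance(current[0], current[1]) / startDist : 1.0;
}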


2) Rework how we process touch events in Windows widget such that:

* Both backends respect RegisterTouchWindow and only fire W3C events when it 
is set.

* If RegisterTouchWindow has been called:
** Send touchstart and the first touchmove and look at the return results.
** If either of these returns eConsumeNoDefault, continue sending W3C events 
only. No mouse or pixel scroll events would be sent.

** If neither of these events returns eConsumeNoDefault:
*** Abandon sending W3C touch events.
*** Generate pixel scroll events in the appropriate direction based on 
DecideGestureEvent, or simple mouse events if DecideGestureEvent indicates 
scrolling isn't possible. (This decision is sketched below.)
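
A minimal sketch of that decision, assuming it is evaluated once touchstart 
and the first touchmove have been dispatched to content. eConsumeNoDefault is 
the Gecko status referred to above; the other names are hypothetical.

enum EventStatus { eIgnore, eConsumeDoDefault, eConsumeNoDefault };

enum class TouchMode {
  W3CTouchOnly,         // content consumed the input: W3C touch events only
  ScrollOrMouseFallback // fall back to pixel scroll / simple mouse events
};

TouchMode DecideTouchMode(EventStatus touchStartStatus,
                          EventStatus firstTouchMoveStatus)
{
  if (touchStartStatus == eConsumeNoDefault ||
      firstTouchMoveStatus == eConsumeNoDefault) {
    // Content called preventDefault: keep sending W3C touch events and
    // suppress mouse and pixel scroll events.
    return TouchMode::W3CTouchOnly;
  }
  // Neither event was consumed: abandon W3C touch events and generate pixel
  // scroll (direction from DecideGestureEvent) or simple mouse events.
  return TouchMode::ScrollOrMouseFallback;
}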


Feedback welcome on this approach. With Win8 going fully touch-capable, this 
problem is only going to get worse with time, so I think we need to get it 
cleaned up and standardized.


There is also the open issue of future support for other touch specs which 
I'm not taking into consideration. If anyone has any input on W3C support 
vs. MS Pointer support I'd love to hear it. I'd hate to get this all cleaned 
up only to find that we have to change our touch input processing again for 
the nth time. Maybe now might be a good time to abandon W3C completely. As I 
understand it MS Pointer events solve some of the problems we have with 
mixed touch/mouse input.

Re: Revamping touch input on Windows

2013-04-18 Thread Tim Abraldes
> 1) abandon generating nsIDOMSimpleGestureEvents on Windows for both backends
> when processing touch input from touch input displays.*
>
> This would mean that if the desktop front end wants to do something with
> pinch or zoom, it would have to process W3C touch events instead. Note that
> we could still fire simple gestures from devices like track pads. But for
> touch input displays, we would not support these events.

If I understand the proposal correctly, widget would be responsible for
sending only pointer/touch events (with the one exception you
mentioned), and we would implement "simple gesture recognition" in a js
module.  This would give us a single implementation of gesture
recognition that could be shared and used with multiple widget backends,
and would simplify the widget backends because they no longer have to do
gesture recognition.  For those reasons, I think this is a great idea.

The metro/WinRT widget backend can take advantage of native gesture
recognition, so maybe in the future we would want to implement the
ability to opt-out of front-end gesture recognition. I don't think we
should do this in the immediate term, but as backends get better and
better native support for things like gestures, we may want to allow
ourselves the opportunity to take advantage of that native support.


> 2) Rework how we process touch events in Windows widget such that:
> * Both backends respect RegisterTouchWindow and only fire W3C events when it
> is set.
> * If RegisterTouchWindow has been called:

This is the first I've heard of RegisterTouchWindow, so I can't speak to
whether we should send touch events if it hasn't been called.

> ** Send touchstart and the first touchmove and look at the return results.
> ** If either of these returns eConsumeNoDefault, continue sending W3C events
> only. No mouse or pixel scroll events would be sent.

Sounds reasonable.

> ** If both of these events do not return eConsumeNoDefault:
> *** Abandon sending W3C touch events.

I'm not sure that we should stop sending W3C pointer/touch events. It's
conceivable that a page exists that is not scrollable, uses touch input,
and does not bother to call `preventDefault` on the pointer/touch events
it receives (which means we won't get eConsumeNoDefault returned).  If a
page is ignoring W3C pointer/touch events anyway, I don't think it is
harmful to keep sending them.

> *** Generate pixel scroll events in the appropriate direction based on
> DecideGestureEvent, or simple mouse events if DecideGestureEvent indicates
> scrolling isn't possible.

We should definitely send scroll events. If scrolling isn't possible, I
don't think we should do anything special; certain mouse events will
already be sent when the user performs non-scroll touches (as per the
W3C specs) and we don't want to start sending a bunch of extra mouse
events all of a sudden.


> There is also the open issue of future support for other touch specs which
> I'm not taking into consideration. If anyone has any input on W3C support
> vs. MS Pointer support I'd love to hear it. I'd hate to get this all cleaned
> up only to find that we have to change our touch input processing again for
> the nth time. Maybe now might be a good time to abandon W3C completely. As I
> understand it MS Pointer events solve some of the problems we have with
> mixed touch/mouse input.

As I understand it, W3C touch events (touchstart/touchmove/touchend) are
the current state of the art, but are being abandoned in favor of W3C
pointer events.  I believe that we should definitely implement W3C
pointer events.  I'm not sure how widespread the usage of W3C touch
events is, but it may be necessary for us to implement those as well for
compatibility.


Re: Revamping touch input on Windows

2013-04-18 Thread Jim Mathies
"Tim Abraldes"  wrote in message
news:...
> > 1) abandon generating nsIDOMSimpleGestureEvents on Windows for both backends
> > when processing touch input from touch input displays.*
> >
> > This would mean that if the desktop front end wants to do something with
> > pinch or zoom, it would have to process W3C touch events instead. Note that
> > we could still fire simple gestures from devices like track pads. But for
> > touch input displays, we would not support these events.
> 
> If I understand the proposal correctly, widget would be responsible for
> sending only pointer/touch events (with the one exception you
> mentioned), and we would implement "simple gesture recognition" in a js
> module.  This would give us a single implementation of gesture
> recognition that could be shared and used with multiple widget backends,
> and would simplify the widget backends because they no longer have to do
> gesture recognition.  For those reasons, I think this is a great idea.


Agreed, that's the best place for it. If we run into platform specific
settings we can expose constants as needed.


> The metro/WinRT widget backend can take advantage of native gesture
> recognition, so maybe in the future we would want to implement the
> ability to opt-out of front-end gesture recognition. I don't think we
> should do this in the immediate term, but as backends get better and
> better native support for things like gestures, we may want to allow
> ourselves the opportunity to take advantage of that native support.


Maybe, if we need to. Currently the metro front end doesn't use most
of the simple gestures widget sends, so we might as well disable processing
and delivery for now. From what I remember we aren't planning on using
the native cross slide gesture recognizer either.


> > 2) Rework how we process touch events in Windows widget such that:
> > * Both backends respect RegisterTouchWindow and only fire W3C events
> > when it is set.
> > * If RegisterTouchWindow has been called:
> 
> This is the first I've heard of RegisterTouchWindow, so I can't speak to
> whether we should send touch events if it hasn't been called.


It's called on the widget when something in the front end registers a touch
event handler -

http://mxr.mozilla.org/mozilla-central/source/dom/base/nsGlobalWindow.cpp#8295

So if it hasn't been called, nobody is listening for touch events, and
so we can skip sending them.
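
In spirit, the linked nsGlobalWindow.cpp code does something like the
following; the types below are stripped-down stand-ins, not Gecko's actual
classes.

// Stand-in interfaces for illustration only.
struct nsIWidget {
  virtual void RegisterTouchWindow() = 0;  // the hook discussed in this thread
  virtual ~nsIWidget() = default;
};

struct DOMWindowState {
  bool mayHaveTouchEventListener = false;
  nsIWidget* widget = nullptr;
};

// Called the first time content does addEventListener("touchstart", ...).
void OnTouchListenerAdded(DOMWindowState& win)
{
  win.mayHaveTouchEventListener = true;
  if (win.widget) {
    // Without this call the Windows widget never requests native touch input
    // and so never fires W3C touch events.
    win.widget->RegisterTouchWindow();
  }
}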


> > ** If both of these events do not return eConsumeNoDefault:
> > *** Abandon sending W3C touch events.
> 
> I'm not sure that we should stop sending W3C pointer/touch events. It's
> conceivable that a page exists that is not scrollable, uses touch input,
> and does not bother to call `preventDefault` on the pointer/touch events
> it receives (which means we won't get eConsumeNoDefault returned).  If a
> page is ignoring W3C pointer/touch events anyway, I don't think it is
> harmful to keep sending them.


I think we want to enforce the W3C spec. So for example, say a page is
listening to touch events and drawing something in response, but fails
to call preventDefault. Widget on desktop would generate pixel scroll
events and send touch events. You would have a web page reacting to
touch events while the browser scrolls.

This is a corner case I think, we could do either and see what bugs
crop up.


> 
> > *** Generate pixel scroll events in the appropriate direction based on
> > DecideGestureEvent, or simple mouse events if DecideGestureEvent indicates
> > scrolling isn't possible.
> 
> We should definitely send scroll events. If scrolling isn't possible, I
> don't think we should do anything special; certain mouse events will
> already be sent when the user performs non-scroll touches (as per the
> W3C specs) and we don't want to start sending a bunch of extra mouse
> events all of a sudden.


If we implement sending synth mouse events up in the DOM, great. If not,
we'll need to deal with it down in widget.

Jim



Re: Revamping touch input on Windows

2013-04-18 Thread Tim Abraldes
>> The metro/WinRT widget backend can take advantage of native gesture
>> recognition, so maybe in the future we would want to implement the
>> ability to opt-out of front-end gesture recognition. I don't think we
>> should do this in the immediate term, but as backends get better and
>> better native support for things like gestures, we may want to allow
>> ourselves the opportunity to take advantage of that native support.
> 
> 
> Maybe, if we need to. Currently the metro front end doesn't use most
> of the simple gestures widget sends so might as well disable processing
> and delivery for now. From what I remember we aren't planning on using
> the native cross slide gesture recognizer either. 

That's correct; we're implementing detection of the cross slide gesture
in js in bug 829056 [1].


>>> ** If both of these events do not return eConsumeNoDefault:
>>> *** Abandon sending W3C touch events.
>>
>> I'm not sure that we should stop sending W3C pointer/touch events. It's
>> conceivable that a page exists that is not scrollable, uses touch input,
>> and does not bother to call `preventDefault` on the pointer/touch events
>> it receives (which means we won't get eConsumeNoDefault returned).  If a
>> page is ignoring W3C pointer/touch events anyway, I don't think it is
>> harmful to keep sending them.
> 
> 
> I think we want to enforce the W3C spec. 

The spec doesn't say anything about not sending touch events if
preventDefault wasn't called on the first touchstart/touchmove.

> So for example, say a page is
> listening to touch events and drawing something in response, but fails
> to call preventDefault. Widget on desktop would generate pixel scroll
> events and send touch events. You would have a web page reacting to
> touch events while the browser scrolls.

Page A
  Responds to touch input by drawing
  Does not call preventDefault
  Is scrollable

Page B
  Responds to touch input by drawing
  Does not call preventDefault
  Is not scrollable

If we stop sending touch events when preventDefault isn't called on the
first touchstart/touchmove:
  Page A fails to draw but scrolls correctly
  Page B fails to draw

If we send touch events regardless of preventDefault:
  Page A scrolls correctly but draws while scrolling
  Page B draws correctly

Neither situation is ideal (the page really should call preventDefault
on its touch events) but I think the latter behavior is preferable.

> This is a corner case I think, we could do either and see what bugs
> crop up.

I agree; any page that actually listens for touch events should be
calling preventDefault on those events.


>>> *** Generate pixel scroll events in the appropriate direction based on
>>> DecideGestureEvent, or simple mouse events if DecideGestureEvent indicates
>>> scrolling isn't possible.
>>
>> We should definitely send scroll events. If scrolling isn't possible, I
>> don't think we should do anything special; certain mouse events will
>> already be sent when the user performs non-scroll touches (as per the
>> W3C specs) and we don't want to start sending a bunch of extra mouse
>> events all of a sudden.
> 
> 
> If we implement sending synth mouse events up in the DOM, great. If not,
> we'll need to deal with it down in widget.

In the metro/WinRT widget backend we handle sending
mousedown/mousemove/mouseup in response to touch input.  It sounds like
we'll want to additionally send scroll events for certain touch input.
I'm not sure how things are handled in the desktop widget backend.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=829056



Re: Revamping touch input on Windows

2013-04-19 Thread smaug

On 04/18/2013 03:50 PM, Jim Mathies wrote:


> If RegisterTouchWindow has not been called we consume only Windows Gesture
> events which generate nsIDOMSimpleGestureEvents (rotate, magnify, swipe)
> and pixel scroll events.

s/pixel scroll/wheel/ these days, right?



> 1) abandon generating nsIDOMSimpleGestureEvents on Windows for both backends
> when processing touch input from touch input displays.*
>
> This would mean that if the desktop front end wants to do something with pinch
> or zoom, it would have to process W3C touch events instead. Note that
> we could still fire simple gestures from devices like track pads. But for touch
> input displays, we would not support these events.

Sounds ok to me. SimpleGestureEvents were originally for the (OSX) touchpad
case only anyway.




> * There's one exception to this in metro, we would continue to fire
> MozEdgeUIGesture. [5]

We should perhaps then call it something other than SimpleGestureEvent.





Re: Revamping touch input on Windows

2013-04-20 Thread rbyers
I work on touch support in Chrome desktop (and the touch events and pointer 
events standards).  Most of the mozilla implementation details in this thread 
are over my head, but I wanted to add a couple comments in case it's helpful.  
Inline.

On Thursday, April 18, 2013 10:06:57 PM UTC-4, Tim Abraldes wrote:
> >> The metro/WinRT widget backend can take advantage of native gesture
> >> recognition, so maybe in the future we would want to implement the
> >> ability to opt-out of front-end gesture recognition. I don't think we
> >> should do this in the immediate term, but as backends get better and
> >> better native support for things like gestures, we may want to allow
> >> ourselves the opportunity to take advantage of that native support.
> >
> > Maybe, if we need to. Currently the metro front end doesn't use most
> > of the simple gestures widget sends so might as well disable processing
> > and delivery for now. From what I remember we aren't planning on using
> > the native cross slide gesture recognizer either.
>
> That's correct; we're implementing detection of the cross slide gesture
> in js in bug 829056 [1].

I think this is analogous to what we do in Chrome - always use 
RegisterTouchWindow and do our own gesture recognition (for scrolling, 
flinging, long press, etc.).  IE10 appears not to do this, instead relying on 
the pointer event support in Windows 8, and so it has a very limited touch 
input model on Windows 7.

> >>> ** If both of these events do not return eConsumeNoDefault:
> >>> *** Abandon sending W3C touch events.
> >>
> >> I'm not sure that we should stop sending W3C pointer/touch events. It's
> >> conceivable that a page exists that is not scrollable, uses touch input,
> >> and does not bother to call `preventDefault` on the pointer/touch events
> >> it receives (which means we won't get eConsumeNoDefault returned).  If a
> >> page is ignoring W3C pointer/touch events anyway, I don't think it is
> >> harmful to keep sending them.
> >
> > I think we want to enforce the W3C spec.
>
> The spec doesn't say anything about not sending touch events if
> preventDefault wasn't called on the first touchstart/touchmove.
>
> > So for example, say a page is
> > listening to touch events and drawing something in response, but fails
> > to call preventDefault. Widget on desktop would generate pixel scroll
> > events and send touch events. You would have a web page reacting to
> > touch events while the browser scrolls.
>
> Page A
>   Responds to touch input by drawing
>   Does not call preventDefault
>   Is scrollable
>
> Page B
>   Responds to touch input by drawing
>   Does not call preventDefault
>   Is not scrollable
>
> If we stop sending touch events when preventDefault isn't called on the
> first touchstart/touchmove:
>   Page A fails to draw but scrolls correctly
>   Page B fails to draw
>
> If we send touch events regardless of preventDefault:
>   Page A scrolls correctly but draws while scrolling
>   Page B draws correctly
>
> Neither situation is ideal (the page really should call preventDefault
> on its touch events) but I think the latter behavior is preferable.
>

Yeah, the spec is woefully underspecified here - in part because the major 
implementations at the time differed.  There are legitimate scenarios where 
pages will listen for touch events but not call preventDefault (eg. they might 
want to respond to pan or pinch gestures without suppressing click behavior for 
the simple case of a touch not moving much).

Mobile Safari and Chrome desktop generally handle this the way you've described 
as preferable.  You can experiment with this easily using 
www.rbyers.net/eventTest.html (the "alternate prevent-default on touchmove every 
second" option is particularly useful for seeing how partial consumption of 
moves during scrolling behaves).  This behavior can be a problem for threaded 
scrolling though (see 
https://plus.google.com/115788095648461403871/posts/cmzrtyBYPQc).  For this 
reason, Chrome for Android (and in some cases Mobile Safari) will send a 
touchcancel event (and then stop sending touch events) after scrolling has 
started.

> > This is a corner case I think, we could do either and see what bugs
> > crop up.
>
> I agree; any page that actually listens for touch events should be
> calling preventDefault on those events.
>

That's what we'd like to encourage for the common scenarios (eg. see this 
article: http://www.html5rocks.com/en/mobile/touchandmouse/), but as I said 
above there are some special cases where it makes sense to do something else.

> >>> *** Generate pixel scroll events in the appropriate direction based on
> >>> DecideGestureEvent, or simple mouse events if DecideGestureEvent
> >>> indicates scrolling isn't possible.
> >>
> >> We should de

Re: Revamping touch input on Windows

2013-04-21 Thread Justin Dolske

On 4/18/13 5:50 AM, Jim Mathies wrote:


> One of the concerns here is that since we do not differentiate the metro
> and desktop browsers via UA, the two should emulate each other closely.
> The browser would appear completely broken to content if the same UA
> sent two different event streams. So we need to take into account how
> metrofx works as well.


How does the current/proposed state of affairs compare to other 
platforms (notably OS X and Android)?


While the Metro-vs-Win8 case seems especially important to get right 
(since the same user on the same device may commonly flip between both), 
I'd hope web authors can write/test on one platform and be 
reasonably able to expect their code to work the same everywhere. It's 
not clear to me how that ties in with what you're talking about.


Justin


Re: Revamping touch input on Windows

2013-04-21 Thread Wesley Johnston
On Android at least, we don't seem to have this same mutually exclusive 
gesture/touches problem. We receive touches in our native UI layer, send them 
to Gecko, and use native gesture detectors at the same time. That code is in a 
bit of flux at the moment as we're merging our asynchronous pan-zoom code with 
B2G's (which is, in turn, basically a C++ implementation of our code), while 
still using native gesture detectors (in some places). We also asynchronously 
fire nsIDOMSimpleGestureEvents for pinch-zoom events (but not for any swipes). 
Because there's no exclusivity between determining gestures and touches, we 
fire touch events whether or not preventDefault() was called on the first 
touchstart/move of a series.

Mouse emulation is also done via native gesture detectors (detecting taps and 
double taps and sending messages to our javascript frontend). It would be nice 
to move that into the platform to ensure it's done consistently across 
platforms. I haven't really looked at the new AsyncPanZoom stuff, but I was 
hoping it would provide a path forward to make that happen.

Someone who knows that code better should comment, but I assume it should also 
replace the Metro javascript pan/zoom stuff at some point, perhaps backed by 
native gesture detection when the page doesn't have touch handlers, and using 
b2g's gesture detection at other times? Maybe that's orthogonal to this 
problem...

- Wes



Fw: Revamping touch input on Windows

2013-04-22 Thread Jim Mathies
I don't think we fire touch events on OSX, so probably not an issue. I'm 
confident Android's behavior is more compliant than Win8 desktop. Metro is 
somewhere in between. Wes makes a good point in the next follow-up about 
moving simulated click events into shared code to help standardize. For now 
behavior would be dependent on the widget backend. My goal here is to start 
by getting the two win widget layers acting the same.


As far as moz simple gestures go, they are moz/chrome specific, so they do 
not play into compliance. The metro front end doesn't consume these and we 
don't have add-ons yet, and the desktop front end uses them sparingly (for 
fake pinch zoom). I don't think there are major issues with removing support 
for these on Windows. There might be some addons that consume them. Authors 
could migrate uses to W3C touch events pretty easily I would think.


Jim



Re: Revamping touch input on Windows

2013-04-22 Thread Jim Mathies

> Someone who knows that code better should comment, but
> I assume it should also replace the Metro javascript pan/zoom
> stuff at some point, perhaps backed by native gesture detection
> when the page doesn't have touch handlers, and using b2g's
> gesture detection at other times? Maybe that's orthogonal to
> this problem...


We plan to use the async pan zoom controller down in layers and yes, it is 
supposed to replace the front end code.


We don't want to use native gesture detection unless we have to. We might 
have to make Win8 specific modifications to the APZC so that reactions to 
user input comply with Win8 input guidelines.


This shouldn't play into what/how events get delivered to content though 
since the apzc (from what I understand) acts as an input trap.


Jim 

