On Wed, Oct 6, 2010 at 5:26 PM, James Carrington <[email protected]> wrote:
> For example SpaceClaim Engineer (a multi-touch CAD app on Windows) has
> dozens, perhaps going on hundreds, of unique gestures it recognizes. They
> also use combinations of pen & touch in innovative ways which motivates them
> to want raw HID data from both touch and pen

How can we get hundreds of gestures without the ability to factor them into known sub-gestures such as drag/pinch/rotate/…?

The engine may, for example, be tuned to recognize gestures occurring in sub-areas of the screen (as in the video: 1-finger hold + 2-finger drag). On a big screen like the 3M we can have more than one user (20 fingers), so recognizing gestures by area simplifies that handling (for multi-user and meta-gestures). Anything beyond that becomes very context-specific, and only a minority of applications need it.

A "grammar of gestures", defined by combining known sub-gestures in space (areas) and in time (continuation/succession/cascading), simplifies life more than having to deal with too many distinct gestures.
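To make the idea concrete, here is a minimal sketch of such a grammar: composite gestures built from known primitives, qualified by a screen sub-area (space) and by whether the parts co-occur or follow one another (time). All names and the matching logic are hypothetical illustration, not any existing API:

```python
# Hypothetical sketch of a "grammar of gestures": composites are built from
# known primitives (hold/drag/pinch/rotate) qualified by a screen sub-area
# (space) and an ordering constraint (time). Names are illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class Primitive:
    kind: str     # "hold", "drag", "pinch", "rotate", ...
    fingers: int  # number of contacts involved
    region: str   # named screen sub-area, e.g. "left", "right"


@dataclass(frozen=True)
class Composite:
    name: str
    parts: tuple    # primitives that make up the gesture
    sequence: bool  # True: parts follow in time; False: parts co-occur


def matches(composite, observed):
    """Check a list of observed primitives against a composite definition.

    Concurrent composites match their parts in any order (space-combined);
    sequential composites require the exact order (time-combined).
    """
    if composite.sequence:
        return list(observed) == list(composite.parts)
    return sorted(observed, key=repr) == sorted(composite.parts, key=repr)


# The example from the thread: a 1-finger hold in one area combined with a
# 2-finger drag in another, recognized as a single composite gesture.
hold = Primitive("hold", 1, "left")
drag = Primitive("drag", 2, "right")
anchored_pan = Composite("anchored-pan", (hold, drag), sequence=False)

print(matches(anchored_pan, [drag, hold]))  # True: concurrent, order-free
```

A real engine would of course match against a live contact stream rather than pre-segmented primitives, but the point stands: a handful of primitives plus spatial and temporal combinators can cover hundreds of application gestures without the engine enumerating each one.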
_______________________________________________
Mailing list: https://launchpad.net/~multi-touch-dev
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~multi-touch-dev
More help   : https://help.launchpad.net/ListHelp

