These simple stroke gestures, like we had years ago, now seems so
anachronistic.  It harkens to a time when we could only track a single
point of contact from the mouse.  In the video every gesture-drawing step
looked so unnecessary and time-wasting.

All tablets today support multi-touch interfaces, so there is no longer a
need to draw a symbol that indicates the  action you wish to take next.
Instead we want direct interaction with the objects.

The following YouTube video is an example of using multi-touch gestures for
manipulating 3D objects.

http://www.youtube.com/watch?v=6xIK07AhJjc


On Sun, Nov 18, 2012 at 6:03 AM, Jason Wilkins <jason.a.wilk...@gmail.com>wrote:

> More details about the video and the prototype.
>
> The recognizer used in the video is very simple to implement and
> understand.  It is called $1 (One Dollar) and was developed at the
> University of Washington [1].  We had a seminar recently about
> interfaces for children and extensions to $1 were presented and I was
> inspired by their simplicity because it meant I could just jump right
> in.  It works OK and is good enough for research purposes.
>
> One thing $1 does not do is input segmentation.  That means it cannot
> tell you how to split the input stream into chunks for individual
> recognition.  What I'm doing right now is segmenting by velocity.  If
> the cursor stops for 1/4 of a second then I attempt to match the
> input.  This worked great for mice but not at all for pens due to
> noise, so instead of requiring the cursor to stop I just require it to
> slow down a lot.  I'm experimenting with lots of different ideas in
> rejecting bad input.  I'm leaning towards a multi-modal approach where
> every symbol has its own separate criteria instead of attempting a
> one-size-fits-all approach.
>
> The recognizer is driven by the window manager and does not require a
> large amount of changes to capture the information it needs.
> Different recognizers could be plugged into the interface.
>
> The "afterglow" overlay is intended to give important feedback about
> how well the user is entering commands and to help them learn.  The
> afterglow gives an indication that a command was successfully entered
> (although I haven't disabled the display of valid but unbound gestures
> yet).  The afterglow morphs into the template shape to give the user
> both a clearer idea of what the gesture was and to help the user fix
> any problems with their form.
>
> In the future I want to use information about the gesture itself, such
> as its size and centroid, to drive any operator that is called.  For
> example, drawing a circle on an object might stamp it with a texture
> whose position and size were determined by the size and position of
> the circle.
>
> Additionally I want to create a new window region type for managing,
> training, and using gestures.  That might be doable as an add-on.
>
> [1] https://depts.washington.edu/aimgroup/proj/dollar/
>
>
> On Sun, Nov 18, 2012 at 7:42 AM, Jason Wilkins
> <jason.a.wilk...@gmail.com> wrote:
> > I've been exploring some research ideas (for university) and using
> > Blender to prototype them.  I made a short video that demonstrates
> > what I was able to do the last couple of days.  I'm starting to create
> > a general framework for sketch recognition in Blender.
> >
> > http://youtu.be/IeNjNbTz4CI
> >
> > The goal is an interface that could work without a keyboard or most
> > buttons.  I think a Blender with gestures is far more like Blender
> > than a Blender that is plastered with big buttons so it works on a
> > tablet.  It puts everything at your fingertips.
> _______________________________________________
> Bf-committers mailing list
> Bf-committers@blender.org
> http://lists.blender.org/mailman/listinfo/bf-committers
>
_______________________________________________
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers

Reply via email to