Re: [Bf-committers] Blender At Your Fingertips: Prototype Unistroke Commands for Tablets

2012-11-18 Thread Jason Wilkins
Some errata: I forgot that any of the tap motions can be changed into
tap-hold.  Usually this subtly changes the meaning of a touch gesture.
It is also a common replacement for right-click.

I also forgot to mention pen input, which is still single touch.  Some
pen tablets do support both pen and fingers and can tell the difference, and
I expect this to become universal.

Another thing I should emphasize is that I'm moving towards general
"sketch recognition", which is like using the location and bounds of a
glyph, but goes beyond that and uses the glyph to generate content.
This is similar to how you can currently generate curves from the
grease pencil tool.

On Sun, Nov 18, 2012 at 10:57 PM, Jason Wilkins
 wrote:
> I think I came off a bit short in my response, so I thought I'd make a
> better case for what I'm thinking.  Of course it is my fault if
> anybody misses the point, so thank you for your feedback.
>
> The basic (easy) multitouch gestures are:
>
> 1, 2, 3, 4 finger N-tap
> 1, 2, 3, 4 finger drag
> 2 finger pinch
>
> You can use contextual clues such as proximity and orientation to
> change the exact meaning of these gestures.
>
> You could introduce the pinky finger, or use both hands, or even the
> palm, and that would be fine if we were building a piano application.
>
> Multitouch is good for spatial operations (this includes single touch
> operations with 1 finger).  However, single touch glyphs are good for
> activating commands that do not have a spatial aspect to them
> (although I intend to explore the idea of using some spatial
> information from glyphs in activating commands).
>
> Single-touch glyph gestures are not replaced by multitouch, but can
> complement it by providing a way to access commands that are not
> spatially oriented and would otherwise be buried in some menu, sit on some
> button that takes up space, or require the attachment of a
> keyboard (or some other interface like speech-to-command).
>
> It should be possible to make a few dozen single-touch stroke gestures,
> and binding them to non-spatial commands would reduce the need to add
> buttons.
>
> Even though the interface in that video was a toy, even there I could
> imagine hiding the shape toolbar and drawing the shapes that you
> wanted instead of selecting them.  Using the size and position of the
> drawn single-touch glyph you could even determine an initial position
> and size for the new shapes.
>
> Glyphs and gestures are kind of different.  I'm emphasizing the glyph
> drawing aspect of touch, which is most intuitive with a single touch.
> But in the future I intend to explore the possibility of multitouch
> glyphs.  I think multitouch glyphs would allow for fast command entry,
> and multisegmented multitouch glyphs (multiple "letters") would allow
> for really complicated command entry and also be easy to communicate
> by using visual representations of the glyphs.  If that is unclear,
> think of how the '+' sign can be thought of as a single touch glyph,
> while '#' could be the same motion but with two fingers.  Then '##' could
> be used to indicate in writing that you do that motion twice.
>
> I hope that makes it clear that I'm thinking of how single and
> multitouch can be used to enter glyph-like commands and not space-like
> gestures, so it isn't anachronistic at all.
>
>
>
> On Sun, Nov 18, 2012 at 11:17 AM, Harley Acheson
>  wrote:
>> These simple stroke gestures, like we had years ago, now seem so
>> anachronistic.  They hark back to a time when we could only track a single
>> point of contact from the mouse.  In the video every gesture-drawing step
>> looked so unnecessary and time-wasting.
>>
>> All tablets today support multi-touch interfaces, so there is no longer a
>> need to draw a symbol that indicates the action you wish to take next.
>> Instead we want direct interaction with the objects.
>>
>> The following YouTube video is an example of using multi-touch gestures for
>> manipulating 3D objects.
>>
>> http://www.youtube.com/watch?v=6xIK07AhJjc
>>
>>
>> On Sun, Nov 18, 2012 at 6:03 AM, Jason Wilkins 
>> wrote:
>>
>>> More details about the video and the prototype.
>>>
>>> The recognizer used in the video is very simple to implement and
>>> understand.  It is called $1 (One Dollar) and was developed at the
>>> University of Washington [1].  We had a seminar recently about
>>> interfaces for children and extensions to $1 were presented and I was
>>> inspired by their simplicity because it meant I could just jump right
>>> in.  It works OK and is good enough for research purposes.
>>>
>>> One thing $1 does not do is input segmentation.  That means it cannot
>>> tell you how to split the input stream into chunks for individual
>>> recognition.  What I'm doing right now is segmenting by velocity.  If
>>> the cursor stops for 1/4 of a second then I attempt to match the
>>> input.  This worked great for mice but not at all for pens due to
>>> noise, so instead of requiring the cursor to stop I just require it to
>>> slow down a lot.

Re: [Bf-committers] Blender At Your Fingertips: Prototype Unistroke Commands for Tablets

2012-11-18 Thread Jason Wilkins
I think I came off a bit short in my response, so I thought I'd make a
better case for what I'm thinking.  Of course it is my fault if
anybody misses the point, so thank you for your feedback.

The basic (easy) multitouch gestures are:

1, 2, 3, 4 finger N-tap
1, 2, 3, 4 finger drag
2 finger pinch

You can use contextual clues such as proximity and orientation to
change the exact meaning of these gestures.
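
To make that concrete, a rough sketch of how these basic gestures could be
classified from per-finger tracks (plain Python with made-up names and
thresholds, not Blender's actual event handling):

import math

TAP_MOVE_LIMIT = 10.0   # max travel in pixels for a contact to still count as a tap
TAP_TIME_LIMIT = 0.25   # max duration in seconds of a tap
PINCH_THRESHOLD = 20.0  # change in finger spread (pixels) that counts as a pinch

def classify(touches):
    """touches: one track per finger, each a list of (x, y, t) samples."""
    n = len(touches)
    duration = max(trk[-1][2] - trk[0][2] for trk in touches)
    travel = max(math.dist(trk[0][:2], trk[-1][:2]) for trk in touches)

    if travel < TAP_MOVE_LIMIT and duration < TAP_TIME_LIMIT:
        return "%d-finger tap" % n

    if n == 2:
        spread_start = math.dist(touches[0][0][:2], touches[1][0][:2])
        spread_end = math.dist(touches[0][-1][:2], touches[1][-1][:2])
        if abs(spread_end - spread_start) > PINCH_THRESHOLD:
            # sign of (spread_end - spread_start) distinguishes pinch-in from pinch-out
            return "2-finger pinch"

    return "%d-finger drag" % n

Contextual clues like proximity to an object or the orientation of the two
fingers would then be applied on top of the returned label.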

You could introduce the pinky finger, or use both hands, or even the
palm, and that would be fine if we were building a piano application.

Multitouch is good for spatial operations (this includes single touch
operations with 1 finger).  However, single touch glyphs are good for
activating commands that do not have a spatial aspect to them
(although I intend to explore the idea of using some spatial
information from glyphs in activating commands).

Single-touch glyph gestures are not replaced by multitouch, but can
complement it by providing a way to access commands that are not
spatially oriented and would otherwise be buried in some menu, sit on some
button that takes up space, or require the attachment of a
keyboard (or some other interface like speech-to-command).

It should be possible to make a few dozen single-touch stroke gestures,
and binding them to non-spatial commands would reduce the need to add
buttons.

Even though the interface in that video was a toy, even there I could
imagine hiding the shape toolbar and drawing the shapes that you
wanted instead of selecting them.  Using the size and position of the
drawn single-touch glyph you could even determine an initial position
and size for the new shapes.
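
As a sketch of that last idea (a hypothetical helper, not an existing
operator), the centroid and bounding box of the drawn glyph are enough to
seed the new shape's transform:

def glyph_placement(points):
    """points: (x, y) samples of the drawn single-touch glyph.
    Returns (center, size) for the new shape's initial transform."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    center = (sum(xs) / len(xs), sum(ys) / len(ys))   # centroid of the stroke
    size = max(max(xs) - min(xs), max(ys) - min(ys))  # longer side of the bounding box
    return center, size

# A recognized "circle" glyph could then spawn a circle primitive at
# `center`, scaled to `size`, instead of using fixed defaults.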

Glyphs and gestures are kind of different.  I'm emphasizing the glyph
drawing aspect of touch, which is most intuitive with a single touch.
But in the future I intend to explore the possibility of multitouch
glyphs.  I think multitouch glyphs would allow for fast command entry,
and multisegmented multitouch glyphs (multiple "letters") would allow
for really complicated command entry and also be easy to communicate
by using visual representations of the glyphs.  If that is unclear,
think of how the '+' sign can be thought of as a single touch glyph,
while '#' could be the same motion but with two fingers.  Then '##' could
be used to indicate in writing that you do that motion twice.

I hope that makes it clear that I'm thinking of how single and
multitouch can be used to enter glyph-like commands and not space-like
gestures, so it isn't anachronistic at all.



On Sun, Nov 18, 2012 at 11:17 AM, Harley Acheson
 wrote:
> These simple stroke gestures, like we had years ago, now seem so
> anachronistic.  They hark back to a time when we could only track a single
> point of contact from the mouse.  In the video every gesture-drawing step
> looked so unnecessary and time-wasting.
>
> All tablets today support multi-touch interfaces, so there is no longer a
> need to draw a symbol that indicates the action you wish to take next.
> Instead we want direct interaction with the objects.
>
> The following YouTube video is an example of using multi-touch gestures for
> manipulating 3D objects.
>
> http://www.youtube.com/watch?v=6xIK07AhJjc
>
>
> On Sun, Nov 18, 2012 at 6:03 AM, Jason Wilkins 
> wrote:
>
>> More details about the video and the prototype.
>>
>> The recognizer used in the video is very simple to implement and
>> understand.  It is called $1 (One Dollar) and was developed at the
>> University of Washington [1].  We had a seminar recently about
>> interfaces for children and extensions to $1 were presented and I was
>> inspired by their simplicity because it meant I could just jump right
>> in.  It works OK and is good enough for research purposes.
>>
>> One thing $1 does not do is input segmentation.  That means it cannot
>> tell you how to split the input stream into chunks for individual
>> recognition.  What I'm doing right now is segmenting by velocity.  If
>> the cursor stops for 1/4 of a second then I attempt to match the
>> input.  This worked great for mice but not at all for pens due to
>> noise, so instead of requiring the cursor to stop I just require it to
>> slow down a lot.  I'm experimenting with lots of different ideas in
>> rejecting bad input.  I'm leaning towards a multi-modal approach where
>> every symbol has its own separate criteria instead of attempting a
>> one-size-fits-all approach.
>>
>> The recognizer is driven by the window manager and does not require a
>> large amount of changes to capture the information it needs.
>> Different recognizers could be plugged into the interface.
>>
>> The "afterglow" overlay is intended to give important feedback about
>> how well the user is entering commands and to help them learn.  The
>> afterglow gives an indication that a command was successfully entered
>> (although I haven't disabled the display of valid but unbound gestures
>> yet).  The afterglow morphs into the template shape to give the user
>> both a clearer idea of what the gesture was and to help the user fix
>> any problems with their form.

Re: [Bf-committers] Blender At Your Fingertips: Prototype Unistroke Commands for Tablets

2012-11-18 Thread Jason Wilkins
I think that single touch and multi touch gestures are two different
parts of the same interface.  Maybe you are hung up on the fact that I
bound the single touch gestures to actions that could be better done
with multitouch gestures, and if so you missed the point.

On Sun, Nov 18, 2012 at 11:17 AM, Harley Acheson
 wrote:
> These simple stroke gestures, like we had years ago, now seem so
> anachronistic.  They hark back to a time when we could only track a single
> point of contact from the mouse.  In the video every gesture-drawing step
> looked so unnecessary and time-wasting.
>
> All tablets today support multi-touch interfaces, so there is no longer a
> need to draw a symbol that indicates the action you wish to take next.
> Instead we want direct interaction with the objects.
>
> The following YouTube video is an example of using multi-touch gestures for
> manipulating 3D objects.
>
> http://www.youtube.com/watch?v=6xIK07AhJjc
>
>
> On Sun, Nov 18, 2012 at 6:03 AM, Jason Wilkins 
> wrote:
>
>> More details about the video and the prototype.
>>
>> The recognizer used in the video is very simple to implement and
>> understand.  It is called $1 (One Dollar) and was developed at the
>> University of Washington [1].  We had a seminar recently about
>> interfaces for children and extensions to $1 were presented and I was
>> inspired by their simplicity because it meant I could just jump right
>> in.  It works OK and is good enough for research purposes.
>>
>> One thing $1 does not do is input segmentation.  That means it cannot
>> tell you how to split the input stream into chunks for individual
>> recognition.  What I'm doing right now is segmenting by velocity.  If
>> the cursor stops for 1/4 of a second then I attempt to match the
>> input.  This worked great for mice but not at all for pens due to
>> noise, so instead of requiring the cursor to stop I just require it to
>> slow down a lot.  I'm experimenting with lots of different ideas in
>> rejecting bad input.  I'm leaning towards a multi-modal approach where
>> every symbol has its own separate criteria instead of attempting a
>> one-size-fits-all approach.
>>
>> The recognizer is driven by the window manager and does not require a
>> large amount of changes to capture the information it needs.
>> Different recognizers could be plugged into the interface.
>>
>> The "afterglow" overlay is intended to give important feedback about
>> how well the user is entering commands and to help them learn.  The
>> afterglow gives an indication that a command was successfully entered
>> (although I haven't disabled the display of valid but unbound gestures
>> yet).  The afterglow morphs into the template shape to give the user
>> both a clearer idea of what the gesture was and to help the user fix
>> any problems with their form.
>>
>> In the future I want to use information about the gesture itself, such
>> as its size and centroid, to drive any operator that is called.  For
>> example, drawing a circle on an object might stamp it with a texture
>> whose position and size were determined by the size and position of
>> the circle.
>>
>> Additionally I want to create a new window region type for managing,
>> training, and using gestures.  That might be doable as an add-on.
>>
>> [1] https://depts.washington.edu/aimgroup/proj/dollar/
>>
>>
>> On Sun, Nov 18, 2012 at 7:42 AM, Jason Wilkins
>>  wrote:
>> > I've been exploring some research ideas (for university) and using
>> > Blender to prototype them.  I made a short video that demonstrates
>> > what I was able to do the last couple of days.  I'm starting to create
>> > a general framework for sketch recognition in Blender.
>> >
>> > http://youtu.be/IeNjNbTz4CI
>> >
>> > The goal is an interface that could work without a keyboard or most
>> > buttons.  I think a Blender with gestures is far more like Blender
>> > than a Blender that is plastered with big buttons so it works on a
>> > tablet.  It puts everything at your fingertips.


[Bf-committers] BMO usage in bmesh_queries

2012-11-18 Thread Nicholas Bishop
Quick question about the BMesh query function BM_verts_in_face(): is it
correct that it uses BMO_elem_flag_enable/disable/test? Seems like this
won't work outside of an operator, and all the other query functions are
using BM_elem_flag_enable/disable/test.

-Nicholas


Re: [Bf-committers] Thinking about a "community edit mode" for blender

2012-11-18 Thread Chad Fraleigh
Oh, and one thing that would definitely have to be done for security is
locking out access to arbitrary file paths while in shared edit mode
(perhaps defining a drive and/or directory whitelist ahead of time).
That way remote users couldn't manipulate your system via special
files/devices that have side effects from just opening them.
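
A minimal sketch of such a whitelist check (standalone Python with made-up
directories, not an existing Blender hook):

import os

ALLOWED_DIRS = ["/home/user/shared_project", "/srv/blender_assets"]  # configured up front

def path_allowed(requested_path):
    """Reject any path that does not resolve into a whitelisted directory."""
    real = os.path.realpath(requested_path)  # collapses '..' and symlinks
    for allowed in ALLOWED_DIRS:
        base = os.path.realpath(allowed)
        if real == base or real.startswith(base + os.sep):
            return True
    return False

# A remote edit command that tries to open "/dev/watchdog" or
# "../../etc/passwd" would be refused before any file I/O happens.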

And I just remembered watching a macro tutorial recently, which
implies that a lot of what is needed to do high-level replication already
exists.

While on this subject, it got me thinking of another feature (which may
or may not already exist somewhere). If actions can be replicated by
"replay" between multiple instances of blender, then why not to
itself? The VisualWorks Smalltalk IDE has image files (a snapshot
of the VM) and a change list/log. Because of this, if the VM crashes
(or hangs and you have to kill it off), not everything since the last
image save is lost. You can simply replay the change file (or even select
parts of it) and get back to where you left off. So if something like
this existed in blender (or an addon) then when blender crashes (which
I get the impression happens often with newer features before they
have a few releases to become stable) it would be trivial to recover.
Maybe even when opening a "crashed" file it would ask if you want to
replay the changes. There are other examples of this (like 'vi' and
other editors), but VisualWorks could also do more with change lists
than just recover the session (which is the general limit in text
editors). If done right it could be used to export a set of changes as
a diff/patch-like mechanism for .blend files (unless that's not very
useful in practice). Anyway, that's enough on these side tangents.


On Sun, Nov 18, 2012 at 3:07 PM, Chad Fraleigh  wrote:
> What if, instead of trying to keep the internal data directly in
> sync, it was treated more like a high-level database replay log? Sync
> the operations, like select vertex #5, begin move action, move by x/y,
> end move action; or set modifier #1 field X to value Y. Assuming all
> instances started out with logically identical data/state and have the
> same capabilities (i.e. same blender version, and same [active]
> addons), then in the end each edited copy should be identical. Of
> course this also assumes that all "actions" (whether UI or script
> triggered) can be hooked/captured and replicated to the other node(s).
>
> One catch is that any local, system-specific data (like non-relative
> filenames) that might get applied would not be portable across nodes.
> If support for virtual/aliased/mapped file resources existed, then
> both sides could have their system dependent resources mapped, and
> then these virtual "filenames" could be passed back and forth. While
> not a complete solution, it would allow even those that have a
> collection of [non-common directory] external data on incompatible
> directory structures (or OS's) to still work together. For that matter
> there could even be exported virtual resources.. so if resource X
> wasn't mapped on one side it could pull the data remotely from another
> node that does have it (when allowed).

-Chad


Re: [Bf-committers] Thinking about a "community edit mode" for blender

2012-11-18 Thread Chad Fraleigh
What if, instead of trying to keep the internal data directly in
sync, it was treated more like a high-level database replay log? Sync
the operations, like select vertex #5, begin move action, move by x/y,
end move action; or set modifier #1 field X to value Y. Assuming all
instances started out with logically identical data/state and have the
same capabilities (i.e. same blender version, and same [active]
addons), then in the end each edited copy should be identical. Of
course this also assumes that all "actions" (whether UI or script
triggered) can be hooked/captured and replicated to the other node(s).
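
To illustrate the idea, a toy sketch of such a replay log (not tied to
Blender's actual operator system; the log format and names are invented):

import json

class OperationLog:
    """Append-only log of high-level edit operations, one JSON object per line."""

    def __init__(self, path):
        self.path = path

    def record(self, op_name, **params):
        entry = {"op": op_name, "params": params}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry  # also the unit you would send to the other node(s)

    def replay(self, apply_fn):
        """Re-run every logged operation through apply_fn(op_name, params)."""
        with open(self.path) as f:
            for line in f:
                entry = json.loads(line)
                apply_fn(entry["op"], entry["params"])

# Every captured action is recorded locally and broadcast; a peer (or a
# recovering session, as in the follow-up mail) applies the same stream:
log = OperationLog("/tmp/session.blendlog")
log.record("select_vertex", index=5)
log.record("translate", dx=0.1, dy=0.0, dz=0.25)
log.replay(lambda op, params: print(op, params))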

One catch is that any local, system-specific data (like non-relative
filenames) that might get applied would not be portable across nodes.
If support for virtual/aliased/mapped file resources existed, then
both sides could have their system dependent resources mapped, and
then these virtual "filenames" could be passed back and forth. While
not a complete solution, it would allow even those that have a
collection of [non-common directory] external data on incompatible
directory structures (or OS's) to still work together. For that matter
there could even be exported virtual resources.. so if resource X
wasn't mapped on one side it could pull the data remotely from another
node that does have it (when allowed).
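
A sketch of what such a virtual resource table could look like (names and
paths purely illustrative):

# Each node keeps its own mapping from shared virtual names to local paths;
# only the virtual names ever travel over the wire.
RESOURCE_MAP = {
    "tex://bricks_diffuse": "/home/alice/textures/bricks_d.png",  # node A
    # on node B the same key might map to "D:\\assets\\bricks_d.png"
}

def resolve(virtual_name):
    try:
        return RESOURCE_MAP[virtual_name]
    except KeyError:
        # Unmapped: this is where the data would be pulled from a peer
        # that has it (when allowed), then cached locally.
        raise FileNotFoundError("%s is not mapped on this node" % virtual_name)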

Another issue would be error handling. So for example, if user A loads
a texture file, but that file doesn't exist on user B's side (or
it does but is an invalid image), then user B would get an error
dialog. This would place them in inconsistent states. Best case the
error is semi-ignorable (e.g. one side just doesn't have a nice
texture showing, but works otherwise), however worst case is the data
structures get out of sync (e.g. texture slot #2 is missing on side
B). Maybe an error catching hook that gives a dialog "There was an
error that didn't occur on the other node... would you like to resync
your blender data?" Yes, not ideal (especially if it is a big file and
happens a lot with a slow network connection).

On Sun, Nov 18, 2012 at 9:29 AM, Brecht Van Lommel
 wrote:
> Hi,
>
> There was indeed experimental Verse integration for this sort of
> thing, but it never got to a finished state. The idea was that Verse
> would be integrated in various applications and game engines, and that
> you could then interchange data. The main issue I guess is that
> synchronizing data is a hard problem, and that it's difficult to add
> this into Blender or other 3D app designs which weren't designed from
> the ground up with this in mind.
>
> For exporting to a game engine, I think this can work and could be
> implemented as an addon. But it's still a hard problem, especially if
> you want to do 2-way syncing. For Blender-to-Blender syncing, I don't see
> it happening: syncing all data structures is too much work to get
> reliable. With a game engine you only have to consider a subset, the same
> as when writing an exporter.
>
> One mistake with Verse in my opinion is that it tried to be too
> fine-grained in its syncing: it's nice in theory to only send changed
> vertices, but this all just becomes incredibly complex when you consider
> that you have to sync all data at this level. It's better to work at the
> level of entire datablocks in my opinion, and if you want to optimize
> data transfer then maybe use an rsync-like algorithm.
>
> Brecht.
>
> On Sun, Nov 18, 2012 at 5:05 PM, Gaia  wrote:
>> I remember there was some attempt to add a multi user mode for blender
>> (I think there was something setup in 2.4) The key idea was that 2 or
>> more users could share one Blender 3dView and do concurrent editing on
>> the objects right in blender.
>>
>> I would like to even add another thought: Maybe it is possible to setup
>> a bridge between blender and an online world, such that you can edit an
>> object in blender which is actually located in an online environment (or
>> visualize an object that is actually located in blender in an online
>> world) . I guess that all of this is far from trivial. But maybe it
>> would be fun to start thinking about how that could be done. Or if the
>> groundwork has already been done,  maybe it makes sense to make a
>> "production ready" tool (Addon?). As far as i know there was some work
>> on this done for "RealExtent" a while ago...
>>
>> Do you have any opinion on such a development ? Does it make sense, are
>> there better ways to go, is it doable, feasible ? Anybody working on it
>> even ?
>>
>> cheers,
>> Gaia


-Chad


[Bf-committers] Thinking about a "community edit mode" for blender

2012-11-18 Thread Sergey Kurdakov
Hi,

One approach which could be used is similar to a multi-viewer VNC (remote
desktop) connection.

So there is one 'server' Blender and many viewer screens, which can send
commands and updates to the 'host' computer (in our case, the host Blender).

As ffmpeg is already used to save video, and ffmpeg has the capability to
send video over the network
(http://ffmpeg.org/trac/ffmpeg/wiki/StreamingGuide), the initial
prototype could be coded relatively easily - there is just a
need to encode user activities, send them over the net, and translate them
into clicks on the 'host' computer.
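
A rough sketch of the "encode user activities and send them" half (plain
sockets and JSON, purely illustrative; how the host injects the received
clicks is left as a stub):

import json
import socket

PORT = 9430  # arbitrary port for the hypothetical host Blender

def send_event(sock, kind, **data):
    """Viewer side: serialize one user action and ship it to the host."""
    sock.sendall((json.dumps(dict(kind=kind, **data)) + "\n").encode())

def serve(port=PORT):
    """Host side: accept one viewer and translate its events into local actions."""
    with socket.socket() as srv:
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, conn.makefile() as stream:
            for line in stream:
                event = json.loads(line)
                # Here the host would inject the click/keypress into its own
                # event queue; stubbed out in this sketch.
                print("would apply:", event)

# Viewer usage:
#   with socket.create_connection(("host-machine", PORT)) as s:
#       send_event(s, "click", x=512, y=384, button="left")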

As with other VNC solutions there will be a somewhat noticeable lag, but it
might still be useful for discussing some issues or demonstrating particular
points to the team.

Regards
Sergey


Re: [Bf-committers] Thinking about a "community edit mode" for blender

2012-11-18 Thread Brecht Van Lommel
Hi,

There was indeed experimental Verse integration for this sort of
thing, but it never got to a finished state. The idea was that Verse
would be integrated in various applications and game engines, and that
you could then interchange data. The main issue I guess is that
synchronizing data is a hard problem, and that it's difficult to add
this into Blender or other 3D app designs which weren't designed from
the ground up with this in mind.

For exporting to a game engine, I think this can work and could be
implemented as an addon. But it's still a hard problem, especially if
you want to do 2-way syncing. For Blender-to-Blender syncing, I don't see
it happening: syncing all data structures is too much work to get
reliable. With a game engine you only have to consider a subset, the same
as when writing an exporter.

One mistake with Verse in my opinion is that it tried to be too
fine-grained in its syncing: it's nice in theory to only send changed
vertices, but this all just becomes incredibly complex when you consider
that you have to sync all data at this level. It's better to work at the
level of entire datablocks in my opinion, and if you want to optimize
data transfer then maybe use an rsync-like algorithm.
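
A sketch of the datablock-level approach at its simplest: hash each
datablock's serialized form and only ship the ones whose hash differs
(names invented; a real rsync-style scheme would additionally split large
blocks into chunks):

import hashlib

def digest(blob):
    return hashlib.sha1(blob).hexdigest()

def changed_datablocks(local, remote_digests):
    """local: {datablock_name: serialized_bytes}; remote_digests: {name: digest}.
    Returns the datablocks the peer is missing or has stale copies of."""
    return {name: blob for name, blob in local.items()
            if remote_digests.get(name) != digest(blob)}

# Example: only the edited mesh travels, the unchanged material does not.
local = {"ME_Cube": b"vertex data v2", "MA_Red": b"material"}
remote = {"ME_Cube": digest(b"vertex data v1"), "MA_Red": digest(b"material")}
print(list(changed_datablocks(local, remote)))  # ['ME_Cube']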

Brecht.

On Sun, Nov 18, 2012 at 5:05 PM, Gaia  wrote:
> I remember there was some attempt to add a multi user mode for blender
> (I think there was something setup in 2.4) The key idea was that 2 or
> more users could share one Blender 3dView and do concurrent editing on
> the objects right in blender.
>
> I would like to even add another thought: Maybe it is possible to setup
> a bridge between blender and an online world, such that you can edit an
> object in blender which is actually located in an online environment (or
> visualize an object that is actually located in blender in an online
> world) . I guess that all of this is far from trivial. But maybe it
> would be fun to start thinking about how that could be done. Or if the
> groundwork has already been done,  maybe it makes sense to make a
> "production ready" tool (Addon?). As far as i know there was some work
> on this done for "RealExtent" a while ago...
>
> Do you have any opinion on such a development ? Does it make sense, are
> there better ways to go, is it doable, feasible ? Anybody working on it
> even ?
>
> cheers,
> Gaia


Re: [Bf-committers] Blender At Your Fingertips: Prototype Unistroke Commands for Tablets

2012-11-18 Thread Harley Acheson
These simple stroke gestures, like we had years ago, now seem so
anachronistic.  They hark back to a time when we could only track a single
point of contact from the mouse.  In the video every gesture-drawing step
looked so unnecessary and time-wasting.

All tablets today support multi-touch interfaces, so there is no longer a
need to draw a symbol that indicates the action you wish to take next.
Instead we want direct interaction with the objects.

The following YouTube video is an example of using multi-touch gestures for
manipulating 3D objects.

http://www.youtube.com/watch?v=6xIK07AhJjc


On Sun, Nov 18, 2012 at 6:03 AM, Jason Wilkins wrote:

> More details about the video and the prototype.
>
> The recognizer used in the video is very simple to implement and
> understand.  It is called $1 (One Dollar) and was developed at the
> University of Washington [1].  We had a seminar recently about
> interfaces for children and extensions to $1 were presented and I was
> inspired by their simplicity because it meant I could just jump right
> in.  It works OK and is good enough for research purposes.
>
> One thing $1 does not do is input segmentation.  That means it cannot
> tell you how to split the input stream into chunks for individual
> recognition.  What I'm doing right now is segmenting by velocity.  If
> the cursor stops for 1/4 of a second then I attempt to match the
> input.  This worked great for mice but not at all for pens due to
> noise, so instead of requiring the cursor to stop I just require it to
> slow down a lot.  I'm experimenting with lots of different ideas in
> rejecting bad input.  I'm leaning towards a multi-modal approach where
> every symbol has its own separate criteria instead of attempting a
> one-size-fits-all approach.
>
> The recognizer is driven by the window manager and does not require a
> large amount of changes to capture the information it needs.
> Different recognizers could be plugged into the interface.
>
> The "afterglow" overlay is intended to give important feedback about
> how well the user is entering commands and to help them learn.  The
> afterglow gives an indication that a command was successfully entered
> (although I haven't disabled the display of valid but unbound gestures
> yet).  The afterglow morphs into the template shape to give the user
> both a clearer idea of what the gesture was and to help the user fix
> any problems with their form.
>
> In the future I want to use information about the gesture itself, such
> as its size and centroid, to drive any operator that is called.  For
> example, drawing a circle on an object might stamp it with a texture
> whose position and size were determined by the size and position of
> the circle.
>
> Additionally I want to create a new window region type for managing,
> training, and using gestures.  That might be doable as an add-on.
>
> [1] https://depts.washington.edu/aimgroup/proj/dollar/
>
>
> On Sun, Nov 18, 2012 at 7:42 AM, Jason Wilkins
>  wrote:
> > I've been exploring some research ideas (for university) and using
> > Blender to prototype them.  I made a short video that demonstrates
> > what I was able to do the last couple of days.  I'm starting to create
> > a general framework for sketch recognition in Blender.
> >
> > http://youtu.be/IeNjNbTz4CI
> >
> > The goal is an interface that could work without a keyboard or most
> > buttons.  I think a Blender with gestures is far more like Blender
> > than a Blender that is plastered with big buttons so it works on a
> > tablet.  It puts everything at your fingertips.


[Bf-committers] Blender developer meeting minutes - 18 november 2012

2012-11-18 Thread Brecht Van Lommel
Hi all,

Notes from today's meeting in irc.freenode.net #blendercoders

a) 2.65 Release Status

* We will do a test build next week, Tuesday at the earliest, but it may be a
few days later. Only OSL bugs and build issues are holding it up now; these
should be solved in the next few days.

b) Last week

* Sergey finished image thread-safety improvements. Needs more speed tests
of Blender Internal on multi-core systems (especially OS X running on Xeon
workstations, which was an issue in the past)

* Sergey and Bastien wrote a script which installs and compiles
dependencies for linux. This script replaces pre-compiled libraries from
the svn. Wiki pages were updated
http://wiki.blender.org/index.php/Dev:2.5/Doc/Building_Blender/Linux/Generic_Distro/CMake#Automatic_dependencies_installation

* OS X 10.8 + boost::locale + OCIO issue should be solved now, by using the
10.7 SDK for builds.

* OSL is now supposed to be working on all platforms except for the issues
mentioned below. Thanks all for the hard work; it seems this was a somewhat
painful experience :)

c) Next week

* Test build! We'll let platform maintainers know when we're ready.

* Brecht looks into OSL issues and getting cycles motion blur working on
GPU. OSL building still has 3 issues to solve: some crashes on windows
(connecting float to color socket), scons link errors on windows, and 32
bit not working on mac.

* Sergey will spend time on motion tracker, solving issues with masked
tracking.

* Thomas works on release notes for test build.

* To all developers: please check bug tracker for important bugs, and add
release notes for big features you added!

d) Other Projects

* Nicholas reports progress is being made on dyntopo, it's much more stable
now than a week or two ago.

* Howard mentions he will likely merge bridge tool improvements from summer
of code for the 2.66 release.


Thanks,
Brecht.


[Bf-committers] Thinking about a "community edit mode" for blender

2012-11-18 Thread Gaia
Hi all;

I remember there was some attempt to add a multi-user mode for blender
(I think there was something set up in 2.4). The key idea was that 2 or
more users could share one Blender 3D View and do concurrent editing on
the objects right in blender.

I would also like to add another thought: Maybe it is possible to set up
a bridge between blender and an online world, such that you can edit an
object in blender which is actually located in an online environment (or
visualize an object that is actually located in blender in an online
world). I guess that all of this is far from trivial. But maybe it
would be fun to start thinking about how that could be done. Or if the
groundwork has already been done, maybe it makes sense to make a
"production ready" tool (Addon?). As far as I know there was some work
on this done for "RealExtent" a while ago...

Do you have any opinion on such a development? Does it make sense, are
there better ways to go, is it doable, feasible? Is anybody even working
on it?

cheers,
Gaia


Re: [Bf-committers] Blender At Your Fingertips: Prototype Unistroke Commands for Tablets

2012-11-18 Thread Jason Wilkins
More details about the video and the prototype.

The recognizer used in the video is very simple to implement and
understand.  It is called $1 (One Dollar) and was developed at the
University of Washington [1].  We had a seminar recently about
interfaces for children and extensions to $1 were presented and I was
inspired by their simplicity because it meant I could just jump right
in.  It works OK and is good enough for research purposes.
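
For anyone curious, $1 boils down to something like this stripped-down
sketch (plain Python; the rotation normalization and golden-section search
of the published algorithm are omitted, so this is only the gist):

import math

N = 64  # every stroke is resampled to this many points

def resample(points, n=N):
    """Redistribute a stroke's points so they are evenly spaced along its path."""
    pts = [tuple(p) for p in points]
    path_len = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    if path_len == 0:
        return [pts[0]] * n
    step = path_len / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # measure the next step from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:       # guard against floating point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Scale to a unit box and move the centroid to the origin."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / w, (y - cy) / h) for x, y in points]

def recognize(stroke, templates):
    """templates: {name: [(x, y), ...]}. Returns (best_name, average_distance)."""
    candidate = normalize(resample(stroke))
    best, best_dist = None, float("inf")
    for name, template in templates.items():
        t = normalize(resample(template))
        d = sum(math.dist(a, b) for a, b in zip(candidate, t)) / N
        if d < best_dist:
            best, best_dist = name, d
    return best, best_dist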

One thing $1 does not do is input segmentation.  That means it cannot
tell you how to split the input stream into chunks for individual
recognition.  What I'm doing right now is segmenting by velocity.  If
the cursor stops for 1/4 of a second then I attempt to match the
input.  This worked great for mice but not at all for pens due to
noise, so instead of requiring the cursor to stop I just require it to
slow down a lot.  I'm experimenting with lots of different ideas in
rejecting bad input.  I'm leaning towards a multi-modal approach where
every symbol has its own separate criteria instead of attempting a
one-size-fits-all approach.
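
The velocity-based segmentation described above could look roughly like
this (made-up thresholds, operating on raw (x, y, t) samples outside of
Blender):

import math

SLOW_SPEED = 40.0  # px/s below which the pen counts as "almost stopped"
DWELL_TIME = 0.25  # seconds the speed must stay low before a segment is cut

def segment_by_velocity(samples):
    """samples: [(x, y, t), ...] in time order.  Yields chunks of samples,
    cutting whenever the cursor stays slow for DWELL_TIME seconds."""
    if not samples:
        return
    current, slow_since, paused = [], None, False
    for prev, cur in zip(samples, samples[1:]):
        if not paused:
            current.append(prev)
        dt = (cur[2] - prev[2]) or 1e-6
        speed = math.dist(prev[:2], cur[:2]) / dt
        if speed < SLOW_SPEED:
            if slow_since is None:
                slow_since = cur[2]
            elif not paused and cur[2] - slow_since >= DWELL_TIME:
                yield current               # hand this chunk to the recognizer
                current, paused = [], True  # ignore input until movement resumes
        else:
            slow_since, paused = None, False
    if not paused:
        current.append(samples[-1])
    if current:
        yield current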

The recognizer is driven by the window manager and does not require a
large amount of changes to capture the information it needs.
Different recognizers could be plugged into the interface.

The "afterglow" overlay is intended to give important feedback about
how well the user is entering commands and to help them learn.  The
afterglow gives an indication that a command was successfully entered
(although I haven't disabled the display of valid but unbound gestures
yet).  The afterglow morphs into the template shape to give the user
both a clearer idea of what the gesture was and to help the user fix
any problems with their form.
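
The "morphs into the template" part is essentially per-point interpolation
once both strokes contain the same number of samples (a sketch; resampling
them to equal length, as in the recognizer sketch above, is assumed):

def morph(stroke, template, t):
    """Blend the drawn stroke toward the matched template.
    stroke/template: equal-length point lists; t: 0.0 (as drawn) .. 1.0 (template)."""
    return [((1 - t) * sx + t * tx, (1 - t) * sy + t * ty)
            for (sx, sy), (tx, ty) in zip(stroke, template)]

# Drawn over a few frames with t running from 0 to 1, the afterglow slides
# from the user's wobbly input onto the clean template shape.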

In the future I want to use information about the gesture itself, such
as its size and centroid, to drive any operator that is called.  For
example, drawing a circle on an object might stamp it with a texture
whose position and size were determined by the size and position of
the circle.

Additionally I want to create a new window region type for managing,
training, and using gestures.  That might be doable as an add-on.

[1] https://depts.washington.edu/aimgroup/proj/dollar/


On Sun, Nov 18, 2012 at 7:42 AM, Jason Wilkins
 wrote:
> I've been exploring some research ideas (for university) and using
> Blender to prototype them.  I made a short video that demonstrates
> what I was able to do the last couple of days.  I'm starting to create
> a general framework for sketch recognition in Blender.
>
> http://youtu.be/IeNjNbTz4CI
>
> The goal is an interface that could work without a keyboard or most
> buttons.  I think a Blender with gestures is far more like Blender
> than a Blender that is plastered with big buttons so it works on a
> tablet.  It puts everything at your fingertips.


[Bf-committers] Blender At Your Fingertips: Prototype Unistroke Commands for Tablets

2012-11-18 Thread Jason Wilkins
I've been exploring some research ideas (for university) and using
Blender to prototype them.  I made a short video that demonstrates
what I was able to do the last couple of days.  I'm starting to create
a general framework for sketch recognition in Blender.

http://youtu.be/IeNjNbTz4CI

The goal is an interface that could work without a keyboard or most
buttons.  I think a Blender with gestures is far more like Blender
than a Blender that is plastered with big buttons so it works on a
tablet.  It puts everything at your fingertips.