Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-05-09 Thread Dalai Felinto
Hello there,
I posted a video showing the current status of the branch here:

http://www.dalaifelinto.com/?p=843

A quick update:
* Compositor is partly working
* Image Node is working very nicely
* UV/Image Editor improved to group the views in passes
* lots of bug fixes, code cleanup, some refactoring ...

Next I will finish up the compositor (right now you have to use the File Output
node to get the multiview result).
That's it. I hope someone is following it ;)

Dalai

--
blendernetwork.org/member/dalai-felinto
www.dalaifelinto.com
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-23 Thread Dalai Felinto
Hi Brecht,

I got the rendering to work for multiple views in Blender Internal (*) and
I would like to address Cycles next.

I don't want to abuse your assistance, but it will likely save me some
time to hear how you would tackle that.

My first thought after spending some time wandering through blender_session.cpp
and session.cpp was:

session.cpp (pseudo-code)
+ for view in ...
+     scene->camera = view.camera;
+     SOMEGLOBAL.actview = view;

      if (device_use_gl)
          run_gpu();
      else
          run_cpu();
      (. . .)

Plus in the buffer merging routines, I would have to do something similar
to what I did for Blender in render_result.c.

(*) No compositing (because I haven't tackled it yet), no FSA (not sure why).
Anyone willing to test the current state can get the code on GitHub:

http://github.com/dfelinto/blender/tree/multiview

Thanks,
Dalai

--
blendernetwork.org/member/dalai-felinto
www.dalaifelinto.com
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-23 Thread Brecht Van Lommel
Hi Dalai,

On Tue, Apr 23, 2013 at 10:12 AM, Dalai Felinto dfeli...@gmail.com wrote:
 I got the rendering to work for multiple views in Blender Internal (*) and
 I would like to address Cycles next.

 I don't want to abuse your assistance, but it will likely save me some
 time to hear how you would tackle that.

 My first thought after spending some time wandering through blender_session.cpp
 and session.cpp was:

 session.cpp (pseudo-code)
 + for view in ...
 +     scene->camera = view.camera;
 +     SOMEGLOBAL.actview = view;

       if (device_use_gl)
           run_gpu();
       else
           run_cpu();
       (. . .)

Probably the easiest way to do it is in BlenderSession::render, so that you get:

for each render layer:
    b_rlay_name = ...
    for each view:
        b_view_name = ...
        ...()

And then in BlenderSession::do_write_update_render_tile use the
b_view_name to write to the right passes.

It would be good to push the view stuff further into Session itself,
especially if we're going to support multiview rendering in the 3D
viewport, but for now I'd start with doing it this way.
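A minimal sketch of the loop shape Brecht describes, with placeholder types standing in for the real Blender/Cycles classes (the names BLayer, BView and the member layout here are illustrative only, not the actual API):

#include <string>
#include <vector>

// Placeholder stand-ins for Blender/Cycles types; names are illustrative only.
struct BLayer { std::string name; };
struct BView  { std::string name; };  // would also carry the view's camera

struct BlenderSession {
    std::string b_rlay_name;      // render layer currently being processed
    std::string b_view_name;      // view currently being processed
    std::vector<BLayer> layers;
    std::vector<BView> views;

    void render()
    {
        for (const BLayer &layer : layers) {
            b_rlay_name = layer.name;
            for (const BView &view : views) {
                b_view_name = view.name;
                // sync the view's camera and run the actual render here;
                // do_write_update_render_tile() would then use b_view_name
                // to write into the passes belonging to this view.
            }
        }
    }
};

int main()
{
    BlenderSession session;
    session.layers = {{"RenderLayer"}};
    session.views = {{"left"}, {"right"}};
    session.render();
}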

Brecht.
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-22 Thread Dalai Felinto
Hi,
(Ton and Brecht in particular)
I'm having some trouble finding the best way to render the scene views.

Currently in my last branch commit [1] I got Blender to render multiple
layers and passes for each view,
but still without using the cameras set in the UI [2].

My last attempt (uncommitted and not working) is roughly below:

pipeline.c
static void do_render_3d(Render *re)
{
    (. . .)
+   for (view = 0, srv = re->r.views.first; srv; srv = srv->next, view++) {
+       re->r.actview = view;
+
+       if (render_scene_needs_vector(re))
+           RE_Database_FromScene_Vectors(re, re->main, re->scene, re->lay, view);
+       else
+           RE_Database_FromScene(re, re->main, re->scene, re->lay, 1, view);
    (. . .)
+       threaded_tile_processor(re);
    (. . .)
+       RE_Database_Free(re);
+   }

(in RE_Database_FromScene I set the camera based on the view id)

rendercore.c
static void add_passes(RenderLayer *rl, int offset, ShadeInput *shi, ShadeResult *shr)
{
    (. . .)
    for (rpass = rl->passes.first; rpass; rpass = rpass->next) {
        (. . .)
+       if (rpass->view_id != R.r.actview)
+           continue;


This doesn't work because the rendering happens after we leave
do_render_3d (so not during threaded_tile_processor as I thought). So the
for loop there is just wasting CPU time (and R.r.actview is always the
last view id by the time we get to add_passes).

Anyway, do you have some ideas on how to better tackle this?
This approach was based on my interpretation of the old Render Pipeline
wiki page [3]. But I clearly got it wrong.


[1] -
https://github.com/dfelinto/blender/commit/cb47c64528d46619428e361d7c4999e684380ebb
[2] - http://dalaifelinto.com/ftp/multiview/multiview_panel.jpg
[3] - http://wiki.blender.org/index.php/Dev:Source/Render/Pipeline /
http://wiki.blender.org/index.php/File:Render_pipeline_API.png

Thanks,
Dalai
--
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-16 Thread Ton Roosendaal
Hi Dalai,

You know there's a 3D stereo version of ED available for testing?
http://orange.blender.org/

To my knowledge a CC stereo version of BBB is coming soon too.

-Ton-


Ton Roosendaal  Blender Foundation   t...@blender.org   www.blender.org
Blender Institute   Entrepotdok 57A  1018AD Amsterdam   The Netherlands

On 15 Apr, 2013, at 9:39, Dalai Felinto wrote:

 Hi Adriano, I'm not sure what to take from the test. I guess this means you
 are able to work with Blender as it is, so there is no need for more
 robust stereo support? ;)

 Now seriously, is this file CC? If so, can you send it over (or at
 least one EXR per eye)? It will help with the tests at some point.
 --
 A quick update: I'm done with the read/write routines for multiview.
 The current code is on:
 http://github.com/dfelinto/blender/tree/multiview
 
 So far the implementation treats the views just as another
 pass (one of Ton's ideas). For example:
 
 RenderLayer.Combined.left.R
 
 layer: RenderLayer
 pass: Combined.left
 channel: R
 
 Note also that Combined.left is a pass and Combined.right would be another 
 pass.
 My next goal is to tackle render. For that I'll add views to the
 scene, which are a camera and a name.
 
 The plan is as follows:
 1) read multiview exr   [done]
 2) see multiview in UV/image editor as mono [done]
 3) write multiview exr  [done]
 4) render in multiview
 5) compo in multiview
 6) see multiview in UV/image editor as stereo
 7) see viewport preview
 8) ?
 
 --
 Dalai
 blendernetwork.org/member/dalai-felinto
 www.dalaifelinto.com
 
 
 2013/4/14 Adriano Oliveira adriano.u...@gmail.com:
 A new test:
 
 http://youtu.be/LhbkgRXBVJU
 
 
 Adriano A. Oliveira
 ___
 Bf-committers mailing list
 Bf-committers@blender.org
 http://lists.blender.org/mailman/listinfo/bf-committers
 ___
 Bf-committers mailing list
 Bf-committers@blender.org
 http://lists.blender.org/mailman/listinfo/bf-committers

___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-15 Thread Dalai Felinto
Hi Adriano, I'm not sure what to take from the test. I guess this means you
are able to work with Blender as it is, so there is no need for more
robust stereo support? ;)

Now seriously, is this file CC? If so, can you send it over (or at
least one EXR per eye)? It will help with the tests at some point.
--
A quick update: I'm done with the read/write routines for multiview.
The current code is on:
http://github.com/dfelinto/blender/tree/multiview

So far the implementation treats the views just as another
pass (one of Ton's ideas). For example:

RenderLayer.Combined.left.R

layer: RenderLayer
pass: Combined.left
channel: R

Note also that Combined.left is a pass and Combined.right would be another pass.
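As an illustration of that convention (not code from the branch), a full channel name splits at its first and last dot, with the view folded into the pass name:

#include <cassert>
#include <iostream>
#include <string>

// Split "RenderLayer.Combined.left.R" into layer / pass / channel under the
// "view as part of the pass name" convention described above (sketch only).
struct ChannelName {
    std::string layer;    // "RenderLayer"
    std::string pass;     // "Combined.left"  (view folded into the pass)
    std::string channel;  // "R"
};

static ChannelName split_channel(const std::string &full)
{
    const std::string::size_type first = full.find('.');
    const std::string::size_type last = full.rfind('.');

    ChannelName out;
    out.layer = full.substr(0, first);
    out.pass = full.substr(first + 1, last - first - 1);
    out.channel = full.substr(last + 1);
    return out;
}

int main()
{
    ChannelName c = split_channel("RenderLayer.Combined.left.R");
    assert(c.layer == "RenderLayer" && c.pass == "Combined.left" && c.channel == "R");
    std::cout << c.layer << " / " << c.pass << " / " << c.channel << "\n";
}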
My next goal is to tackle rendering. For that I'll add views to the
scene; each view is a camera and a name.

The plan is as follows:
1) read multiview exr   [done]
2) see multiview in UV/image editor as mono [done]
3) write multiview exr  [done]
4) render in multiview
5) compo in multiview
6) see multiview in UV/image editor as stereo
7) see viewport preview
8) ?

--
Dalai
blendernetwork.org/member/dalai-felinto
www.dalaifelinto.com


2013/4/14 Adriano Oliveira adriano.u...@gmail.com:
 A new test:

 http://youtu.be/LhbkgRXBVJU


 Adriano A. Oliveira
 ___
 Bf-committers mailing list
 Bf-committers@blender.org
 http://lists.blender.org/mailman/listinfo/bf-committers
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-15 Thread Adriano Oliveira
Hi Dalai,

My intention in showing it off was just to state my interest in stereoscopy in
Blender.
The boat is not CC, but it is OK to use for tests, as long as you don't
share the model. I can send it to you as soon as possible.

Producing stereoscopic renders is really easy in any 3D software, but
controlling them to get good takes is very hard, and few packages have good free
tools already implemented.

My example test is no good: the separation is too intense and it causes
problems resolving in perception. I have a better render now, with
convergence corrected in After Effects, which has a nice and easy plugin to
deal with two-camera renders.
I have done three versions: anaglyph, optimized anaglyph [see link below],
and side by side. The last one is to watch as .mp4 on my LG passive 3D TV, via
USB flash drive (no need for an external player).

The conclusion: anaglyph is useless except for preview. 3D TVs/monitors are
the way to go.

http://www.svoigt.net/index.php/tutorials/22-stereoscopic-3d/29-optimized-anaglyph

I very much like how your proposal is developing.

PS: Sebastian Schneider's addon is not 100% functional with the latest builds,
since render layers moved to their own place in the UI.

;)



Adriano A. Oliveira

Book: http://goo.gl/WtcNX
Lattes: http://lattes.cnpq.br/8343393957854863
Blog CG Total: http://cgtotal.net
Blog Anodinidades: http://anodinidades.wordpress.com/
Audiovisual productions: http://vimeo.com/anodinidades/videos
Photography: http://www.flickr.com/photos/adriano-ol/
Facebook: http://www.facebook.com/adriano.ol
Twitter: http://twitter.com/anodinidades


2013/4/15 Dalai Felinto dfeli...@gmail.com

 Hi Adriano, I'm not sure what to take from the test. I guess this means you
 are able to work with Blender as it is, so there is no need for more
 robust stereo support? ;)

 Now seriously, is this file CC? If so, can you send it over (or at
 least one EXR per eye)? It will help with the tests at some point.
 --
 A quick update: I'm done with the read/write routines for multiview.
 The current code is on:
 http://github.com/dfelinto/blender/tree/multiview

 So far the implementation treats the views just as another
 pass (one of Ton's ideas). For example:

 RenderLayer.Combined.left.R

 layer: RenderLayer
 pass: Combined.left
 channel: R

 Note also that Combined.left is a pass and Combined.right would be another
 pass.
 My next goal is to tackle render. For that I'll add views to the
 scene, which are a camera and a name.

 The plan is as follows:
 1) read multiview exr   [done]
 2) see multiview in UV/image editor as mono [done]
 3) write multiview exr  [done]
 4) render in multiview
 5) compo in multiview
 6) see multiview in UV/image editor as stereo
 7) see viewport preview
 8) ?

 --
 Dalai
 blendernetwork.org/member/dalai-felinto
 www.dalaifelinto.com


 2013/4/14 Adriano Oliveira adriano.u...@gmail.com:
  A new test:
 
  http://youtu.be/LhbkgRXBVJU
 
 
  Adriano A. Oliveira
  ___
  Bf-committers mailing list
  Bf-committers@blender.org
  http://lists.blender.org/mailman/listinfo/bf-committers
 ___
 Bf-committers mailing list
 Bf-committers@blender.org
 http://lists.blender.org/mailman/listinfo/bf-committers

___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-14 Thread Adriano Oliveira
A new test:

http://youtu.be/LhbkgRXBVJU


Adriano A. Oliveira
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-07 Thread Bartek Skorupa (priv)

Oops, I was so fixated on standard stereo displayed on screens that I didn't
notice what this presentation is about. Sorry.

Cheers
Bartek Skorupa

Sent from my iPhone

On 6 kwi 2013, at 23:27, Harley Acheson harley.ache...@gmail.com wrote:

 Bartek,
 
 WRONG! When two views of an object are identical it tells your brain that
 they are ON THE SCREEN.
 When positive parallax of an object equals the distance between the viewer's
 eyes - they appear at infinity.
 
 To be fair, he's not wrong. But neither are you, since you are both talking
 about different things...
 
 You are right in that if you present the same image onto a *screen* in
 front of the user then it will appear to be at the depth of the screen
 itself.  Really no different than a normal 2D image on the screen and the
 user can determine the distance to it using convergence.  Your right eye
 has to look a little to the left, and the left eye rotates a little to the
 right, the amount of which your brain uses to gauge the distance.
 
 However, the presentation was talking about VR headsets like the Oculus
 Rift.  Present an identical image to each eye on this type of headset and
 you no longer have convergence to determine depth.  Each eye will stare
 straight forward in this case and your brain will therefore place the image
 at infinity as was mentioned in the presentation.
 
 Cheers, Harley
 ___
 Bf-committers mailing list
 Bf-committers@blender.org
 http://lists.blender.org/mailman/listinfo/bf-committers
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-06 Thread Dalai Felinto
I had a talk with Ton on IRC to clarify the storage part of his
proposal, and the transcript is here:
http://dalaifelinto.com/ftp/tmp/ton_irc_chat.pdf

Basically he suggested storing the views just like any other pass. So we
would still have RenderResult -> RenderLayers -> RenderPass.

@Ton: after some exploratory coding, I think we can stick to your idea
(it simplifies the implementation), but it would help a lot to store the
name in the pass in 2 to 3 ways:

For example, for the *diffuse.right.R* and *diffuse.R* channels (in this
case left is the default view):

** pass->name : full name of the pass including the view [diffuse.right.R
and diffuse.left.R]
** pass->ui_name : name without the view part [diffuse.R and diffuse.R]
** pass->exr_name : name to use for writing back to the EXR
[diffuse.right.R and diffuse.R]

pass->exr_name may not be needed, since it's only required at writing
time, when it can be obtained with insertViewName(pass->name, multiView,
view_id).

Additionally we can store the view_id for the passes (pass->view_id).

And we need to store the view list in the ExrHandle as a StringVector
multiView. To sync with that, RenderResult also needs to store a
multiView list, though it would be a ListBase.
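A small sketch of how those proposed fields could relate for the diffuse left/right pair; the field and helper names follow the mail and are illustrative, not the actual RenderPass DNA:

#include <iostream>
#include <string>

// Illustrative only: the naming fields proposed above for a "diffuse" pass
// with a left/right view pair (left being the default view).
struct PassNames {
    std::string name;     // full name including the view, e.g. "diffuse.right.R"
    std::string ui_name;  // name with the view part stripped, e.g. "diffuse.R"
    int view_id;          // index into the view list
};

static PassNames make_pass(const std::string &base, const std::string &channel,
                           const std::string &view, int view_id)
{
    PassNames p;
    p.name = base + "." + view + "." + channel;  // e.g. "diffuse.right.R"
    p.ui_name = base + "." + channel;            // e.g. "diffuse.R"
    p.view_id = view_id;
    return p;
}

int main()
{
    PassNames left = make_pass("diffuse", "R", "left", 0);
    PassNames right = make_pass("diffuse", "R", "right", 1);
    std::cout << left.name << " / " << left.ui_name << "\n";    // diffuse.left.R / diffuse.R
    std::cout << right.name << " / " << right.ui_name << "\n";  // diffuse.right.R / diffuse.R
}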

The other idea of renderresult -> renderview -> renderlayer -> renderpass is
more elegant than the juggling needed for handling views. However, the
payoff of your idea is still worth it, I think. Note, however, that there
is a chance ILM/Weta picked this design simply because of their need for
backward compatibility.

Anyways, any new feedback is welcome. I will tackle this new approach tomorrow.
--
Dalai

blendernetwork.org/member/dalai-felinto
www.dalaifelinto.com


2013/4/4 Ton Roosendaal t...@blender.org:
 Hi Dalai,

 Usability aside - that's stuff we can test and feedback later as well - I 
 think the main decisions to make is on storage and display level now.

 OpenEXR proposes to put the views inside the layer names, as channel 
 prefix... I don't fully grasp the benefit yet, but it's probably to be able 
 to localize any layer or pass, and have software figure out the views. 
 Following their approach, it would be in Blender:

 RenderLayer_001.diffuse.left.R
 RenderLayer_001.diffuse.right.R

 And not

 left.RenderLayer_001.diffuse.R
 right.RenderLayer_001.diffuse.R

 This proposal is supported by Weta and ILM - might be worth considering 
 carefully.

 ** RenderLayers

 Following this approach, the Scene itself can next to (outside) renderlayer 
 name/setting definitions, store a list with View names and settings. Which 
 would be for stereo left and right but it can be middle and whatever 
 views people like to have (I know 3d displays out there that need 9 views). 
 For each view cameras could be assigned, or a single camera with 
 stereo/multiview properties.

 For each view, the Render engine then runs a complete loop to go over the
 (already prepared) scene, set camera transforms and do the render.

 This then results in a complete filled RenderResult, which can save out to 
 MultiLayer exr, to temp buffers and FSA alike.

 If no views are set, the naming can be as usual. That also keeps render 
 result work as usual, and stay compatible for non-stereo cases.

 That means neither (1) nor (2) nor (3) as you proposed below btw!

 ** Compositor

 The compositor internally can stay nearly fully same. It gets a new 
 (optional) property for current view, which then gets used to extract the 
 correct buffers from Render Result, from Image nodes, and sets the right 
 buffers to RenderResult back, or saves to files.

 This also removes the need for join or split nodes. For cases
 where people want to input own footage, we can find ways to have the current 
 composite view map to the right name for input. It also keeps compositor 
 flexible for other multi-view cases.

 ** Regular images

 We can also internally handle special case regular images - like 
 side-by-side, or top-bottom, or whatever people can come up with.

 This then could become a property for the Image block, like right now for 
 Generated, Sequence, etc. The API can follow acquire_ibuf using current 
 view as well. Internally it can store, cache, or not, whatever is needed.

 ** Drawing images

 Support for OpenGL (3D drawn) based stereo (using shutterglasses or 
 red/green) we can quite easily add in the 3d window. This can be per-3d 
 window locally even.

 Support for Image-based stereo we should only allow per-window in Blender. 
 This window then should *entirely* do the display as set by the user (side by 
 side, alternating, etc). This including a possible UI that's being drawn 
 (with depth offset, if set).
 This keeps a UI work in stereo mode, prevents ugly flicker for side-by-side 
 monitors for example.

 Next to that, the Image window then can get a shortcut/menu to go to 
 entirely blank - not drawing UI at all, best for use in 
 fullscreen/borderless of course.

 Drawing can simply use the view property again to extract the right buffers 
 from 

Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-06 Thread Ton Roosendaal
Hi,

 ** pass->ui_name : name without the view part [diffuse.R and diffuse.R]
 ** pass->exr_name : name to use for writing back in the exr
 [diffuse.right.R and diffuse.R]


I would be very careful storing UI assumptions in data.

The current system already has layer names and pass names. The 'view' names can
be defined in the UI, and how it's all stored and retrieved internally the
software can handle transparently. That way you don't regret design decisions when
things change internally after all.

Another good trick is to look at the code design and ask what would break if we
removed the feature. If things can be added and removed without bad
versioning, you're much safer for the future.

Forcing the render result to always store views is in this category. Views
are optional, and can best stay that way - if possible.

Anyway - the proof of the concept is in the coding - usually you find new
or better ways while checking the impact of changes in Blender :)

-Ton-


Ton Roosendaal  Blender Foundation   t...@blender.org   www.blender.org
Blender Institute   Entrepotdok 57A  1018AD Amsterdam   The Netherlands

On 6 Apr, 2013, at 10:50, Dalai Felinto wrote:

 I had a talk with Ton on IRC to clear the storage part of his
 proposal and the transcript is here:
 http://dalaifelinto.com/ftp/tmp/ton_irc_chat.pdf
 
 Basically he suggested storing the views just like any other pass. So we
 would still have RenderResult -> RenderLayers -> RenderPass.
 
 @Ton: after some exploratory coding, I think we can stick to your idea
 (it simplifies implementation) but it would help a lot to store the
 name in the pass in 2 to 3 ways:
 
 For example the *diffuse.right.R* and *diffuse.R* channels (in this
 case left is the default view):
 
 ** pass->name : full name of the pass including the view [diffuse.right.R
 and diffuse.left.R]
 ** pass->ui_name : name without the view part [diffuse.R and diffuse.R]
 ** pass->exr_name : name to use for writing back to the EXR
 [diffuse.right.R and diffuse.R]
 
 pass->exr_name may not be needed, since it's only required at writing
 time, when it can be obtained with insertViewName(pass->name, multiView,
 view_id).
 
 Additionally we can store the view_id for the passes (pass->view_id).
 
 And we need to store the view list in the ExrHandle as a StringVector
 multiView. To sync with that RenderResult also needs to store a
 multiView list, though it would be a ListBase.
 
 The other idea of renderresult -> renderview -> renderlayer -> renderpass is
 more elegant than the juggling needed for handling views. However, the
 payoff of your idea is still worth it, I think. Note, however, that there
 is a chance ILM/Weta picked this design simply because of their need for
 backward compatibility.
 
 Anyways, any new feedback is welcome. I will tackle this new approach 
 tomorrow.
 --
 Dalai
 
 blendernetwork.org/member/dalai-felinto
 www.dalaifelinto.com
 
 
 2013/4/4 Ton Roosendaal t...@blender.org:
 Hi Dalai,
 
 Usability aside - that's stuff we can test and feedback later as well - I 
 think the main decisions to make is on storage and display level now.
 
 OpenEXR proposes to put the views inside the layer names, as channel 
 prefix... I don't fully grasp the benefit yet, but it's probably to be able 
 to localize any layer or pass, and have software figure out the views. 
 Following their approach, it would be in Blender:
 
 RenderLayer_001.diffuse.left.R
 RenderLayer_001.diffuse.right.R
 
 And not
 
 left.RenderLayer_001.diffuse.R
 right.RenderLayer_001.diffuse.R
 
 This proposal is supported by Weta and ILM - might be worth considering 
 carefully.
 
 ** RenderLayers
 
 Following this approach, the Scene itself can next to (outside) renderlayer 
 name/setting definitions, store a list with View names and settings. Which 
 would be for stereo left and right but it can be middle and whatever 
 views people like to have (I know 3d displays out there that need 9 views). 
 For each view cameras could be assigned, or a single camera with 
 stereo/multiview properties.
 
 For each view, the Render engine then runs a complete loop to go over the
 (already prepared) scene, set camera transforms and do the render.
 
 This then results in a complete filled RenderResult, which can save out to 
 MultiLayer exr, to temp buffers and FSA alike.
 
 If no views are set, the naming can be as usual. That also keeps render 
 result work as usual, and stay compatible for non-stereo cases.
 
 That means neither (1) nor (2) nor (3) as you proposed below btw!
 
 ** Compositor
 
 The compositor internally can stay nearly fully same. It gets a new 
 (optional) property for current view, which then gets used to extract the 
 correct buffers from Render Result, from Image nodes, and sets the right 
 buffers to RenderResult back, or saves to files.
 
 This also removes the need for join or split nodes. For cases
 where people want to input own footage, we can find ways to have 

Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-06 Thread Brecht Van Lommel
Hi,

On Sat, Apr 6, 2013 at 11:12 AM, Ton Roosendaal t...@blender.org wrote:
 I would be very careful storing UI assumptions in data.

 The current system already has layer names and pass names. The 'view' names 
 can be defined in the UI, and how it's all stored and retrieved internally 
 software can handle transparently. That way you don't regret design decisions 
 when things change internally after all.

Personally I think explicit views > layers > passes (or layers > views >
passes) storage is better. My guess is that the way it's stored in
EXR files is for compatibility / simplicity. EXR also has no concept
of renderlayers and only stores them by name, yet we do have separate
data structures for them in Blender. So why are views different?

Maybe it requires a few extra code changes but I think the final code
will be more clear.

 Another good trick is to look at code design in a way what would break if we 
 remove the feature. If things can get added and removed without bad 
 versioning, you're much safer for future.

 Forcing the render result to always store views is in this category. Views 
 are optional, and can best stay that way - if possible.

RenderResult is also not stored in files, so I don't think we need to
worry about versioning. Maybe there are a few places in the UI like the
image editor or some compositing node where you want to specify a
view, but I think in that case you want to have the view as a separate
setting from the pass anyway?

So as far as I can tell, nothing would break if we remove this feature?

Brecht.
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-06 Thread Ton Roosendaal
Hi,

I just prefer to see this added in a way that doesn't change the regular render
flow. MultiView rendering is a new feature, and only used (very) occasionally. If
you add it in a way that keeps existing code and data structures working, it can
be done with minimal risk. Bugs then stay in the stereo-render feature, not in
all the rest of Blender.

Many ways to do it though. 

It can also be done via the 'get result' API, which can then give a complete render
result built from the views (or just the old one).

That would then be more like:

render -> views -> renderresult -> layers -> passes.


-Ton-


Ton Roosendaal  Blender Foundation   t...@blender.org   www.blender.org
Blender Institute   Entrepotdok 57A  1018AD Amsterdam   The Netherlands

On 6 Apr, 2013, at 13:55, Brecht Van Lommel wrote:

 Hi,
 
 On Sat, Apr 6, 2013 at 11:12 AM, Ton Roosendaal t...@blender.org wrote:
 I would be very careful storing UI assumptions in data.
 
 The current system already has layer names and pass names. The 'view' names 
 can be defined in the UI, and how it's all stored and retrieved internally 
 software can handle transparently. That way you don't regret design 
 decisions when things change internally after all.
 
 Personally I think explicit views > layers > passes (or layers > views >
 passes) storage is better. My guess is that the way it's stored in
 EXR files is for compatibility / simplicity. EXR also has no concept
 of renderlayers and only stores them by name, yet we do have separate
 data structures for them in Blender. So why are views different?
 
 Maybe it requires a few extra code changes but I think the final code
 will be more clear.
 
 Another good trick is to look at code design in a way what would break if 
 we remove the feature. If things can get added and removed without bad 
 versioning, you're much safer for future.
 
 Forcing the render result to always store views is in this category. Views 
 are optional, and can best stay that way - if possible.
 
 RenderResult is also not stored in files, so I don't think we need to
 worry about versioning. Maybe there's a few places in the UI like the
 image editor or some compositing node where you want to specify a
 view, but I think in that case you want to have the view as a separate
 setting from the pass anyway?
 
 So as far as I can tell, nothing would break if we remove this feature?
 
 Brecht.
 ___
 Bf-committers mailing list
 Bf-committers@blender.org
 http://lists.blender.org/mailman/listinfo/bf-committers

___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-06 Thread Ton Roosendaal
Hi Bartek,

We should try to tackle this topic in a couple of steps, putting complexity or
flexibility where it belongs, without bothering too much with it in the early
stages.

I suggest we first carefully design the internal data/API flow: all the stuff
that's not really related to an advanced stereoscopy workflow, but is just about
getting our internal architecture ready for multiview renders, compositing and
drawing.

Important to know is that we can also do efficient multiview renders in one 
pass (the scene data is coherent, only the camera moves).

Features people expect from this could be simply listed for Dalai as a 
reference. Such as:

- allow custom camera rigs
- allow multiview with more than 2 cameras
- make stereo camera option for quick setups, with python access
- allow preview of stereo buffers with common stereo displays
- make sure render pipeline can render additional pixels for stereo composite 
effects

Etc. :)

-Ton-


Ton Roosendaal  Blender Foundation   t...@blender.org   www.blender.org
Blender Institute   Entrepotdok 57A  1018AD Amsterdam   The Netherlands

On 6 Apr, 2013, at 15:47, Bartek Skorupa (priv) wrote:

 My 2 cents on the topic (loose thoughts and a presentation of my AddOn for
 setting the Stereo Base):
 
 At the last Blender Conference in Amsterdam I gave a talk about stereoscopy and
 the way I'm creating stereo stuff in Blender now:
 http://www.youtube.com/watch?v=WD7xzwxhhVU
 I was working on several stereo animations, some of them to be displayed on 
 52-inch monitors and some (three of them actually) for cinema screens.
 No live action was involved in those projects, so my task was a lot easier as 
 I had full control over the final result.
 I'm really glad that so much effort is put into the matter of stereoscopy 
 implementation in Blender.
 Having read all of the posts in this thread and the wiki entry I'd like to 
 share some of my loose thoughts:
 
 1.
 From the user's perspective, the ability to view stereo while working would be great.
 
 2.
 A WYSIWYG approach in this case is IMHO not good. What I mean by that is:
 the user should have the ability to make some post adjustments. It's obvious that
 correcting the depth bracket in post is practically impossible, but post shifting
 should be made possible. This allows adjusting the depth positioning.
 In my workflow I always render both views using parallel cameras. My goal is
 to use the off-axis approach, but I want to have freedom in post when it comes to
 shifting images (setting the depth positioning).
 Therefore I always render wider images than I need. This allows me to shift
 them without losing the edges.
 Adding those spare pixels after setting everything up is at the moment not
 possible with one click: we need to change the render resolution. Making the
 image wider means that the aspect ratio changes. When we manually widen the
 image (increase the X resolution), our view doesn't get wider; instead it gets
 shorter, so we lose the upper and lower parts of what we saw in our frame before.
 The workaround is to adjust the focal length accordingly. The math behind it is
 not that difficult, so it can be done by users who know the stuff, but it would
 be great to have an option like "Add 40 pixels on each side" in the render
 settings when working with stereo. If camera shift is possible, widening the
 image shouldn't IMHO be much of a problem.
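A sketch of the compensation described above, assuming a horizontally fitted sensor whose width stays fixed: if the X resolution grows, shortening the lens in the same proportion keeps the original framing and only adds new pixels on the left and right edges.

#include <cstdio>

// new focal length = old focal length * old_x_resolution / new_x_resolution
// (assumes horizontal sensor fit with an unchanged sensor width)
static double widened_focal_length(double focal_mm, int old_x, int new_x)
{
    return focal_mm * (double)old_x / (double)new_x;
}

int main()
{
    // e.g. adding 40 px on each side of a 1920-wide frame shot with a 35 mm lens
    double f = widened_focal_length(35.0, 1920, 1920 + 2 * 40);
    printf("new focal length: %.2f mm\n", f);  // ~33.6 mm
}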
 
 3.
 left, right and center cameras:
 Most of the rigs I came across use the approach of having a center camera with
 the left/right cameras accordingly apart from the center. This seems logical,
 but I agree with Adriano about the cons of such an approach (below is a quote of
 Adriano's post).
 It often happens that we need to convert some of our animations to stereo
 and want to use our existing renders as the left or right view and simply set up
 and render the other camera. An option for using the main camera as left or right
 should definitely be added.
 This of course means that the pivot of our rig is not in the middle (between
 the cameras), but my experience shows that it's not a problem in 99.9% of
 cases.
 The additional advantage of having the left or right camera at the pivot of
 the rig is that when we happen to make a mistake in setting the Stereo Base, it
 can be corrected and only one view needs to be re-rendered, not both of them.
 
 On 4 kwi 2013, at 00:59, Adriano adriano.u...@gmail.com wrote:
 Suggestion:
 
 It would be nice if we could set an existing camera to be the left or
 right one, and have it not be moved at all when we set up the planes in stereoscopy.
 
 This would be very useful for converting old projects to 3D, so we can keep old
 renders as left or right and just render one new camera.
 
 If the addon turns the old camera into the center one, this is not possible and we
 have to render everything all over again.
 
 
 
 4.
 Interocular Distance / Stereo Base:
 A lot of effort is put into creating a user-friendly UI, implementing preview
 capabilities, making 

Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-06 Thread Harley Acheson
Bartek,

 WRONG! When two views of an object are identical it tells your brain that
 they are ON THE SCREEN.
 When positive parallax of an object equals the distance between the viewer's
 eyes - they appear at infinity.

To be fair, he's not wrong. But neither are you, since you are both talking
about different things...

You are right in that if you present the same image onto a *screen* in
front of the user then it will appear to be at the depth of the screen
itself.  Really no different than a normal 2D image on the screen and the
user can determine the distance to it using convergence.  Your right eye
has to look a little to the left, and the left eye rotates a little to the
right, the amount of which your brain uses to gauge the distance.

However, the presentation was talking about VR headsets like the Oculus
Rift.  Present an identical image to each eye on this type of headset and
you no longer have convergence to determine depth.  Each eye will stare
straight forward in this case and your brain will therefore place the image
at infinity as was mentioned in the presentation.
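For reference, a sketch of the screen-based geometry (not taken from the presentation): with eye separation e, screen distance d and on-screen parallax p, convergence places the point at d*e/(e - p), which is the screen itself when p = 0 and infinity when p equals the eye separation - consistent with both readings above.

#include <cmath>
#include <cstdio>

// Converged depth for a point shown with horizontal parallax `parallax`
// on a screen at `screen_dist`, for eyes separated by `eye_sep` (sketch).
static double perceived_depth(double screen_dist, double eye_sep, double parallax)
{
    const double denom = eye_sep - parallax;
    return (denom <= 0.0) ? INFINITY : screen_dist * eye_sep / denom;
}

int main()
{
    printf("zero parallax:       %.2f m\n", perceived_depth(2.0, 0.065, 0.0));    // 2.00 (on screen)
    printf("parallax = eye sep.: %.2f m\n", perceived_depth(2.0, 0.065, 0.065));  // inf
}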

Cheers, Harley
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-04 Thread Ton Roosendaal
Hi Dalai,

Usability aside - that's stuff we can test and give feedback on later as well - I think
the main decisions to make are at the storage and display level now.

OpenEXR proposes to put the views inside the layer names, as channel prefix... 
I don't fully grasp the benefit yet, but it's probably to be able to localize 
any layer or pass, and have software figure out the views. Following their 
approach, it would be in Blender:

RenderLayer_001.diffuse.left.R
RenderLayer_001.diffuse.right.R

And not

left.RenderLayer_001.diffuse.R
right.RenderLayer_001.diffuse.R

This proposal is supported by Weta and ILM - might be worth considering 
carefully.

** RenderLayers

Following this approach, the Scene itself can, next to (outside) the renderlayer
name/setting definitions, store a list with View names and settings. For stereo this
would be left and right, but it can be middle and whatever
views people like to have (I know of 3D displays out there that need 9 views). For
each view a camera could be assigned, or a single camera with stereo/multiview
properties.

For each view, the Render engine then runs a complete loop to go over the
(already prepared) scene, set the camera transforms and do the render.

This then results in a completely filled RenderResult, which can be saved out to
MultiLayer EXR, to temp buffers and FSA alike.

If no views are set, the naming can be as usual. That also keeps the render
result working as usual, and staying compatible for non-stereo cases.

That means neither (1) nor (2) nor (3) as you proposed below, btw!

** Compositor

The compositor internally can stay nearly fully the same. It gets a new (optional)
property for the current view, which then gets used to extract the correct
buffers from the Render Result and from Image nodes, and to set the right buffers
back in the RenderResult, or save them to files.

This also removes the need for join or split nodes. For cases
where people want to input their own footage, we can find ways to map the current
composite view to the right name for input. It also keeps the compositor
flexible for other multi-view cases.

** Regular images

We can also internally handle special-case regular images - like side-by-side,
or top-bottom, or whatever people can come up with.

This could then become a property of the Image block, like right now for
Generated, Sequence, etc. The API can follow acquire_ibuf using the current
view as well. Internally it can store, cache, or not, whatever is needed.

** Drawing images

Support for OpenGL (3D drawn) based stereo (using shutter glasses or red/green)
we can quite easily add in the 3D window. This can even be local per 3D
window.

Support for image-based stereo we should only allow per window in Blender. That
window should then do the display *entirely* as set by the user (side by side,
alternating, etc.), including a possible UI that's being drawn (with a depth
offset, if set).
This keeps the UI working in stereo mode and prevents ugly flicker for side-by-side
monitors, for example.

Next to that, the Image window can then get a shortcut/menu to go entirely
blank - not drawing any UI at all - which is best for use in
fullscreen/borderless mode, of course.

Drawing can simply use the view property again to extract the right buffers 
from RenderResult or Image blocks. No new StereoVisual concept needed.

Hope it's clear :) But I think the above would work nicely and aligns fully with
your proposed and expected functionality.

-Ton-


Ton Roosendaal  Blender Foundation   t...@blender.org   www.blender.org
Blender Institute   Entrepotdok 57A  1018AD Amsterdam   The Netherlands

On 2 Apr, 2013, at 7:33, Dalai Felinto wrote:

 Thanks everyone for the feedback.
 @ Brecht:
 
 OpenEXR in fact has native support for such stereo/multiview images, so
 it would make sense to be compatible with that and support saving and
 loading such EXR files.
 
 I think this also can be a good start for the project. The reading part can
 even make to trunk before the stereo code itself kicks in.
 Can we update trunk OpenEXR for the 1.7.1 version? All the API calls to use
 multi-view are there. (of course I can work locally with updated libs, but
 it's annoying if more people want to test/code)
 
 I think we should add views in the same way that we have layers and 
 passes
 now.
 The way OpenEXR organize the data seems a bit too loose (view can either be
 nested to a pass or be a top element, able to nest a layer or a pass
 directly).
 
 For Blender it would be easier if we stick to one of them:
 (the (1) seems to be the more logical in my opinion, but that also means
 the code will be more intrusive)
 
 (1)
 typedef struct RenderResult {
 - ListBase layers;
 +ListBase views;
 
 [ and make layers nested to RenderView ]
 
 (2)
 typedef struct RenderPass {
 + ListBase views;
 }
 
 [ and store the buffer itself (float *rect) in RenderView ]
 
 (3)
 Not sure how it would reflect in code, but we could have the views 

Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-03 Thread Adriano
Suggestion:

It would be nice if we could set an existing camera to be the left or
right one, and have it not be moved at all when we set up the planes in stereoscopy.

This would be very useful for converting old projects to 3D, so we can keep old
renders as left or right and just render one new camera.

If the addon turns the old camera into the center one, this is not possible and we
have to render everything all over again.





--
View this message in context: 
http://blender.45788.n6.nabble.com/Stereoscopy-Implementation-Proposal-tp106106p106425.html
Sent from the Bf-committers mailing list archive at Nabble.com.
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-02 Thread Brecht Van Lommel
Hi,

On Tue, Apr 2, 2013 at 7:33 AM, Dalai Felinto dfeli...@gmail.com wrote:
 The way OpenEXR organize the data seems a bit too loose (view can either be
 nested to a pass or be a top element, able to nest a layer or a pass
 directly).

 For Blender it would be easier if we stick to one of them:
 (the (1) seems to be the more logical in my opinion, but that also means
 the code will be more intrusive)

I also think (1) is the correct solution.

Brecht.
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-02 Thread Sean Olson
Sorry for the slight aside, but some of this might be relevant to
conversation at hand even though applied in a different way:
http://media.steampowered.com/apps/valve/2013/Team_Fortress_in_VR_GDC.pdf


On Tue, Apr 2, 2013 at 9:17 AM, Brecht Van Lommel 
brechtvanlom...@pandora.be wrote:

 Hi,

 On Tue, Apr 2, 2013 at 7:33 AM, Dalai Felinto dfeli...@gmail.com wrote:
  The way OpenEXR organize the data seems a bit too loose (view can either
 be
  nested to a pass or be a top element, able to nest a layer or a pass
  directly).
 
  For Blender it would be easier if we stick to one of them:
  (the (1) seems to be the more logical in my opinion, but that also means
  the code will be more intrusive)

 I also think (1) is the correct solution.

 Brecht.
 ___
 Bf-committers mailing list
 Bf-committers@blender.org
 http://lists.blender.org/mailman/listinfo/bf-committers




-- 
||-- Instant Messengers --
|| ICQ at 11133295
|| AIM at shatterstar98
||  MSN Messenger at shatte...@hotmail.com
||  Yahoo Y! at the_7th_samuri
||--
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-02 Thread Dalai Felinto
Interesting, thanks for the link Sean. Slide 19 illustrates very well a
problem I was trying to argue with Ton the other day: UI drawn on top
of the stereo-3D view (mis)leads to confusing depth cues.

I'm currently working on the backend, and that alone will take a lot of
time (making all parts of Blender work with views instead of layers
directly). So it won't be any time soon before I get to the fun part of
actually visualizing the 3D ;)

--
Dalai
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-01 Thread Brecht Van Lommel
Hi Dalai,

Regarding storing such stereo renders in the render results, I think
we should add views in the same way that we have layers and
passes now. OpenEXR in fact has native support for such
stereo/multiview images, so it would make sense to be compatible with
that and support saving and loading such EXR files.

For compositing I think you would just run compositing once for each
view, similar to the way FSA runs it once for each AA sample. I don't
think we should use a stereo buffer where multiple views are somehow
interleaved into a single buffer, that adds quite a bit of complexity.
You could have a node in the compositor which tells you which view is
being composited, so that you can do different logic if you want to,
and the depth of field node should perhaps work a bit different, but
further the existing nodes shouldn't really have to know that they are
working on a stereo image I think.
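A minimal sketch of that per-view compositing loop; the context struct and the "view info" node are hypothetical, just to show that the node tree itself stays view-agnostic:

#include <string>
#include <vector>

// Hypothetical per-run context: the only stereo-aware piece of state.
struct CompositorContext {
    std::string view_name;  // "left", "right", ... for the run in progress
    int view_index = 0;
};

static void execute_node_tree(const CompositorContext &ctx)
{
    // ... evaluate the nodes as usual; only a view-info node (or a node that
    // wants per-view logic) would ever read ctx.view_name / ctx.view_index ...
    (void)ctx;
}

int main()
{
    const std::vector<std::string> views = {"left", "right"};
    for (size_t i = 0; i < views.size(); i++) {
        CompositorContext ctx;
        ctx.view_name = views[i];
        ctx.view_index = (int)i;
        execute_node_tree(ctx);  // one full compositing run per view
    }
}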

On Wed, Mar 27, 2013 at 2:40 AM, Harley Acheson
harley.ache...@gmail.com wrote:
 This might be a silly comment but it might simplify things by removing the
 need to switch into stereoscopic mode at all.

I think a stereo checkbox is still useful. I used to think it was
better to reduce the options in such cases, but from experience it
seems useful to be able to turn some effect on/off without
having to remember what the setting was when you then want to turn it
back on.

Brecht.
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-01 Thread Adriano
I was glad to learn about this proposal, Dalai.

I have been studying stereoscopy lately.
I use this addon a lot:

http://www.noeol.de/s3d/


I suggest looking at the functionality in this plugin for 3ds Max:

http://davidshelton.de/blog/?p=354

Hope to contribute more soon.

Adriano




--
View this message in context: 
http://blender.45788.n6.nabble.com/Stereoscopy-Implementation-Proposal-tp106106p106332.html
Sent from the Bf-committers mailing list archive at Nabble.com.
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-04-01 Thread Dalai Felinto
Thanks everyone for the feedback.
@ Brecht:

 OpenEXR in fact has native support for such stereo/multiview images, so
it would make sense to be compatible with that and support saving and
loading such EXR files.

I think this can also be a good start for the project. The reading part can
even make it to trunk before the stereo code itself kicks in.
Can we update trunk OpenEXR to version 1.7.1? All the API calls needed for
multi-view are there. (Of course I can work locally with updated libs, but
it's annoying if more people want to test/code.)

 I think we should add views in the same way that we have layers and 
 passes
now.
The way OpenEXR organizes the data seems a bit too loose (a view can either be
nested under a pass or be a top-level element, able to nest a layer or a pass
directly).

For Blender it would be easier if we stick to one of them
((1) seems the most logical to me, but that also means
the code will be more intrusive):

(1)
typedef struct RenderResult {
-   ListBase layers;
+   ListBase views;

[ and make layers nested to RenderView ]

(2)
typedef struct RenderPass {
+   ListBase views;
}

[ and store the buffer itself (float *rect) in RenderView ]

(3)
Not sure how it would reflect in code, but we could have the views handled
as passes (right.mist, left.depth, ...)
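For illustration, a minimal sketch of what option (1) could look like; std::vector stands in for Blender's ListBase and the member layout is not the actual DNA:

#include <vector>

// Option (1) sketch: views become the top-level list of the RenderResult,
// and the existing layer/pass lists nest under each view.
struct RenderPass {
    std::vector<float> rect;           // pixel buffer of this pass
};

struct RenderLayer {
    std::vector<RenderPass> passes;
};

struct RenderView {
    const char *name;                  // "left", "right", ...
    std::vector<RenderLayer> layers;   // layers nested under the view
};

struct RenderResult {
    std::vector<RenderView> views;     // replaces the old top-level layer list
};

int main()
{
    RenderResult rr;
    rr.views.push_back({"left", {}});
    rr.views.push_back({"right", {}});
}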



--
Dalai

blendernetwork.org/member/dalai-felinto
www.dalaifelinto.com


2013/4/1 Adriano adriano.u...@gmail.com

 Loved to know about this proposal, Dalai.

 I have been studing stereoscopy lately.
 I use this addon a lot:

 http://www.noeol.de/s3d/


 I sugest the funcionalities in this Plugin for 3dsmax:

 http://davidshelton.de/blog/?p=354

 hope to contribute more soon.

 Adriano




 --
 View this message in context:
 http://blender.45788.n6.nabble.com/Stereoscopy-Implementation-Proposal-tp106106p106332.html
 Sent from the Bf-committers mailing list archive at Nabble.com.
 ___
 Bf-committers mailing list
 Bf-committers@blender.org
 http://lists.blender.org/mailman/listinfo/bf-committers

___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-03-27 Thread Torsten Mohr
Hi,

I think it would be great to support 3D rendering and also 3D Viewport /
Editing in Blender. I guess these are two separate issues and can be
handled separately.

Regarding 3D rendering:
To support 3D directly would be great, but I'd like to propose a different
approach:
- Several cameras in a scene could be set Active, and all these active
cameras would render an image.
- In Compositing, the output of all these cameras would be available and could
be composed to form an output image.
- The final output image's dimensions would need to be independent of the
cameras' dimensions.
- The UV / Image Editor would need additional options on how to interpret the
created image.

Consequences of the approach:
- For a 3D rendered image, two cameras could be placed in a scene, set active,
maybe grouped together.
- Also picture-in-picture views would be possible: an overview of an image plus
a detail view, ...
- The 3D output format would be easy to create in the Compositor:
- Red-Cyan, Red-Green, Amber-Blue, ... would be possible by mixing the colors
of the two images (see the sketch after this list).
- Side-by-Side and Top-Bottom would be possible by translating one image and
overlaying it on the other image.
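A sketch of the simplest red-cyan mix (equivalent to a few Separate/Combine RGBA nodes in the compositor): red from the left eye, green and blue from the right eye. Real anaglyph pipelines usually add color correction on top of this.

#include <cstdio>

struct RGB { float r, g, b; };

// Per-pixel red-cyan anaglyph mix (sketch).
static RGB red_cyan_anaglyph(const RGB &left, const RGB &right)
{
    RGB out;
    out.r = left.r;   // left eye contributes the red channel
    out.g = right.g;  // right eye contributes green ...
    out.b = right.b;  // ... and blue (together: cyan)
    return out;
}

int main()
{
    const RGB l = {0.8f, 0.2f, 0.1f}, r = {0.7f, 0.3f, 0.2f};
    const RGB a = red_cyan_anaglyph(l, r);
    printf("%.1f %.1f %.1f\n", a.r, a.g, a.b);  // 0.8 0.3 0.2
}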


Regarding 3D Viewport support:
- Several output formats would need to be supported:
- Red-Cyan, Red-Green, Amber-Blue, ...
- Shutter glasses like NVIDIA's
- Odd-even polarized lines (I own one of those)

I hope I described the proposal clearly; what do you think of it?

I'm not a Blender developer, though I already browsed through the sources to
see how best to implement 3D rendering.


Best regards
Torsten


___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


[Bf-committers] Stereoscopy Implementation Proposal

2013-03-26 Thread Dalai Felinto
Hi there,

You can find here a proposal for built-in stereoscopy support in Blender
http://wiki.blender.org/index.php/User:Dfelinto/Stereoscopy

It touches various parts of Blender, so it would be nice if other devs
could take a look at it and share some thoughts. Once the design is
approved, everyone can join the effort as well.

Blender area maintainers: though I'm familiar with some of the parts this
proposal touches, there are plenty of unknowns. Expect to be poked in
that regard; I hope you don't mind ;)

Thanks,
Dalai

blendernetwork.org/member/dalai-felinto
www.dalaifelinto.com
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers


Re: [Bf-committers] Stereoscopy Implementation Proposal

2013-03-26 Thread Harley Acheson
Dalai,

This might be a silly comment but it might simplify things by removing the
need to switch into stereoscopic mode at all.

Instead consider that our camera is *always* capable of producing an image
from *center*, left, or right eye views with the default being the current
behavior of center.

So for the viewport display options you don't need to click the Stereo
checkbox, select left/right/both and also a mode. Replace all of these
with a single list containing center, left eye, right eye, anaglyph
stereo, side-by-side stereo, etc.

So no need then for the Stereoscopy checkbox in the scene either. It is all
just a matter of which eye(s) to render and how to display them. It all
becomes a bit simpler if you add center...
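A tiny sketch of that single list, with illustrative names only:

// One display/render option instead of checkbox + eye selection + mode.
enum class StereoViewMode {
    Center,       // current (mono) behavior, the default
    LeftEye,
    RightEye,
    Anaglyph,     // both eyes, mixed into one red-cyan image
    SideBySide,   // both eyes, packed into one wide image
};

int main()
{
    StereoViewMode mode = StereoViewMode::Center;  // nothing stereo by default
    (void)mode;
    return 0;
}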

Cheers, Harley
___
Bf-committers mailing list
Bf-committers@blender.org
http://lists.blender.org/mailman/listinfo/bf-committers