Hi Jason,

On Tue, Oct 23, 2012 at 5:27 PM, Jason Daly <jd...@ist.ucf.edu> wrote:

>
> There's no difference between these two.  ARB_multitexture is a 15-year
> old extension that simply provides the specification for how multitexturing
> is done.  Multitexturing has been part of standard OpenGL since version 1.3.
>

Ok, I see. I stumbled upon these terms when looking at

http://updraft.github.com/osgearth-doc/html/classosgEarth_1_1TextureCompositor.html


>   - using multi-pass rendering. Probably slower but not limited by
> hardware.
>
>
> I doubt you'll need to resort to this, but with the vague description of
> what you're doing, I can't be 100% sure.
>

What I actually do is this: I have a mesh that is generated from depth
maps. In a post-processing step I want to apply photos (taken by arbitrary
cameras, but with known intrinsics) as textures. For each photo I know the
position from which it was taken (relative to the mesh) and the camera
intrinsics.

Next, I calculate the UV coordinates myself using projective texturing. Of
course, not all triangles of the mesh are textured by the same photo, and
there are triangles that are not visible from any photo.
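
To illustrate, this is roughly the projection I do on the CPU today (a
minimal standalone sketch of a pinhole camera; the struct and the names are
just mine, for illustration):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Minimal pinhole projection: world point -> (u, v) in [0, 1] texture space.
// R (row-major 3x3) and t transform world coordinates into the camera frame;
// fx, fy, cx, cy are the intrinsics in pixels; w, h is the photo size.
struct Camera {
    std::array<double, 9> R;   // world -> camera rotation, row-major
    std::array<double, 3> t;   // world -> camera translation
    double fx, fy, cx, cy;     // intrinsics
    double w, h;               // image size in pixels
};

// Returns false when the point lies behind the camera or projects outside
// the photo, i.e. this vertex gets no UV coordinate from this photo.
bool projectToUV(const Camera& c, const double p[3], double& u, double& v)
{
    // Transform into the camera frame.
    double x = c.R[0]*p[0] + c.R[1]*p[1] + c.R[2]*p[2] + c.t[0];
    double y = c.R[3]*p[0] + c.R[4]*p[1] + c.R[5]*p[2] + c.t[1];
    double z = c.R[6]*p[0] + c.R[7]*p[1] + c.R[8]*p[2] + c.t[2];
    if (z <= 0.0) return false;          // behind the camera

    // Perspective divide + intrinsics -> pixel coordinates.
    double px = c.fx * (x / z) + c.cx;
    double py = c.fy * (y / z) + c.cy;
    if (px < 0.0 || px > c.w || py < 0.0 || py > c.h) return false;

    // Normalize to [0, 1] texture coordinates (flip v for the image origin).
    u = px / c.w;
    v = 1.0 - py / c.h;
    return true;
}
```

(Visibility/occlusion testing is a separate step on top of this; the sketch
only covers the projection itself.)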

>
> If this is true, you can generate the texture coordinates directly from
> the position/orientation of the photo and the positions of each vertex in
> the mesh.  Read up on projective texturing to see how this is done.  OSG
> can do this easily if you write a shader for it, or you can use the
> osg::TexGen state attribute to handle it for you (it works just like
> glTexGen in regular OpenGL).
>

How can TexGen or a shader help here? Would either of them let me calculate
the UV coordinates for a given photo (camera position etc.) and the mesh?
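
To make the question more concrete, here is how I understand the eye-linear
approach (plain C++ without OSG; the matrix layout and names are just my
assumption): each generated coordinate is a dot product of the vertex with a
user-supplied plane, so if the S, T, Q planes are rows of the projector's
scale-bias * projection * view matrix M, then (s, t, q) = M * v, and s/q,
t/q are exactly the projective UVs I compute on the CPU today.

```cpp
#include <cassert>
#include <cmath>

double dot4(const double a[4], const double v[4])
{
    return a[0]*v[0] + a[1]*v[1] + a[2]*v[2] + a[3]*v[3];
}

// M is row-major; row 0 -> S plane, row 1 -> T plane, row 3 -> Q plane.
void texgenUV(const double M[4][4], const double v[4], double& u, double& t)
{
    double s  = dot4(M[0], v);
    double tt = dot4(M[1], v);
    double q  = dot4(M[3], v);
    u = s / q;    // the perspective divide that the hardware performs
    t = tt / q;   // for a projective texture lookup
}
```

Is that the idea, i.e. TexGen (or the equivalent two matrix multiplies in a
vertex shader) would replace my per-vertex CPU loop?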



>
> The other thing you'll need to do is divide up the mesh so that only the
> faces that are covered by a given photo or photos are being drawn and
> textured by those photos.  This will eliminate the need for you to have as
> many texture units as there are photos.  There may be regions of the mesh
> where you want to blend two or more photos together, and this is the only
> time where you'd need multitexturing.  You should be able to handle this
> mesh segmentation with a not-too-complicated preprocessing step.
>

I wanted to avoid splitting the mesh, at least for the internal
representation (which I had hoped would also cover visualization). The pros
and cons have been discussed in this thread, in case you are interested:

https://groups.google.com/d/topic/reconstructme/sDb_A-n6_A0/discussion

Best,
Christoph
_______________________________________________
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
