Hi,
Hi Sebastian,

thanks for your input. If I were switching texture bindings, wouldn't that mean the scene still has to be rasterized before display? My ultimate intention is to skip rasterization altogether (i.e. rasterize all patterns once at init) and just copy the final pixels. Is that feasible?
Changing texture coordinates or switching bindings is fast enough. Rasterization has to happen in any case if you want to put something on screen. I assume your patterns are images that you can pre-calculate, so sending them to the GPU is just a matter of adding them to a texture object. Simply try it, it is really simple. Otherwise, try to explain the problem you want to solve in more detail.
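For illustration, a minimal sketch of that approach in OSG (file names and setup are placeholders, not taken from this thread): all pattern images become texture objects once at init, and a user request only rebinds a different texture on the quad's StateSet, so after the first draw of each texture no further CPU->GPU image transfer should be needed.

#include <osg/Geode>
#include <osg/Geometry>
#include <osg/Texture2D>
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>
#include <string>
#include <vector>

int main()
{
    // Create one texture object per pattern up front (file names are placeholders).
    std::vector<std::string> files = {"pattern0.png", "pattern1.png", "pattern2.png"};
    std::vector< osg::ref_ptr<osg::Texture2D> > patterns;
    for (const std::string& f : files)
        patterns.push_back(new osg::Texture2D(osgDB::readImageFile(f)));

    // A single textured quad that stays in the scene the whole time.
    osg::ref_ptr<osg::Geode> geode = new osg::Geode;
    geode->addDrawable(osg::createTexturedQuadGeometry(
        osg::Vec3(-1.0f, 0.0f, -1.0f),
        osg::Vec3( 2.0f, 0.0f,  0.0f),
        osg::Vec3( 0.0f, 0.0f,  2.0f)));

    // Show the first pattern; on a user request simply rebind another one, e.g.
    // geode->getOrCreateStateSet()->setTextureAttributeAndModes(0, patterns[i], osg::StateAttribute::ON);
    geode->getOrCreateStateSet()->setTextureAttributeAndModes(
        0, patterns[0].get(), osg::StateAttribute::ON);

    osgViewer::Viewer viewer;
    viewer.setSceneData(geode.get());
    return viewer.run();
}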

Cheers
Sebastian

Best,
Christoph


Sebastian Messerschmidt <sebastian.messerschm...@gmx.de> wrote on Fri, 13 Nov 2015 at 20:42:

    Hello Christoph,

    You can use a single texture which contains all the textures and
    modify the texture coordinates (a matrix of different textures,
    so to speak).
    Texture arrays might also help here a lot, if all the textures
    are of the same dimensions.
    In any case, what you might be seeing are initial costs. Once the
    texture is transferred to the GPU and your GPU memory is not
    exhausted, subsequent use should just be a matter of switching
    texture bindings.
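
    (A rough sketch of the atlas variant, assuming all patterns are
    packed into one image as an equally sized grid of tiles; the
    helper and its parameters are made up for illustration, and the
    texture coordinate order matches what osg::createTexturedQuadGeometry
    produces. Showing another pattern then only means writing new
    texture coordinates, without switching textures at all:)

    #include <osg/Geometry>

    // Illustrative helper (not from the thread): point a quad's texture
    // coordinates at tile (col, row) of an atlas holding cols x rows
    // equally sized patterns.
    void selectAtlasTile(osg::Geometry* quad,
                         unsigned cols, unsigned rows,
                         unsigned col, unsigned row)
    {
        const float w = 1.0f / cols;
        const float h = 1.0f / rows;
        const float l = col * w, b = row * h;   // left/bottom of the tile
        const float r = l + w,   t = b + h;     // right/top of the tile

        osg::ref_ptr<osg::Vec2Array> tc = new osg::Vec2Array(4);
        (*tc)[0].set(l, t);
        (*tc)[1].set(l, b);
        (*tc)[2].set(r, b);
        (*tc)[3].set(r, t);
        quad->setTexCoordArray(0, tc.get());
        quad->dirtyDisplayList();               // in case display lists are in use
    }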

    Cheers
    Sebastian

    Hi everyone,

    I'm working on a project where I need to display different
    patterns (textured rectangles) as quickly as possible. Currently,
    I have an osgViewer with a single view running that holds a
    single textured rectangle, and upon user request the texture /
    image is updated. This works reasonably well, but requires a
    CPU->GPU transfer every time I want to update the image.

    I think there must be a way to improve performance when I know
    the set of pattern images beforehand. So, I would like to
    prepare pre-rendered textured quads on the GPU and, upon user
    request, just cycle between those pre-rendered elements. That
    should reduce the CPU->GPU overhead to initialization time.
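
    (One possible way to do that cycling in OSG, sketched under the
    assumption that the patterns come from image files; names below
    are placeholders: an osg::Switch holding one pre-built textured
    quad per pattern, with setSingleChildOn() selecting the visible one.)

    #include <osg/Geode>
    #include <osg/Geometry>
    #include <osg/Switch>
    #include <osg/Texture2D>
    #include <osgDB/ReadFile>
    #include <string>
    #include <vector>

    // Build one textured quad per pattern once at init; the textures stay
    // resident on the GPU after the first frame each child is drawn.
    osg::ref_ptr<osg::Switch> buildPatternSwitch(const std::vector<std::string>& files)
    {
        osg::ref_ptr<osg::Switch> sw = new osg::Switch;
        for (const std::string& f : files)
        {
            osg::ref_ptr<osg::Geode> geode = new osg::Geode;
            geode->addDrawable(osg::createTexturedQuadGeometry(
                osg::Vec3(-1.0f, 0.0f, -1.0f),
                osg::Vec3( 2.0f, 0.0f,  0.0f),
                osg::Vec3( 0.0f, 0.0f,  2.0f)));
            osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D(osgDB::readImageFile(f));
            geode->getOrCreateStateSet()->setTextureAttributeAndModes(
                0, tex.get(), osg::StateAttribute::ON);
            sw->addChild(geode.get(), false);   // start with all children off
        }
        sw->setSingleChildOn(0);                // show the first pattern
        return sw;
    }

    // On a user request, cycling is just: sw->setSingleChildOn(nextIndex);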

    I somehow have the feeling that the solution requires a
    frame-buffer / render-buffer object per pattern and somehow
    involves glBlitFramebuffer to display the correct frame buffer,
    but I'm stuck on how to implement this in OSG.
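
    (If the FBO route is explored anyway, the usual OSG building block
    would be a pre-render camera with a FRAME_BUFFER_OBJECT target that
    renders a pattern subgraph into a texture; a sketch with placeholder
    names, shown only as an illustration of that building block, not as
    a recommendation over the simpler texture-switching approach:)

    #include <osg/Camera>
    #include <osg/Texture2D>

    // Render a pattern subgraph into a texture via an FBO-backed
    // pre-render camera (names and sizes are placeholders).
    osg::ref_ptr<osg::Camera> makePatternRTTCamera(osg::Node* patternSubgraph,
                                                   osg::Texture2D* target,
                                                   int width, int height)
    {
        osg::ref_ptr<osg::Camera> cam = new osg::Camera;
        cam->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
        cam->setRenderOrder(osg::Camera::PRE_RENDER);
        cam->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
        cam->setViewport(0, 0, width, height);
        cam->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        cam->attach(osg::Camera::COLOR_BUFFER, target);
        cam->addChild(patternSubgraph);
        return cam;
    }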

    Any kick start would be greatly appreciated!

    Best,
    Christoph

