Hi,

I have a set of images I want to render with OpenGL. I do this by doing
some calculations to produce an array and then passing that array to
OpenGL.  Here's the relevant OpenGL call, for reference:

gluBuild2DMipmaps(GL_TEXTURE_2D, 1, image.shape[0], image.shape[1],
                  GL_LUMINANCE, GL_UNSIGNED_BYTE, flatImage)

flatImage is just image as a flattened, contiguous numpy array; all
that matters is that it contains the same values as image.
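
For illustration, something like this produces flatImage from image (a
minimal sketch, not my exact code; the random array is just a stand-in
for one of my computed images):

import numpy as np

# stand-in for one of the images: a 2D array of 8-bit grey values
image = (np.random.rand(480, 640) * 255).astype(np.uint8)

# flatImage: the same values as image, as a flattened, contiguous uint8 array
# (uint8 matches the GL_UNSIGNED_BYTE type in the call above)
flatImage = np.ascontiguousarray(image).ravel()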

The problem is that pylab.imshow() displays my arrays exactly as they
are meant to look, but in OpenGL they come out 'twisted'.  There is an
offset, _different_ for each picture, that only seems to go away if I
replace 'image.shape[0]' with 'image.shape[0]-5' or some other value
that makes the rows shorter.  How is it that OpenGL does weird things
with the row length while pylab is always happy?
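
For concreteness, the hack I mean looks like this (assuming
PyOpenGL-style imports, and with image/flatImage as in the sketch
above; the -5 is just one value that happens to straighten one
particular image, other images need a different number):

from OpenGL.GL import GL_LUMINANCE, GL_TEXTURE_2D, GL_UNSIGNED_BYTE
from OpenGL.GLU import gluBuild2DMipmaps

# same call as above, but with the first dimension shortened by a
# per-image fudge factor
gluBuild2DMipmaps(GL_TEXTURE_2D, 1, image.shape[0] - 5, image.shape[1],
                  GL_LUMINANCE, GL_UNSIGNED_BYTE, flatImage)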

I can send pictures of the problem if it helps....

Adeola
