Thanks, Alex, for the help and suggestions! Hopefully the video card
on the microscope machine can handle large, rectangular, non-power-of-
two textures, because it sounds like that will make my life the
easiest...
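
(I suppose I can at least check for that up front -- here's an
untested sketch using pyglet's gl_info module, assuming an invisible
window is an acceptable way to get a GL context for the query:)

    import pyglet.window
    from pyglet.gl import gl_info

    # Need a live GL context before querying the driver.
    window = pyglet.window.Window(visible=False)
    if gl_info.have_extension('GL_ARB_texture_non_power_of_two'):
        print('NPOT textures supported; the image can go up as-is')
    else:
        print('no NPOT support; will have to pad to a power of two')
    window.close()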

I was initially thinking of doing the tone mapping on the textures
(all I need are controls for min/max pixel intensity and maybe a
gamma) with the glPixelTransfer and glPixelMap functions, which would
modify the intensity values as the texture is uploaded. But then the
texture upload would need to be repeated each time the brightness/
contrast mapping changes. I guess the middle road is to upload the
image once as an "original" texture, and then copy it around video
memory with glCopyTexImage or similar, which would apply the pixel
intensity transforms set by glPixelTransfer and friends.
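
For reference, the upload-time version I have in mind looks roughly
like the untested sketch below. upload_with_window() and the
vmin/vmax names are just mine, and the gamma part (which I think
would need a glPixelMap lookup table) is left out:

    from pyglet.gl import *

    def upload_with_window(tex_id, width, height, data, vmin, vmax):
        # Untested sketch: re-upload the image with a linear min/max
        # window applied by the pixel-transfer pipeline.  vmin/vmax
        # are in the same 0..1 range GL uses for luminance values.
        scale = 1.0 / (vmax - vmin)
        bias = -vmin * scale
        for scale_c, bias_c in ((GL_RED_SCALE, GL_RED_BIAS),
                                (GL_GREEN_SCALE, GL_GREEN_BIAS),
                                (GL_BLUE_SCALE, GL_BLUE_BIAS)):
            glPixelTransferf(scale_c, scale)
            glPixelTransferf(bias_c, bias)

        # data is the raw 16-bit grayscale pixels (a ctypes array or
        # byte string); the scale/bias above is applied during upload.
        glBindTexture(GL_TEXTURE_2D, tex_id)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
                     GL_LUMINANCE, GL_UNSIGNED_SHORT, data)

        # Reset the transfer state so later uploads are untouched.
        for scale_c, bias_c in ((GL_RED_SCALE, GL_RED_BIAS),
                                (GL_GREEN_SCALE, GL_GREEN_BIAS),
                                (GL_BLUE_SCALE, GL_BLUE_BIAS)):
            glPixelTransferf(scale_c, 1.0)
            glPixelTransferf(bias_c, 0.0)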

Perhaps the most straightforward option would be doing the mapping at
render time with a shader, except that I have no idea how to do that
in practice, or what the level of support for this sort of thing is
in pyglet. I do see the "shader.py" code in the "experimental"
directory in SVN, but I don't know how robust that is -- especially
for things like "uniform variables", which look useful for changing
the intensity mapping parameters on the fly.
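
To make the question concrete, here is the kind of thing I'm
picturing, written against raw GL calls through pyglet.gl rather than
the experimental shader.py (whose API I don't know). Untested, and
the vmin/vmax/gamma uniform names are just ones I made up:

    from ctypes import c_char_p, cast, pointer, POINTER
    from pyglet.gl import *

    # Minimal fragment shader: linear min/max window plus a gamma.
    FRAG_SRC = b'''
    uniform sampler2D image;
    uniform float vmin, vmax, gamma;
    void main() {
        float v = texture2D(image, gl_TexCoord[0].st).r;
        v = clamp((v - vmin) / (vmax - vmin), 0.0, 1.0);
        gl_FragColor = vec4(vec3(pow(v, gamma)), 1.0);
    }
    '''

    def make_program():
        # Compile and link the fragment shader with raw GL 2.0 calls;
        # the ctypes cast is the usual dance glShaderSource needs.
        shader = glCreateShader(GL_FRAGMENT_SHADER)
        src = (c_char_p * 1)(FRAG_SRC)
        glShaderSource(shader, 1,
                       cast(pointer(src), POINTER(POINTER(GLchar))),
                       None)
        glCompileShader(shader)
        program = glCreateProgram()
        glAttachShader(program, shader)
        glLinkProgram(program)
        return program

    def set_window(program, vmin, vmax, gamma):
        # Change the mapping on the fly: no texture re-upload, just
        # new uniform values before drawing the textured quad.
        glUseProgram(program)
        glUniform1f(glGetUniformLocation(program, b'vmin'), vmin)
        glUniform1f(glGetUniformLocation(program, b'vmax'), vmax)
        glUniform1f(glGetUniformLocation(program, b'gamma'), gamma)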

Given the current state of pyglet, are any of these approaches more
or less reasonable? Or is this basically an OpenGL question at this
point, not a pyglet one?

Thanks again for all the help and suggestions!