On 4/12/08, Zachrahan <[EMAIL PROTECTED]> wrote:
>
> Hello all,
>
> Thanks for the help with setting up pyglet windows in background
> threads. The next step for my microscope-control workspace is to
> pull images from the camera and put them onscreen as fast as
> possible, using as little processor as possible (the proc will be
> in use for other things, like auto-focus, etc.)
Sounds great. The pyglet.media module is able to upload images into
existing textures using glTexSubImage2D very quickly. This hasn't been
the bottleneck in playing back multiple 30fps videos on any of my
computers, so I suspect it will be suitable for your setup as well.

> Here are the basics of the system:
> - Camera produces 16-bit grey-scale images, of arbitrary size
> (from 128x128 to 1300x1030).

Your video card may have a limit on the maximum texture size, in which
case a large image like 1300x1030 would need to be uploaded as several
tiled images.

> - I'll probably convert the pixels to 8-bit using some brightness/
> contrast/gamma function via numpy. (Or would doing this in OpenGL
> be possible/better?)

Most recent video cards support 32-bit floating-point textures, which
should be sufficient for tone mapping a 16-bit image. Very recent
nvidia cards also support integer textures, and so will be completely
suitable for your 16-bit data. pyglet doesn't supply any "nice"
interfaces for these texture types (but all the relevant GL
functions/enums are provided).

> - I'll need to draw the image in a window, using an arbitrary zoom
> factor (with 1:1 giving camera pixels = display pixels) and an
> arbitrary window size.

Uploading the image as a texture at its native resolution gives you
the best flexibility here.

> - Sometimes, the camera will be giving 30+ FPS (with small images);
> at other times, the camera will be off, and the only image updates
> needed will be when the window needs a redraw (or the user zooms
> in).
>
> Obviously, I'd like to handle the zooming on the video card, and
> anything else I can. I assume that the fastest way to go is to
> upload the pixels as a texture of luminance-type pixels, and then
> use texture-mapping to go from the image pixels to screen pixels
> with the required zoom and viewport onto the image.
>
> However, I could alternately use glDrawPixels and glPixelZoom to
> handle this, and deal with choosing the right image region to show
> elsewhere.

I wouldn't recommend glDrawPixels: if you end up displaying the same
frame even twice, you'll have lost performance over using a texture.

> Also, given that much of the time there will be no new image input,
> is there a way to not re-draw the window unless it needs it? (E.g.
> part of the window has been occluded and needs a redraw?) Since
> windows are double-buffered on OS X, and will likely be foreground
> anyway, it seems a bit wasteful to blit new pixels to the screen
> every cycle if nothing's changed on the display... Though perhaps
> that's not the case.

pyglet 1.1 has similar behaviour: windows are updated only when they
receive an event. Since a redraw of just the image (as a texture) will
be very cheap, this is probably going to be plenty CPU-friendly for
you. If you really want it to update only when required, you can
subclass pyglet.app.EventLoop and override idle(). I've appended a few
rough sketches below to illustrate these points.

Alex.
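
Here are the sketches just mentioned; all assume pyglet 1.1, and the
names and sizes are placeholders rather than anything from your code.
First, uploading each camera frame into a pre-allocated texture with
the pyglet image API (blit_into goes through glTexSubImage2D
underneath):

    import pyglet
    from pyglet.gl import GL_LUMINANCE

    WIDTH, HEIGHT = 512, 512  # placeholder frame size

    window = pyglet.window.Window(WIDTH, HEIGHT)

    # Allocate the texture once; each frame is uploaded into the same
    # storage instead of creating a new texture every time.
    texture = pyglet.image.Texture.create(WIDTH, HEIGHT, GL_LUMINANCE)

    def upload_frame(pixels):
        # 'pixels' is 8-bit luminance data as a byte string, one byte
        # per pixel, rows bottom-to-top.
        image = pyglet.image.ImageData(WIDTH, HEIGHT, 'L', pixels,
                                       pitch=WIDTH)
        texture.blit_into(image, 0, 0, 0)

    @window.event
    def on_draw():
        window.clear()
        texture.blit(0, 0)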
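
You can query the texture-size limit at runtime through pyglet's
ctypes GL bindings (this needs a current context, so create the
window first):

    from ctypes import byref
    from pyglet.gl import GLint, glGetIntegerv, GL_MAX_TEXTURE_SIZE

    max_size = GLint()
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, byref(max_size))
    # If max_size.value is smaller than your largest frame dimension,
    # split the frame across several tiled textures.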
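
For the numpy conversion, one possible brightness/contrast/gamma
mapping from uint16 to uint8 (the black/white/gamma parameters are
invented for the example):

    import numpy as np

    def tone_map(frame, black=0, white=65535, gamma=1.0):
        # 'frame' is a 2-D uint16 array from the camera.  Map the
        # [black, white] range onto [0, 1], apply gamma, then rescale
        # to 8-bit.
        scaled = (frame.astype(np.float32) - black) / (white - black)
        np.clip(scaled, 0.0, 1.0, out=scaled)
        if gamma != 1.0:
            scaled **= 1.0 / gamma
        return (scaled * 255.0).astype(np.uint8)

    # The result can go straight to upload_frame, e.g.:
    #   upload_frame(tone_map(frame, black=200, white=4000).tostring())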
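
Zooming can then stay on the video card by scaling the modelview
matrix before blitting. This variant of on_draw reuses the texture
from the first sketch; GL_NEAREST keeps camera pixels crisp (use
GL_LINEAR if you'd rather have smoothing):

    from pyglet.gl import (glBindTexture, glTexParameteri, glScalef,
                           glPushMatrix, glPopMatrix,
                           GL_TEXTURE_MAG_FILTER, GL_NEAREST)

    zoom = 2.0  # 1.0 means camera pixels map 1:1 to display pixels

    @window.event
    def on_draw():
        window.clear()
        glBindTexture(texture.target, texture.id)
        glTexParameteri(texture.target, GL_TEXTURE_MAG_FILTER,
                        GL_NEAREST)
        glPushMatrix()
        glScalef(zoom, zoom, 1.0)
        texture.blit(0, 0)  # drawn at native resolution, scaled by GL
        glPopMatrix()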
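
Finally, a minimal sketch of the idle() override; needs_redraw is a
hypothetical flag that your camera code would set after uploading a
new frame:

    import pyglet
    from pyglet import app, clock

    class LazyEventLoop(pyglet.app.EventLoop):
        needs_redraw = True  # hypothetical flag, set by camera code

        def idle(self):
            clock.tick(poll=True)  # keep scheduled functions running
            if self.needs_redraw:
                for window in app.windows:
                    window.switch_to()
                    window.dispatch_event('on_draw')
                    window.flip()
                self.needs_redraw = False
            # Sleep until the next scheduled function is due.
            return clock.get_sleep_time(True)

    LazyEventLoop().run()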
