On Sun, Dec 12, 1999 at 11:22:06PM -0600, John Carmack wrote:
| ...
| One possible objection to this type of arrangement is that a card with 16
| bit textures would have permanently lost the low order bits of the texels
| after upload, and any glGet on the texels would return the lower precision
|
Whoops, sent to the wrong list...
>>I think we're in complete agreement on the two big things needed:
>>
>>1. Don't require Mesa core to store a copy of the texture.
>>
>>2. Implement glTexSubImage functions in the device driver.
>>
>>
>>One question is this: how early in the glTexImage (or glTex
John Carmack wrote:
>
> Since new driver interfaces have been brought up, here are some thoughts
> about improving texturing:
>
> With the current architecture, it isn't possible to accelerate
> glCopyTexSubImage, even though most non-Voodoo hardware is capable of doing
> it completely asynchronously.
Since new driver interfaces have been brought up, here are some thoughts
about improving texturing:
With the current architecture, it isn't possible to accelerate
glCopyTexSubImage, even though most non-Voodoo hardware is capable of doing
it completely asynchronously. The requirement of having an