[ Please don't top-post ]

On Wed, 2005-03-09 at 10:38 +0100, Jerome Glisse wrote:
> 
> I don't think this will change anything for x86 setups.

Yeah, the real question is whether it breaks pre-R300 chips on
big-endian machines, but it looks fine to me.

> Moreover, could this change also affect the way X does bitblits with
> XRender acceleration?

No, as I explained to you before, the X driver has to deal with
different byte orders because the X server doesn't provide the data in
little endian.


> On Wed, 9 Mar 2005 11:24:02 +1100, Paul Mackerras <[EMAIL PROTECTED]> wrote:
> > I started looking into the issue of how we handle various texture
> > formats on R300 on big-endian machines.  It became evident that
> > textures were getting byte-swapped on their way to the framebuffer.

Yep, the comment the patch removes explains this. :)

> > We can cope with the byte-swap for textures with 4 bytes/texel, but
> > not for textures with 2 or 1 byte/texel.  So instead of using a
> > HOSTDATA_BLT in radeon_cp_dispatch_texture, I changed it to use a
> > BITBLT_MULTI.  I still copy the texture into gart memory, but instead
> > of using an indirect buffer I just put the blit command into the ring
> > buffer.  

Nice. It might also be interesting to experiment with copying the
texture data into the ring itself instead of into indirect buffers
(using type-3 NOP packets to have the CP skip it), if someone feels so
inclined.
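
For anyone who wants to play with that, I imagine it would look roughly
like this (completely untested sketch; `dwords' would be the size of the
texture payload in dwords, `data' the payload itself, and
CP_PACKET3/RADEON_CP_NOP plus the ring macros the ones from
radeon_drv.h, assuming I have the names right):

	BEGIN_RING(1 + dwords);
	/* type-3 NOP header: the CP skips the following 'dwords' dwords */
	OUT_RING(CP_PACKET3(RADEON_CP_NOP, dwords - 1));
	for (i = 0; i < dwords; i++)
		OUT_RING(data[i]);	/* texture payload lives in the ring */
	ADVANCE_RING();

The blit's source pitch/offset would then have to point at the location
of the payload in the ring rather than at an indirect buffer, which is
the fiddly part.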

> > This avoids the byte swap that the CP does and gets the texture to the 
> > framebuffer without being byte-swapped.  It should be just as fast this 
> > way as with the HOSTDATA_BLT.

Yeah, it might actually be slightly faster. :)


> > +               OUT_RING((texpitch << 22) | (offset >> 10));
> > +               OUT_RING((texpitch << 22) | (tex->offset >> 10));

Are the source and destination pitches always the same?
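
FWIW the BITBLT_MULTI packet does carry separate pitch/offset dwords for
source and destination, roughly along these lines (sketching from memory
with placeholder variable names, so take the exact GMC flag combination
with a grain of salt):

	BEGIN_RING(7);
	OUT_RING(CP_PACKET3(RADEON_CNTL_BITBLT_MULTI, 5));
	OUT_RING(RADEON_GMC_SRC_PITCH_OFFSET_CNTL |
		 RADEON_GMC_DST_PITCH_OFFSET_CNTL |
		 RADEON_GMC_BRUSH_NONE |
		 (format << 8) |	/* blit colour format */
		 RADEON_GMC_SRC_DATATYPE_COLOR |
		 RADEON_ROP3_S |
		 RADEON_DP_SRC_SOURCE_MEMORY |
		 RADEON_GMC_CLR_CMP_CNTL_DIS |
		 RADEON_GMC_WR_MSK_DIS);
	OUT_RING((src_pitch << 22) | (src_offset >> 10));	/* source */
	OUT_RING((dst_pitch << 22) | (dst_offset >> 10));	/* destination */
	OUT_RING((src_x << 16) | src_y);
	OUT_RING((dst_x << 16) | dst_y);
	OUT_RING((width << 16) | height);
	ADVANCE_RING();

So if the staging copy in GART memory is ever laid out with a pitch
different from the destination's, the source dword could just carry its
own value.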

> > +               OUT_RING(0);
> > +               OUT_RING((image->x << 16) | image->y);
> > +               OUT_RING((image->width << 16) | height);
> > +               ADVANCE_RING();
> > +
> >                 radeon_cp_discard_buffer(dev, buf);

I think this needs a RADEON_WAIT_UNTIL_2D_IDLE(), or the indirect buffer
might get reused before the blit is complete.
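
I.e. something along these lines before the buffer is discarded
(untested, but IIRC RADEON_WAIT_UNTIL_2D_IDLE() is the existing macro
from radeon_drv.h and emits two dwords):

	BEGIN_RING(2);
	/* make sure the 2D engine is done reading from the GART buffer
	 * before it is handed back for reuse */
	RADEON_WAIT_UNTIL_2D_IDLE();
	ADVANCE_RING();

	radeon_cp_discard_buffer(dev, buf);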


-- 
Earthling Michel Dänzer      |     Debian (powerpc), X and DRI developer
Libre software enthusiast    |   http://svcs.affero.net/rm.php?r=daenzer


