On Tue, Mar 23, 2010 at 7:26 PM, Luca Barbieri <l...@luca-barbieri.com> wrote:
> What is the rationale behind the gallium-resources changes?

Luca,

Thanks for the feedback.  I posted something describing this a little while ago:

http://www.mail-archive.com/mesa3d-dev@lists.sourceforge.net/msg11375.html

There are a bunch of things pushing us in this direction, but at its
most basic it's a recognition that we have two GPU-side objects with
very similar operations on them, exposed by the interface in quite
different ways.  Look for instance at the SVGA driver, which has
implemented two separate (and fairly complex) non-blocking upload
paths, one for each of these entities.

And crucially, we also have APIs starting to blur the line between
textures and buffers.  In DX10 and 11 in particular, you can perform
operations on buffers (like binding as a render-target) which are
easier to cope with if we have a unified abstraction.


In the past we had some confusion about what a pipe_buffer really is -- is it:
a) a GPU-side entity which can be bound to the pipeline?
b) a mechanism for CPU/GPU communication - effectively a dma buffer?
c) a way of talking about the underlying storage for GPU resources,
effectively a linear allocation of VRAM memory?

What we're doing in gallium-resources is a unification of textures and
the view (a) of buffers as abstract GPU entities.

That implies that the roles b) and c) are covered by other entities --
in particular transfers become the generic CPU/GPU communication path,
and the underlying concept of winsys buffers (not strictly part of
gallium) provides (c).
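To make that division of roles concrete, here's a minimal C sketch of how (a), (b) and (c) line up under the unified model. All the names here (my_resource, my_transfer, etc.) are invented for illustration and are not the actual Gallium declarations:

```c
#include <string.h>

/* Hypothetical sketch: under the unified model a transfer is the one
 * CPU<->GPU communication path (role b) for any resource (role a),
 * whether it's a buffer or a texture.  All names are made up. */
struct my_resource {
    unsigned width, height;   /* a buffer is just height == 1 */
    unsigned stride;          /* bytes per row */
    unsigned char *storage;   /* stand-in for the winsys buffer (role c) */
};

struct my_transfer {
    struct my_resource *res;
    void *ptr;                /* CPU-visible mapping */
};

/* Map a resource for CPU access. */
static void *my_transfer_map(struct my_resource *res, struct my_transfer *xfer)
{
    xfer->res = res;
    xfer->ptr = res->storage;  /* a real driver may map a staging buffer */
    return xfer->ptr;
}

static void my_transfer_unmap(struct my_transfer *xfer)
{
    xfer->ptr = NULL;          /* a real driver would flush/unmap here */
}
```

The point is just that nothing in the map/unmap path needs to care whether the resource started life as a buffer or a texture.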

Basically the change unifies all the GPU-side entities under a single
parent (resource).  The driver is free to implement textures and
buffers as one code path or two.  For expediency, I've tried to avoid
changing the drivers significantly at this point, which has meant
keeping alive the separate texture and buffer implementations and
selecting between them with a vtbl.  That isn't a strict requirement
of the design, just something I've done to avoid rewriting all of the
drivers at once on my own...
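A rough sketch of that vtbl arrangement, with made-up names standing in for the real driver structures: both kinds of object derive from a common resource and carry a per-type function table, so a driver can keep its two code paths while exposing one interface.

```c
/* Illustrative only: all names here are invented for the example. */
struct my_resource;

struct my_resource_vtbl {
    unsigned (*get_size)(const struct my_resource *res);
};

struct my_resource {
    const struct my_resource_vtbl *vtbl;
    unsigned width, height;
};

static unsigned buffer_get_size(const struct my_resource *res)
{
    return res->width;                    /* buffers: linear byte count */
}

static unsigned texture_get_size(const struct my_resource *res)
{
    return res->width * res->height * 4;  /* textures: assume 4 bytes/pixel */
}

static const struct my_resource_vtbl buffer_vtbl = { buffer_get_size };
static const struct my_resource_vtbl texture_vtbl = { texture_get_size };

/* Callers see one entry point; the vtbl picks the implementation. */
static unsigned my_resource_get_size(const struct my_resource *res)
{
    return res->vtbl->get_size(res);
}
```

A driver that later unifies its two paths just points both vtbls at the same functions and eventually deletes the table.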


> I couldn't find any and I see several points that could be improved:
> 1. How does one do texture_blanket with the gallium-resources API?
> That is, how does one bind a buffer as a texture with a specific
> format?
> 2. Why is there a transfer_inline_write and not a transfer_inline_read?
> 3. How about adding a transfer_update_region that would copy data from
> the resource to the transfer, just like transfer_inline_write copies
> from the transfer to the resource?
> 4. How about making transfers be always mapped when alive and removing
> transfer_map and transfer_unmap?

I think you brought some of these points up in the followup to my
earlier post, eg:

http://www.mail-archive.com/mesa3d-dev@lists.sourceforge.net/msg11537.html

I think your suggestions are good - I had an 'inline_read' initially,
and although I took it out, I've been convinced by others that there
are actually users for such an interface - so no objections to it
coming back in.  Similarly I agree there isn't much value in
transfer_create/destroy being separate from map and unmap.
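For what it's worth, the symmetry is easy to see if you think of both inline operations as conveniences over a map/copy/unmap sequence. A toy sketch over an already-mapped region, with hypothetical names:

```c
#include <string.h>

/* Sketch: an inline write and an inline read are just the two copy
 * directions over a mapped resource region, so they don't need
 * separate driver plumbing.  Names are invented for the example. */
static void inline_write(unsigned char *mapping, unsigned offset,
                         const void *data, unsigned size)
{
    memcpy(mapping + offset, data, size);   /* CPU -> resource */
}

static void inline_read(void *data, const unsigned char *mapping,
                        unsigned offset, unsigned size)
{
    memcpy(data, mapping + offset, size);   /* resource -> CPU */
}
```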

I see these as additional enhancements beyond getting the basic
resource concept up and running which can be done in follow-on work.

In reference to texture_blanket - this was actually removed from
gallium a little while ago - replaced by the texture_from_handle() and
texture_get_handle() interfaces.  In this case the 'handle' is a
pointer to an operating-system specific entity -- presumably
describing the underlying storage.

> In addition to these micro-level issues, is the bigger picture
> unification of buffers and textures as resources a good idea?

I think so, not least because other APIs are moving in this direction
and using them somewhat interchangeably.

> It will burden all buffer operations with redundant notions of 3D
> boxes, strides, formats and texture targets.

I'm not sure where you see this, but if there are specific cases where
there is a lot of new overhead, we can work to reduce that.

> How about instead layering textures over buffers, and exposing the
> underlying buffer of a texture, maybe also allowing to dynamically
> change it?

I think this makes sense for the view of buffers as memory-manager
allocations.  That works for certain cases, eg native rendering on
local machines, but not all uses of gallium can be described that way.
We're really positioning the gallium api at a slightly higher
abstraction level, to cover both the cases where that could work and
the ones which don't fit that mold.

> Then you could create a texture, asking the driver to create a buffer
> too, for the normal texture creation case.
> You could create a texture with a specified format and layout over an
> existing buffer to implement buffer-as-texture, or reinterpret the
> underlying buffer of an existing texture as another data format.
> You could also create a texture without an underlying buffer, to find
> out how large of a buffer you would need for that texture layout. (and
> whether it is supported). This could be useful for OpenGL texture
> proxies.
> For shared textures, you would call buffer_from_handle and then create
> a texture over it with the desired format/layout.
>
> Transfers can then be split in "texture transfers" and "buffer transfers".
> Note that they are often inherently different, since one often uses
> memcpy-like GPU functionality, and the other often uses 2D blitter or
> 3D engine functionality (and needs to worry about swizzling or tiling)
> Thus, they are probably better split and not unified.

My experience is that there is more in common than different about the
paths.  There are the same set of constraints about not wanting to
stall the GPU by mapping the underlying storage directly if it is
still in flight, and allocating a dma buffer for the upload if it is.
There will always be some differences, but probably no more than the
differences between uploading to eg a constant buffer and a vertex
buffer, or uploading to a swizzled and linear texture.
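Something like the following (entirely invented) pseudo-driver logic captures that shared decision, regardless of whether the destination is a buffer or a texture:

```c
#include <stdbool.h>
#include <stdlib.h>

/* Sketch of the shared upload constraint: don't map the resource's
 * storage directly while the GPU still owns it; stage through a
 * freshly allocated buffer instead.  Everything here (my_resource,
 * the gpu_is_busy flag, ...) is invented for illustration. */
struct my_resource {
    bool gpu_is_busy;       /* still referenced by in-flight commands? */
    unsigned size;
    unsigned char *storage;
};

/* Returns the pointer the CPU should write through: the resource's own
 * storage when idle, or a staging (dma) buffer when a direct map would
 * stall.  Caller owns the staging buffer when *used_staging is set. */
static unsigned char *begin_upload(struct my_resource *res, bool *used_staging)
{
    if (res->gpu_is_busy) {
        *used_staging = true;
        return malloc(res->size);  /* real driver: allocate a dma buffer */
    }
    *used_staging = false;
    return res->storage;           /* safe to map in place */
}
```

The per-type differences (swizzling, row strides, binding-point quirks) would sit below this decision, not above it.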

> Furthermore, in the gallium-resource branch both r300g and nouveau
> drivers have different internal implementations for buffer and texture
> transfers (they actually look fundamentally different, not just
> duplicated code): why not just expose them directly as two separate,
> more efficient, interfaces, instead of going through a single fat
> interface, and then a further indirect branch in the driver?

I hope that the driver will unify these implementations internally
over time.  The reason I have a split internally is because I want to
avoid changing too much of the driver in a single hit and hence have
done my best to keep the original code intact.  It will be up to the
driver owners over time to decide how and when to merge those
implementations down to a single path.

It's just too much for me to do that for all these drivers on my own -
even the current code is more change than I wanted to make, especially
for the nouveau drivers where there was this internal layering of
textures on top of pipe_buffers.  That really was a layering violation
- it would have been cleaner if both textures and buffers were
directly layered on top of some underlying drm/winsys bo, eg the
nouveau_bo.

> In addition transfers could be handled by an auxiliary module that
> would ask the driver to directly map the texture, and would otherwise
> create a temporary itself and use a driver-provided buffer_copy or
> surface_copy manually.
> Note that many drivers implement transfers this way and this would
> avoid duplicate code in drivers.
> transfer_inline_write can also be done by copies from user buffers, or
> textures layered over user buffers.

An auxiliary module for this sounds like a good idea -- I don't see
that having a combined GPU-side abstraction makes it any harder than
it would be otherwise though.
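A sketch of how such a module might look -- try the driver's direct map first, and fall back to a temporary plus an eventual driver copy when the driver declines (tiled/swizzled storage, resource busy, ...). The hook and struct names are hypothetical:

```c
#include <stdbool.h>
#include <stdlib.h>

/* Invented sketch of the auxiliary transfer module idea.  The real
 * module would call driver hooks; here a function pointer stands in
 * for the "try a direct map" entry point, which may return NULL. */
struct aux_transfer {
    void *map;        /* what the state tracker writes through */
    void *temp;       /* non-NULL when we had to stage */
    unsigned size;
};

typedef void *(*direct_map_fn)(unsigned size);

static bool aux_transfer_begin(struct aux_transfer *t, unsigned size,
                               direct_map_fn try_direct_map)
{
    t->size = size;
    t->temp = NULL;
    t->map = try_direct_map ? try_direct_map(size) : NULL;
    if (!t->map) {
        t->temp = malloc(size);   /* staging path */
        t->map = t->temp;
    }
    return t->map != NULL;
}

static void aux_transfer_end(struct aux_transfer *t)
{
    if (t->temp) {
        /* real module: driver copy (temp -> resource), eg surface_copy */
        free(t->temp);
        t->temp = NULL;
    }
    t->map = NULL;
}
```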

Some of the confusion we have is because of the many different
entities we can label as buffers.  Your suggestion sounds more like
how I'd describe layering graphics functionality on top of the
underlying video memory manager's idea of a buffer.  In DX they have
different nomenclature for this - the graphics API level entities are
resources and the underlying VMM buffers are labelled as allocations.
In gallium, we're exposing the resource concept, but allocations are
driver-internal entities, usually called winsys_buffers, or some
similar name.

Keith
