Instead of combining create/map and unmap/destroy, it would be better
to make map more powerful.
If I understand correctly, you are proposing to add a subrectangle
parameter to map/unmap, so that you can collect multiple rectangle
updates in a single upload happening at transfer destruction.
By making transfers context-private and non-shareable, it becomes
possible for a driver to implement interleaved drawing and dma-uploads
within a single command buffer.
While we do this, how about removing transfer map and unmap functions
and making the create and destroy functions do the mapping?
Map/unmap make it all the way to the user's program, and there will
likely be cases where user code maps/unmaps a buffer multiple times
before drawing. The current transfer semantics can handle that with
zero-copy if the state tracker does the right thing with it.
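A rough sketch of what such accumulate-on-map, upload-on-destroy transfer semantics could look like. All names here are hypothetical illustrations, not the actual Gallium interface:

```c
/* Hypothetical sketch (not the real Gallium API): a transfer that
 * accumulates dirty subrectangles on each map and performs a single
 * upload when it is destroyed. */
#include <assert.h>
#include <string.h>

struct dirty_rect { int x, y, w, h; };

struct sketch_transfer {
    struct dirty_rect bounds;   /* union of all mapped subrectangles */
    int map_count;              /* maps may happen multiple times */
    int uploaded;               /* set once, at destroy time */
};

static void transfer_map_rect(struct sketch_transfer *t,
                              int x, int y, int w, int h)
{
    if (t->map_count++ == 0) {
        t->bounds.x = x; t->bounds.y = y; t->bounds.w = w; t->bounds.h = h;
    } else {
        /* grow the dirty region to cover the new subrectangle */
        int x1 = t->bounds.x < x ? t->bounds.x : x;
        int y1 = t->bounds.y < y ? t->bounds.y : y;
        int x2 = t->bounds.x + t->bounds.w > x + w ? t->bounds.x + t->bounds.w : x + w;
        int y2 = t->bounds.y + t->bounds.h > y + h ? t->bounds.y + t->bounds.h : y + h;
        t->bounds.x = x1; t->bounds.y = y1;
        t->bounds.w = x2 - x1; t->bounds.h = y2 - y1;
    }
}

static void transfer_destroy(struct sketch_transfer *t)
{
    t->uploaded = 1;  /* one DMA upload of t->bounds would happen here */
}
```

This keeps multiple map/unmap cycles zero-copy: only the final destroy pays for an upload, and it covers exactly the union of the touched rectangles.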
OpenGL does not allow you to
I tested this on Windows, using nVidia driver 195 on nv40, and it
seems we are all partially wrong.
SM3 does indeed allow semantics unrelated to hardware resources.
However, the semantic indices for any semantic type must be in the
range 0-15, or D3DX will report a compiler error during shader
At least for SM3.0, one can specify a vertex shader output semantic like
COLOR15 and have it work, as long as one also has a pixel shader with a
matching input semantic. Though I agree with you that we don't really want to
go this route and should have something more sensible.
Do you know of any
Personally I'm
going to take a break from this thread, spend a couple of days looking
at i965, etc, to see what can be done to improve things there, and
maybe come back with an alternate proposal.
Yes, I think that the most important step is to precisely determine
how both hardware (and
On Tue, Feb 2, 2010 at 7:38 PM, Olivier Galibert galib...@pobox.com wrote:
On Tue, Feb 02, 2010 at 07:09:12PM +0100, Luca Barbieri wrote:
Otherwise, we will need to recompile either of the shaders at link
time, so that foo is assigned the same slot in both shaders, which
is what we do now
An overview of the possible options.
Let's call vertex shader outputs v and fragment shader inputs f.
Let v -> f mean that v connects to f.
NUM_INTERPOLATORS is the number of available interpolators. It is
usually between 8 and 32.
1. Current Gallium
v -> f if and only if v == f
Any values of v and
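A minimal sketch of this matching rule, with illustrative types rather than real Gallium ones: a fragment input connects to the vertex output carrying the same semantic name and index.

```c
/* Sketch of the current Gallium rule: a fragment input is fed by the
 * vertex output with the identical semantic name and index. Types and
 * names are illustrative, not actual Gallium code. */
#include <assert.h>

struct semantic { int name; int index; };

/* Returns the position of the VS output feeding fs_input, or -1. */
static int link_input(const struct semantic *vs_out, int num_vs_out,
                      struct semantic fs_input)
{
    int i;
    for (i = 0; i < num_vs_out; i++) {
        if (vs_out[i].name == fs_input.name &&
            vs_out[i].index == fs_input.index)
            return i;  /* v == f: connect them */
    }
    return -1;  /* unmatched input reads undefined values */
}
```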
On Mon, Feb 1, 2010 at 3:38 PM, Keith Whitwell kei...@vmware.com wrote:
This seems like a very different idea of semantics. These aren't intended to
be hardware resources, and there is no concept of querying the driver to
figure out how many the hardware supports. Further, the indices for
I can't really use a routing table state to produce a cso, because the hw
routing table I generate depends on rasterizer state, e.g. I must not
put in back face colour (we have a 2 to 1 mapping here) if twoside
is disabled.
Also, I'm routing based on the scalar *components* the FP reads,
On Mon, Feb 1, 2010 at 5:31 PM, Keith Whitwell kei...@vmware.com wrote:
Christoph, Luca,
Twoside lighting is a bit of a special-case GL-ism. On a lot of hardware
we end up implementing it by passing both front and back colors to the
fragment shader and selecting between them using the
DX9 semantic indexes are apparently unlimited
According to http://msdn.microsoft.com/en-us/library/ee418355%28VS.85%29.aspx,
this is not the case.
Here is the relevant text:
These semantics have meaning when attached to vertex-shader
parameters. These semantics are supported in both Direct3D
I haven't tried to probe crazy high numbers, but within reason, my experience
is that the numbers are unconstrained.
No, according to that document, if you use TEXCOORD[n] then n < NUM_TEXCOORDS.
TEXCOORD[n] Texture coordinates float4
[...]
n is an optional integer between 0 and the
Where the semantic indicates some relationship to actual system resources, I
agree that the number is constrained by the number of those system resources.
In the case of the gallium GENERIC semantic, there is explicitly no system
resource that semantic is referring to and hence no limit on
A possible limitation of this scheme is that it doesn't readily map to
hardware that can configure its own interpolators to behave either as
GENERIC, COLOR (or some other semantic) dynamically.
However, it seems to me that at least ARB_fragment_program only
requires and supports 2 COLOR registers
On Sun, Jan 31, 2010 at 3:21 PM, José Fonseca jfons...@vmware.com wrote:
On Sat, 2010-01-30 at 04:06 -0800, Corbin Simpson wrote:
Handful of random things bugging me.
Below are some answers for the things I know enough to comment on.
1) Team Fortress 2 (and probably other Source games, haven't
from this webmail client is a total pain... Let me look
figure out an alternative.
Keith
From: Luca Barbieri [l...@luca-barbieri.com]
Sent: Thursday, January 28, 2010 10:18 PM
To: Brian Paul
Cc: Luca Barbieri; mesa3d-dev@lists.sourceforge.net
FWIW, I think DX10 required or at least encouraged semantic mapping
support in hardware. R6xx+ radeons support this, and r3xx-r5xx
hardware does to a lesser degree. You can use arbitrary,
driver-specific ids and the hardware will match up inputs and outputs based
on those ids.
Can you provide
Luca,
Let me make sure I understand the problem here.
Are you specifically concerned about the GENERIC[x] semantic
labels/indexes that are attached to VS outputs and FS inputs?
Yes.
This is as intended. The semantic indexes are used to match up
inputs/outputs logically but they should
As a concrete example, the current nv40 code does this during fragment
program translation.
case TGSI_SEMANTIC_GENERIC:
if (fdec->Semantic.Index <= 7) {
hw = NV40_FP_OP_INPUT_SRC_TC(fdec->Semantic.
I just read the extension, and it seems to me that it clearly
indicates that routing is *not* used by OpenGL.
In particular, varyings with the same name are not linked together,
and instead the builtin varyings must be used.
As far as I know, the builtin varyings are gl_TexCoord[i] where i
On Fri, Jan 29, 2010 at 8:49 PM, Keith Whitwell kei...@vmware.com wrote:
So the nv40 code is doing the wrong thing... :)
The rule currently is that the generic tags are just tags and are used only
to establish the mapping between fragment shader and vertex shader. Additionally
the vertex
On Fri, Jan 29, 2010 at 11:09 PM, Corbin Simpson
mostawesomed...@gmail.com wrote:
I would say that the routing table really needs to be handled by the
driver implicitly. When you're told to draw things, you do your shader
routing/linking before you draw.
If the routing table really does
Changes in v4:
- Implemented Brian Paul's style suggestions
Changes in v3:
- Use positive caps instead of negative ones
Changes in v2:
- Updated formatting
The state tracker will use the TGSI convention properties if the hardware
exposes the appropriate capability, and otherwise adjust WPOS
Exposing it was incorrect, as the GLSL part of the extension is
missing.
We still keep the ARB_fragment_coord_conventions field, so that the
ARBfp parser can know whether to accept or reject the keywords.
---
src/mesa/main/extensions.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
Changes in v3:
- Renumber caps to accommodate caps to be added to master in the meantime
- Document caps
- Add unsupported caps to *_screen.c too
Changes in v2:
- Split for properties patch
- Use positive caps instead of negative caps
This adds 4 caps to indicate support of each of the fragment coord
Changes in v3:
- Documented the new properties
- Added comments for property values
- Rebased to current master
Changes in v2:
- Caps are added in a separate, subsequent patch
This adds two TGSI fragment program properties that indicate the
fragment coord conventions.
The properties behave as
Changes in v4:
- Rebase and modify for changes in previous patches
Changes in v3:
- Use positive caps instead of negative caps
Changes in v2:
- Now takes the fragment convention directly from the fragment shader
Adds internal support for all fragment coord conventions to softpipe.
This patch
Patchset resent addressing comments, adding unsupported caps to
*_screen too, and documenting the added caps and properties.
Based on work by Dave Airlie.
Adds the 4 vertex formats for half float vertices.
This differs from Dave Airlie's patch in that it does not add padded
formats, but rather uses the convention already used for vertex
formats.
This allows the patches to be simplified and is consistent with the
existing
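For reference, the 16-bit layout these half-float vertex formats rely on can be sketched with a minimal, normal-range-only float-to-half conversion. This is an illustration of the bit layout, not the code in the patch, and it deliberately skips NaN/Inf/denormal handling:

```c
/* Minimal float -> IEEE half conversion sketch: 1 sign bit, 5 exponent
 * bits (bias 15), 10 mantissa bits. Small values flush to zero and
 * overflow goes to infinity; no NaN/denormal handling. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

static uint16_t float_to_half(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    uint32_t sign = (bits >> 16) & 0x8000;
    int32_t  exp  = (int32_t)((bits >> 23) & 0xff) - 127 + 15; /* rebias */
    uint32_t mant = (bits >> 13) & 0x3ff;   /* truncate 23 -> 10 bits */
    if (exp <= 0)
        return (uint16_t)sign;              /* flush small values to 0 */
    if (exp >= 31)
        return (uint16_t)(sign | 0x7c00);   /* overflow to infinity */
    return (uint16_t)(sign | ((uint32_t)exp << 10) | mant);
}
```

For example, 1.0f encodes as 0x3C00 and 2.0f as 0x4000 in this layout.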
Based on work by Dave Airlie.
Changes by me:
1. Fix assertion in st
2. Change to use unpadded Gallium formats
---
src/mesa/state_tracker/st_draw.c | 11 ++-
src/mesa/state_tracker/st_extensions.c |1 +
2 files changed, 11 insertions(+), 1 deletions(-)
diff --git
I'd like to have some more definitive review comments on this patch
(sending to Brian and Keith for this).
Right now GLSL is the *only* Gallium user that does not use sequential
indexes starting from 0 for vertex shader outputs and fragment shader
inputs.
This causes problems for some drivers
Didn't Keith have objections to this, on another thread? Luca re-sent the
patch but I don't see the remark being addressed.
Forwarded Message
From: Keith Whitwell kei...@vmware.com
To: Luca Barbieri l...@luca-barbieri.com
Cc: mesa3d-dev@lists.sourceforge.net
mesa3d-dev
On Wed, Jan 27, 2010 at 5:49 PM, Brian Paul bri...@vmware.com wrote:
Luca Barbieri wrote:
Changes in v3:
- Use positive caps instead of negative ones
Changes in v2:
- Updated formatting
The state tracker will use the TGSI convention properties if the hardware
exposes the appropriate
Hmmm, I'd really rather not special-case the extension code for this one
thing.
Isn't it possible to accomplish this by commenting out the following
line from extensions.c:
+ { OFF, GL_ARB_fragment_coord_conventions,
F(ARB_fragment_coord_conventions) },
Then swrast and Gallium can set
Changes in v3:
- Use positive caps instead of negative ones
Changes in v2:
- Updated formatting
The state tracker will use the TGSI convention properties if the hardware
exposes the appropriate capability, and otherwise adjust WPOS itself.
This will also fix some drivers that were previously
Changes in v2:
- Split from properties patch
- Use positive caps instead of negative caps
This adds 4 caps to indicate support of each of the fragment coord
conventions.
All drivers are also modified to add the appropriate caps (3 lines each).
Some drivers were incorrectly using
Changes in v3:
- Use positive caps instead of negative caps
Changes in v2:
- Now takes the fragment convention directly from the fragment shader
Adds internal support for all fragment coord conventions to softpipe.
This patch is not required for use with the current state trackers, but it
On Tue, Jan 26, 2010 at 12:11 PM, Keith Whitwell kei...@vmware.com wrote:
Luca,
I would have expected fragment coord conventions to be device state, not
a part of the shader.
It seems like these new flags are really peers (or replacements?) of the
gl_rasterization_rules flag in
First adds a new screen interface for is_vertex_format_supported and also
we seems to have some GPUs with a single R16 and some with R16X16 so allow
or this.
Are you sure this is necessary?
Vertex shader formats have an explicitly specified stride, and so
padding does not matter for them.
I
Signed-off-by: Brian Paul bri...@vmware.com
Please push this as well.
Thanks.
--
Throughout its 18-year history, RSA Conference consistently attracts the
world's best and brightest in the field, creating opportunities
How about this?
I'm using it locally for nv40 immediate mode vertex emission.
--- a/src/gallium/include/pipe/p_defines.h
+++ b/src/gallium/include/pipe/p_defines.h
@@ -236,6 +236,8 @@ enum pipe_transfer_usage {
#define PIPE_BUFFER_USAGE_VERTEX (1 << 5)
#define PIPE_BUFFER_USAGE_INDEX (1
On Thu, Jan 21, 2010 at 9:05 AM, Chia-I Wu olva...@gmail.com wrote:
On Thu, Jan 21, 2010 at 2:36 PM, Luca Barbieri l...@luca-barbieri.com wrote:
@@ -1132,7 +1132,7 @@ glXReleaseTexImageEXT(Display *dpy, GLXDrawable
drawable, int buffer
The GLX dispatch layer in src/mesa/drivers/x11/ should be removed. It
hasn't been used in years. I removed it from the stripped-down GLX in
src/gallium/state_trackers/glx/xlib/. That could be followed as an example.
How about doing the opposite, and using it in the DRI GLX libGL too?
I used
These are even more of a temporary measure until drivers are fixed.
What will the final set of cap bits look like?
I'm hesitant to commit code with the temporary NO_CAPS.
The plan would be to have all Gallium drivers internally support all
conventions and then remove all the caps introduced
On Thu, Jan 21, 2010 at 6:34 PM, Corbin Simpson
mostawesomed...@gmail.com wrote:
Maybe it's just me, since I actually wrote the docs, but does anybody
else read them?
From cso/rasterizer.html (viewable at e.g.
http://people.freedesktop.org/~csimpson/gallium-docs/cso/rasterizer.html
):
Changes:
- Updated formatting
The state tracker will use the TGSI convention properties if the hardware
exposes the appropriate capability, and otherwise adjust WPOS itself.
Thus, this patch will work on unmodified drivers, and not require any
changes.
However, this should only be a temporary
Changes:
- Now takes the fragment convention directly from the fragment shader
The pixel center condition of softpipe was previously, incorrectly,
INTEGER.
This patch supports the new properties and also fixes this bug.
---
src/gallium/drivers/softpipe/sp_screen.c |3 +++
Since nv40 pipe doesn't invert anything itself, it seems it also renders
Y_0_TOP (coordinates after vport transform) but FragCoord is the other
way? That's odd; it makes me think there should be a switch, but then ...
maybe they just built it the OpenGL way back then.
(do you have
On the other hand, part of the reason this is acceptable in r300 is
that we do SSA and DCE during shader compile, so a double inversion
doesn't hurt us as much. :3
nv30/nv40 mostly do 1:1 mapping to the hardware, and it would be great
to keep it that way as much as possible.
What if we add
BTW, the ximage backend may be useful to do hardware accelerated
rendering on a local DRM card (with a hardware Gallium driver), but
display the result over a remote X11 connection.
This could be useful for instance to do development on multiple
Gallium drivers for different cards while working
I think I'd prefer to avoid pushing new requirements into drivers -
intel, vmware-svga, etc, to implement something which is handled pretty
easily in the state tracker.
Yes, this will require all drivers to be changed.
However, this is only relatively hard for drivers that support DirectX
10
Fixes link/runtime errors for missing glXGetProcAddressARB.
---
src/mesa/drivers/x11/glxapi.c | 24
1 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/src/mesa/drivers/x11/glxapi.c b/src/mesa/drivers/x11/glxapi.c
index a17c2c3..8a43c78 100644
---
---
progs/fp/position-frc-integer.txt |7 +++
progs/fp/position-frc.txt |6 ++
progs/fp/position-upper-left.txt |7 +++
progs/fp/position.txt |2 ++
4 files changed, 22 insertions(+), 0 deletions(-)
create mode 100644
---
src/mesa/main/extensions.c|1 +
src/mesa/main/mtypes.h|3 +++
src/mesa/shader/arbprogparse.c|2 ++
src/mesa/shader/program_parse_extra.c | 12
src/mesa/shader/program_parser.h |2 ++
5 files changed, 20 insertions(+), 0
The pixel center condition of softpipe was previously, incorrectly,
INTEGER.
This patch supports the new properties and also fixes this bug.
---
src/gallium/auxiliary/draw/draw_vertex.h|7 +++-
src/gallium/drivers/softpipe/sp_screen.c|3 ++
These drivers were broken and this fixes them.
---
src/gallium/drivers/nv04/nv04_screen.c |3 +++
src/gallium/drivers/nv10/nv10_screen.c |3 +++
src/gallium/drivers/nv20/nv20_screen.c |3 +++
src/gallium/drivers/nv30/nv30_screen.c |3 +++
src/gallium/drivers/nv40/nv40_screen.c |
This adds two TGSI fragment program properties that indicate the
fragment coord conventions.
The properties behave as described in the extension spec for
GL_ARB_fragment_coord_conventions, but the default origin is
upper left instead of lower left as in OpenGL.
The syntax is:
PROPERTY
For nv, could this be exposed as a hardware capability which the
state-tracker could take advantage of, and if not present fall back to
the current shader modification in the state-tracker?
I did exactly this in the patchset I sent.
The driver can support any set of fragment coord conventions
Investigating a vertical flipping problem in Doom 3, I discovered that
fragment shader wpos handling is incorrect in the nv40 driver.
The issue is that nv40 provides a position register with OpenGL
semantics, so if TGSI_SEMANTIC_POSITION is directly wired to it (as
the nv40 driver incorrectly
My commit eea6a7639f767b1d30b6ef1f91a9c49e3f3b78f0 does a memcpy of height
lines, but that's wrong because the texture has a block layout and we
must thus use the number of vertical blocks instead of the height.
---
src/mesa/state_tracker/st_cb_texture.c |6 --
1 files changed, 4
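The fix described amounts to copying per block row rather than per pixel row. A sketch, assuming a DXT-style format with 4-high blocks (names and sizes illustrative, not the actual state tracker code):

```c
/* For block-compressed textures, the number of rows to memcpy is the
 * number of vertical blocks, not the pixel height. */
#include <assert.h>
#include <string.h>

static int block_rows(int height, int block_h)
{
    return (height + block_h - 1) / block_h;  /* round up */
}

/* Copy a compressed image whose source pitch may exceed the minimal pitch. */
static void copy_compressed(unsigned char *dst, int dst_pitch,
                            const unsigned char *src, int src_pitch,
                            int width_bytes, int height, int block_h)
{
    int rows = block_rows(height, block_h), y;
    for (y = 0; y < rows; y++)          /* one memcpy per block row */
        memcpy(dst + y * dst_pitch, src + y * src_pitch, width_bytes);
}
```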
I think this is not necessary and fixing the rasterizer setup in the driver
would be better than fixing the state tracker.
In r300g, we dynamically allocate rasterizer units based on vertex shader
outputs. If the vertex shader uses slots 1, 5, 20, 100, the driver maps
them to units
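A sketch of the compaction described here, with illustrative names rather than actual r300g code: the i-th semantic slot actually written by the vertex shader gets hardware unit i.

```c
/* Sketch: sparse vertex shader output slots (e.g. 1, 5, 20, 100) are
 * compacted to sequential rasterizer units 0..n-1. */
#include <assert.h>

/* used_slots[] lists the slots the vertex shader writes, in order.
 * Returns the hardware unit feeding `slot`, or -1 if it isn't written. */
static int unit_for_slot(const int *used_slots, int num_used, int slot)
{
    int i;
    for (i = 0; i < num_used; i++)
        if (used_slots[i] == slot)
            return i;       /* the i-th used slot lives in unit i */
    return -1;
}
```

The fragment shader side then looks up its input semantics through the same table, so both shaders agree on unit assignments regardless of how sparse the indices are.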
Cool!
I'll add nv40 support (and Gallium support if it's still missing).
nv50 also supports this in hardware, and maybe nv30 too.
So, basically, you allocate the rasterizer units according to the
vertex shader, and when the fragment shader comes up, you say "write
rasterizer output 4 to fragment input 100"?
The current nouveau drivers can't do this.
There are routing registers in hardware, but I think the nVidia
This requires the arb_half_float_vertex Mesa branch, plus some unreleased
gallium support work by Dave Airlie.
You may need to fix an assertion in st_pipe_vertex_format too.
---
src/gallium/drivers/nv40/nv40_vbo.c | 14 ++
1 files changed, 14 insertions(+), 0 deletions(-)
diff
Breakpoint 3, _mesa_ProgramStringARB (target=34820, format=34933,
len=70, string=0x85922ba) at shader/arbprogram.c:434
434        GET_CURRENT_CONTEXT(ctx);
$31 = 0x85922ba "!!ARBfp1.0\n\nOPTION
ARB_precision_hint_fastest;\n\n\n\nEND\n"
Not sure why Sauerbraten does this, but it
If you get this patch in, then you'll still have to fight with every
other state tracker that doesn't prettify their TGSI. It would be a
much better approach to attempt to RE the routing tables.
I don't think there are any users of the Gallium interface that need more
than 8 vertex
Either way, I anticipate having to build a function that, given a
pipe_vertex_element and pipe_vertex_buffer, and a list of acceptable
pipe_formats, internally magically modifies things inside so that all
resulting VBOs are safe for HW.
As I mentioned on IRC, it may be possible to avoid this,
What are the Gallium semantics for nested buffer maps and unmaps?
The current situation seems the following:
- Nouveau doesn't support nested mappings out of the box; it fully supports
them with a patch to libdrm I posted.
- r300 fully supports nested mapping
- VMware supports nested mapping, but only the
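Nested maps are commonly handled with a nesting counter, so that only the first map and the last unmap touch the real mapping. A sketch, with illustrative names rather than any driver's actual code:

```c
/* Sketch of nested-map bookkeeping: map_count tracks nesting depth;
 * the real map/unmap happens only at the outermost level. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct mapped_buffer {
    void *ptr;        /* CPU pointer while mapped */
    int map_count;    /* nesting depth */
    char storage[64]; /* stand-in for the real backing store */
};

static void *buffer_map(struct mapped_buffer *buf)
{
    if (buf->map_count++ == 0)
        buf->ptr = buf->storage;   /* real map happens only once */
    return buf->ptr;
}

static void buffer_unmap(struct mapped_buffer *buf)
{
    assert(buf->map_count > 0);    /* unbalanced unmap is a bug */
    if (--buf->map_count == 0)
        buf->ptr = NULL;           /* real unmap on the last unmap */
}
```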
of) GLSL work on NV30/NV40 and improves the
chances of complex programs working on other cards.
Signed-off-by: Luca Barbieri l...@luca-barbieri.com
---
src/mesa/shader/slang/slang_link.c | 62 ++-
1 files changed, 46 insertions(+), 16 deletions(-)
diff --git a/src
How do you make sure events are ordered correctly? Say a window is
resized and the client receives the ConfigureNotify event before us, and
it reacts drawing on the newly exposed areas: we aren't guaranteed to
have received our event yet, so it might end up rendered in the old
buffers.
OK,
Sauerbraten triggers this assert.
---
src/mesa/state_tracker/st_atom_shader.c |2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/src/mesa/state_tracker/st_atom_shader.c
b/src/mesa/state_tracker/st_atom_shader.c
index 176f3ea..fce533a 100644
---
---
src/gallium/drivers/nv40/nv40_fragprog.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/src/gallium/drivers/nv40/nv40_fragprog.c
b/src/gallium/drivers/nv40/nv40_fragprog.c
index 1237066..209d211 100644
--- a/src/gallium/drivers/nv40/nv40_fragprog.c
+++
What are the advantages of the new DRI2 event over the existing ConfigureNotify?
Couldn't that be used as a fallback on older servers?
Using SIGIO would be a problem in a library.
However, the kernel can be told to send an arbitrary signal (see
F_SETSIG) and glibc can allocate realtime signals with
__libc_allocate_rtsig() (pthread uses this for internal signals).
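A sketch of the F_SETSIG mechanism referred to here (Linux-specific; the helper name is made up): the kernel is asked to deliver a chosen realtime signal instead of SIGIO for async I/O readiness, so a library need not hijack the application's SIGIO handler.

```c
/* Linux-specific sketch: route async I/O notification for fd to an
 * arbitrary signal via F_SETSIG instead of the default SIGIO. */
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>

/* Ask the kernel to deliver `sig` (e.g. SIGRTMIN) for async I/O on fd. */
static int set_async_signal(int fd, int sig)
{
    if (fcntl(fd, F_SETOWN, getpid()) < 0)    /* who receives the signal */
        return -1;
    if (fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC) < 0)
        return -1;                            /* enable signal-driven I/O */
    return fcntl(fd, F_SETSIG, sig);          /* 0 restores plain SIGIO */
}
```

With a realtime signal, the delivered siginfo also carries the fd and event, which plain SIGIO does not.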
__glXSetCurrentContextNull currently does not set the GL context to null
in the direct rendering case.
This can result in a segfault trying to flush an invalid old context
in glXMakeCurrent.
This fixes a crash starting the Unigine demos (they still don't work due
to missing extensions though).
On Thu, Jan 14, 2010 at 9:27 AM, Chia-I Wu olva...@gmail.com wrote:
On Thu, Jan 14, 2010 at 4:08 PM, Luca Barbieri l...@luca-barbieri.com wrote:
validate_and_get_current_sequence_number(), and the results reused in
update_buffers().
This works too. It assumes fast texture creation
On Wed, Jan 13, 2010 at 3:55 AM, Chia-I Wu olva...@gmail.com wrote:
On Wed, Jan 13, 2010 at 2:52 AM, Luca Barbieri l...@luca-barbieri.com wrote:
Doesn't this make two DRI2GetBuffers protocol calls, in case of a resize?
I expect resizing happens rarely.
It likely happens every frame while
As far as end user benefits, currently there is the ability to
switch between the DRM Gallium driver and softpipe with an environment
variable (the DRI stack has a similar feature, but with swrast), and a
reduction of X server usage/roundtrips as it doesn't make any GLX
calls except for
Using this means however replacing (in actual use, not in the
repository, of course) all the GLX/DRI stack with a new Gallium-only
GLX implementation.
My suggestion for this is still
http://www.mail-archive.com/mesa3d-dev@lists.sourceforge.net/msg10541.html
I don't think egl_g3d can replace
Doesn't this make two DRI2GetBuffers protocol calls, in case of a resize?
A way to avoid this would be to have the first call update the
sequence number and store the buffer names (also destroying textures
whose names have changed), and the second call actually creating
textures for these names.
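The two-phase scheme sketched here could look roughly like this (all names hypothetical): phase one records buffer names and the sequence number from a single DRI2GetBuffers reply, invalidating textures whose names changed; phase two creates textures only for the invalidated names.

```c
/* Sketch of two-phase buffer validation; identifiers are illustrative. */
#include <assert.h>
#include <stdint.h>

#define MAX_ATTACHMENTS 4

struct drawable_state {
    uint32_t seq;                     /* bumped on each server reply */
    uint32_t names[MAX_ATTACHMENTS];  /* buffer object names */
    int      texture_valid[MAX_ATTACHMENTS];
};

/* Phase 1: store names, invalidating textures whose name changed. */
static void record_buffers(struct drawable_state *d, const uint32_t *names,
                           int count, uint32_t seq)
{
    int i;
    d->seq = seq;
    for (i = 0; i < count; i++) {
        if (d->names[i] != names[i]) {
            d->names[i] = names[i];
            d->texture_valid[i] = 0;  /* old texture must be destroyed */
        }
    }
}

/* Phase 2: (re)create textures for invalidated names. Returns how many. */
static int create_textures(struct drawable_state *d, int count)
{
    int i, created = 0;
    for (i = 0; i < count; i++) {
        if (!d->texture_valid[i]) {
            d->texture_valid[i] = 1;  /* texture_from_shared_handle here */
            created++;
        }
    }
    return created;
}
```

Unchanged names keep their textures, so a redundant GetBuffers reply costs nothing beyond the round trip.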
gcc -c -I../../include -I../../src/mesa -I../../src/gallium/include
-I../../src/gallium/auxiliary -Wall -Wmissing-prototypes
-Wdeclaration-after-statement -Wpointer-arith -g -fPIC -D_POSIX_SOURCE
-D_POSIX_C_SOURCE=199309L -D_SVID_SOURCE -D_BSD_SOURCE -D_GNU_SOURCE
-DPTHREADS -DUSE_XSHM
The feature levels in the attached table don't apply exactly to all hardware.
For instance:
1. Two sided stencil is supported from NV30/GeForce FX
2. Triangle fans and point sprites are supported in hardware on NV50
(according to Nouveau registers)
3. Alpha-to-coverage should be supported on R300
I thought MSVC supported C99, but that seems not to be the case.
However, it seems to have partial C99 support and, according to MSDN,
in the particular case of for-loop initializers C99 behaviour may be
selected with /Zc:forScope.
I can't find any reference on exactly which parts of C99 are
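The for-loop initializer case in question can be shown in a few lines; a strict C89 compiler rejects the declaration inside the for statement, while C99 (and C++) accepts it:

```c
/* C99-style loop-variable declaration; C89 requires `int i;` to be
 * declared at the top of the block instead. */
#include <assert.h>

static int sum_upto(int n)
{
    int total = 0;
    for (int i = 1; i <= n; i++)   /* the construct at issue */
        total += i;
    return total;
}
```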
Regardless of my personal preference as expressed, there are some minor issues
in the EGL part of the patch. One is that it lifts certain restrictions
required by EGL 1.4 by commenting out the code (e.g. in eglSwapBuffers). It
should check if EGL_MESA_gallium is supported and decide what to
Indeed both EGL 1.0 and EGL 1.4 contain that language in the specs, but
the Khronos manpage does not.
I think we can safely ignore this.
Applications are very unlikely to rely on eglSwapBuffers failing in
that case, and anyway the specification explicitly prohibits them from
doing so by saying
I left out depth/stencil attachment because I could not think of a good reason
for it. Do you have an example where it is better to ask the display server
for a depth/stencil buffer than to ask the pipe driver?
I'm not sure about this. I mostly added it just because the old driver
stack asks
This looks good. Do you mind re-creating this patch without the
dependency on the depth/stencil patch?
OK.
- num_surfaces = (gctx->base.ReadSurface == gctx->base.DrawSurface) ? 1 :
2;
- for (s = 0; s < num_surfaces; s++) {
+ for (s = 0; s < 2; s++) {
Why this change?
Ignore it.
struct pipe_texture *textures[NUM_NATIVE_ATTACHMENTS];
struct egl_g3d_surface *gsurf;
struct
The current glCompressedTexImage support in the state tracker assumes
that compressed textures have minimal pitch.
However, in some cases this is not true, such as for mipmaps of non-POT
compressed textures on nVidia hardware.
This patch adds a check and does a memcpy for each line instead of
While working on egl_g3d, I have slowly been defining an interface that
abstracts the native display. The interface suits the needs of EGL, and I
believe it suits the needs of GLX in its current form. Eventually, it might
evolve into struct pipe_display that can be shared. But that is not my current
`struct pipe_display` that can be shared. But that is not my current
Currently DRI2 always calls texture_from_shared_handle on validate.
This may cause problems if it is called multiple times on the same handle,
since multiple struct pipe_texture pointing to the same GEM buffer will be
created.
On some drivers, this results in pushbuffers being submitted with
It is implemented by adding a new depth/stencil native attachment.
While depth seems to work even without this, due to the Mesa state tracker
creating it itself, this is the way other DRI2 drivers work and might work
better in some cases.
If we pass to validate a non-existent attachment or
The current code revalidates based on whether width or height have changed.
This is unreliable (it may change two times, with another context having got
the buffers for the intermediate size in the meantime) and causes two DRI2
calls.
Instead, we add the notion of a drawable sequence number,
This is a reimplementation of the EGL_MESA_gallium extension over egl_g3d.
It is much simpler and cleaner than the older patch I posted to the list, which
should be disregarded.
GLX support is not implemented. It may be added later to an eventual GLX API
implementation over the egl_g3d core.
On Mon, Jan 4, 2010 at 2:23 PM, Keith Whitwell kei...@vmware.com wrote:
Luca,
Thanks for looking into this - this is a bit of a grey area for me.
One question - do we need a full floating point value to represent
max_anisotropy? What is the typical maximum value for max_anisotropy in
I meant, how about having egl_g3d provide the GLX API as well as the EGL
API?
Of course, it will require some code in libGL.so to dispatch glX* functions
to egl_g3d.
That code already exists in src/mesa/drivers/x11/glxapi.c: it would only
need to be passed a suitable dispatch table.
This way, you
Note that different 3d apis have different requirements - ideally we
should be able to choose some state which suits all of them.
In particular, d3d10/11 have a separate filter mode for aniso (which
applies to all of min/mag/mip filters at the same time).
d3d9 also has special aniso filter,
Does this really apply to magnification in addition to minification?
In other words, does R300 with anisotropic minfilter + bilinear magfilter
behave differently than anisotropic minfilter + anisotropic magfilter?