---
src/gallium/drivers/nv40/nv40_fragprog.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/src/gallium/drivers/nv40/nv40_fragprog.c
b/src/gallium/drivers/nv40/nv40_fragprog.c
index 1237066..209d211 100644
--- a/src/gallium/drivers/nv40/nv40_fragprog.c
+++ b/src/gallium/drivers/nv40/nv40_fragprog.c
> How do you make sure events are ordered correctly? Say a window is
> resized and the client receives the ConfigureNotify event before us, and
> it reacts drawing on the newly exposed areas: we aren't guaranteed to
> have received our event yet, so it might end up rendered in the old
> buffers.
OK
) GLSL work on NV30/NV40 and improves the
chances of complex programs working on other cards.
Signed-off-by: Luca Barbieri
---
src/mesa/shader/slang/slang_link.c | 62 ++-
1 files changed, 46 insertions(+), 16 deletions(-)
diff --git a/src/mesa/shader/slang
__glXSetCurrentContextNull currently does not set the GL context to null
in the direct rendering case.
This can result in a segfault trying to flush an invalid old context
in glXMakeCurrent.
This fixes a crash starting the Unigine demos (they still don't work due
to missing extensions though).
--
Using SIGIO would be a problem in a library.
However, the kernel can be told to send an arbitrary signal (see
F_SETSIG) and glibc can allocate realtime signals with
__libc_allocate_rtsig() (pthread uses this for internal signals).
---
How about just having GLX open another connection to the X server and
use that to receive ConfigureNotify?
Since we are using direct rendering, we must be on the same machine,
so it's just a unix/TCP loopback connection and should always work.
Xlib stores the display name in _XDisplay.display_name
What are the advantages of the new DRI2 event over the existing ConfigureNotify?
Couldn't that be used as a fallback on older servers?
--
On Thu, Jan 14, 2010 at 9:27 AM, Chia-I Wu wrote:
> On Thu, Jan 14, 2010 at 4:08 PM, Luca Barbieri wrote:
>>>> validate_and_get_current_sequence_number(), and the results reused in
>>>> update_buffers().
>>> This works too. It assumes fast texture creation
>> validate_and_get_current_sequence_number(), and the results reused in
>> update_buffers().
> This works too. It assumes fast texture creation (as they are always asked
> for), which is true with your DRI2 texture cache patch.
It's even better: they would only be asked for if the surface is updated.
On Wed, Jan 13, 2010 at 3:55 AM, Chia-I Wu wrote:
> On Wed, Jan 13, 2010 at 2:52 AM, Luca Barbieri wrote:
>> Doesn't this make two DRI2GetBuffers protocol calls, in case of a resize?
> I expect resizing happens rarely.
It likely happens every frame while the user is resizing.
Doesn't this make two DRI2GetBuffers protocol calls, in case of a resize?
A way to avoid this would be to have the first call update the
sequence number and store the buffer names (also destroying textures
whose names have changed), and the second call actually creating
textures for these names.
>> Using this means however replacing (in actual use, not in the
>> repository, of course) all the GLX/DRI stack with a new Gallium-only
>> GLX implementation.
> My suggestion for this is still
> http://www.mail-archive.com/mesa3d-dev@lists.sourceforge.net/msg10541.html
>
> I don't think egl_g3d ca
As far as "end user" benefits, currently there is the ability to
switch between the DRM Gallium driver and softpipe with an environment
variable (the DRI stack has a similar feature, but with swrast), and a
reduction of X server usage/roundtrips as it doesn't make any GLX
calls except for initialization.
>> - num_surfaces = (gctx->base.ReadSurface == gctx->base.DrawSurface) ? 1 :
>> 2;
>> - for (s = 0; s < num_surfaces; s++) {
>> + for (s = 0; s < 2; s++) {
> Why this change?
Ignore it.
>> struct pipe_texture *textures[NUM_NATIVE_ATTACHMENTS];
>> struct egl_g3d_surface *gsurf;
>
> This looks good. Do you mind re-create this patch without the
> dependency on the depth/stencil patch?
OK.
--
> I left out depth/stencil attachment because I could not think of a good reason
> for it. Do you have an example that it is better to ask the display server
> for
> a depth/stencil buffer than asking the pipe driver?
I'm not sure about this. I mostly added it just because the old driver
stack a
Indeed both EGL 1.0 and EGL 1.4 contain that language in the specs, but
the Khronos manpage does not.
I think we can safely ignore this.
Applications are very unlikely to rely on eglSwapBuffers failing in
that case, and anyway the specification explicitly prohibits them from
doing so by saying that
> Regardless of my personal preference as expressed, there are some minor issues
> in the EGL part of the patch. One is that, it lifts certain restrictions
> required by EGL 1.4 by commenting out the code (e.g. in eglSwapBuffers). It
> should check if EGL_MESA_gallium is supported and decide what
I thought MSVC supported C99, but that seems not to be the case.
However, it seems to have partial C99 support; according to MSDN, in the
particular case of for-loop initializer scoping, C99 behaviour can be
selected with /Zc:forScope.
I can't find any reference on exactly which parts of C99 are supported
> But for Mesa core and shared Gallium code, we target portability and
> that means pretty strict C90...
Well, then maybe -std=c99 should be removed from the global config and
put in driver Makefiles.
Anyway, it's not obvious that anyone is using a non-C99 compiler to
compile Mesa, and they could
The feature levels in the attached table don't apply exactly to all hardware.
For instance:
1. Two sided stencil is supported from NV30/GeForce FX
2. Triangle fans and point sprites are supported in hardware on NV50
(according to Nouveau registers)
3. Alpha-to-coverage should be supported on R300
> gcc -c -I../../include -I../../src/mesa -I../../src/gallium/include
> -I../../src/gallium/auxiliary -Wall -Wmissing-prototypes
> -Wdeclaration-after-statement -Wpointer-arith -g -fPIC -D_POSIX_SOURCE
> -D_POSIX_C_SOURCE=199309L -D_SVID_SOURCE -D_BSD_SOURCE -D_GNU_SOURCE
> -DPTHREADS -DUSE_XS
The current glCompressedTexImage support in the state tracker assumes
that compressed textures have minimal pitch.
However, in some cases this is not true, such as for mipmaps of non-POT
compressed textures on nVidia hardware.
This patch adds a check and does a memcpy for each line instead of the
>
> While working on egl_g3d, i slowly define an interface that abstracts the
> native display. The interface suits the need of EGL. And I believe it
> suits
> the need of GLX in its current form. Eventually, it might evolve into
> `struct pipe_display` that can be shared. But that is not my cu
No idea.
It could be tested either with Direct3D 9 on Windows (assuming the driver
does not reject that), by modifying the Mesa source or by using Gallium
directly.
The issue is whether to have the Gallium API support it by splitting
max_anisotropy into max_mag_anisotropy and max_min_anisotropy, or
Does this really apply to magnification in addition to minification?
In other words, does R300 with anisotropic minfilter + bilinear magfilter
behave differently than anisotropic minfilter + anisotropic magfilter?
--
> Note that different 3d apis have different requirements - ideally we
> should be able to choose some state which suits all of them.
> In particular, d3d10/11 have a separate filter mode for aniso (which
> applies to all of min/mag/mip filters at the same time).
> d3d9 also has special aniso filte
I meant, how about having egl_g3d provide the GLX API as well as the EGL
API?
Of course, it will require some code in libGL.so to dispatch glX* functions
to egl_g3d.
That code already exists in src/mesa/drivers/x11/glxapi.c: it would only
need to be passed a suitable dispatch table.
This way, you
On Mon, Jan 4, 2010 at 2:23 PM, Keith Whitwell wrote:
> Luca,
>
> Thanks for looking into this - this is a bit of a grey area for me.
>
> One question - do we need a full floating point value to represent
> max_anisotropy? What is the typical maximum value for max_anisotropy in
> hardware, and h
This is a reimplementation of the EGL_MESA_gallium extension over egl_g3d.
It is much simpler and cleaner than the older patch I posted to the list, which
should be disregarded.
GLX support is not implemented. It may be added later to an eventual GLX API
implementation over the egl_g3d core.
The
The current code revalidates based on whether width or height have changed.
This is unreliable (it may change two times, with another context having got
the buffers for the intermediate size in the meantime) and causes two DRI2
calls.
Instead, we add the notion of a drawable sequence number, which
Currently DRI2 always calls texture_from_shared_handle on validate.
This may cause problems if it is called multiple times on the same handle,
since multiple struct pipe_texture pointing to the same GEM buffer will be
created.
On some drivers, this results in pushbuffers being submitted with
It is implemented by adding a new depth/stencil native attachment.
While depth seems to work even without this, due to the Mesa state tracker
creating it itself, this is the way other DRI2 drivers work and might work
better in some cases.
If we pass to validate a non-existent attachment or
NATIV
I'm porting the patch to egl_g3d right now, so that there is something
concrete to talk about.
--
Great!
egl/xegl* are working for me on Nouveau NV40, after installing a
src/gallium/winsys/xlib version of libGL.so.
Haven't tested OpenVG.
egl_g3d currently requires the winsys/xlib version of libGL.so, which uses
the Mesa xlib driver, which implements GLX dispatch, even though it
currently only
Yes, that's a possible way to implement this.
However, it would artificially introduce the notion of a current context in
Gallium.
Also, if you mean implementing this as an egl_g3d API, it seems to me you
would have to implement the whole st_public_tmp.h for Gallium.
This would introduce more artif
I don't think we want to use the same Gallium context for multiple state
trackers and/or the Gallium code in the application, because they will break
each other's assumptions about bound constant objects and state.
There may be some synchronization issue. Currently Nouveau multiplexes all
the Gall
>
>
> This is great stuff, and it couldn't have been in better timing. I was
> just about to get the python gallium tests we have working with llvmpipe
> too, and your work will save me a bunch of time.
>
You can also use the framework to write tests in C/C++, which, using a bit
of framework over
Currently Gallium defines a specific filtering mode for anisotropic filtering.
This however prevents proper implementation of
GL_EXT_texture_filter_anisotropic.
The spec (written by nVidia) contains the following text:
<<<
A texture's maximum degree of anisotropy is specified independent
src/gallium/programs/demos/galliumtri.c
@@ -0,0 +1,250 @@
+/*
+ * Copyright (C) 2009 Luca Barbieri. All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+
Also some missing _src()s and cosmetic changes.
---
src/gallium/programs/galliumut/Makefile     |   5 +
.../programs/galliumut/gen_uureg_opcodes.sh |  29 +++
src/gallium/programs/galliumut/uureg.h      | 196
3 files changed, 71 insertions(+), 159 deletions(-)
Fixes progs/demos/fbotexture on Nouveau.
diff --git a/src/mesa/state_tracker/st_format.c
b/src/mesa/state_tracker/st_format.c
index 3e165c9..5f6f7d8 100644
--- a/src/mesa/state_tracker/st_format.c
+++ b/src/mesa/state_tracker/st_format.c
@@ -93,7 +93,7 @@ st_get_format_info(enum pipe_format forma
> The reason why I didn't implement the glX*Gallium*Mesa functions is
> because the glx* extensions are implemented by libGL, and a driver
> never has a chance to export those extensions. And libGL is used
> for non-gallium drivers.
I solved this by adding a DRI driver extension for Gallium.
N
> In my view, screen surfaces do not exist on X11. Even if we try to
> approximate all aspects of EGL_MESA_screen_surface on X11, we provide
> nothing but a convenient library that is capable of a limited subset of
> what native libraries could have done.
Not so sure about that. X11 allows setting
> Screen surfaces are by definition scan-out buffers of the adapters. In
> theory, the extension is used by opengl applications in an environment
> without display server, or used by the display server itself. And the
> extension cannot be supported by any X11 driver.
>
> The main reason, at leas
This patch adds two extensions, EGL_MESA_gallium and GLX_MESA_gallium,
which allow an application to directly access the Gallium3D API
bypassing OpenGL and using EGL or GLX for general setup.
The python state tracker already uses the GLX_MESA_gallium functions
(due to a commit by Jose Fonseca), bu
This patch adds MESA_screen_surface support to the egl_glx EGL->GLX
wrapper and egl_xlib Gallium state tracker.
With this patch applied, you should be able to just run eglgears from
an X11 terminal and get a maximized hardware accelerated gears window.
Screen surfaces are window surfaces where th
Add GALLIUM_DUMP_VS to dump the vertex shader to the console like
GALLIUM_DUMP_FS in softpipe.
diff --git a/src/gallium/auxiliary/draw/draw_private.h
b/src/gallium/auxiliary/draw/draw_private.h
index 3850ced..3f9eca8 100644
--- a/src/gallium/auxiliary/draw/draw_private.h
+++ b/src/gallium/auxiliar
Softpipe currently does not saturate colors after add/subtract
blending. This violates the OpenGL specification, leading to incorrect
rendering in some cases.
This patch fixes this in the obvious way.
Note that saturation should also happen after the fragment shader, but
I haven't checked whether s