Re: [Mesa3d-dev] RFC: GSOC Proposal: R300/Gallium GLSL Compiler

2010-04-03 Thread Zack Rusin
On Saturday 03 April 2010 19:07:59 Luca Barbieri wrote:
> > Gallium. Obviously it should be a code-generator that can handle
> > control-flow (to be honest I'm really not sure why you want to restrict it
> > to something without control-flow in the first place).
> 
> The no-control-flow was just for the first step, with a second step
> supporting everything.

k, that's good.
 
> > Having said that I'm not sure whether this is something that's a good
> > GSOC project. It's a fairly difficult piece of code to write. One that to
> > do right will depend on adding some features to TGSI (a good source of
> > inspiration for those would be AMD's CAL and NVIDIA's PTX
> > http://developer.amd.com/gpu_assets/ATI_Intermediate_Language_(IL)_Specification_v2b.pdf
> > http://www.nvidia.com/content/CUDA-ptx_isa_1.4.pdf )
> 
> This would be required to handle arbitrary LLVM code (e.g. for
> clang/OpenCL use), but since GLSL shader code starts as TGSI, it
> should be possible to convert it back without extending TGSI.

Which of course means you have to have that reduced scope and well defined 
constraints that I mentioned. Otherwise it's gonna be impossible to judge the 
success of the project.
 
> I'd say, as an initial step, restricting to code produced by
> TGSI->LLVM (AoS) that can be expressed with no intrinsics, having a
> single basic block, with no optimization passes having been run on it.
> All 4 restrictions (from TGSI->LLVM, no intrinsics, single BB and no
> optimizations) can then be lifted in successive iterations.

Yes, that's all fine; just like the above it would simply have to be defined, 
e.g. no texture sampling (since for that stuff we'd obviously want our 
intrinsics) and whatever other features go with it.

> The problem I see is that since OpenCL will be hopefully done at some
> point, then as you say TGSI->LLVM will also be done, and that will
> probably make any other optimization work irrelevant.

OpenCL has no need for TGSI->LLVM translation. It deals only with LLVM IR 
internally.

> So basically the r300 optimization work looks doomed from the
> beginning to be eventually obsoleted.

Well, if that were the attitude we'd never get anything done: in 10 years the 
work we're doing right now will be obsoleted, in 50 Gallium in general will 
probably be obsoleted, and in 100 we'll be dead (except me, I decided that I'll 
live forever and so far so good), so what's the point?

Writing something simple well is still a lot better than writing something 
hard badly.

The point of GSOC is not to nail your first Nobel prize, it's to contribute to 
a Free Software project and ideally keep you interested so that you keep 
contributing. Picking insanely hard projects is counterproductive even if 
technically they do make sense. Just as for a GSOC project on the Linux kernel 
you'd suggest someone improve Ext4 rather than write a whole new file system, 
even if long term you'll want something better than Ext4 anyway. Or at least that's 
what I'd suggest, but that's probably because, in general, I'm just not into 
sadism.

z



Re: [Mesa3d-dev] RFC: GSOC Proposal: R300/Gallium GLSL Compiler

2010-04-03 Thread Zack Rusin
On Saturday 03 April 2010 18:58:36 Marek Olšák wrote:
> On Sun, Apr 4, 2010 at 12:10 AM, Zack Rusin <za...@vmware.com> wrote: I thought the initial
>  proposal was likely a lot more feasible for a GSOC (of course there one
>  has to point out that Mesa's GLSL compiler already does unroll loops and
>  in general simplifies control-flow so the points #1 and #2 are largely
>  no-ops, but surely there's enough work on Gallium Radeon's drivers left to
>  keep Tom busy). Otherwise having a well-defined and reduced scope with
>  clear deliverables would be rather necessary for LLVM->TGSI code because
>  that is not something that you could get rock solid over a summer.
> 
> It doesn't seem to simplify branches or unroll loops that much, if at all.

It does for cases where the arguments are known.


>  It fails even for the simplest cases like this one:
> 
> if (gl_Vertex.x < 30.0)

which is unknown at compilation time.

z



Re: [Mesa3d-dev] RFC: GSOC Proposal: R300/Gallium GLSL Compiler

2010-04-03 Thread Zack Rusin
On Saturday 03 April 2010 17:17:46 Luca Barbieri wrote:
> >> (2) Write a LLVM->TGSI backend, restricted to programs without any
> >> control flow
> >
> > I think (2) is probably the closest to what I am proposing, and it is
> > something I can take a look at.

> By the way, it would be interesting to know what people who are
> working on related things think about this (CCed them).
> In particular, Zack Rusin has worked extensively with LLVM and I think
> a prototype OpenCL implementation.

From the compute support point of view, LLVM->TGSI translation isn't even about 
optimizations, it's about "working". Writing a full C/C++ compiler that 
generates TGSI is a lot less realistic than reusing Clang and writing a TGSI 
code-generator for it. 
So the LLVM code-generator for TGSI would be a very high impact project for 
Gallium. Obviously it should be a code-generator that can handle control-flow 
(to be honest I'm really not sure why you want to restrict it to something 
without control-flow in the first place).

Having said that I'm not sure whether this is something that's a good GSOC 
project. It's a fairly difficult piece of code to write. One that to do right 
will depend on adding some features to TGSI (a good source of inspiration for 
those would be AMD's CAL and NVIDIA's PTX 
http://developer.amd.com/gpu_assets/ATI_Intermediate_Language_(IL)_Specification_v2b.pdf
http://www.nvidia.com/content/CUDA-ptx_isa_1.4.pdf )

I thought the initial proposal was likely a lot more feasible for a GSOC (of 
course there one has to point out that Mesa's GLSL compiler already does 
unroll loops and in general simplifies control-flow so the points #1 and #2 are 
largely no-ops, but surely there's enough work on Gallium Radeon's drivers 
left to keep Tom busy). Otherwise having a well-defined and reduced scope with 
clear deliverables would be rather necessary for LLVM->TGSI code because that 
is not something that you could get rock solid over a summer.

z



Re: [Mesa3d-dev] GSOC: Gallium R300 driver

2010-03-30 Thread Zack Rusin
On Tuesday 30 March 2010 12:52:54 Luca Barbieri wrote:
> > There are several deep challenges in making TGSI <-> LLVM IR translation
> > lossless -- I'm sure we'll get around to overcome them -- but I don't
> > think that using LLVM is a requirement for this module. Having a shared
> > IR for simple TGSI optimization module would go a long way by itself.
> 
> What are these challenges?

Besides what Brian just pointed out, it's also worth noting that the one 
problem that everyone dreads is creating an LLVM code-generator for TGSI. 
Everyone seems to agree that it's a darn complicated task with a somewhat 
undefined scope. It's obviously something that will be mandatory for OpenCL, 
but I doubt anyone will touch it before it's an absolute must.



Re: [Mesa3d-dev] Commit messages broken??

2010-03-11 Thread Zack Rusin
On Thursday 11 March 2010 02:58:49 Tollef Fog Heen wrote:
> ]] Zack Rusin 
> | BTW, replacing a mail client on the server with something that's not
> | compatible is not very social.
> 
> Rather than assuming malice, you may assume that I was trying to fix
> something when I made that change.  

I was assuming that whoever did it was trying to do something, but the 
reasoning behind the change doesn't change the result at all - we were not 
informed of it and the commit messages broke. So as far as we are concerned 
there really wouldn't be any difference between someone just deleting 
/usr/bin/mail and you trying to fix something by replacing mail with something 
else. The bottom line is that there's quite a few projects hosted on fdo, with 
a lot of people depending on that setup, and making changes to it without 
communicating them very clearly is bound to break something.
I don't want to make this into a big deal, because it isn't one, but a short email 
or even a blog post just saying "a new /usr/bin/mail is coming in, make sure it 
doesn't break your project" would have avoided the whole problem.

z



Re: [Mesa3d-dev] Commit messages broken??

2010-03-10 Thread Zack Rusin
On Wednesday 10 March 2010 15:18:03 Zack Rusin wrote:
> On Wednesday 10 March 2010 14:59:42 Zack Rusin wrote:
> > Maybe /usr/bin/mail is broken, I'll double check it.
> 
> Yea, that's it. Someone installed a new mail daemon on the server. We're
>  using "-a" to specify the Content-Type header in mails, but the heirloom
>  mailx that has been installed uses the "-a" option to specify attachments,
>  and since "Content-Type: text/plain;" is not a valid filename it
>  exits with an error. I'll try to fix it right now.

k, it should be working now. I switched it to use sendmail directly so that 
future changes to /usr/bin/mail don't affect it.
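For what it's worth, the approach is basically the following; this is just a rough 
C sketch of the idea (the real hook is a script and the names here are made up), 
not the actual code:

#include <stdio.h>

/* Pipe the commit notification straight to sendmail with an explicit
 * Content-Type header, so nothing depends on which /usr/bin/mail
 * implementation happens to be installed. */
static int send_commit_mail(const char *to, const char *subject, const char *body)
{
   FILE *p = popen("/usr/sbin/sendmail -t", "w");
   if (!p)
      return -1;
   fprintf(p, "To: %s\n", to);
   fprintf(p, "Subject: %s\n", subject);
   fprintf(p, "Content-Type: text/plain\n\n");
   fprintf(p, "%s\n", body);
   return pclose(p);
}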

z



Re: [Mesa3d-dev] Commit messages broken??

2010-03-10 Thread Zack Rusin
On Wednesday 10 March 2010 14:59:42 Zack Rusin wrote:
> Maybe /usr/bin/mail is broken, I'll double check it.

Yea, that's it. Someone installed a new mail daemon on the server. We're using 
"-a" to specify the Content-Type header in mails, but the heirloom mailx that 
has been installed uses the "-a" option to specify attachments, and since 
"Content-Type: text/plain;" is not a valid filename it exits with an 
error. I'll try to fix it right now.
BTW, replacing a mail client on the server with something that's not 
compatible is not very social.

z



Re: [Mesa3d-dev] Commit messages broken??

2010-03-10 Thread Zack Rusin
On Wednesday 10 March 2010 13:59:40 Brian Paul wrote:
> Brian Paul wrote:
> > Keith Whitwell wrote:
> >> I haven't seen any of these for a while now...  Anyone have any ideas?
> >
> > I haven't seen them either.  I don't know what's going on, but Tollef
> > Fog Heen (an FD.org admin) created new mesa lists on fd.o yesterday
> > (though Michel and I haven't move the subscriber lists yet).  Perhaps
> > something broke from that?
> >
> > Tollef?
> 
> It looks like the list itself is OK but the git trigger to send out
> the commit messages isn't working.
> 
> Do any git experts know what might be wrong?

I wrote that script and looked at it yesterday and I don't see what's wrong. 
The script uses /usr/bin/mail to send those mails. Has something changed on 
the server? Maybe /usr/bin/mail is broken, I'll double check it.



Re: [Mesa3d-dev] [PATCH] st/vega: Fix OpenVG demo segfaults.

2010-03-05 Thread Zack Rusin
On Wednesday 03 March 2010 10:51:13 o...@lunarg.com wrote:
> From: Chia-I Wu 
> 
> When the paint is color, paint_bind_samplers binds a dummy sampler
> without a texture.  It causes demos requiring a sampler (those use a
> mask or an image) to crash.
> ---
>  src/gallium/state_trackers/vega/paint.c |3 ---
>  1 files changed, 0 insertions(+), 3 deletions(-)
> 
> diff --git a/src/gallium/state_trackers/vega/paint.c
>  b/src/gallium/state_trackers/vega/paint.c index caf0c14..cdb87d3 100644
> --- a/src/gallium/state_trackers/vega/paint.c
> +++ b/src/gallium/state_trackers/vega/paint.c
> @@ -639,9 +639,6 @@ VGint paint_bind_samplers(struct vg_paint *paint,
>  struct pipe_sampler_state **sa }
>break;
> default:
> -  samplers[0] = &paint->pattern.sampler; /* dummy */
> -  textures[0] = 0;
> -  return 0;
>break;
> }
> return 0;
> 

Yea, that's fine. The semantics for which of those need to be set and which 
don't seem to change from release to release, so whatever works currently is ok. I 
haven't had a working vg build since the egl rework ("Error: couldn't get an 
EGL visual config", which I just didn't have time to even look at), so if it 
works for you, feel free to commit.

z



Re: [Mesa3d-dev] [PATCH] vega st: fix missing texture in mask when setup samplers

2010-02-03 Thread Zack Rusin
On Wednesday 03 February 2010 09:19:43 Igor Oliveira wrote:
> A new version and a little improvement.
> Makes more sense adding the texture in paint bind samplers than mask
> bind samplers.
> 
> Igor
> 
> On Wed, Feb 3, 2010 at 6:21 AM, Igor Oliveira
> 
>  wrote:
> > This patch fix segfaults in mask.cpp and mask4.cpp binding a missing
> > texture in mask bind samplers.

What's the stack trace again? I don't have a working vg setup right now. I'm a 
bit confused why it's crashing when we never sample from those units.

The issue is that while right now the alpha_mask texture is unconditionally 
there, that's actually a bug: there's no guarantee that the alpha mask will 
always be present (obviously for EGL configs that didn't ask for it, it shouldn't 
be there). Meaning that for stuff like filter and lookup it's perfectly ok for 
it to be null.

I think ideally what we'd do is fix all those static sampler/texture 
assignments. E.g. right now:
0 - paint sampler/texture for gradient/pattern
1 - mask sampler/texture
2 - blend sampler/texture
3 - image sampler/texture

meaning that if there's no paint, mask or blend and we only draw an image then 
we have:
0 - dummy
1 - dummy
2 - dummy
3 - image

We had to do it this way when we had the hand-written text assembly, to have 
some semblance of sanity, but now that we use ureg we could fix it properly, e.g. 
in asm_fill.c in mask(...), instead of doing sampler[1] we'd simply do *sampler 
and change combine_shaders to pass the correct sampler to the 
function. 
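Roughly this kind of thing, purely as an illustration (none of these names are 
the actual vega code):

enum stage { STAGE_PAINT, STAGE_MASK, STAGE_BLEND, STAGE_IMAGE, STAGE_COUNT };

/* Assign sampler slots in the order the stages are actually used, instead
 * of reserving fixed slots 0..3 and padding the unused ones with dummies. */
static int assign_sampler_slots(const int used[STAGE_COUNT],
                                int slot_for_stage[STAGE_COUNT])
{
   int next = 0, s;

   for (s = 0; s < STAGE_COUNT; ++s)
      slot_for_stage[s] = used[s] ? next++ : -1;

   return next;   /* number of samplers that actually need to be bound */
}

With something like that, drawing just an image would bind a single sampler in 
slot 0 instead of three dummies plus the image in slot 3.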

If you need to fix the crash just to move forward on something I can commit it 
but please just add a /* FIXME: the texture might not be there */ and 
debug_assert(ctx->draw_buffer->alpha_mask);.

z



Re: [Mesa3d-dev] [PATCH] switch shaders assembly TGSI code by tgsi_ureg

2010-02-01 Thread Zack Rusin
On Monday 01 February 2010 21:28:53 Igor Oliveira wrote:
> Hi again,
> 
> Third version: removing debug messages


Nicely done, I just committed the patches. Thanks!

(a small nitpick, in future try to be a bit more descriptive in your commit 
log :) ).

z



Re: [Mesa3d-dev] [PATCH] switch shaders assembly TGSI code by tgsi_ureg

2010-02-01 Thread Zack Rusin
On Monday 01 February 2010 10:19:53 Igor Oliveira wrote:
> Hello,
> 
> These patches switch all the shader code implemented using TGSI assembly
> to tgsi_ureg.

Hey Igor,

very nice work!

Since I don't have the conformance framework anymore, did you test your 
changes with the examples that we have? Before committing it'd be nice to know 
that we won't get too many obvious regressions.

z



Re: [Mesa3d-dev] Vega advanced blending

2010-01-25 Thread Zack Rusin
On Monday 25 January 2010 16:38:46 Igor Oliveira wrote:
> Hello,
> 
> This is just a report about the work i am doing.
> 
> 2 days ago I began to study the vega state tracker to understand it
> better and help a bit; I have already fixed some bugs there.
> So right now I am implementing the advanced blending extension[1].
> This extension includes many blending methods supported in authoring
> tools & file formats like SVG 1.2. I have implemented 2 blend operations
> right now; in the next days I am finishing all the operation methods.

Sounds great Igor. Feel free to add some tests to progs/openvg/trivial or such 
that show the extended blending. When I was working on this code I used to 
use the Khronos conformance framework but I don't have access to it anymore 
and it'd be a good idea for us to build up more of a testing infrastructure 
for this stuff.

> diff --git a/include/VG/openvg.h b/include/VG/openvg.h
> index 60167e4..2a6c510 100644
> --- a/include/VG/openvg.h
> +++ b/include/VG/openvg.h
> @@ -444,6 +444,8 @@ typedef enum {
>VG_BLEND_DARKEN = 0x2007,
>VG_BLEND_LIGHTEN= 0x2008,
>VG_BLEND_ADDITIVE   = 0x2009,
> +  VG_BLEND_SUBTRACT_KHR   = 0x2017,
> +  VG_BLEND_INVERT_KHR = 0x2018,
> 
>VG_BLEND_MODE_FORCE_SIZE= VG_MAX_ENUM
>  } VGBlendMode;

This change isn't right, we'd like to keep the openvg.h header as it is 
provided by Khronos (exactly the same way we do for GL). We just need to add 
the new vgext.h header from Khronos to account for all those new extensions. 
If you'd like I can do that soon.

> +static const char blend_subtract_khr_asm[] =
> +   "TEX TEMP[1], IN[0], SAMP[2], 2D\n"
> +   "SUB TEMP[1], TEMP[1], TEMP[0]\n"
> +   "STR TEMP[2]\n"
> +   "NOT TEMP[2]\n"
> +   "MAX TEMP[1], TEMP[1], TEMP[2]\n"
> +   "MUL TEMP[2], TEMP[0]., TEMP[1].\n"
> +   "ADD TEMP[3], TEMP[0]., TEMP[1].\n"
> +   "SUB TEMP[1].w, TEMP[3], TEMP[2]\n"
> +   "MOV %s, TEMP[0]\n";
> +
> +static const char blend_invert_khr_asm[] =
> +   "TEX TEMP[1], IN[0], SAMP[2], 2D\n"
> +   "SUB TEMP[2], CONST[0]., TEMP[0].\n"
> +   "SUB TEMP[3], CONST[0]., TEMP[1]\n"
> +   "MUL TEMP[2].xyz, TEMP[1], TEMP[2].\n"
> +   "MUL TEMP[3].xyz, TEMP[0]., TEMP[3]\n"
> +   "ADD TEMP[0], TEMP[2], TEMP[3]\n"
> +   "MUL TEMP[2], TEMP[0]., TEMP[1].\n"
> +   "ADD TEMP[3], TEMP[0]., TEMP[1].\n"
> +   "SUB TEMP[1].w, TEMP[3], TEMP[2]\n"
> +   "MOV %s, TEMP[0]\n";

Looks good.
Ideally we'd switch all of this hand assembly to tgsi_ureg code. It'd be a lot 
more flexible and more readable than manual assembling of semi-completed 
assembly fragments.

> diff --git a/src/gallium/state_trackers/vega/shaders_cache.h
> b/src/gallium/state_trackers/vega/shaders_cache.h
> index feca58b..5bbb724 100644
> --- a/src/gallium/state_trackers/vega/shaders_cache.h
> +++ b/src/gallium/state_trackers/vega/shaders_cache.h
> @@ -48,11 +48,13 @@ enum VegaShaderType {
> VEGA_BLEND_SCREEN_SHADER   = 1 <<  9,
> VEGA_BLEND_DARKEN_SHADER   = 1 << 10,
> VEGA_BLEND_LIGHTEN_SHADER  = 1 << 11,
> +   VEGA_BLEND_SUBTRACT_KHR_SHADER = 1 << 12,
> +   VEGA_BLEND_INVERT_KHR_SHADER   = 1 << 13,
> 
> -   VEGA_PREMULTIPLY_SHADER= 1 << 12,
> -   VEGA_UNPREMULTIPLY_SHADER  = 1 << 13,
> +   VEGA_PREMULTIPLY_SHADER= 1 << 14,
> +   VEGA_UNPREMULTIPLY_SHADER  = 1 << 15,
> 
> -   VEGA_BW_SHADER = 1 << 14
> +   VEGA_BW_SHADER = 1 << 16
>  };

We'll probably also need to do something about the caching. With 20+ extra 
blend modes we'll run out of the bits in our 32bit key that we're using to 
look up shaders in our cache right now.
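One possible direction, purely as a sketch (this is not what shaders_cache.c 
does today): give the blend mode its own small field of a wider key instead of 
one bit per mode, e.g.

#include <stdint.h>

#define BLEND_MODE_BITS   6    /* room for 64 blend modes */
#define BLEND_MODE_SHIFT  32

static uint64_t shader_key(uint32_t feature_bits, unsigned blend_mode)
{
   return (uint64_t)feature_bits |
          ((uint64_t)(blend_mode & ((1u << BLEND_MODE_BITS) - 1))
              << BLEND_MODE_SHIFT);
}

Of course that also means teaching the cache hash about 64bit keys, so it's not 
free either.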

z



Re: [Mesa3d-dev] Mesa (mesa_7_7_branch): st/xorg: fix a rare video crash

2010-01-11 Thread Zack Rusin
On Monday 11 January 2010 18:18:30 Michel Dänzer wrote:
> On Mon, 2010-01-11 at 18:05 -0500, Zack Rusin wrote:
> > On Monday 11 January 2010 18:04:01 Michel Dänzer wrote:
> > > A better fix should be to make sure the exaMoveInPixmap() call is
> > > before the exaGetPixmapDriverPrivate() call. The latter should never
> > > return NULL then (unless we run out of resources maybe - might be worth
> > > keeping the checks for that).
> >
> > As in what's attached?
> 
> Exactly.
> 
> > The patch here is defensive which I think makes sense in any case, but if
> > we can avoid the case of dst being null that's gonna be even better.
> 
> Right.

Thanks for the review Michel!

z



Re: [Mesa3d-dev] Mesa (mesa_7_7_branch): st/xorg: fix a rare video crash

2010-01-11 Thread Zack Rusin
On Monday 11 January 2010 18:04:01 Michel Dänzer wrote:
> A better fix should be to make sure the exaMoveInPixmap() call is before
> the exaGetPixmapDriverPrivate() call. The latter should never return
> NULL then (unless we run out of resources maybe - might be worth keeping
> the checks for that).

As in what's attached? 
The patch here is defensive which I think makes sense in any case, but if we 
can avoid the case of dst being null that's gonna be even better.

z
diff --git a/src/gallium/state_trackers/xorg/xorg_xv.c b/src/gallium/state_trackers/xorg/xorg_xv.c
index 666ff10..a437370 100644
--- a/src/gallium/state_trackers/xorg/xorg_xv.c
+++ b/src/gallium/state_trackers/xorg/xorg_xv.c
@@ -485,9 +485,12 @@ display_video(ScrnInfoPtr pScrn, struct xorg_xv_port_priv *pPriv, int id,
int dxo, dyo;
Bool hdtv;
int x, y, w, h;
-   struct exa_pixmap_priv *dst = exaGetPixmapDriverPrivate(pPixmap);
+   struct exa_pixmap_priv *dst;
struct pipe_surface *dst_surf = NULL;
 
+   exaMoveInPixmap(pPixmap);
+   dst = exaGetPixmapDriverPrivate(pPixmap);
+
if (dst && !dst->tex) {
 	xorg_exa_set_shared_usage(pPixmap);
 	pScrn->pScreen->ModifyPixmapHeader(pPixmap, 0, 0, 0, 0, 0, NULL);
@@ -516,7 +519,6 @@ display_video(ScrnInfoPtr pScrn, struct xorg_xv_port_priv *pPriv, int id,
bind_samplers(pPriv);
setup_fs_video_constants(pPriv->r, hdtv);
 
-   exaMoveInPixmap(pPixmap);
DamageDamageRegion(&pPixmap->drawable, dstRegion);
 
while (nbox--) {


Re: [Mesa3d-dev] Gallium feature levels

2010-01-11 Thread Zack Rusin
On Monday 11 January 2010 16:15:38 Jakob Bornecrantz wrote:
> On 11 jan 2010, at 17.49, Zack Rusin wrote:
> > I think the other stuff is acceptable. Take a look at the docs and
> > let me know
> > what you think.
> 
> Hmm I don't think you should remove the CAPs but instead just say if
> level X then CAPs Y,Z,W,Q are assumed to be present. This way the
> hardware that fall between the cracks can expose one level plus the
> extra CAPs it can do.

Would that be useful for anything? Or do you mean feature level + exceptions? 
Otherwise what's the point of feature levels if nothing supports them fully. 

> Another thing: level 3 and below hardware can not do ARB_npot but they
> can do NV_texture_rect; the only hardware we have drivers for that this
> matters for is nv30 (I think) and r300.

Yes, that's what the "unnormalized coords" part is about :) 

z



Re: [Mesa3d-dev] Gallium feature levels

2010-01-11 Thread Zack Rusin
On Monday 11 January 2010 15:22:43 Luca Barbieri wrote:
> The feature levels in the attached table don't apply exactly to all
>  hardware.
> 
> For instance:
> 1. Two sided stencil is supported from NV30/GeForce FX
> 2. Triangle fans and point sprites are supported in hardware on NV50
> (according to Nouveau registers)
> 3. Alpha-to-coverage should be supported on R300 and NV30
> 4. Non-POT mipmapped textures and non-POT cubemaps are probably
> supported earlier than in the table in actual cards

I'm guessing each with its quirks given that Microsoft just doesn't require 
them for each of the respective levels. It's possible that GL requirements 
forced those upon IHV in certain cases (Roland mentioned a few that probably 
apply) but if that's the case we could certainly update the entire table.


> Shaders also have card specific extensions such as vertex shader
> texturing on NV40 and added instruction predication support (see the
> GL_NV_* extensions).
> 
> Thus the attached patch as-is will disable functionality that the
> hardware actually supports (not having two sided stencil in particular
> would hurt).
> 
> Also, the feature levels seem set mostly wrong:

They're just stubs, you'd have to fill them in for your drivers. As I mentioned 
in the response to Roland I didn't feel like looking up each hardware spec and 
intersecting that with what's in each driver before we even decided what each 
feature level means.

> How about keeping the caps, but adding helper functions that the
> drivers can use for the various API levels, so they need less cases in
> their get_param switches?
> The feature level are likely at least somewhat API-specific anyway, so
> maybe it would be better for each API to determine them itself from
> the separate caps exposed by the drivers.

That doesn't win us anything, does it? It's just more code all around rather 
than less. Feature levels + exceptions (the problematic things I've mentioned 
in the last email) could possibly make sense, I don't know.
 
> Anyway, there are only 3 companies with significant market share, so
> one may as well directly use the nVidia/ATI/Intel architecture version
> as the feature level (which is what most commercial games probably do
> anyway).

Well they don't have to because they use D3D version which defines those for 
them (aka. what the table is based on).

z



Re: [Mesa3d-dev] Gallium feature levels

2010-01-11 Thread Zack Rusin
On Monday 11 January 2010 15:17:00 Roland Scheidegger wrote:
> > - extra mirror wrap modes - i don't think mirror repeat was ever
> > supported and mirror clamp was removed in d3d10 but it seems that some
> > hardware kept support for those
> 
> Mirror repeat is a core feature in GL since 1.4 hence we can't just drop
> it. 

I wasn't suggesting that. I was just pointing out what happens with it from 
the D3D side.

> I think all hardware we'd ever care about would support it. mirror
> clamp / mirror clamp to edge are only an extension, though
> (ATI_texture_mirror_once). (I think the dx mirror once definition is
> probably mirror_clamp_to_edge in opengl parlance).

That's possible. As mentioned I'm not really sure what to do with this 
feature.

> > - shadow maps - it's more of an "researched guess" since it's largely
> > based on a format support, but as far as i can tell all d3d10 hardware
> > supports it, earlier it varies (e.g. nvidia did it for ages)
> 
> Required for GL 1.4. I thought it was pretty much required for d3d
> sm2.0, though you're right you could probably just not support the
> texture format there. Anyway, most hardware should support it, I believe
> even those which didn't really supported it at DX9 SM 2.0 time supported
> it (chips like radeon r300 lacked the hw to do the comparison in the
> texture unit, but it can be more or less easily implemented in the pixel
> shader, though the implementation will suck as it certainly won't do PCF
> just use some point sampling version - unless you're willing to do a
> much more complex implementation in the pixel shader, but then on this
> generation of hardware you might exceed maximum shader length). I
> believe all hardware supporting SM 2.0 could at least do some sampling
> of depth textures, though possibly only 16 bit and I'm not sure
> filtering worked in all cases.

Yes, but the issue is that I'm not sure how to represent it in the feature 
level scheme. Are you saying we should just enable it for all feature levels? 
That'd be nice.

> > I think the other stuff is acceptable. Take a look at the docs and let me
> > know what you think.
> 
> What is feature level 1 useful for? I thought we'd really wanted DX9
> level functionality as a bare minimum. GL2.x certainly needs cards
> supporting shader model 2 (and that is a cheat, in reality it would be
> shader model 3).

The main issue was having something without hardware VS in the feature levels. 
It was supposed to be whatever the i915 driver currently supports, but yea, I 
think it doesn't make sense and level 2 should be the minimum.

> Also, I don't quite get the shader model levels. I thought there were
> mainly two different DX9 versions, one with sm 2.0 the other with 3.0,
> with noone caring about other differences (as most stuff was cappable
> anyway). However, you've got 3 and all of them have 2.0 shader model?

As mentioned this is based on the D3D feature level concept. It's the first 
link I put in the references:
http://msdn.microsoft.com/en-us/library/ee422086(VS.85).aspx#Overview
It's there because that's what Microsoft defined as feature level and I'm 
assuming it's because they had a good need for it :)

> More comments below.
> 
> > +static const enum pipe_feature_level
> > +i915_feature_level(struct pipe_screen *screen)
> > +{
> > +   return PIPE_FEATURE_LEVEL_1;
> > +}
> 
> What's the reason this is not feature level 2?

Yea, I was winging it for all the drivers because I couldn't be bothered to do 
a cross-section of what the hardware can theoretically support and what the 
driver actually supports so I just put level 1 or whatever felt close enough 
in all of them. The maintainers would have to actually do the right thing 
there.
 
> 
> > Profile            7 (2009)  6 (2008)  5 (2006)  4 (2004)  3 (2003)  2 (2002)  1 (2000)
> > Fragment Shader    Yes       Yes       Yes       Yes       Yes       Yes       Yes
> 
> DX 7 didn't have any shader model IIRC. DX8/8.1 introduced shader models
> 1.0-1.3/1.4.

Yea, that level should be gone.

> 
> > Vertex Shader      Yes       Yes       Yes       Yes       Yes       Yes       No
> 
> I don't think we care for this. Since the gallium API requires vertex
> shaders anyway, that's just a YES in all cases, regardless if it's
> executed by hardware or not.

Isn't that the same discussion as with geometry shaders? We could support it 
everywhere like vertex shaders. The snafu is what happens when people use it as a 
fallback for other features.

> > Alpha-to-coverage  Yes       Yes       Yes       No        No        No        No
> 
> This is required for even GL 1.3, if ARB_multisample is supported.
> Though a

[Mesa3d-dev] Gallium feature levels

2010-01-11 Thread Zack Rusin
Hey,

knowing that we're starting to have serious issues with figuring out what 
features a given device supports and what APIs/extensions can be reasonably 
implemented on top of it, I've spent the weekend trying to define feature 
levels. Feature levels were effectively defined by the Direct3D version 
numbers. 
Attached is a patch and documentation for the feature levels. I'm also 
attaching the gallium_feature_levels.rst file, which documents what each feature 
level means and what APIs can be reasonably supported by each (I figured it's 
going to be easier to look at it outside the diff).

There's a few features that are a bit problematic, in no particular order:
- unnormalized coordinates: we don't even have a cap for those right now, but 
since that feature doesn't exist in direct3d (all coords are always normalized 
in d3d) the support for it is hard to define in terms of a feature level
- two-sided stencil - d3d supports it in d3d10 but tons of hardware supported 
it earlier
- extra mirror wrap modes - i don't think mirror repeat was ever supported and 
mirror clamp was removed in d3d10 but it seems that some hardware kept support 
for those
- shadow maps - it's more of a "researched guess" since it's largely based on 
format support, but as far as i can tell all d3d10 hardware supports it, 
earlier it varies (e.g. nvidia did it for ages)

I think the other stuff is acceptable. Take a look at the docs and let me know 
what you think.

z
From 962994ad9f05b6ae219f839082d4743e7d2a70fe Mon Sep 17 00:00:00 2001
From: Zack Rusin 
Date: Mon, 11 Jan 2010 12:34:26 -0500
Subject: [PATCH] gallium: implement feature levels

a broader way of figuring out features of the hardware we're running on
---
 src/gallium/docs/source/gallium_feature_levels.rst |   69 
 src/gallium/docs/source/screen.rst |5 ++
 src/gallium/drivers/cell/ppu/cell_screen.c |   27 ++--
 src/gallium/drivers/i915/i915_screen.c |   21 ++
 src/gallium/drivers/i965/brw_screen.c  |   21 ++
 src/gallium/drivers/identity/id_screen.c   |   10 +++
 src/gallium/drivers/llvmpipe/lp_screen.c   |   28 ++--
 src/gallium/drivers/nv04/nv04_screen.c |   29 ++--
 src/gallium/drivers/nv10/nv10_screen.c |   25 ++-
 src/gallium/drivers/nv20/nv20_screen.c |   25 ++-
 src/gallium/drivers/nv30/nv30_screen.c |   29 ++--
 src/gallium/drivers/nv40/nv40_screen.c |   28 ++--
 src/gallium/drivers/nv50/nv50_screen.c |   28 ++--
 src/gallium/drivers/r300/r300_screen.c |   60 +++--
 src/gallium/drivers/softpipe/sp_screen.c   |   28 ++--
 src/gallium/drivers/svga/svga_screen.c |   28 ++--
 src/gallium/drivers/trace/tr_screen.c  |   20 ++
 src/gallium/include/pipe/p_defines.h   |   29 +
 src/gallium/include/pipe/p_screen.h|5 ++
 src/mesa/state_tracker/st_cb_drawpixels.c  |4 +-
 src/mesa/state_tracker/st_extensions.c |   23 ---
 21 files changed, 231 insertions(+), 311 deletions(-)
 create mode 100644 src/gallium/docs/source/gallium_feature_levels.rst

diff --git a/src/gallium/docs/source/gallium_feature_levels.rst b/src/gallium/docs/source/gallium_feature_levels.rst
new file mode 100644
index 000..fcde68d
--- /dev/null
+++ b/src/gallium/docs/source/gallium_feature_levels.rst
@@ -0,0 +1,69 @@
+Profile          7 (2009)   6 (2008)   5 (2006)     4 (2004)        3 (2003)        2 (2002)        1 (2000)
+
+API Support      DX11       DX10.1     DX10/GL3.2   DX9.2           DX9.1           DX9.0           DX7.0
+                 GL4.0      GL3.2+     GL3.2        GL3.0           GL2.x           GL2.x           GL2.x
+                 VG         VG         VG           VG              VG              VG              VG
+                 CL1.0      CL1.0      CL1.0
+
+Shader Model     5.0        4.x        4.0          2.0             2.0             2.0             1.0
+                                                    4_0_level_9_3   4_0_level_9_1   4_0_level_9_1
+
+Fragment Shader  Yes        Yes        Yes          Yes             Yes             Yes             Yes
+Vertex Shader    Yes        Yes        Yes          Yes             Yes             Yes             No
+Geometry Shader  Yes        Yes        Yes          No              No              No              No
+Stream Out       Yes        Yes        Yes

Re: [Mesa3d-dev] [RFC] add support to double opcodes

2010-01-07 Thread Zack Rusin
On Thursday 07 January 2010 09:11:11 michal wrote:
> Zack,
> 
> 1. Do I understand correctly that while
> 
> D_ADD dst.xy, src1.xy, src2.zw
> 
> will add one double, is the following code
> 
> D_ADD dst, src1, src2.zwxy
> 
> also valid, and results in two doubles being added together?

Good question. I guess that would be up to us to define. The DX/AMD CAL specs don't 
allow that because they define inputs as being in the xy components only, so all 
the double instructions operate exclusively on one double. 
We could allow it but simply not use it right away from the state tracker 
side.

> 2. Is the list of double-precision opcodes proposed by Igor roughly
> enough for OpenCL implementation?

Another good question. It will largely depend on what our implementation of math 
functions for CL 1.1 will look like. CL 1.1 defines double support for such 
math functions as acos, acosh, acospi, cos, cosh, cospi (same with sin and 
tan), ceil, copysign, exp, exp2, exp10, fabs, fdim, floor, fmax, fmin, fmod, 
fract, frexp, hypot(x, y) [computes the square root of x^2+y^2], log, 
log2, log10, mad, mod, pow, pown, remainder, rint, round, rsqrt, sqrt, trunc 
(and various permutations of those and some that are obviously implementable 
with the above), so it all boils down to how we'll implement those functions.

I think that a minimal set that could be enough would be: dadd, ddiv, deq, 
dlt, dfrac, dfracexp, dldexp, dmax, dmin, dmov, dmul, dmuladd, drcp and dsqrt, 
plus instructions that convert between float and double in both directions. 
(This assumes a table-based or some other fixed implementation of trigonometric 
functions and in general assumes that we trade performance for simplicity, at 
least for the time being.)
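Just to illustrate the "trade performance for simplicity" idea: a CL builtin 
like hypot would lower onto that minimal set roughly like this, written here as 
plain C standing in for the generated code (and ignoring the overflow care a 
proper hypot needs):

#include <math.h>

static double lowered_hypot(double x, double y)
{
   double xx = x * x;       /* dmul */
   double yy = y * y;       /* dmul */
   return sqrt(xx + yy);    /* dadd feeding dsqrt */
}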

z



Re: [Mesa3d-dev] [RFC] add support to double opcodes

2010-01-07 Thread Zack Rusin
On Thursday 07 January 2010 06:50:36 José Fonseca wrote:
> I wonder if storage size of registers is such a big issue. Knowing the
> storage size of a register matters mostly for indexable temps. For
> regular assignments and intermediate computations storage everything
> gets transformed in SSA form, and the register size can be determined
> from the instructions where it is generated/used and there is no need
> for consistency.
> 
> For example, imagine a shader that has:
> 
>TEX TEMP[0], SAMP[0], IN[0]  // SAMP[0] is a PIPE_FORMAT_R32G32B32_FLOAT
>  --> use 4x32bit float registers MAX ??
>...
>TEX TEMP[0], SAMP[1], IN[0]  // SAMP[1] is a
>  PIPE_FORMAT_R64G64B64A64_FLOAT --> use 4x64bit double registers DMAX ,
>  TEMP[0], ???

That's not an issue because such a format doesn't exist. There's no 256bit 
sampling in any api. It's one of the self-inflicted wounds that we have. R64G64 
is the most you'll get right now.

>TEX TEMP[0], SAMP[2], IN[0] // texture 0 and rendertarget are both 
>  PIPE_FORMAT_R8G8B8A8_UNORM  --> use 4x8bit unorm registers MOV OUT[0],
>  TEMP[0]
> 
> etc.
> 
> There is actually programmable 3d hardware out there that has special
> 4x8bit registers, and for performance the compiler has to deduct where
> to use those 4xbit. llvmpipe will need to do similar thing, as the
> smaller the bit-width the higher the throughput. And at least current
> gallium statetrackers will reuse temps with no attempt to maintain
> consistency in use.
> 
> So if the compilers already need to deal with this, if this notion that
> registers are 128bits is really necessary, and will prevail in the long
> term.

Somehow this is the core issue: the fact that TGSI is untyped. Anything but 
"register size is constant" implies "TGSI is typed but the actual types have 
to be deduced by the drivers", which goes against what Gallium was about (it 
puts the complexity in the driver). 

The questions of 8bit vs 32bit and 64bit vs 32bit are really different. 
The first one is about optimization: it will work perfectly well if 128bit 
registers are used. The second one is about correctness: it will not work if 
128bit registers are used for doubles and it will not work if 256bit registers 
are used for floats. Also we don't have 4x8bit instructions, they're all 
4x32bit instructions (float, unsigned ints, signed ints), so doubles will be 
the first differently sized instructions. Which in turn means that either TGSI 
will have to be actually statically typed, but without declared types, i.e. 
D_ADD will only be able to take two 256bit registers as inputs and has to throw 
an error if anything else is passed, which is especially difficult since those 
registers never had a size declared and it would have to be inferred from 
previous instructions; or we'd have to allow mixing sizes of all inputs, e.g. 
D_ADD can operate on both 4x32 and 4x64, which simply moves the problem from 
above into the driver.

Really, unless we say "the entire pipeline can run in 4x64" like we did for 
floats, I don't see an easier way of dealing with this than the xy, zw 
swizzle form.

z



Re: [Mesa3d-dev] [RFC] add support to double opcodes

2010-01-06 Thread Zack Rusin
On Wednesday 06 January 2010 14:56:35 Igor Oliveira wrote:
> Hi,
> 
> the patches add support to double opcodes in gallium/tgsi.
> It just implement some opcodes i like to know if someone has
> suggestion about the patches.

Hi Igor, first of all this should probably go into a feature branch because 
it'll be a bit of work before it's usable. 
The patches that you've proposed are unlikely to be what we'll want for doubles. 
Keith, Michal and I discussed this on the phone a few days back and the 
biggest issue with doubles is that unlike the switch between integers and 
floats they actually need bigger registers to accommodate them. Given that the 
registers in TGSI are untyped and it's up to instructions to define the type, it 
becomes hard for drivers to figure out the size of the registers beforehand. 
The solution that I personally like, and what seems to be becoming the de facto 
standard when dealing with double support, is having double precision values 
represented by a pair of registers. Outputs go 
either to the pair xy or to the pair zw, where the msb dword is stored in y/w. For 
example, the value 3.0 (0x4008000000000000) in register r looks like:
r.w = 0x40080000 ;high dword
r.z = 0x00000000 ;low dword
Or:
r.y = 0x40080000 ;high dword
r.x = 0x00000000 ;low dword
All source double inputs must be in xy (after swizzle operations). For 
example:
d_add r1.xy, r2.xy, r2.xy
Or
d_add r1.zw, r2.xy, r2.xy
Each computes twice the value in r2.xy, and places the result in either xy or 
zw. 
This assures that the register size stays constant. Of course the instruction 
semantics are different to the typical 4-component wide TGSI instructions, but 
that, I think, is a lot less of an issue.
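To make the layout concrete, here's a tiny standalone C sketch (illustration 
only, obviously not driver code) of how a double splits into the two dwords 
that land in the register pair:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
   double d = 3.0;
   uint64_t bits;
   uint32_t hi, lo;

   memcpy(&bits, &d, sizeof bits);
   hi = (uint32_t)(bits >> 32);          /* msb dword -> r.y or r.w */
   lo = (uint32_t)(bits & 0xffffffffu);  /* lsb dword -> r.x or r.z */

   /* prints hi=0x40080000 lo=0x00000000 for 3.0 */
   printf("hi=0x%08x lo=0x%08x\n", hi, lo);
   return 0;
}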

z



Re: [Mesa3d-dev] Evolving the TGSI instruction set

2010-01-04 Thread Zack Rusin
On Monday 04 January 2010 13:19:27 Brian Paul wrote:
> Keith Whitwell wrote:
> > I think this is pretty much equivalent to restricting indexing to
> > defined ranges of indexable temporaries.  I'm guessing the way that
> > would work is we'd say something like:
> >
> > DECL OUT[0]
> > DECL ADDR[0]
> > DECL INDEXABLE_TEMP V[2][0..10]
> >
> > # Populate temporaries
> > MOV V[2][0], FOO
> > MOV V[2][1], BAR
> >
> > # Extract something with dynamic indexing:
> > MOV OUT[0], V[2][ADDR[0].x]
> > END
> >
> > I think that's pretty much the same as you're proposing.
> 
> Yes, though I think the MEM convention would be easier to work with in
> the GLSL compiler and gl_program data structures/code.

It's worth noting that the MEM would be more than a convention for compute; in 
fact it will be a hard requirement.
BTW, with MEM we could also figure out binding random-access buffers, i.e. in DX11 
and all compute APIs one can do random-access scatter/gather reads from 
buffers (very few; currently in some APIs a maximum of two of those buffers can be 
bound). I haven't really thought about representing that in TGSI, but given 
that those buffers can be bound not only in compute APIs but also in the fragment 
shader in DX11, it seems that MEM could also solve this problem.

> BTW, I'm not completely sure I understand the INDEXABLE_TEMP syntax in
> your example.

Keith digs geometry shaders so he uses their syntax a lot ;)

z



Re: [Mesa3d-dev] [PATCH] [OpenCL] fix device bugs found by unit tests

2010-01-04 Thread Zack Rusin
On Wednesday 30 December 2009 09:07:48 Igor Oliveira wrote:
> This patch fixes some bugs found by unit tests: when passing a wrong
> device type, all the devices (gpu, cpu and accelerator)
> were being created; paramValue is now ignored if it is NULL; and
> invalid_value is returned if paramValueSize != paramValueSizeReturn.

Hey Igor,

thanks for the patches, I just pushed them. 
For the future could you maybe send your patches using git format-patch? 
Otherwise I have to recreate commit messages from your emails while 
remembering to commit with --author to preserve ownership. Thanks! 

z



Re: [Mesa3d-dev] [PATCH] Gallium API demos and supporting infrastructure

2010-01-01 Thread Zack Rusin
On Friday 01 January 2010 06:42:18 Corbin Simpson wrote:
> Couple things.
> 
> First, if we're going to start autogenerating headers, can we do GL
> API first? 

As long as we first fix the Exchange integration in KMail (i.e. it's a bit 
unfair to bring up mysterious unrelated features as a problem with a 
patchset.)

> (While we're at it, since we're doomed to bring it up every couple
> months, can we do automake? Please?)

Well, "we" think that the shed should be blue. Of course if red would prove to 
be better than our current rainbow of colors then surely the shed master would 
let the others do the painting for him.

> For coding conventions, sorry for 4-space tabs in r300 and radeon. I
> don't intend to change it though. If coding conventions are SERIOUS
> BUSINESS, then can we just write down indent rules?

We had those for ages: docs/devinfo.html .

> Finally, there are almost certainly already out-of-tree drivers and
> state trackers based on 0.1 and 0.2 "versions" of the API. Why wasn't
> it a priority to version the API before?

Because initially Gallium wasn't supposed to be exposing a stable API, and 
state trackers/drivers were supposed to statically link to the same version. 
The need and desire for a stable API increased lately to a point where we 
started thinking about it, or at least agreeing that it would be a nice thing.

z



Re: [Mesa3d-dev] geometry shading patches

2009-12-26 Thread Zack Rusin
On Saturday 26 December 2009 02:19:40 Marek Olšák wrote:
> Zack,
> 
> to be honest, Direct3D 11 can report geometry shaders are not supported
>  through so-called feature levels. There are six levels to my knowledge
>  (9_1, 9_2, 9_3, 10_0, 10_1, 11_0). i915 is 9_1, R300 is 9_2, R500 is 9_3,
>  and so on. Direct3D 11 is indeed accelerated on those pieces of hardware
>  and, though the feature set is a little limited, the hardware support is
>  covered well. Is Direct3D 11 generations past because of that? No, it
>  isn't.
> 
> Let's say I have R500 and I want to use geometry shaders in Direct3D 11.
>  What are my options? I can't use my R500 and I must manually switch to the
>  device called WARP (Windows Advanced Rasterization Platform), which
>  reports the 10_1 feature level. This kind of device is very similar to
>  llvmpipe in Gallium.
> 
> In the past you said we should do it the same way as Direct3D, so why
>  should Gallium be different now? 

I think you're using "it" a bit broadly here because we never had a discussion 
about caps. First of all Gallium3D is already different when it comes to 
capabilities reporting. We have a buttload of caps; realistically speaking 
most of them are likely tied together. What we do right now is what d3d9 used 
to do, which is what everyone agreed is awful. Not to mention that we don't 
have an option of actually selecting llvmpipe vs whatever hardware driver by 
hand.
The argument that would certainly make sense is one for moving the Gallium3D caps 
model towards shader-model reporting, e.g. shader-model 2.x, 3.x, 4.x, 5.x 
versus every single feature that they bring forward, e.g. 4.x implies geometry 
shaders, 5.x implies tessellation/compute.
I absolutely abhor the idea of reporting as a cap everything above what 
i915 or r300 can do; for lack of better wording it's just ridiculous. I do 
think though that we should look at our caps bits and come up with something 
better.
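To be concrete about what I mean, something vaguely along these lines (purely 
hypothetical, these names don't exist in p_defines.h):

enum pipe_shader_model {
   PIPE_SHADER_MODEL_2_X,   /* roughly what i915/r300 class hardware can do */
   PIPE_SHADER_MODEL_3_X,
   PIPE_SHADER_MODEL_4_X,   /* implies geometry shaders */
   PIPE_SHADER_MODEL_5_X    /* implies tessellation/compute */
};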

>  Moreover, if applications decide to use geometry shaders to emulate point
>  sprites or wide lines, we'll be screwed.

If the hardware doesn't implement those features they'll be in the draw module 
anyway. So it's really draw module vs draw module.

>  If they decide to do texture fetches in geometry shaders, we'll be screwed
>  even more because we'll have to move textures out of VRAM and that will be
>  a total performance killer. So I agree with Corbin that the CAP for
>  geometry shaders should be added and we should let drivers decide what's
>  best for them.

How is that different from the same problem applied to a vertex shader on i915 
and the way that works right now? 
I agree that we need to solve those problems, but I just refuse to accept that 
the best we can do is "everything above i915 is a feature cap". We need to come 
up with a scheme that actually works, or accept that it's ok for the draw module 
to handle some of those features.
For us it likely should be some combination of API and shader-model support 
(shader models don't tell us anything about GL-specific features like shadow 
samplers or aa lines/points); if we can figure out how to handle that reasonably 
we'll be fine.

z



Re: [Mesa3d-dev] geometry shading patches

2009-12-25 Thread Zack Rusin
On Friday 25 December 2009 07:03:02 Corbin Simpson wrote:
> Isn't this incredibly at odds with our previous discussion, in which
> we generally agreed to not advertise support for unaccelerated things?

No, it's really not. We don't have caps for core features, e.g. we don't have 
caps for vertex shaders, and this goes hand in hand with that. Geometry shaders 
are optional in the pipeline, meaning that unlike fragment shaders they can be 
absent, in which case the pipeline behaves just like it would if the API didn't 
expose geometry shaders at all, i.e. vertex shader outputs go directly to 
the fragment shader. So for games/apps that don't use geometry shaders this 
won't matter at all. And games/apps that are so new that they actually check 
for geometry shaders will already be slow on i915 and r300, not because of 
geometry shaders, but because they're running on an i915 or r300 =)

Not to mention that this is not a fringe feature that will be present only in 
super high-end and futuristic hardware.
 
All in all it's a bit like fixed-point hardware - programmable hardware is not 
a cap because it's what Gallium models. We can't just keep the Gallium 
interface at i915 level and mark everything above that as a cap, it'd be silly 
given that we're generations past that now.

z

--
This SF.Net email is sponsored by the Verizon Developer Community
Take advantage of Verizon's best-in-class app development support
A streamlined, 14 day to market process makes app distribution fast and easy
Join now and get one step closer to millions of Verizon customers
http://p.sf.net/sfu/verizon-dev2dev 
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] geometry shading patches

2009-12-24 Thread Zack Rusin
On Thursday 24 December 2009 10:03:25 Keith Whitwell wrote:
> Thanks Zack.  I'm fine with doing it on top of the others...

ok, great, thanks Keith. In that case I'll wait for any objections until 
tomorrow and if nothing shows up I'll commit in the morning.

z

--
This SF.Net email is sponsored by the Verizon Developer Community
Take advantage of Verizon's best-in-class app development support
A streamlined, 14 day to market process makes app distribution fast and easy
Join now and get one step closer to millions of Verizon customers
http://p.sf.net/sfu/verizon-dev2dev 
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] geometry shading patches

2009-12-24 Thread Zack Rusin
On Thursday 24 December 2009 09:09:44 Keith Whitwell wrote:
> Zack,
> 
> In terms of advertising support for this - I wonder if this isn't something
>  we should just turn on for all drivers, given that in the worst case it is
>  just a vertex path fallback, and a lot of drivers will be hitting those
>  for various reasons anyway.

Yes, I completely agree with both of your emails. The attached patches 
implement that (plus the last one comments out some unnecessary debugging 
output). They do it on top of the previous patches but if you'd like to have a 
history clear of them I can try to redo the entire series.

z
From 0483e3ed1c28982857da3292f8247388e8f9d0d9 Mon Sep 17 00:00:00 2001
From: Zack Rusin 
Date: Thu, 24 Dec 2009 09:20:45 -0500
Subject: [PATCH 13/15] util: put vertices_per_primitive function in its proper location

---
 src/gallium/auxiliary/tgsi/tgsi_sanity.c |5 +--
 src/gallium/auxiliary/tgsi/tgsi_text.c   |4 +-
 src/gallium/auxiliary/util/u_prim.h  |   33 ++
 src/gallium/include/pipe/p_inlines.h |   32 -
 4 files changed, 37 insertions(+), 37 deletions(-)

diff --git a/src/gallium/auxiliary/tgsi/tgsi_sanity.c b/src/gallium/auxiliary/tgsi/tgsi_sanity.c
index 5d11c19..16b8ec6 100644
--- a/src/gallium/auxiliary/tgsi/tgsi_sanity.c
+++ b/src/gallium/auxiliary/tgsi/tgsi_sanity.c
@@ -27,7 +27,7 @@
 
 #include "util/u_debug.h"
 #include "util/u_memory.h"
-#include "pipe/p_inlines.h"
+#include "util/u_prim.h"
 #include "cso_cache/cso_hash.h"
 #include "tgsi_sanity.h"
 #include "tgsi_info.h"
@@ -463,8 +463,7 @@ iter_property(
 
if (iter->processor.Processor == TGSI_PROCESSOR_GEOMETRY &&
prop->Property.PropertyName == TGSI_PROPERTY_GS_INPUT_PRIM) {
-  ctx->implied_array_size =
- pipe_vertices_per_primitive(prop->u[0].Data);
+  ctx->implied_array_size = u_vertices_per_prim(prop->u[0].Data);
}
return TRUE;
 }
diff --git a/src/gallium/auxiliary/tgsi/tgsi_text.c b/src/gallium/auxiliary/tgsi/tgsi_text.c
index ca247a1..825d17a 100644
--- a/src/gallium/auxiliary/tgsi/tgsi_text.c
+++ b/src/gallium/auxiliary/tgsi/tgsi_text.c
@@ -27,6 +27,7 @@
 
 #include "util/u_debug.h"
 #include "util/u_memory.h"
+#include "util/u_prim.h"
 #include "pipe/p_defines.h"
 #include "pipe/p_inlines.h"
 #include "tgsi_text.h"
@@ -1187,8 +1188,7 @@ static boolean parse_property( struct translate_ctx *ctx )
   }
   if (property_name == TGSI_PROPERTY_GS_INPUT_PRIM &&
   ctx->processor == TGSI_PROCESSOR_GEOMETRY) {
- ctx->implied_array_size =
-pipe_vertices_per_primitive(values[0]);
+ ctx->implied_array_size = u_vertices_per_prim(values[0]);
   }
   break;
default:
diff --git a/src/gallium/auxiliary/util/u_prim.h b/src/gallium/auxiliary/util/u_prim.h
index 7434329..10a874f 100644
--- a/src/gallium/auxiliary/util/u_prim.h
+++ b/src/gallium/auxiliary/util/u_prim.h
@@ -135,6 +135,39 @@ static INLINE unsigned u_reduced_prim( unsigned pipe_prim )
}
 }
 
+static INLINE unsigned
+u_vertices_per_prim(int primitive)
+{
+   switch(primitive) {
+   case PIPE_PRIM_POINTS:
+  return 1;
+   case PIPE_PRIM_LINES:
+   case PIPE_PRIM_LINE_LOOP:
+   case PIPE_PRIM_LINE_STRIP:
+  return 2;
+   case PIPE_PRIM_TRIANGLES:
+   case PIPE_PRIM_TRIANGLE_STRIP:
+   case PIPE_PRIM_TRIANGLE_FAN:
+  return 3;
+   case PIPE_PRIM_LINES_ADJACENCY:
+   case PIPE_PRIM_LINE_STRIP_ADJACENCY:
+  return 4;
+   case PIPE_PRIM_TRIANGLES_ADJACENCY:
+   case PIPE_PRIM_TRIANGLE_STRIP_ADJACENCY:
+  return 6;
+
+   /* following primitives should never be used
+    * with geometry shaders and their size is
+    * undefined */
+   case PIPE_PRIM_POLYGON:
+   case PIPE_PRIM_QUADS:
+   case PIPE_PRIM_QUAD_STRIP:
+   default:
+  debug_printf("Unrecognized geometry shader primitive");
+  return 3;
+   }
+}
+
 const char *u_prim_name( unsigned pipe_prim );
 
 #endif
diff --git a/src/gallium/include/pipe/p_inlines.h b/src/gallium/include/pipe/p_inlines.h
index 95ec55d..5fbd62a 100644
--- a/src/gallium/include/pipe/p_inlines.h
+++ b/src/gallium/include/pipe/p_inlines.h
@@ -192,38 +192,6 @@ pipe_transfer_buffer_flags( struct pipe_transfer *transf )
}
 }
 
-static INLINE unsigned
-pipe_vertices_per_primitive(int primitive)
-{
-   switch(primitive) {
-   case PIPE_PRIM_POINTS:
-  return 1;
-   case PIPE_PRIM_LINES:
-   case PIPE_PRIM_LINE_LOOP:
-   case PIPE_PRIM_LINE_STRIP:
-  return 2;
-   case PIPE_PRIM_TRIANGLES:
-   case PIPE_PRIM_TRIANGLE_STRIP:
-   case PIPE_PRIM_TRIANGLE_FAN:
-  return 3;
-   case PIPE_PRIM_LINES_ADJACENCY:
-   case PIPE_PRIM_LINE_STRIP_ADJACENCY:
-  return 4;
-   case PIPE_PRIM_TRIANGLES_ADJACENCY:
-   case PIPE_PRIM_TRIANGLE_ST

Re: [Mesa3d-dev] TGSI text parser: report line number on error

2009-12-15 Thread Zack Rusin
On Tuesday 15 December 2009 09:27:04 Keith Whitwell wrote:
> On Tue, 2009-12-15 at 06:12 -0800, Zack Rusin wrote:
> > The attached patch makes the tgsi assembly parser report, in an
> > admittedly rather crude way, the line number at which a syntax error was
> > detected. What do you think about that?
> >
> > z
> 
> Is this the same as the line number printed in TGSI dumps, or different?
> 
> I suspect different, as TGSI doesn't require those line numbers, and
> they don't start from the top of the shader.  Currently I think they are
> only used as labels for CALL and similar opcodes...

yea, they effectively have to be different, otherwise all properties and 
declarations end up at line -1.
 
> This is confusing but probably unavoidable.  I think we may want to look
> at labels again and see if we can do better, in which case the line
> numbers in TGSI dumps could start from the top of the shader.

yea, line numbering is a bit bonkers. we could just do what most other 
languages do, which is have an explicit label statement/instruction. 

z

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] TGSI text parser: report line number on error

2009-12-15 Thread Zack Rusin
On Tuesday 15 December 2009 09:21:05 michal wrote:
> Zack Rusin pisze:
> > The attached patch makes the tgsi assembly parser report, in an
> > admittedly rather crude way, the line number at which a syntax error was
> > detected. What do you think about that?
> 
> I agree, that's a bit invasive.
> 
> What about saving the starting pointer and, on error, scanning it again
> and counting newlines up to the current position?

yea, good idea. also ugly but a lot less code =)
From 3ebf748f2c8de718df51fc62a385c5613a7e46f3 Mon Sep 17 00:00:00 2001
From: Zack Rusin 
Date: Tue, 15 Dec 2009 09:26:51 -0500
Subject: [PATCH 2/2] tgsi: make the tgsi assembly parser report line/column on error

---
 src/gallium/auxiliary/tgsi/tgsi_text.c |   15 ++-
 1 files changed, 14 insertions(+), 1 deletions(-)

diff --git a/src/gallium/auxiliary/tgsi/tgsi_text.c b/src/gallium/auxiliary/tgsi/tgsi_text.c
index b47daa5..62e2e29 100644
--- a/src/gallium/auxiliary/tgsi/tgsi_text.c
+++ b/src/gallium/auxiliary/tgsi/tgsi_text.c
@@ -182,7 +182,20 @@ struct translate_ctx
 
 static void report_error( struct translate_ctx *ctx, const char *msg )
 {
-   debug_printf( "\nError: %s", msg );
+   int line = 0;
+   int column = 0;
+   const char *itr = ctx->text;
+
+   while (itr != ctx->cur) {
+  if (*itr == '\n') {
+ column = 0;
+ ++line;
+  }
+  ++column;
+  ++itr;
+   }
+
+   debug_printf( "\nTGSI asm error: %s [%d : %d] \n", msg, line, column );
 }
 
 /* Parse shader header.
-- 
1.6.5.4

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


[Mesa3d-dev] TGSI text parser: report line number on error

2009-12-15 Thread Zack Rusin
The attached patch makes the tgsi assembly parser report, in an admittedly 
rather crude way, the line number at which a syntax error was detected. What 
do you think about that?

z
From 7a7784d69ebf6a8b16386821217a5af232213cbc Mon Sep 17 00:00:00 2001
From: Zack Rusin 
Date: Tue, 15 Dec 2009 09:09:00 -0500
Subject: [PATCH 2/2] tgsi: make the assembly parser report the line number on error

---
 src/gallium/auxiliary/tgsi/tgsi_text.c |  131 
 1 files changed, 81 insertions(+), 50 deletions(-)

diff --git a/src/gallium/auxiliary/tgsi/tgsi_text.c b/src/gallium/auxiliary/tgsi/tgsi_text.c
index b47daa5..33141b1 100644
--- a/src/gallium/auxiliary/tgsi/tgsi_text.c
+++ b/src/gallium/auxiliary/tgsi/tgsi_text.c
@@ -76,21 +76,31 @@ static boolean str_match_no_case( const char **pcur, const char *str )
 }
 
 /* Eat zero or more whitespaces.
+ * Return the number of lines eaten.
  */
-static void eat_opt_white( const char **pcur )
+static int eat_opt_white( const char **pcur )
 {
-   while (**pcur == ' ' || **pcur == '\t' || **pcur == '\n')
+   int lines = 0;
+   while (**pcur == ' ' || **pcur == '\t' || **pcur == '\n') {
+  if (**pcur == '\n')
+ ++lines;
   (*pcur)++;
+   }
+   return lines;
 }
 
 /* Eat one or more whitespaces.
  * Return TRUE if at least one whitespace eaten.
  */
-static boolean eat_white( const char **pcur )
+static boolean eat_white( const char **pcur, int *lines )
 {
+   int l;
const char *cur = *pcur;
 
-   eat_opt_white( pcur );
+   l = eat_opt_white( pcur );
+   if (lines) {
+  *lines = l;
+   }
return *pcur > cur;
 }
 
@@ -178,11 +188,12 @@ struct translate_ctx
struct tgsi_token *tokens_cur;
struct tgsi_token *tokens_end;
struct tgsi_header *header;
+   unsigned lineno;
 };
 
 static void report_error( struct translate_ctx *ctx, const char *msg )
 {
-   debug_printf( "\nError: %s", msg );
+   debug_printf( "\nTGSI asm error: %s : %d\n", msg, ctx->lineno );
 }
 
 /* Parse shader header.
@@ -221,12 +232,14 @@ static boolean parse_header( struct translate_ctx *ctx )
 static boolean parse_label( struct translate_ctx *ctx, uint *val )
 {
const char *cur = ctx->cur;
+   int lines = 0;
 
if (parse_uint( &cur, val )) {
-  eat_opt_white( &cur );
+  lines = eat_opt_white( &cur );
   if (*cur == ':') {
  cur++;
  ctx->cur = cur;
+ ctx->lineno += lines;
  return TRUE;
   }
}
@@ -273,13 +286,13 @@ parse_opt_writemask(
uint *writemask )
 {
const char *cur;
-
+   int lines = 0;
cur = ctx->cur;
-   eat_opt_white( &cur );
+   lines += eat_opt_white( &cur );
if (*cur == '.') {
   cur++;
   *writemask = TGSI_WRITEMASK_NONE;
-  eat_opt_white( &cur );
+  lines += eat_opt_white( &cur );
   if (uprcase( *cur ) == 'X') {
  cur++;
  *writemask |= TGSI_WRITEMASK_X;
@@ -303,6 +316,7 @@ parse_opt_writemask(
   }
 
   ctx->cur = cur;
+  ctx->lineno += lines;
}
else {
   *writemask = TGSI_WRITEMASK_XYZW;
@@ -321,7 +335,7 @@ parse_register_file_bracket(
   report_error( ctx, "Unknown register file" );
   return FALSE;
}
-   eat_opt_white( &ctx->cur );
+   ctx->lineno += eat_opt_white( &ctx->cur );
if (*ctx->cur != '[') {
   report_error( ctx, "Expected `['" );
   return FALSE;
@@ -342,7 +356,7 @@ parse_register_file_bracket_index(
 
if (!parse_register_file_bracket( ctx, file ))
   return FALSE;
-   eat_opt_white( &ctx->cur );
+   ctx->lineno += eat_opt_white( &ctx->cur );
if (!parse_uint( &ctx->cur, &uindex )) {
   report_error( ctx, "Expected literal unsigned integer" );
   return FALSE;
@@ -362,7 +376,7 @@ parse_register_dst(
 {
if (!parse_register_file_bracket_index( ctx, file, index ))
   return FALSE;
-   eat_opt_white( &ctx->cur );
+   ctx->lineno += eat_opt_white( &ctx->cur );
if (*ctx->cur != ']') {
   report_error( ctx, "Expected `]'" );
   return FALSE;
@@ -392,16 +406,16 @@ parse_register_src(
*ind_comp = TGSI_SWIZZLE_X;
if (!parse_register_file_bracket( ctx, file ))
   return FALSE;
-   eat_opt_white( &ctx->cur );
+   ctx->lineno += eat_opt_white( &ctx->cur );
cur = ctx->cur;
if (parse_file( &cur, ind_file )) {
   if (!parse_register_dst( ctx, ind_file, ind_index ))
  return FALSE;
-  eat_opt_white( &ctx->cur );
+  ctx->lineno += eat_opt_white( &ctx->cur );
 
   if (*ctx->cur == '.') {
  ctx->cur++;
- eat_opt_white(&ctx->cur);
+ ctx->lineno += eat_opt_white(&ctx->cur);
 
  switch (uprca

Re: [Mesa3d-dev] [PATCH] Add extra dimension info to TGSI declarations.

2009-12-14 Thread Zack Rusin
On Monday 14 December 2009 12:49:53 michal wrote:
> Keith Whitwell pisze:
> > On Mon, 2009-12-14 at 06:51 -0800, michal wrote:
> >> Zack Rusin pisze:
> >>> On Monday 14 December 2009 09:29:03 Keith Whitwell wrote:
> >>>> On Mon, 2009-12-14 at 06:23 -0800, michal wrote:
> >>>>> To fully support geometry shaders, we need some means to declare a
> >>>>> two-dimensional register file. The following declaration
> >>>>>
> >>>>> DCL IN[3][0]
> >>>>>
> >>>>> would declare an input register with index 0 (first dimension) and
> >>>>> size 3 (second dimension). Since the second dimension is a size, not
> >>>>> an index (or, for that matter, an index range), a new token has been
> >>>>> added that specifies the declared size of the register.
> >>>>
> >>>> Is this a good representation?  What would happen if there was:
> >>>>
> >>>> DCL IN[4][0]
> >>>> DCL IN[3][1]
> >>>>
> >>>> Presumably the "3" is always going to be "3", and it's a property of
> >>>> the geometry shader - I think Zack has a patch which adds something
> >>>> like:
> >>>>
> >>>> PROP GS_VERTICES_IN 3
> >>>>
> >>>> Then couldn't we just have the equivalent of:
> >>>>
> >>>> DCL IN[][0]
> >>>> DCL IN[][1]
> >>>>
> >>>> with the size of the first dimension specified by the property?
> >>>
> >>> Yea, that's what I thought the dimensional arrays should look like for
> >>> GS in TGSI (they already do in GLSL and HLSL).
> >>
> >> Actually, GS_VERTICES_IN could be derived from GS_INPUT_PRIM property.
> >>
> >> GL_ARB_geometry_shader4 has this mapping:
> >>
> >> "
> >>
> >>  Value of built-in
> >> Input primitive type gl_VerticesIn
> >> ---  -
> >> POINTS  1
> >> LINES   2
> >> LINES_ADJACENCY_ARB 4
> >> TRIANGLES   3
> >> TRIANGLES_ADJACENCY_ARB 6
> >>
> >> "
> >>
> >> But that also defeats the purpose of this patch -- INPUT registers would
> >> have implied two-dimensionality when declared inside GS.
> >
> > We have agreed that, its true...
> >
> > So is this patch necessary?  Is it sufficient to simply make the
> > statements that:
> >
> > a) Geometry shader INPUTs are always two dimensional
> > b) The first dimension is determined by the input primitive type?
> 
> Yes, thanks.

k, i'm a bit confused. i can't say it's very pretty but it works so i'm cool 
with any form of declarations, but where does that leave the problem of 
actually accessing those inputs? i mean, how will we access the color of the 
second vertex if multidimensional arrays don't exist?
will it be
GEOM
PROPERTY GS_INPUT_PRIMITIVE TRIANGLES
DCL IN[0], POSITION
DCL OUT[0], POSITION
 MOV OUT[0], IN[0][0]
 EMIT_VERTEX
 MOV OUT[0], IN[1][0]
 EMIT_VERTEX
 MOV OUT[0], IN[2][0]
 EMIT_VERTEX
 END_PRIMITIVE
END

z

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] [PATCH] Add extra dimension info to TGSI declarations.

2009-12-14 Thread Zack Rusin
On Monday 14 December 2009 09:29:03 Keith Whitwell wrote:
> On Mon, 2009-12-14 at 06:23 -0800, michal wrote:
> > To fully support geometry shaders, we need some means to declare a
> > two-dimensional register file. The following declaration
> >
> > DCL IN[3][0]
> >
> > would declare an input register with index 0 (first dimension) and size
> > 3 (second dimension). Since the second dimension is a size, not an index
> > (or, for that matter, an index range), a new token has been added that
> > specifies the declared size of the register.
> 
> Is this a good representation?  What would happen if there was:
> 
> DCL IN[4][0]
> DCL IN[3][1]
> 
> Presumably the "3" is always going to be "3", and it's a property of the
> geometry shader - I think Zack has a patch which adds something like:
> 
> PROP GS_VERTICES_IN 3
> 
> Then couldn't we just have the equivalent of:
> 
> DCL IN[][0]
> DCL IN[][1]
> 
> with the size of the first dimension specified by the property?

Yea, that's what I thought the dimensional arrays should look like for GS in 
TGSI (they already do in GLSL and HLSL).
 
> Are there going to be cases where this doesn't work?

I don't think so. 
Also if we decide to go with the DCL IN[x][1] notation then it probably should 
be DCL IN[a..b][1], because otherwise it just looks weird that one component 
declares a range while the other declares an index.

z

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] PATCH[0/1]: OpenCL: create and implement stub context methods

2009-12-11 Thread Zack Rusin
On Thursday 10 December 2009 16:57:09 Igor Oliveira wrote:
> On Thu, Dec 10, 2009 at 6:32 AM, Zack Rusin  wrote:
> > On Wednesday 09 December 2009 20:30:56 Igor Oliveira wrote:
> >> Hi Zack,
> >>
> >> 1) agreed. OpencCL is a complete different project and should exist in
> >> a different repository.
> >> 1.1) Well use Gallium as CPU backend is a software dilemma:
> >> "All problems in computer science can be solved by another level of
> >> indirection...except for the problem of too many layers of
> >> indirection"
> >> But in my opinion we can use Gallium for CPU operations too, using
> >> gallium as a backend for all device types we maintain a code
> >> consistency.
> >
> > Yes, it will certainly make the code a lot cleaner. I think using
> > llvmpipe we might be able to get it working fairly quickly. I'll need to
> > finish a few features in Gallium3d first. In particular we'll need to
> > figure out how to handle memory hierarchies, i.e. private/shared/global
> > memory accesses in shaders. Then we'll have some basic tgsi stuff like
> > scatter reads and writes to structured buffers, types in tgsi (int{8-64},
> > float, double}, barrier and memory barrier instructions, atomic reduction
> > instructions, performance events and likely trap/breakpoint instructions.
> > We'll be getting all those fixed within the next few weeks.
> >
> > z
> 
> right,
> So until fix that issues i would be working in building system(i have
> many hacked things here), create an unit test environment and finish
> the patchs(that i sent to you) to implement all things used in OpenCL
> documentation(like errors handler in context creating).

That sounds great Igor.

> Other thing that could be done or started is the api_memory, you
> already implemented some cpu_buffers operations, right? or i am wrong?

That was a while back, all of that code should likely go. I think we'll end up 
needing something like D3D11_BIND_UNORDERED_ACCESS and 
D3D11_RESOURCE_MISC_BUFFER_STRUCTURED flags in Gallium3d but we can tackle that 
later.

z

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] PATCH[0/1]: OpenCL: create and implement stub context methods

2009-12-11 Thread Zack Rusin
On Thursday 10 December 2009 16:26:33 Younes Manton wrote:
> On Thu, Dec 10, 2009 at 3:34 PM, Zack Rusin  wrote:
> > On Thursday 10 December 2009 15:14:46 Younes Manton wrote:
> >> OK, so we seem to be on the same page here, pipe_context will get some
> >> more functions. That's what I was originally asking about, will
> >> pipe_context grow or will there be a new kind of context? From my POV
> >> we would end up in roughly the same place either way.
> >
> > In general it's safe to assume that pipe_context as the main Gallium3d
> > interface will largely model the modern pipeline. Meaning that the
> > Gallium3d pipeline will look a lot like D3D 11 pipeline because
> > realistically that's what's going to get the most attention from hardware
> > vendors. So effectively 1) input assembler
> > 2) vertex shader
> > 3) hull shader
> > 4) tessellator
> > 5) domain shader
> > 6) geometry shader
> > 7) rasterizer
> > 8) pixel shader
> > 9) compute shader
> > 10) output merger
> > When it comes to compute OpenCL is more robust so the compute resources
> > will be designed to make sure CL 1.1 is easily implementable and
> > certainly make it possible to use compute without graphics but the
> > pipeline itself will have to stay as outlined above otherwise we'd just
> > be asking for trouble. Does that make sense?
> 
> Makes perfect sense, I just had a completely different looking, yet
> practically identical picture that put orthogonal functionality (2D,
> 3D, compute, video) in seperate 'contexts' as far as gallium was
> concerned, even if everything ended up in the same command buffer on
> the backend.
> 
> pipe_foo_context { foo(); bar(); }
> pipe_shaz_context { shaz(); blam(); }
> ...
> 
> vs
> 
> pipe_context { foo(); bar(); shaz(); blam(); }
> 
> If we want to grow pipe_context beyond just 3D then thats fine too,

pipe_context isn't so much about 3d as it is about abstracting modern 
programmable graphics hardware. compute shaders are becoming part of the 
pipeline so it's hard to exclude them. different contexts certainly make a lot 
of sense, but for parts that are adjacent to the pipeline, rather than within 
it. 
extracting compute code into another context would make a few things a lot 
more complicated than they should be. e.g. for d3d, since the same context is 
used for both, and compute shaders are executed after and can communicate with 
fragment shaders, we would have to create another object that would unify 
the commands and resources of our pipe_ contexts in some way and then give 
guarantees about execution order; for cl we'd need to port mesa to the new 
interface to allow sharing of GL resources as defined in the CL spec. all 
things that we get for free with one context. 

z

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Gallium3d shader declarations

2009-12-11 Thread Zack Rusin
On Friday 11 December 2009 08:05:24 michal wrote:
> > There's gs_instance_id which specifies the id of the instance of the
> > geometry shader (which is different than the global draw instance).
> > There's also the number of vertices that the geometry shader works on (1
> > points, 2 lines, 3 triangles, 6 triangle adjacency etc) (which although
> > is a readable variable in glsl, is something that is encoded in D3D by
> > the way of encoding the input primitive in the shader). (plus of course
> > there are the properties of the shader e.g. input primitive, output
> > primitive and max number of vertices that it emits).
> >
> > What Keith is referring though is the fact that our input layout is going
> > to be a bit difficult because for geometry shader it's
> > DCL SV[0].x, PRIMITIVE_ID
> > DCL SV[1].y, GS_INSTANCE_ID
> > DCL IN[0][0..n], POSITION
> > DCL IN[1][0..n], COLOR
> > so whether you have SV or not is not the issue, it's the fact that we
> > have different dimensional inputs that is not ideal. Right now I don't
> > really see a way of getting rid of that.
> 
> And I agree with that. Since you provided more SV examples that make
> sense (though GS_INSTANCE_ID is something SM5 introduces), you should
> stick with SYSTEM_VALUE register file. There is nothing gross about
> having INPUT registers two-dimensional, and SYSTEM_VALUE one-dimensional.

k, it seems we're all on the same page then. Just to reiterate, what we'll 
provide is
a) a PROPERTY token that takes a property name and then a few unsigneds worth 
of data, e.g.
PROPERTY GS_INPUT_PRIMITIVE TRIANGLES
PROPERTY GS_MAX_VERTICES 32
PROPERTY COMPUTE_THREADS 4 4 4
b) a new SV register file that is used for system generated variables that are 
injected in the pipeline, plus some additional semantics e.g.
DCL SV[0], VERTEX_ID
DCL SV[0], PRIMITIVE_ID
DCL SV[0], FACE 

of those two likely only the latter is a bit unclear, since basically all other 
low level languages do it via another form of a DCL token. imho though the new 
register file is semantically clearer, and if it turns out that a new DCL token 
is in fact better suited for this, it's not like we're committed to lifetime 
tgsi token compatibility and we'll be able to simply change it.
i'll provide a patch over the weekend for both; the quicker we have this, the 
quicker we'll be able to judge its actual viability.

z

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] PATCH[0/1]: OpenCL: create and implement stub context methods

2009-12-10 Thread Zack Rusin
On Thursday 10 December 2009 15:14:46 Younes Manton wrote:
> OK, so we seem to be on the same page here, pipe_context will get some
> more functions. That's what I was originally asking about, will
> pipe_context grow or will there be a new kind of context? From my POV
> we would end up in roughly the same place either way.

In general it's safe to assume that pipe_context as the main Gallium3d 
interface will largely model the modern pipeline. Meaning that the Gallium3d 
pipeline will look a lot like the D3D 11 pipeline because realistically that's 
what's going to get the most attention from hardware vendors. So effectively 
1) input assembler
2) vertex shader
3) hull shader
4) tessellator
5) domain shader
6) geometry shader
7) rasterizer
8) pixel shader
9) compute shader
10) output merger
When it comes to compute, OpenCL is more robust, so the compute resources will 
be designed to make sure CL 1.1 is easily implementable and it will certainly 
be possible to use compute without graphics, but the pipeline itself will have 
to stay as outlined above, otherwise we'd just be asking for trouble. Does that 
make sense?

z

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] PATCH[0/1]: OpenCL: create and implement stub context methods

2009-12-10 Thread Zack Rusin
On Thursday 10 December 2009 14:35:47 Younes Manton wrote: 
> Well how do we keep the compute state seperate from the 3D state, and
> how do you mix the two? 

It's really the same state. You bind a compute shader and execute it. It 
doesn't affect the rest of your state. Compute binds the buffers pretty much 
exactly like you'd bind them for any other api, and the "pretty much" comes 
only from the fact that you can bind structured buffers with compute for 
scatter reads/writes, which purely graphics apis don't have a lot of need for.

> Do you have two state trackers using the same
> pipe_context and re-emitting their entire states to the HW as
> necessary? 

Why would that be necessary? Compute doesn't destroy the state that's been set 
for graphics.

> Do you use two pipe_contexts? 

That depends on how the context was created, i.e. it's up to API and the users 
to decide how they want to use it. With d3d a lot of usage might be shared, 
with opencl a lot of usage will use a separate context.

> What about cards that know about compute and keep a seperate state? 

Well, a) compute doesn't care about the 3d state so that should be fine, b) if 
they shared some intrinsic parts of the state behind the scenes, those became 
deprecated the second D3D11 introduced compute shaders. 

> When you set a shader/read buffer/write buffer/const buffer with the
> pipe_context it's not clear to me what we should do on the driver's side.

You should set a shader/read buffer/write buffer/const buffer like pipe_context 
asks you to =)
 
> The card basically has seperate state for DMA, 2D, 3D, video, compute
> on nv50+, and a bunch of others. When we create a pipe_context we bind
> the DMA, 2D, and 3D and some of the others and issue commands. For
> nv50 we have a compute state, but we need to know what to do with
> commands coming through pipe_context, are they for 3D or compute?

The compute state is for compute, the 3d specific state is for 3d =) When we 
do context->bind_compute_shader(...) surely you'll know that you're supposed 
to set the compute shader. And for buffers, NVIDIA and really anyone can't 
require that distinction because they wouldn't be able to implement d3d 
compute shaders. We'll likely add a buffer flag anyway, just to make it 
explicit that a certain buffer will be utilizing unordered access, just like 
we'll most likely slip in a dispatch(int work_groups_x, int work_groups_y, int 
work_groups_z) call. 
it really all boils down to very simple math: the first api all IHVs support 
is directx => directx does it => we better do it. 
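
Purely as a speculative illustration of that direction (none of this is 
existing pipe_context API; the function names and the bind flag below are made 
up around the dispatch() call and buffer flag mentioned above):

/* Speculative sketch only: hypothetical compute additions, not real Gallium. */
struct pipe_context;
struct pipe_shader_state;

/* hypothetical bind flag marking a buffer for unordered (scatter) access */
#define PIPE_BIND_UNORDERED_ACCESS (1 << 30)

struct pipe_compute_hooks {
   void *(*create_compute_state)(struct pipe_context *ctx,
                                 const struct pipe_shader_state *cso);
   void (*bind_compute_state)(struct pipe_context *ctx, void *state);

   /* mirrors the dispatch(x, y, z) call mentioned above */
   void (*dispatch)(struct pipe_context *ctx,
                    int work_groups_x,
                    int work_groups_y,
                    int work_groups_z);
};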

z

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] PATCH[0/1]: OpenCL: create and implement stub context methods

2009-12-10 Thread Zack Rusin
On Thursday 10 December 2009 11:25:48 Younes Manton wrote:
> On Thu, Dec 10, 2009 at 5:32 AM, Zack Rusin  wrote:
> > On Wednesday 09 December 2009 20:30:56 Igor Oliveira wrote:
> >> Hi Zack,
> >>
> >> 1) agreed. OpencCL is a complete different project and should exist in
> >> a different repository.
> >> 1.1) Well use Gallium as CPU backend is a software dilemma:
> >> "All problems in computer science can be solved by another level of
> >> indirection...except for the problem of too many layers of
> >> indirection"
> >> But in my opinion we can use Gallium for CPU operations too, using
> >> gallium as a backend for all device types we maintain a code
> >> consistency.
> >
> > Yes, it will certainly make the code a lot cleaner. I think using
> > llvmpipe we might be able to get it working fairly quickly. I'll need to
> > finish a few features in Gallium3d first. In particular we'll need to
> > figure out how to handle memory hierarchies, i.e. private/shared/global
> > memory accesses in shaders. Then we'll have some basic tgsi stuff like
> > scatter reads and writes to structured buffers, types in tgsi (int{8-64},
> > float, double}, barrier and memory barrier instructions, atomic reduction
> > instructions, performance events and likely trap/breakpoint instructions.
> > We'll be getting all those fixed within the next few weeks.
> 
> Doesn't seem like the current pipe_context is suited to the
> requirements of a compute API. 

Can you be more specific? Which parts do you think aren't suited for it?

> Should it be made larger or is another kind of context in order? 

I don't see anything missing from pipe_context to warrant a new interface. 
What exactly is your concern?

> Under the hood on nvidia cards there's are
> seperate hardware interfaces for compute, graphics, video, even though
> there is some duplicate functionality, so it's not like most of the
> code of our current pipe_context would be reused*, so to me a
> different type of context makes sense.

Really? To be honest I've never seen any compute specific hardware in nvidia, 
what is it?

z

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] PATCH[0/1]: OpenCL: create and implement stub context methods

2009-12-10 Thread Zack Rusin
On Wednesday 09 December 2009 20:30:56 Igor Oliveira wrote:
> Hi Zack,
> 
> 1) agreed. OpencCL is a complete different project and should exist in
> a different repository.
> 1.1) Well use Gallium as CPU backend is a software dilemma:
> "All problems in computer science can be solved by another level of
> indirection...except for the problem of too many layers of
> indirection"
> But in my opinion we can use Gallium for CPU operations too, using
> gallium as a backend for all device types we maintain a code
> consistency.

Yes, it will certainly make the code a lot cleaner. I think using llvmpipe we 
might be able to get it working fairly quickly. I'll need to finish a few 
features in Gallium3d first. In particular we'll need to figure out how to 
handle memory hierarchies, i.e. private/shared/global memory accesses in 
shaders. Then we'll have some basic tgsi stuff like scatter reads and writes to 
structured buffers, types in tgsi (int{8-64}, float, double), barrier and 
memory barrier instructions, atomic reduction instructions, performance events 
and likely trap/breakpoint instructions.
We'll be getting all those fixed within the next few weeks.

z

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] PATCH[0/1]: OpenCL: create and implement stub context methods

2009-12-10 Thread Zack Rusin
On Thursday 10 December 2009 03:25:25 Jose Fonseca wrote:
> I agree you're free to choose whatever build system you'd like if it's
>  going to be in a separate repos.
> 
> I've used CMake many times and it is a very nice build system indeed. It's
>  also very convenient to use on windows (it can generate Visual Studio
>  files), and recently they made very easy to cross-compile linux->windows.
> 
> The problem I had when I tried to convert Mesa to CMake was that it didn't
>  support convenience libraries -- a static library which contains -fPIC
>  objects. All mesa/gallium auxiliary libraries are like that since the
>  final target is a shared object, and it there was no way to extend CMake
>  to do that at that time [1]. Has this changed recently, or is this not
>  relevant for a OpenCL statetracker?

A bit of both  :) 
The state tracker itself won't be producing convenience libraries, it's really 
just a single library so it should be fine either way.
And the solution that the CMake folks seem to be advocating (splitting sources 
into separate CMakeLists.txt files:
http://www.cmake.org/Wiki/CMake_FAQ#Does_CMake_support_.22convenience.22_libraries.3F )
is imho a reasonable one. 
And yea, I really like that CMake can produce native build files on all 
platforms (Makefiles on linux, Visual Studio on Windows, XCode build files on 
osx...) and that it has self-contained installers for all platforms, it's very 
neat.

z

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Gallium3d shader declarations

2009-12-09 Thread Zack Rusin
On Wednesday 09 December 2009 15:07:45 michal wrote:
> Keith Whitwell pisze:
> > On Wed, 2009-12-09 at 10:19 -0800, Keith Whitwell wrote:
> >> On Wed, 2009-12-09 at 07:18 -0800, Zack Rusin wrote:
> >>> On Wednesday 09 December 2009 10:05:13 michal wrote:
> >>>> Zack Rusin pisze:
> >>>>> On Wednesday 09 December 2009 08:55:09 michal wrote:
> >>>>>> Zack Rusin pisze:
> >>>>>>> On Wednesday 09 December 2009 08:44:20 Keith Whitwell wrote:
> >>>>>>>> On Wed, 2009-12-09 at 04:41 -0800, michal wrote:
> >>>>>>>>> Zack Rusin pisze:
> >>>>>>>>>> Hi,
> >>>>>>>>>>
> >>>>>>>>>> currently Gallium3d shaders predefine all their inputs/outputs.
> >>>>>>>>>> We've handled all inputs/outputs the same way. e.g.
> >>>>>>>>>> VERT
> >>>>>>>>>> DCL IN[0]
> >>>>>>>>>> DCL OUT[0], POSITION
> >>>>>>>>>> DCL OUT[1], COLOR
> >>>>>>>>>> DCL CONST[0..9]
> >>>>>>>>>> DCL TEMP[0..3]
> >>>>>>>>>> or
> >>>>>>>>>> FRAG
> >>>>>>>>>> DCL IN[0], COLOR, LINEAR
> >>>>>>>>>> DCL OUT[0], COLOR
> >>>>>>>>>>
> >>>>>>>>>> There are certain inputs/output which don't really follow the
> >>>>>>>>>> typical rules for inputs/outputs though and we've been imitating
> >>>>>>>>>> those with extra normal semantics (e.g. front face).
> >>>>>>>>>>
> >>>>>>>>>> It all falls apart a bit on anything with shader model 4.x and
> >>>>>>>>>> up. That's because in there we've got what Microsoft calls
> >>>>>>>>>> system-value semantics. (
> >>>>>>>>>> http://msdn.microsoft.com/en-us/library/ee418355(VS.85).aspx#Sys
> >>>>>>>>>>tem_ Va l ue ). They all represent system-generated
> >>>>>>>>>> inputs/outputs for shaders. And while so far we only really had
> >>>>>>>>>> to handle front-face since shader model 4.x we have to deal with
> >>>>>>>>>> lots of them (geometry shaders, domain shaders, computer
> >>>>>>>>>> shaders... they all have system generated inputs/outputs)
> >>>>>>>>>>
> >>>>>>>>>> I'm thinking of adding something similar to what D3D does to
> >>>>>>>>>> Gallium3d. So just adding a new DCL type, e.g. DCL_SV which
> >>>>>>>>>> takes the vector name and the system-value semantic as its
> >>>>>>>>>> inputs, so FRAG DCL IN[0], COLOR, LINEAR
> >>>>>>>>>> DCL IN[1], COLOR[1], LINEAR
> >>>>>>>>>> DCL IN[2], FACE, CONSTANT
> >>>>>>>>>> would become
> >>>>>>>>>> FRAG
> >>>>>>>>>> DCL IN[0], COLOR, LINEAR
> >>>>>>>>>> DCL IN[1], COLOR[1], LINEAR
> >>>>>>>>>> DCL_SV IN[2], FACE
> >>>>>>>>>>
> >>>>>>>>>> It likely could be done in a more generic fashion though.
> >>>>>>>>>> Opinions?
> >>>>>>>>>
> >>>>>>>>> Zack,
> >>>>>>>>>
> >>>>>>>>> What would be the difference between
> >>>>>>>>>
> >>>>>>>>> DCL IN[2], FACE, CONSTANT
> >>>>>>>>>
> >>>>>>>>> and
> >>>>>>>>>
> >>>>>>>>> DCL_SV IN[2], FACE
> >>>>>>>>>
> >>>>>>>>> then? Maybe the example is bad, but I don't see what DCL_SV would
> >>>>>>>>> give us the existing DCL doesn't.
> >>>>>>>>
> >>>>>>>> I'd have proposed something slightly different where the SV values
> >>>>&g

Re: [Mesa3d-dev] PATCH[0/1]: OpenCL: create and implement stub context methods

2009-12-09 Thread Zack Rusin
On Wednesday 09 December 2009 13:29:05 Igor Oliveira wrote:
> These patchs implements and implements stub context methods in OpenCL.
> Almost all operation in OpenCL use a context.
> The patch implements the gallium3d context and implements the methods
>  below:
> 
> -clCreateContext
> -clCreateContexFromType
> -clRetainContext
> -clReleaseContext
> 
> ps: probably i show break it in 2 patchs

Hi Igor,

the patch looks ok. Thanks.

we're just working on adding support for compute to Gallium so I'd probably 
wait a bit so that we can nail down the framework underneath before anything 
else. Also we have to decide the following issues before really doing 
anything:

1) should the opencl state tracker live in a repository of its own or should 
it live within the mesa3d repo like the other state trackers. The thing that 
makes the opencl state tracker a bit different is that it has to work on a raw 
cpu (which is really a subquestion of 1: should the opencl state tracker be 
able to work without gallium when running on top of a cpu). i didn't really 
feel like creating another branch of mesa and merging things in initially, 
which is why there is a separate repo right now.

2) should the opencl state tracker be using cmake or scons. originally i 
picked cmake because it generates actual makefiles and that's incredibly 
useful.

3) the language selection. it's using c++ because llvm is using c++ and 
because i dig c++, which is good enough for me but i guess it might cause 
militant schism.

i'd also like to rename it to "coal" to make it fit better with the mesa and 
gallium naming.

opinions?

z

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Gallium3d shader declarations

2009-12-09 Thread Zack Rusin
On Wednesday 09 December 2009 10:18:42 Zack Rusin wrote:
> I could do that but only if we agree it's in the name of love.
> 
> So is everyone ok with a new register SV for system generated values and
>  new declaration token called PROPERTY for shader specific properties (btw,
>  d3d calls those attributes, but since attributes already have a meaning in
>  glsl I think it'd probably wise to not try to redefine it).

Something like what's attached.
One of the fs properties might imply the other, so I'm not sure if we'll 
need both, but I just went with what d3d does here.
diff --git a/src/gallium/include/pipe/p_shader_tokens.h b/src/gallium/include/pipe/p_shader_tokens.h
index 588ca5e..e7f6dfc 100644
--- a/src/gallium/include/pipe/p_shader_tokens.h
+++ b/src/gallium/include/pipe/p_shader_tokens.h
@@ -55,6 +55,7 @@ struct tgsi_processor
 #define TGSI_TOKEN_TYPE_DECLARATION0
 #define TGSI_TOKEN_TYPE_IMMEDIATE  1
 #define TGSI_TOKEN_TYPE_INSTRUCTION2
+#define TGSI_TOKEN_TYPE_PROPERTY   3
 
 struct tgsi_token
 {
@@ -64,16 +65,17 @@ struct tgsi_token
 };
 
 enum tgsi_file_type {
-   TGSI_FILE_NULL=0,
-   TGSI_FILE_CONSTANT=1,
-   TGSI_FILE_INPUT   =2,
-   TGSI_FILE_OUTPUT  =3,
-   TGSI_FILE_TEMPORARY   =4,
-   TGSI_FILE_SAMPLER =5,
-   TGSI_FILE_ADDRESS =6,
-   TGSI_FILE_IMMEDIATE   =7,
-   TGSI_FILE_LOOP=8,
-   TGSI_FILE_PREDICATE   =9,
+   TGSI_FILE_NULL =0,
+   TGSI_FILE_CONSTANT =1,
+   TGSI_FILE_INPUT=2,
+   TGSI_FILE_OUTPUT   =3,
+   TGSI_FILE_TEMPORARY=4,
+   TGSI_FILE_SAMPLER  =5,
+   TGSI_FILE_ADDRESS  =6,
+   TGSI_FILE_IMMEDIATE=7,
+   TGSI_FILE_LOOP =8,
+   TGSI_FILE_PREDICATE=9,
+   TGSI_FILE_SYSTEM_VALUE =10,
TGSI_FILE_COUNT  /**< how many TGSI_FILE_ types */
 };
 
@@ -151,6 +153,24 @@ union tgsi_immediate_data
float Float;
 };
 
+#define TGSI_PROPERTY_FS_OUTPUT_DEPTH0
+#define TGSI_PROPERTY_FS_EARLY_DEPTH_STENCIL 1
+#define TGSI_PROPERTY_GS_INPUT_PRIM  2
+#define TGSI_PROPERTY_GS_OUTPUT_PRIM 3
+#define TGSI_PROPERTY_GS_MAX_VERTICES4
+#define TGSI_PROPERTY_GS_INSTANCES   5
+
+struct tgsi_property {
+   unsigned Type : 4;  /**< TGSI_TOKEN_TYPE_PROPERTY */
+   unsigned NrTokens : 8;  /**< UINT */
+   unsigned SemanticName : 8;  /**< one of TGSI_PROPERTY */
+   unsigned Padding : 12;
+};
+
+struct tgsi_property_data {
+   unsigned Data;
+};
+
 /* TGSI opcodes.  
  * 
  * For more information on semantics of opcodes and
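
For illustration only (not part of the attached patch): a minimal sketch of 
how a PROPERTY token pair could be written into a token stream, assuming the 
patch above is applied. The emit_gs_input_prim() helper is hypothetical.

#include <string.h>
#include "pipe/p_defines.h"
#include "pipe/p_shader_tokens.h"

/* Hypothetical helper: writes one tgsi_property header token followed by
 * one tgsi_property_data token, e.g. PROPERTY GS_INPUT_PRIM TRIANGLES. */
static unsigned
emit_gs_input_prim(struct tgsi_token *tokens, unsigned pos, unsigned prim)
{
   struct tgsi_property prop;
   struct tgsi_property_data data;

   memset(&prop, 0, sizeof(prop));
   prop.Type = TGSI_TOKEN_TYPE_PROPERTY;
   prop.NrTokens = 2;                        /* header + one data token */
   prop.SemanticName = TGSI_PROPERTY_GS_INPUT_PRIM;

   data.Data = prim;                         /* e.g. PIPE_PRIM_TRIANGLES */

   memcpy(&tokens[pos], &prop, sizeof(prop));
   memcpy(&tokens[pos + 1], &data, sizeof(data));
   return pos + 2;
}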
--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Gallium3d shader declarations

2009-12-09 Thread Zack Rusin
On Wednesday 09 December 2009 10:05:13 michal wrote:
> Zack Rusin pisze:
> > On Wednesday 09 December 2009 08:55:09 michal wrote:
> >> Zack Rusin pisze:
> >>> On Wednesday 09 December 2009 08:44:20 Keith Whitwell wrote:
> >>>> On Wed, 2009-12-09 at 04:41 -0800, michal wrote:
> >>>>> Zack Rusin pisze:
> >>>>>> Hi,
> >>>>>>
> >>>>>> currently Gallium3d shaders predefine all their inputs/outputs.
> >>>>>> We've handled all inputs/outputs the same way. e.g.
> >>>>>> VERT
> >>>>>> DCL IN[0]
> >>>>>> DCL OUT[0], POSITION
> >>>>>> DCL OUT[1], COLOR
> >>>>>> DCL CONST[0..9]
> >>>>>> DCL TEMP[0..3]
> >>>>>> or
> >>>>>> FRAG
> >>>>>> DCL IN[0], COLOR, LINEAR
> >>>>>> DCL OUT[0], COLOR
> >>>>>>
> >>>>>> There are certain inputs/output which don't really follow the
> >>>>>> typical rules for inputs/outputs though and we've been imitating
> >>>>>> those with extra normal semantics (e.g. front face).
> >>>>>>
> >>>>>> It all falls apart a bit on anything with shader model 4.x and up.
> >>>>>> That's because in there we've got what Microsoft calls system-value
> >>>>>> semantics. (
> >>>>>> http://msdn.microsoft.com/en-us/library/ee418355(VS.85).aspx#System_
> >>>>>>Va l ue ). They all represent system-generated inputs/outputs for
> >>>>>> shaders. And while so far we only really had to handle front-face
> >>>>>> since shader model 4.x we have to deal with lots of them (geometry
> >>>>>> shaders, domain shaders, computer shaders... they all have system
> >>>>>> generated inputs/outputs)
> >>>>>>
> >>>>>> I'm thinking of adding something similar to what D3D does to
> >>>>>> Gallium3d. So just adding a new DCL type, e.g. DCL_SV which takes
> >>>>>> the vector name and the system-value semantic as its inputs, so FRAG
> >>>>>> DCL IN[0], COLOR, LINEAR
> >>>>>> DCL IN[1], COLOR[1], LINEAR
> >>>>>> DCL IN[2], FACE, CONSTANT
> >>>>>> would become
> >>>>>> FRAG
> >>>>>> DCL IN[0], COLOR, LINEAR
> >>>>>> DCL IN[1], COLOR[1], LINEAR
> >>>>>> DCL_SV IN[2], FACE
> >>>>>>
> >>>>>> It likely could be done in a more generic fashion though. Opinions?
> >>>>>
> >>>>> Zack,
> >>>>>
> >>>>> What would be the difference between
> >>>>>
> >>>>> DCL IN[2], FACE, CONSTANT
> >>>>>
> >>>>> and
> >>>>>
> >>>>> DCL_SV IN[2], FACE
> >>>>>
> >>>>> then? Maybe the example is bad, but I don't see what DCL_SV would
> >>>>> give us the existing DCL doesn't.
> >>>>
> >>>> I'd have proposed something slightly different where the SV values
> >>>> don't land in the INPUT file but some new register file.
> >>>>
> >>>> The reason is that when we start looking at geometry shaders, the
> >>>> INPUT register file becomes two-dimensional, but these SV values
> >>>> remain single-dimensional.  That means that for current TGSI we'd have
> >>>> stuff like:
> >>>>
> >>>> DCL IN[0..3][0] POSITION
> >>>> DCL IN[0..3][1] COLOR
> >>>> DCL IN[2] SOME_SYSTEM_VALUE
> >>>>
> >>>> Which is pretty nasty - half of the input file is one dimensional,
> >>>> half two-dimensional, and you need to look at the index of the first
> >>>> dimension to figure out whether the input reg is legal or not.
> >>>>
> >>>> So, I'm think some new register file to handle these system-generated
> >>>> values is one possiblility, as in:
> >>>>
> >>>> DCL SV[0], FACE
> >>>>
> >>>> or
> >>>>
> >>>> DCL SV[1],  PRIMITIVE_ID
> >>>>
> >>>> Thoughts?
> >>>
> >>> Yea, I like that.
> >>>
> >>> And then separate syntax to handle the properties or overloading DCL?
> >>> i.e. DCL GS_INFO  PRIM_IN TRIANGLES
> >>> vs
> >>> PROPERTY GS_INFO PRIM_IN TRIANGLES
> >>> ?
> >>
> >> I think a geometry shader should have its own GS_INFO token that would
> >> convey the information it needs, i.e. no overloading of the DCL token.
> >>
> >> GS_INFO PRIM_IN TRIANGLES
> >> GS_INFO PRIM_OUT TRIANGLE_STRIP
> >> GS_INFO MAX_VERTEX_COUNT 3 /* vertices_out for gl */
> >
> > We'll be adding more of those then. Basically we'll need an extra token
> > for every shader we have.
> >
> > COMPUTE_INFO WORK_GROUP_SIZE 4 4 4 /*x, y, z*/
> > DS_INFO DOMAIN 3 /*domain shader*/
> > HS_INFO MAXTESSFACTOR 3 /*hull shader*/
> > FS_INFO EARLYDEPTSTENCIL 1
> > etc.
> >
> > To me it looks uglier than a special decleration token that could handle
> > all of them.
> 
> Can you propose a patch against p_shader_tokens.h that introduces a
> PROPERTY token?

I could do that but only if we agree it's in the name of love.

So is everyone ok with a new register SV for system generated values and new 
declaration token called PROPERTY for shader specific properties (btw, d3d 
calls those attributes, but since attributes already have a meaning in glsl I 
think it'd probably wise to not try to redefine it).

z

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Gallium3d shader declarations

2009-12-09 Thread Zack Rusin
On Wednesday 09 December 2009 08:55:09 michal wrote:
> Zack Rusin pisze:
> > On Wednesday 09 December 2009 08:44:20 Keith Whitwell wrote:
> >> On Wed, 2009-12-09 at 04:41 -0800, michal wrote:
> >>> Zack Rusin pisze:
> >>>> Hi,
> >>>>
> >>>> currently Gallium3d shaders predefine all their inputs/outputs. We've
> >>>> handled all inputs/outputs the same way. e.g.
> >>>> VERT
> >>>> DCL IN[0]
> >>>> DCL OUT[0], POSITION
> >>>> DCL OUT[1], COLOR
> >>>> DCL CONST[0..9]
> >>>> DCL TEMP[0..3]
> >>>> or
> >>>> FRAG
> >>>> DCL IN[0], COLOR, LINEAR
> >>>> DCL OUT[0], COLOR
> >>>>
> >>>> There are certain inputs/output which don't really follow the typical
> >>>> rules for inputs/outputs though and we've been imitating those with
> >>>> extra normal semantics (e.g. front face).
> >>>>
> >>>> It all falls apart a bit on anything with shader model 4.x and up.
> >>>> That's because in there we've got what Microsoft calls system-value
> >>>> semantics. (
> >>>> http://msdn.microsoft.com/en-us/library/ee418355(VS.85).aspx#System_Va
> >>>>l ue ). They all represent system-generated inputs/outputs for shaders.
> >>>> And while so far we only really had to handle front-face since shader
> >>>> model 4.x we have to deal with lots of them (geometry shaders, domain
> >>>> shaders, computer shaders... they all have system generated
> >>>> inputs/outputs)
> >>>>
> >>>> I'm thinking of adding something similar to what D3D does to
> >>>> Gallium3d. So just adding a new DCL type, e.g. DCL_SV which takes the
> >>>> vector name and the system-value semantic as its inputs, so
> >>>> FRAG
> >>>> DCL IN[0], COLOR, LINEAR
> >>>> DCL IN[1], COLOR[1], LINEAR
> >>>> DCL IN[2], FACE, CONSTANT
> >>>> would become
> >>>> FRAG
> >>>> DCL IN[0], COLOR, LINEAR
> >>>> DCL IN[1], COLOR[1], LINEAR
> >>>> DCL_SV IN[2], FACE
> >>>>
> >>>> It likely could be done in a more generic fashion though. Opinions?
> >>>
> >>> Zack,
> >>>
> >>> What would be the difference between
> >>>
> >>> DCL IN[2], FACE, CONSTANT
> >>>
> >>> and
> >>>
> >>> DCL_SV IN[2], FACE
> >>>
> >>> then? Maybe the example is bad, but I don't see what DCL_SV would give
> >>> us the existing DCL doesn't.
> >>
> >> I'd have proposed something slightly different where the SV values don't
> >> land in the INPUT file but some new register file.
> >>
> >> The reason is that when we start looking at geometry shaders, the INPUT
> >> register file becomes two-dimensional, but these SV values remain
> >> single-dimensional.  That means that for current TGSI we'd have stuff
> >> like:
> >>
> >> DCL IN[0..3][0] POSITION
> >> DCL IN[0..3][1] COLOR
> >> DCL IN[2] SOME_SYSTEM_VALUE
> >>
> >> Which is pretty nasty - half of the input file is one dimensional, half
> >> two-dimensional, and you need to look at the index of the first
> >> dimension to figure out whether the input reg is legal or not.
> >>
> >> So, I'm think some new register file to handle these system-generated
> >> values is one possiblility, as in:
> >>
> >> DCL SV[0], FACE
> >>
> >> or
> >>
> >> DCL SV[1],  PRIMITIVE_ID
> >>
> >> Thoughts?
> >
> > Yea, I like that.
> >
> > And then separate syntax to handle the properties or overloading DCL?
> > i.e. DCL GS_INFO  PRIM_IN TRIANGLES
> > vs
> > PROPERTY GS_INFO PRIM_IN TRIANGLES
> > ?
> 
> I think a geometry shader should have its own GS_INFO token that would
> convey the information it needs, i.e. no overloading of the DCL token.
> 
> GS_INFO PRIM_IN TRIANGLES
> GS_INFO PRIM_OUT TRIANGLE_STRIP
> GS_INFO MAX_VERTEX_COUNT 3 /* vertices_out for gl */

We'll be adding more of those then. Basically we'll need an extra token for 
every shader we have.

COMPUTE_INFO WORK_GROUP_SIZE 4 4 4 /*x, y, z*/
DS_INFO DOMAIN 3 /*domain shader*/
HS_INFO MAXTESSFACTOR 3 /*hull shader*/
FS_INFO EARLYDEPTSTENCIL 1
etc.

To me it looks uglier than a special declaration token that could handle all 
of them.

--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Gallium3d shader declarations

2009-12-09 Thread Zack Rusin
On Wednesday 09 December 2009 07:41:11 michal wrote:
> Zack Rusin pisze:
> > Hi,
> >
> > currently Gallium3d shaders predefine all their inputs/outputs. We've
> > handled all inputs/outputs the same way. e.g.
> > VERT
> > DCL IN[0]
> > DCL OUT[0], POSITION
> > DCL OUT[1], COLOR
> > DCL CONST[0..9]
> > DCL TEMP[0..3]
> > or
> > FRAG
> > DCL IN[0], COLOR, LINEAR
> > DCL OUT[0], COLOR
> >
> > There are certain inputs/output which don't really follow the typical
> > rules for inputs/outputs though and we've been imitating those with extra
> > normal semantics (e.g. front face).
> >
> > It all falls apart a bit on anything with shader model 4.x and up. That's
> > because in there we've got what Microsoft calls system-value semantics.
> > (
> > http://msdn.microsoft.com/en-us/library/ee418355(VS.85).aspx#System_Value
> > ). They all represent system-generated inputs/outputs for shaders. And
> > while so far we only really had to handle front-face since shader model
> > 4.x we have to deal with lots of them (geometry shaders, domain shaders,
> > computer shaders... they all have system generated inputs/outputs)
> >
> > I'm thinking of adding something similar to what D3D does to Gallium3d.
> > So just adding a new DCL type, e.g. DCL_SV which takes the vector name
> > and the system-value semantic as its inputs, so
> > FRAG
> > DCL IN[0], COLOR, LINEAR
> > DCL IN[1], COLOR[1], LINEAR
> > DCL IN[2], FACE, CONSTANT
> > would become
> > FRAG
> > DCL IN[0], COLOR, LINEAR
> > DCL IN[1], COLOR[1], LINEAR
> > DCL_SV IN[2], FACE
> >
> > It likely could be done in a more generic fashion though. Opinions?
> 
> Zack,
> 
> What would be the difference between
> 
> DCL IN[2], FACE, CONSTANT
> 
> and
> 
> DCL_SV IN[2], FACE
> 
> then? Maybe the example is bad, but I don't see what DCL_SV would give
> us the existing DCL doesn't.

It's a lot like the argument of what's the point of having separate IN, OUT, 
TEMP and CONST registers when everything could be simply using the same kind 
and then be translated in the drivers to whatever they need - it splits up 
semantically different things into physically different code-paths. It makes it 
explicit that system-generated inputs are different than normal inputs (which 
they are), it gives them distinct semantics (which they have) and makes the 
code easier to read and understand.

Of course the system generated parts are just a part of it. We'll need an 
extra DCL anyway for shader properties, i.e. properties which define certain 
aspects of the shader itself and the way it's executed. For example, for 
geometry shaders that would be the input primitive they are working on, the 
maximal number of vertices they can output and the kind of primitive they 
output; for compute shaders that would be the number of threads within the 
work-group they are executing in. AMD's compute intermediate language recognizes 
those different declarations, and so does D3D ( http://msdn.microsoft.com/en-
us/library/ee418366(VS.85).aspx ). For us, we can either overload DCL or have 
completely separate DCLs for them, e.g. 
GEOM
DCL GS_INFO PRIM_IN TRIANGLES
DCL GS_INFO PRIM_OUT TRIANGLE_STRIP
DCL GS_INFO MAX_VERTEX_COUNT 3 /* vertices_out for gl */

DCL SV IN[0].x, PRIMITIVE_ID
DCL SV IN[0].y, INSTANCE_ID
DCL SV IN[0].z, GSINSTANCE_ID

DCL IN[1][0..3], POSITION
DCL IN[2][0..3], COLOR

or

DCL_GS_INFO PRIM_IN TRIANGLES
DCL_GS_INFO PRIM_OUT TRIANGLE_STRIP
DCL_GS_INFO MAX_VERTEX_COUNT 3 /* vertices_out for gl */

DCL_SV IN[0].x, PRIMITIVE_ID
DCL_SV IN[0].y, INSTANCE_ID
DCL_SV IN[0].z, GSINSTANCE_ID

DCL IN[1][0..3], POSITION
DCL IN[2][0..3], COLOR

Personally I don't have a strong preference (the first looks cleaner to me), but 
the bottom line either way is that the first three can't currently be represented 
by our DCL at all, and the latter three will imho be ugly and hacky without an 
explicit keyword (SV in the example) signifying that they are in fact special 
inputs.
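
To make the driver-side benefit a bit more tangible, here is a rough C sketch 
of the kind of split this buys us. All names here are hypothetical 
(FILE_SYSTEM_VALUE and friends are made up for illustration, not existing 
Gallium/TGSI identifiers):

enum reg_file { FILE_INPUT, FILE_SYSTEM_VALUE };
enum sv_semantic { SV_FACE, SV_PRIMITIVE_ID, SV_INSTANCE_ID };

struct decl {
   enum reg_file file;
   unsigned index;
   enum sv_semantic semantic;   /* only meaningful for FILE_SYSTEM_VALUE */
};

/* Hypothetical sketch: with system values in their own file, the driver's
 * declaration handling naturally falls into two distinct paths. */
static void handle_decl(const struct decl *d)
{
   switch (d->file) {
   case FILE_INPUT:
      /* ordinary input: goes through the normal (interpolated/fetched,
       * possibly two-dimensional for geometry shaders) input path */
      break;
   case FILE_SYSTEM_VALUE:
      /* system-generated value: wired up to whatever the hardware
       * provides, never touches the regular input fetch path */
      break;
   }
}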

z



Re: [Mesa3d-dev] Gallium3d shader declarations

2009-12-09 Thread Zack Rusin
On Wednesday 09 December 2009 08:44:20 Keith Whitwell wrote:
> On Wed, 2009-12-09 at 04:41 -0800, michal wrote:
> > Zack Rusin pisze:
> > > Hi,
> > >
> > > currently Gallium3d shaders predefine all their inputs/outputs. We've
> > > handled all inputs/outputs the same way. e.g.
> > > VERT
> > > DCL IN[0]
> > > DCL OUT[0], POSITION
> > > DCL OUT[1], COLOR
> > > DCL CONST[0..9]
> > > DCL TEMP[0..3]
> > > or
> > > FRAG
> > > DCL IN[0], COLOR, LINEAR
> > > DCL OUT[0], COLOR
> > >
> > > There are certain inputs/outputs which don't really follow the typical
> > > rules for inputs/outputs though and we've been imitating those with
> > > extra normal semantics (e.g. front face).
> > >
> > > It all falls apart a bit on anything with shader model 4.x and up.
> > > That's because in there we've got what Microsoft calls system-value
> > > semantics. (
> > > http://msdn.microsoft.com/en-us/library/ee418355(VS.85).aspx#System_Val
> > >ue ). They all represent system-generated inputs/outputs for shaders.
> > > And while so far we only really had to handle front-face, since shader
> > > model 4.x we have to deal with lots of them (geometry shaders, domain
> > > shaders, compute shaders... they all have system-generated
> > > inputs/outputs).
> > >
> > > I'm thinking of adding something similar to what D3D does to Gallium3d.
> > > So just adding a new DCL type, e.g. DCL_SV which takes the vector name
> > > and the system-value semantic as its inputs, so
> > > FRAG
> > > DCL IN[0], COLOR, LINEAR
> > > DCL IN[1], COLOR[1], LINEAR
> > > DCL IN[2], FACE, CONSTANT
> > > would become
> > > FRAG
> > > DCL IN[0], COLOR, LINEAR
> > > DCL IN[1], COLOR[1], LINEAR
> > > DCL_SV IN[2], FACE
> > >
> > > It likely could be done in a more generic fashion though. Opinions?
> >
> > Zack,
> >
> > What would be the difference between
> >
> > DCL IN[2], FACE, CONSTANT
> >
> > and
> >
> > DCL_SV IN[2], FACE
> >
> > then? Maybe the example is bad, but I don't see what DCL_SV would give
> > us that the existing DCL doesn't.
> 
> I'd have proposed something slightly different where the SV values don't
> land in the INPUT file but some new register file.
> 
> The reason is that when we start looking at geometry shaders, the INPUT
> register file becomes two-dimensional, but these SV values remain
> single-dimensional.  That means that for current TGSI we'd have stuff
> like:
> 
> DCL IN[0..3][0] POSITION
> DCL IN[0..3][1] COLOR
> DCL IN[2] SOME_SYSTEM_VALUE
> 
> Which is pretty nasty - half of the input file is one dimensional, half
> two-dimensional, and you need to look at the index of the first
> dimension to figure out whether the input reg is legal or not.
> 
> So, I think some new register file to handle these system-generated
> values is one possibility, as in:
> 
> DCL SV[0], FACE
> 
> or
> 
> DCL SV[1],  PRIMITIVE_ID
> 
> Thoughts?

Yea, I like that.

And then separate syntax to handle the properties or overloading DCL? i.e.
DCL GS_INFO  PRIM_IN TRIANGLES
vs
PROPERTY GS_INFO PRIM_IN TRIANGLES
?



[Mesa3d-dev] Gallium3d shader declarations

2009-12-09 Thread Zack Rusin
Hi,

currently Gallium3d shaders predefine all their inputs/outputs. We've handled 
all inputs/outputs the same way. e.g.
VERT
DCL IN[0]
DCL OUT[0], POSITION
DCL OUT[1], COLOR   
DCL CONST[0..9] 
DCL TEMP[0..3] 
or 
FRAG
DCL IN[0], COLOR, LINEAR   
DCL OUT[0], COLOR

There are certain inputs/outputs which don't really follow the typical rules 
for inputs/outputs though and we've been imitating those with extra normal 
semantics (e.g. front face).

It all falls apart a bit on anything with shader model 4.x and up. That's 
because in there we've got what Microsoft calls system-value semantics.
( http://msdn.microsoft.com/en-us/library/ee418355(VS.85).aspx#System_Value ). 
They all represent system-generated inputs/outputs for shaders. And while so 
far we only really had to handle front-face, since shader model 4.x we have to 
deal with lots of them (geometry shaders, domain shaders, compute shaders... 
they all have system-generated inputs/outputs).

I'm thinking of adding something similar to what D3D does to Gallium3d. So 
just adding a new DCL type, e.g. DCL_SV which takes the vector name and the 
system-value semantic as its inputs, so
FRAG
DCL IN[0], COLOR, LINEAR
DCL IN[1], COLOR[1], LINEAR 
DCL IN[2], FACE, CONSTANT 
would become
FRAG
DCL IN[0], COLOR, LINEAR
DCL IN[1], COLOR[1], LINEAR 
DCL_SV IN[2], FACE

It likely could be done in a more generic fashion though. Opinions?

z



Re: [Mesa3d-dev] Mesa 7.6.1 release candidate 1

2009-11-19 Thread Zack Rusin
On Thursday 19 November 2009 16:11:28 Ian Romanick wrote: 
> 2. The Intel team is only just switching from "feature development" mode
> to "bug fixing" mode.  There is a giant pile of bugs that exist in both
> 7.6 and master.  At least some of those will get fixed over the next few
> weeks.  If we rush 7.6.1 out now, it will miss bug fixes that will be in
> 7.7.  This leaves a dilemma: take the stable point release that is
> missing some bug fixes or take the new release that has the bug fixes
> but also may introduce new, unknown bugs.

Or just release 7.6.2 then.



Re: [Mesa3d-dev] gallium rtasm

2009-11-15 Thread Zack Rusin
On Sunday 15 November 2009 22:03:27 Igor Trindade Oliveira wrote:
> Hi Zack,
> 
> Would be great.
> The other thing I was checking is the EGL state tracker and softpipe on
> embedded systems. Normally EGL is used in embedded environments, but Mesa
> uses a lot of floating point and SSE operations. Couldn't it (llvmpipe and
> softpipe) be optimized to use more ARM optimizations? Or am I
> misunderstanding?

Hi Igor,

I'm cc'ing José since he's leading the llvmpipe efforts and I don't want to 
speak on his behalf. I'm also cc'ing mesa3d-dev since the question really 
belongs there.

To answer your question though - yes certainly, usage of fixed-point should be 
more predominant in llvmpipe. For your purposes I'd likely advise you to worry 
about llvmpipe rather than softpipe. Also note that a lot of ARM optimizations 
will come through the LLVM framework itself in this case. Ideally I'd suggest 
that you grab one of the ARM devices that you care about at Nokia, give 
llvmpipe a spin just to see what it's doing and how it works.

I think José has a pretty concrete plan in terms of general optimizations that 
need to happen in llvmpipe and I'm sure he'd love to get some help in making 
sure it flies on embedded systems :) I think it's going to be especially 
important once we get GL ES code in master so it'd be good to know if we're on 
the right track.

z



Re: [Mesa3d-dev] GLSL compiler performance

2009-11-13 Thread Zack Rusin
On Friday 13 November 2009 06:42:47 Sedat Dilek wrote:
> Alternatively use [1] (this commit is not it the glsl-pp-rework-2 GIT
>  tree).

This really doesn't matter at all for this work; in fact I'd say it's 
counter-productive to be bothering with this. The glsl-pp-rework-2 branch is 
meant for GLSL compiler work, not for backporting unrelated fixes from head, 
especially since that's exactly the effect that merging it in will have.

z 



Re: [Mesa3d-dev] [PATCH] Make DRI/DRM state tracker interface less context-dependent

2009-10-27 Thread Zack Rusin
On Saturday 24 October 2009 19:02:51 Younes Manton wrote:
> Hi Thomas,
> 
> I'd like to make these changes to make it possible to use the DRM
> state tracker interface with contexts other than pipe_context. Nouveau
> is the only public driver that does DRI1 from what I can see, but I've
> been told that there are some closed drivers that also depend on these
> interfaces and that you look after them.
> 
> Added drm_api::create_video_context() to get a pipe_video_context, you
> can ignore it. Changes to the st_* lock functions are trivial and
> obvious, Nouveau doesn't use them but presumably anyone who does can
> return pipe_context->priv just as easily. Changes to
> dri1_api::front_srf_locked() are too, hopefully your front buffer can
> be accessed via pipe_screen as well. I didn't change
> dri1_api::present_locked() because we don't do page flipping yet, but
> I think that could be done with just a screen as well in Nouveau.
> Thoughts?

Younes,

Sorry for the slow response; we've been a bit busy lately. 
I can't say that I'm a huge fan of this patch. I've been thinking about it 
lately and essentially I'd like to reiterate what I said initially about the 
pipe_video_context. While I think it's a great idea, I'm not too excited about 
the way it looks right now. It boils down to three problems for me:

1) The interface itself. In particular:

void (*clear_surface)(struct pipe_video_context *vpipe,
                      unsigned x, unsigned y,
                      unsigned width, unsigned height,
                      unsigned value,
                      struct pipe_surface *surface);
is problematic for two reasons: a) because it's something that looks more like 
a pipe_context method in itself, and b) because it's a combination of the 
surface_fill and clear interfaces, which we moved away from in master.

void (*render_picture)(struct pipe_video_context     *vpipe,
                       /*struct pipe_surface         *backround,
                       struct pipe_video_rect        *backround_area,*/
                       struct pipe_video_surface     *src_surface,
                       enum pipe_mpeg12_picture_type  picture_type,
                       /*unsigned                     num_past_surfaces,
                       struct pipe_video_surface     *past_surfaces,
                       unsigned                       num_future_surfaces,
                       struct pipe_video_surface     *future_surfaces,*/
                       struct pipe_video_rect        *src_area,
                       struct pipe_surface           *dst_surface,
                       struct pipe_video_rect        *dst_area,
                       /*unsigned                     num_layers,
                       struct pipe_texture           *layers,
                       struct pipe_video_rect        *layer_src_areas,
                       struct pipe_video_rect        *layer_dst_areas,*/
                       struct pipe_fence_handle     **fence);

has simply way too many arguments, not to mention that over half of them are 
commented out. It's really a common problem for the entire interface: the 
methods are very complex. Gallium deals with this with settable state. Which 
brings us to another problem with the interface:

struct pipe_video_context
{
   struct pipe_screen *screen;
   enum pipe_video_profile profile;
   enum pipe_video_chroma_format chroma_format;
   unsigned width;
   unsigned height; 
... methods follow...

which means that the interface is both an interface and a state. 
All of it is very un-Gallium-like.

2) We really need a real-world video API implemented with the interface before 
making it public. It should be proven that OpenMAX or VDPAU can actually be 
implemented on top of the interface first.

3) There's no hardware implementation of the interface and as far as I can 
see there are no plans for one. That's really what these interfaces are for; 
until we have people actually working on a hardware implementation there's no 
reason for this to be a Gallium interface at all.

That's why I think the whole code should be an auxiliary module, in which case 
patches like this one wouldn't even be necessary.

When it comes to interfaces it's a lot harder to remove or significantly change 
an interface than to add a new one, so we should be extremely careful when 
adding them and at least try to make sure that #1, #2 and #3 are all 
fixable.

Also it's worth noting that Keith is the maintainer for Gallium interfaces, so 
pipe_video_context and, in general, all changes to Gallium should always go 
through him.

z


Re: [Mesa3d-dev] xorg patent issues.

2009-10-01 Thread Zack Rusin
On Thursday 01 October 2009 14:13:15 Greg KH wrote:
> Hi,
> 
> As discussed at XDC2009 in your talk about patents and xorg, I talked
> with the Linux Foundation's Technical Advisory board about the issues
> your raised in your talk.
> 
> I have a contact at the LF and OIN to put you in contact with, but
> wanted to ask if there was anyone else that wanted in on the email
> thread and possible phone call (lawyers love phone calls, not email...)
> 
> Anyone else want in on the fun?

I don't know what patent issues you were discussing but I'm guessing it's 
mainly about the floating-point plus S3TC stuff. Just so that you know, we have 
been trying to handle this in some way for a long time and a few lawyers were 
already involved in the process. Given that the first one, especially, is 
explicit in its prohibition of open source, short of a company buying it 
I'm not sure what the Linux Foundation wants to do. Either way though, Brian is 
the person to talk to about options here because he's been endlessly trying to 
come up with ways of dealing with those problems.

z



Re: [Mesa3d-dev] g3dvl and xvmc indention and location

2009-09-27 Thread Zack Rusin
On Sunday 27 September 2009 22:30:00 Younes Manton wrote:
> Ok I just pushed this all out and moved xvmc under xorg.
> state_trackers/g3dvl is now completely useless and I'll delete it
> soon.
> 
> pipe_video_context and vl_compositor still need some love for Xv and
> more love for VDPAU.

Nice work. I'll try to take a look at it when I have a chance.

In the meantime the pipe_video_context shouldn't be among the main Gallium 
includes. 
It probably should be called vl_context or something along those lines and 
live in the vl directory with the rest of the code. I think that once it's 
been reviewed and shown to be capable of working with VDPAU and whatever other 
video APIs we decide to accelerate, then and only then can we discuss 
adding it as a new Gallium interface. For now there's no benefit to it being 
in the Gallium includes and it just expands the main interface with code that 
really hasn't been too well tested or even reviewed.

z



Re: [Mesa3d-dev] Mesa (master): st/xorg: Fix two leeks

2009-09-22 Thread Zack Rusin
On Tuesday 22 September 2009 14:10:34 Jakob Bornecrantz wrote:
> diff --git a/src/gallium/state_trackers/xorg/xorg_composite.c
>  b/src/gallium/state_trackers/xorg/xorg_composite.c index 66ca4cb..ed649a9
>  100644
> --- a/src/gallium/state_trackers/xorg/xorg_composite.c
> +++ b/src/gallium/state_trackers/xorg/xorg_composite.c
> @@ -359,6 +359,9 @@ bind_framebuffer_state(struct exa_context *exa, struct
>  exa_pixmap_priv *pDst) state.zsbuf = 0;
>  
> cso_set_framebuffer(exa->cso, &state);
> +
> +   /* we do fire and forget for the framebuffer, this is the forget part
>  */ +   pipe_surface_reference(&surface, NULL);
>  }
> 

This doesn't follow what we do in the gallium state trackers. It should be done 
in xorg_exa_common_done, called from ExaDone - in other words, when we're really 
done with the surface.

z



Re: [Mesa3d-dev] g3dvl and xvmc indention and location

2009-09-17 Thread Zack Rusin
On Thursday 17 September 2009 13:21:45 Younes Manton wrote:
> On Thu, Sep 17, 2009 at 12:43 PM, Zack Rusin  wrote:
> > Hey,
> >
> > I'm going to start adding XvMC acceleration to the Gallium's xorg state
> > tracker.
> > I'd like to move src/xvmc to src/gallium/state_trackers/xorg/xvmc and run
> > the standard Mesa3D indent command on both g3dvl and xvmc.
> >
> > I'm by no means saying that the style in them is bad, or that any style
> > is better than any other. What I am saying is that when working within a
> > project that has a well defined indention (docs/devinfo.html) it's
> > anti-social and difficult for everyone involved when parts of the code
> > are using something completely different. It's unrealistic to expect
> > people to keep switching indention that drastically when moving between
> > files within a project. So all I'm asking for is some consistency which I
> > think is going to make everything a lot easier for all of us.
> >
> > z
> 
> Be my guest.

Great, thanks Younes.

 
> But aside from that I'm still messing with the pipe_video_context
> stuff I proposed a while back, which carves up state_trackers/g3dvl
> pretty good and which might make what you want to do a little easier.
> At the moment it compiles and "runs" (no output to window, everything
> safely stubbed, slowly porting over what used to be the state tracker
> into aux libs), if you have any interest. Maybe I can put it somewhere
> so people can comment on the interface and auxiliaries if nothing
> else.

Sounds like a good idea. 
We're on a schedule for XvMC in the xorg state tracker so we might have to get 
that just working first. But there's substantial interest in an OpenMAX state 
tracker, plus support for things like VDPAU, and it would be great if we could 
do it all properly.

z



[Mesa3d-dev] g3dvl and xvmc indention and location

2009-09-17 Thread Zack Rusin
Hey,

I'm going to start adding XvMC acceleration to the Gallium's xorg state 
tracker.
I'd like to move src/xvmc to src/gallium/state_trackers/xorg/xvmc and run the 
standard Mesa3D indent command on both g3dvl and xvmc.

I'm by no means saying that the style in them is bad, or that any style is 
better than any other. What I am saying is that when working within a project 
that has a well defined indention (docs/devinfo.html) it's anti-social and 
difficult for everyone involved when parts of the code are using something 
completely different. It's unrealistic to expect people to keep switching 
indention that drastically when moving between files within a project. So all 
I'm asking for is some consistency which I think is going to make everything a 
lot easier for all of us.

z



Re: [Mesa3d-dev] geometry shaders

2009-09-14 Thread Zack Rusin
On Monday 14 September 2009 14:04:54 Brian Paul wrote:
> > in general though none of those should be too difficult to fix, but it'd
> > be a  good idea to figure out if we want geometry shaders in 7.7 or 7.8.
> > as always with large patches that touch a lot of the code, keeping up to
> > date with master is becoming a bit of an issue which is why i'd like to
> > have the merge plan nailed down sooner rather than later.
> 
> I have no issues with merging this to master whenever you want. 
> Sounds like regressions are unlikely, but I'm sure we can fix any that 
> might happen

Sounds great, Brian. In that case I'll first clean up the debugging code and 
implement switching of primitives in the draw module, and then merge 
arb_geometry_shader4 into master. I won't have much time before Friday, so I'd 
say it's probably going to happen on Sunday.

z



[Mesa3d-dev] geometry shaders

2009-09-13 Thread Zack Rusin
hey,

I've been playing around with geometry shaders in my spare time and finally got 
them to a basically working state. "Basically working" implies a high 
likelihood of many bugs, but bugs that I'd like to get fixed for 7.7 or at the 
very latest 7.8. So I'd like to merge arb_geometry_shaders4 into master as 
soon as possible.

It's quite a huge diff, weighing in at many thousands of lines of code (3+ but 
that includes the auto-generated API code) because it affects many parts of 
Mesa. Fortunately the no-geometry-shader paths should stay exactly the same, 
so their execution should be completely unchanged after the patch 
lands. The only driver which implements geometry shading right now is the 
Gallium software driver; all other drivers work exactly as they have been 
working.

As mentioned, geometry shading software support is only implemented in Gallium 
(in the draw module, alongside the vertex shading). While the old Mesa paths 
obviously have all the infrastructure work done to be able to implement 
geometry shading, personally I just won't bother with it.

Having said that, there are a few bugs that I know about, some of them harder 
than others, in particular:
- support for adjoining primitives is currently busted
- sampling in the geometry shader probably doesn't work (to be honest I 
haven't yet tested it, but I doubt that it's working since the textures come 
in a multi-dimensional array and we don't handle that right yet)
- we don't switch primitives around in the draw module (that's pretty trivial 
to fix; it means that if the input to the geometry shader is gl_points but the 
output is gl_triangle_strip, we don't actually do that conversion)
- the linking code in st_program.c and st_atom_shader.c is very fragile; 
that's not necessarily a bug, but that code is getting way too complex
- cleanups: I have some debugging output/code left in there.

In general though none of those should be too difficult to fix, but it'd be a 
good idea to figure out whether we want geometry shaders in 7.7 or 7.8. 
As always with large patches that touch a lot of the code, keeping up to date 
with master is becoming a bit of an issue, which is why I'd like to have the 
merge plan nailed down sooner rather than later.

z



Re: [Mesa3d-dev] Mesa 7.6 branch coming

2009-09-04 Thread Zack Rusin
On Friday 04 September 2009 05:08:48 Keith Whitwell wrote:
> On Thu, 2009-09-03 at 15:13 -0700, Michel Dänzer wrote:
> > On Thu, 2009-09-03 at 14:43 -0700, Ian Romanick wrote:
> > > 2. It becomes increasing difficult to merge (as opposed to cherry-pick)
> > > from one branch to the other as the branches diverge.  Michel has run
> > > into this.
> >
> > At least in the case I assume you're referring to, it would have been
> > more or less the same problem with cherry-picking, as the indentation of
> > the GLX code is different between branches. I think the lesson there is
> > to resist the temptation of whitespace-only changes, no matter how much
> > 'better' the result may appear.
>
> There are a couple of points in favour of the periodic merge approach,
> firstly that if people really care about producing a high-quality,
> release-quality, QA'd Mesa-7.6, then you'll find yourself:
>   - testing the 7.6 branch
>   - spotting bugs on the 7.6 branch
>   - developing and committing fixes on the 7.6 branch,
>   - repeat for the entire supported life of the driver.
>
> >From that point on, it is natural to want to be 100% sure that bug-fixes
>
> have been preserved, and periodically merging the 7.6 branch to master
> guarantees that none of those bugfixes will be lost/forgotten.
>
> This doesn't prevent people who want to work a different way from
> developing bugfixes on master and cherry-picking them to stable, but
> it's not a natural workflow for the case where a bug is spotted on
> stable and needs to be fixed on stable.
>
> It seems the objections so far to this practice are that:
> 1) you get repeat mail-outs of merged fixes
> 2) it's different to some other projects workflow
> 3) merging becomes more difficult over time.
>
> Firstly, I'll just say that (3) hasn't been true in my experience.  I'm
> sure branches that are allowed to grow infinitely far apart will be hard
> to merge, but the repeated merge from 7.5->master has had the effect of
> making each individual merge very easy.  In fact, it seems easier than
> cherry-picking the same number of commits, as git is able to use more
> information to resolve conflicts in the merge path.
>
> I would argue that (1) could be fixed with a smarter git commit script,
> and that 

Our git commit script already makes the thingy from the Terminator movies look 
like a drunk goat; making it any smarter runs a serious risk of having it 
start checking out porn. Having said that, the logic in there is as follows:
if (a push has more than 100 commits)
    send one email for all
else
    send individual emails for all
I could try to add a special path for merges if that's desired, or lower the 
number of commits.



Re: [Mesa3d-dev] OpenVG Bug: Either vgRotate or VG_LINE_TO_REL

2009-08-24 Thread Zack Rusin
On Monday 24 August 2009 11:31:29 Zack Rusin wrote:
> On Thursday 20 August 2009 15:13:01 Nicholas Lowell wrote:
> > When attempting to draw a box with the attached code, this OpenVG
> > implementation does not produce expected results as the Khronos
> > implementation does.
> >
> > Suspected area of issue is either improper translation of (ox, oy) during
> > vgRotate or VG_LINE_TO_REL command being treated as VG_LINE_TO_ABS.
> >
> > First image is the result from this implementation:
> > http://www.nabble.com/file/p25068115/box_mesa.jpeg
> >
> > Second image is the result from khronos implementation:
> > http://www.nabble.com/file/p25068115/box_khronos.jpeg
>
> 
>
> > http://www.nabble.com/file/p25068115/draw_star.c draw_star.c
>
> That's a very nice test. Yea, that definitely looks busted. I'll look at it
> later today. Thanks!

k, that should be fixed now.





Re: [Mesa3d-dev] OpenVG Bug: Either vgRotate or VG_LINE_TO_REL

2009-08-24 Thread Zack Rusin
On Thursday 20 August 2009 15:13:01 Nicholas Lowell wrote:
> When attempting to draw a box with the attached code, this OpenVG
> implementation does not produce expected results as the Khronos
> implementation does.
>
> Suspected area of issue is either improper translation of (ox, oy) during
> vgRotate or VG_LINE_TO_REL command being treated as VG_LINE_TO_ABS.
>
> First image is the result from this implementation:
> http://www.nabble.com/file/p25068115/box_mesa.jpeg
>
> Second image is the result from khronos implementation:
> http://www.nabble.com/file/p25068115/box_khronos.jpeg

> http://www.nabble.com/file/p25068115/draw_star.c draw_star.c

That's a very nice test. Yea, that definitely looks busted. I'll look at it 
later today. Thanks!

z 



Re: [Mesa3d-dev] OpenVG-1.1 support disabled

2009-08-17 Thread Zack Rusin
On Monday 17 August 2009 12:41:55 Andreas Pokorny wrote:
> Hello,
>
> 2009/8/16 Zack Rusin :
> > [...]
> > Yes, the code required to actually read/rasterize and render glyphs is
> > missing. So basically the entire api_text.c would need to be implemented.
> > What's in there was basically done to pass the VG conformance
> > setters/getters tests, but once it was decided that for a project we were
> > working only OpenVG 1.0 will be a requirement I just never went back to
> > finish 1.1.
> > It wouldn't be too hard to finish this off (assuming addition of
> > FreeType2 as a dependency of course)
>
> Why do think that FreeType is required? I thought converting from
> outline curves to vg path data and managing the kerning info is done
> by the user? VG only has to manage the images and paths attached to a
> font.

Yes, the VG side is fine, but quite frankly we'd need at least one semi-real-
world example to make sure it's at all usable, and for that we'd need 
FreeType2 to be able to at least play around with some basic fonts. So not for 
the OpenVG code itself, but for a demo/test.

z



Re: [Mesa3d-dev] [PATCH 2/3] gallium: do not rely on the automatic values of an enum for bitmask purposes

2009-08-16 Thread Zack Rusin
On Sunday 16 August 2009 18:00:05 Maarten Maathuis wrote:
> Any special value on calling it an enum (debugging maybe?) instead of
> a few defines?

Yea, debugging is one of the main reasons. Others are the extra type-safety 
they give us and the clarity of what values go where.
We've been trying to be pretty good about using enums whenever possible - the 
one very visible exception is places where the values used are parts of packed 
structures.



Re: [Mesa3d-dev] [PATCH 2/3] gallium: do not rely on the automatic values of an enum for bitmask purposes

2009-08-16 Thread Zack Rusin
On Sunday 16 August 2009 17:22:37 Maarten Maathuis wrote:
> On Sun, Aug 16, 2009 at 9:03 PM, Zack Rusin wrote:
> > On Saturday 15 August 2009 22:26:32 Maarten Maathuis wrote:
> >> - PIPE_TRANSFER_READ ended up being defined as 0, which is obviously
> >> fail.
> >
> > I'm not sure I understand, what's wrong with that?
>
> Often when you see READ, WRITE and READWRITE you expect the following:
>
> 1 bit represents READ, another represents WRITE, READWRITE combines
> the two bits.
>
> We (nouveau) weren't the first to make this mistake, the cell driver
> does it too.
>
> if (foo & READ) will do nothing good if READ == 0.
>
> That is the point.

Ah, so there's really nothing wrong with it, you just don't like the api and 
would like to use it with bitwise ops. Yea, I think that makes sense. I'd just 
suggest changing the last line to PIPE_TRANSFER_READ_WRITE = 
PIPE_TRANSFER_READ | PIPE_TRANSFER_WRITE
in this case to make it explicit what it's supposed to be.
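
For reference, a minimal sketch of the layout being suggested (the enum name 
and the concrete bit values are only illustrative; what matters is that READ 
and WRITE each own a bit and that READ_WRITE is spelled out as their 
combination):

enum pipe_transfer_usage {
   PIPE_TRANSFER_READ       = (1 << 0),
   PIPE_TRANSFER_WRITE      = (1 << 1),
   PIPE_TRANSFER_READ_WRITE = PIPE_TRANSFER_READ | PIPE_TRANSFER_WRITE
};

That way (foo & PIPE_TRANSFER_READ) behaves the way people expect.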



Re: [Mesa3d-dev] OpenVG-1.1 support disabled

2009-08-16 Thread Zack Rusin
On Sunday 16 August 2009 07:18:13 Andreas Pokorny wrote:
> Hi there,
> I saw that the openvg state tracker already implements version 1.1 The
> code is only deactivated by the OpenVG main API header provided in the
>  master branch. Is there anything missing in the implementation?

Yes, the code required to actually read/rasterize and render glyphs is 
missing. So basically the entire api_text.c would need to be implemented. 
What's in there was basically done to pass the VG conformance setters/getters 
tests, but once it was decided that for a project we were working only OpenVG 
1.0 will be a requirement I just never went back to finish 1.1.
It wouldn't be too hard to finish this off (assuming addition of FreeType2 as a 
dependency of course)

z



Re: [Mesa3d-dev] [PATCH 2/3] gallium: do not rely on the automatic values of an enum for bitmask purposes

2009-08-16 Thread Zack Rusin
On Saturday 15 August 2009 22:26:32 Maarten Maathuis wrote:
> - PIPE_TRANSFER_READ ended up being defined as 0, which is obviously fail.

I'm not sure I understand, what's wrong with that?



Re: [Mesa3d-dev] [PATCH] glx: depend is not used in any ways

2009-08-13 Thread Zack Rusin
On Thursday 13 August 2009 15:42:14 RALOVICH, Kristóf wrote:
> Please review.

Are you sure about that? The depend file contains compilation time dependencies 
for all the relevant files there and it is used (that's what "include depend" 
does - it includes the dependencies produced by makedepend).




Re: [Mesa3d-dev] [PATCH] glx: symlink .emacs-dirvars

2009-08-13 Thread Zack Rusin
On Thursday 13 August 2009 12:40:10 RALOVICH, Kristóf wrote:
> Please review!

Why's that? The .emacs-dirvars lookup will recurse up the directory tree until 
it finds a .emacs-dirvars file, so the one at the top of the Mesa tree should 
be found quickly.




Re: [Mesa3d-dev] Memory pools

2009-07-27 Thread Zack Rusin
On Monday 27 July 2009 20:42:27 Ian Romanick wrote:
> Brian Paul wrote:
> > Nicolai Hähnle wrote:
> >> Am Monday 27 July 2009 02:28:00 schrieb Brian Paul:
> >>> The GLSL compiler uses a similar memory allocator.  See slang_mem.c.
> >>> We should probably lift this code up into src/mesa/main/ and rename
> >>> the functions to make it re-usable in drivers, etc.
> >>
> >> Ah, cool, I missed that.
> >
> > Would you mind doing the work of lifting slang_mem.[ch] to
> > src/mesa/main/mempool.[ch] or similar?
>
> I'm not sure this is the kind of allocator we really want, but I haven't
> yet looked at the one that Nicolai wrote.  Usually for compilers and
> their kin, you allocate a ton of identically sized objects that need
> some common "construction".  This is usually where people use slab
> allocators.  Slab allocators give the convenience of using one call to
> free many allocations, and they tend to be really, really fast.
>
> We'd probably want to layer an allocator with an interface like
> slang_mem.c on top of the slab allocator.  The Linux kernel has
> something like this, and they call it a cache allocator.
>
> Writing a slab allocator is on my current to-do list.  If somebody gets
> to it first, that would be awesome. :)

Technically Jose and Thomas wrote one for Gallium's pipebuffer module; it's 
in pb_bufmgr_slab.c, but as the name suggests it's meant for buffer management. 
It's not too complex and it's a little specialized right now, but maybe it 
could be generalized beyond buffer management (or at least the algorithm shared).
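
For anyone curious what's being asked for, here is a minimal, generic sketch of 
the pool idea (plain C, not the pb_bufmgr_slab.c code; per-object freeing and 
reuse via a free list are left out to keep it short):

#include <stdlib.h>

struct slab {
   struct slab *next;     /* chain of chunks, all released at once */
   unsigned char data[];  /* the identically sized objects live here */
};

struct slab_pool {
   size_t obj_size;       /* size of each object, set before first use */
   size_t objs_per_slab;  /* objects per chunk, set before first use */
   size_t used_in_head;   /* objects handed out from the head chunk */
   struct slab *head;
};

static void *slab_alloc(struct slab_pool *pool)
{
   if (!pool->head || pool->used_in_head == pool->objs_per_slab) {
      struct slab *s = malloc(sizeof(*s) +
                              pool->obj_size * pool->objs_per_slab);
      if (!s)
         return NULL;
      s->next = pool->head;
      pool->head = s;
      pool->used_in_head = 0;
   }
   return pool->head->data + pool->obj_size * pool->used_in_head++;
}

/* the "one call frees everything" part that makes pools attractive */
static void slab_free_all(struct slab_pool *pool)
{
   while (pool->head) {
      struct slab *next = pool->head->next;
      free(pool->head);
      pool->head = next;
   }
   pool->used_in_head = 0;
}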

z




Re: [Mesa3d-dev] Mesa (master): gallivm: updates for TGSI changes

2009-07-26 Thread Zack Rusin
On Sunday 26 July 2009 20:35:53 Jose Fonseca wrote:
> I've forked softpipe and commited the LLVM based pixel packing/unpacking
> code I was working on, into a new branch, gallium-llvmpipe.

Neat. Any reason for using the C bindings? That seems like a bad idea to me.



Re: [Mesa3d-dev] Why I don't use TGSI

2009-07-25 Thread Zack Rusin
On Saturday 25 July 2009 05:52:01 Nicolai Hähnle wrote:
> Am Saturday 25 July 2009 05:15:13 schrieben Sie:
> > I'm not sure I understand what you mean by "stream-based nature of TGSI".
> > It's just an array of struct tgsi_tokens. So technically you could
> > iterate from the back, middle, 2+middle or anything else. Or is the
> > problem the fact that you don't know the explicit number of those tokens?
> > That's what struct tgsi_shader_info is for (besides the number of tokens,
> > it gives you a lot of other very useful things, which could be of course
> > extended if you needed more).
> >
> > Have you looked at tgsi_transform_shader ? If it could simply iterate
> > from the back would that, along tgsi_shader_info solve your problem?
>
> I have looked at tgsi_transform_shader, and if it could iterate from the
> back it would already improve things.
>
> I wonder how you would go about iterating from the back without doing a
> forward pass first to figure out where the boundaries of instructions are.
> As I see it, TGSI has a similar problem to e.g. x86 opcodes: If you have a
> pointer to one instruction, you can more or less easily calculate the
> offset to the *next* instruction, but going to the *previous* instruction
> is mostly guesswork because instructions are of variable length.

Well, that's fairly trivial. There are a number of things that we could do. A 
simple array of markers (essentially just saying which tgsi_token in the array 
is the start of a new instruction) in tgsi_shader_info would probably already 
fix it, or we could just change the padding in tgsi_token to signify that it is 
a marker. 
I'm not sure why you're so afraid to iterate over the shader forward in the 
first place. You'll have to do it at some point anyway (from what I understand, 
in your case that would be to figure out, for example, whether the shader uses 
texture sampling, to decide whether it needs to be transformed in the 
first place).
tgsi_shader_info is supposed to fill that need, so as I mentioned, if you need 
more things in it we could certainly add them.
So there's really a number of things that we could do and the question just 
comes down to what is most convenient.

Also you could just iterate backwards like this:

void iterate_backwards(struct tgsi_token *tokens)
{
   unsigned markers[256];
   unsigned num_markers = 0, i, position;
   struct tgsi_header header = *(struct tgsi_header *) &tokens[1];

   position = 1 + header.HeaderSize;

   /* forward pass: remember where each declaration/immediate/instruction
    * starts */
   while (position < (1 + header.HeaderSize + header.BodySize) &&
          num_markers < 256) {
      struct tgsi_token *itr = &tokens[position];
      markers[num_markers++] = position;
      position += itr->NrTokens;
   }

   /* backward pass over the recorded start positions */
   for (i = num_markers; i > 0; --i) {
      struct tgsi_token *itr = &tokens[markers[i - 1]];
      switch (itr->Type) {
      case TGSI_TOKEN_TYPE_DECLARATION:
         /* do something with the declaration */
         break;
      case TGSI_TOKEN_TYPE_IMMEDIATE:
         /* do something with the immediate */
         break;
      case TGSI_TOKEN_TYPE_INSTRUCTION:
         /* do something with the instruction */
         break;
      }
   }
}

Obviously untested, but it should give you a picture of how to do it right 
now without any changes anywhere.

> Of course I'm not familiar with other hardware. Maybe there is simply
> nothing worth sharing. However, if there is something worth sharing - a
> kind of toolbox of program transformations that drivers can pick from as
> needed for their specific hardware - then this clearly needs a shared
> intermediate representation.

Yes, as I mentioned, if something is common then it should be doable with 
tgsi_transform. It's really just a question of whether backwards iteration is 
common enough to make the few lines that it takes to do it right now even 
easier and export that ability in the main interface (e.g. add the info that we 
compute in the beginning to tgsi_shader_info, or maybe create 
tgsi_backwards_iterate).

z




Re: [Mesa3d-dev] Why I don't use TGSI

2009-07-24 Thread Zack Rusin
On Friday 24 July 2009 20:25:19 Nicolai Hähnle wrote:
> Most importantly and crucially, the stream-based nature of TGSI is, in my
> opinion, harmful. Essentially, it forces one to use only forward-scan-based
> algorithms that look at one single instruction at a time.

I'm not sure I understand what you mean by "stream-based nature of TGSI". It's 
just an array of struct tgsi_tokens. So technically you could iterate 
from the back, the middle, 2+middle or anything else. Or is the problem the 
fact that you don't know the explicit number of those tokens? That's what struct 
tgsi_shader_info is for (besides the number of tokens, it gives you a lot of 
other very useful things, which could of course be extended if you needed 
more).

Have you looked at tgsi_transform_shader? If it could simply iterate from the 
back, would that, along with tgsi_shader_info, solve your problem?

Also, something that we've discussed before: TGSI transformations are 
essentially useful for code that is generic (e.g. transforming TGSI into 
something that follows D3D semantics), while hardware-specific code is a 
lot easier to transform in the driver, with a representation that is most 
suitable for the given hardware (usually a couple of booleans and some 
kind of an array). That's essentially what we do. You can think of this as: 
you code-generate for the hardware from TGSI, rather than code-generating TGSI 
that is more suited for your hardware from TGSI and then code-generating for 
the hardware from that.
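
For what it's worth, the forward pass really is just the standard token walk; 
roughly (written from memory, so the details may differ slightly from the 
current headers in gallium/auxiliary/tgsi):

#include "tgsi/tgsi_parse.h"
#include "tgsi/tgsi_scan.h"

static void scan_shader(const struct tgsi_token *tokens)
{
   struct tgsi_shader_info info;
   struct tgsi_parse_context parse;

   /* summary data: token/instruction counts, which registers are used, ... */
   tgsi_scan_shader(tokens, &info);

   tgsi_parse_init(&parse, tokens);
   while (!tgsi_parse_end_of_tokens(&parse)) {
      tgsi_parse_token(&parse);
      switch (parse.FullToken.Token.Type) {
      case TGSI_TOKEN_TYPE_DECLARATION:
         /* look at parse.FullToken.FullDeclaration */
         break;
      case TGSI_TOKEN_TYPE_IMMEDIATE:
         /* look at parse.FullToken.FullImmediate */
         break;
      case TGSI_TOKEN_TYPE_INSTRUCTION:
         /* look at parse.FullToken.FullInstruction */
         break;
      }
   }
   tgsi_parse_free(&parse);
}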

z





Re: [Mesa3d-dev] Mesa (master): gallivm: updates for TGSI changes

2009-07-24 Thread Zack Rusin
On Friday 24 July 2009 20:00:16 Anders Juel Jensen wrote:
> On Friday 24 July 2009 21:40:25 Keith Whitwell wrote:
> > Thinking of LLVM as "the hardware" has a couple of positive
> > psychological implications -- most importanly that we are in the
> > business of programming LLVM to do rasterization, rather than just using
> > it to provide some niche fastpaths in an otherwise complete software
> > rasterizer.
>
> Does this approach mean that the softpipe will only handle missing
> functionality for hardware that is fixed function, and then the
> llvm-rasterizer will do the heavy lifting for "stupid" hardware like i945?

No, not quite. Gallium doesn't support fixed-function hardware at all. Softpipe 
is what we call the software rasterizer for Gallium, so it's not so much a 
fallback as a software driver. The LLVM rasterizer wouldn't be used for typical 
hardware drivers, but for things without a hardware rasterizer (CPU, Cell, 
Larrabee, etc).

> > There at least four key places where we want to generate code (TGSI
> > shaders are only one of them), and we want to choose plumbing between
> > them which is optimal for however llvm turns out to require it -- all of
> > which means that in such a driver LLVM won't be hidden behind shader
> > opcodes, but will be completely in the foreground.
>
> The [at least] three remaining ones being?

Well, there are lots of them. For the purpose of this discussion, besides 
shaders there are all the parts that would be needed by the LLVM rasterizer 
(samplers, depth-stencil-alpha, color combine), plus there are a lot of other 
places, like the translate code and likely the vertex paths themselves, that 
would benefit from it.

z




Re: [Mesa3d-dev] Mesa (master): gallivm: updates for TGSI changes

2009-07-24 Thread Zack Rusin
On Friday 24 July 2009 15:40:25 Keith Whitwell wrote:
> I think a fast software rasterizer project would be better described as
> "create a gallium driver targetting LLVM" (as opposed to optimizing
> softpipe), as softpipe fills a pretty useful niche in its current form.
>
> Thinking of LLVM as "the hardware" has a couple of positive
> psychological implications -- most importanly that we are in the
> business of programming LLVM to do rasterization, rather than just using
> it to provide some niche fastpaths in an otherwise complete software
> rasterizer.
>
> Below a certain level (be it triangle or batch of quads or even vertex
> buffer), everything should be handled by LLVM-generated code, in the
> same way we would hand off to hardware at a given level...
>
> There at least four key places where we want to generate code (TGSI
> shaders are only one of them), and we want to choose plumbing between
> them which is optimal for however llvm turns out to require it -- all of
> which means that in such a driver LLVM won't be hidden behind shader
> opcodes, but will be completely in the foreground.

Yea, that sounds great, but that all sounds like an argument to remove gallivm 
from master rather than keep it.
Obviously a driver that treats LLVM as the target hardware would be highly 
desirable for CPUs, Cell, Larrabee and so on, but gallivm doesn't add value to 
that project right now.
If we were to start a project like that, then doing TGSI->LLVM IR in the driver 
instead of through gallivm would just be simpler. It'd be a lot easier and 
quicker to export that generator once we knew it's actually reusable 
(so doing the design bottom-up rather than top-down). There's a huge 
advantage in actually writing a driver that needs a certain functionality, 
versus writing certain functionality hoping that it will fill the need of some 
driver in the future (which is what we've done so far in gallivm).
It would mean that gallivm would actually prove to be useful, rather than 
being forced to just exist (in its current form at least).

So yea, I think ideally we'd just bootstrap a quick and dirty driver, move 
the basic SoA code-generation from gallivm to it, remove gallivm 
temporarily and work on that code in the driver.

z




Re: [Mesa3d-dev] Mesa (master): gallivm: updates for TGSI changes

2009-07-24 Thread Zack Rusin
On Friday 24 July 2009 14:21:35 José Fonseca wrote:
> But in this step 1 you don't plan to embed the vector width and AoS vs
> Soa in TGSI, right?

No, nothing changes at that layer. 
Essentially LLVM is completely invisible to drivers and TGSI representation 
stays exactly the same.
Everything is the same but the TGSI code that comes from GLSL or OpenCL C is 
highly optimized.

The software driver is obviously the special case here, because for that one 
LLVM already has a very good "hardware" backend (the x86 code generator). And 
for softpipe, step 1 is of little importance because softpipe is incredibly 
slow executing even fragment shaders with 5 instructions, so the fact that 
complicated shaders will have 20 instead of 100 instructions is of little 
practical significance there ("this app is running at 0.9fps instead of 0.1fps 
now!!!"). 
Which means that you could short-circuit to step #2. But that really is a 
separate problem that needs a separate project (aka the "making softpipe fast" 
project).

z



Re: [Mesa3d-dev] Mesa (master): gallivm: updates for TGSI changes

2009-07-24 Thread Zack Rusin
On Friday 24 July 2009 13:33:22 José Fonseca wrote:
> > To be honest I don't think that using LLVM from C for our purposes is a
> > good idea. Mainly because it will be impossible to do because backends
> > (the actual code-generators for given hardware) will need to be C++
> > anyway so we'll again end up with a mix of C++ and C. It's just that the
> > interface from C++ is a lot nicer than the stuff that was wrapped in C.
>
> I understand that the C bindings might be more cumbersome and limited.
> But I don't understand how using or not avoids the mixing C and C++: if
> LLVM is in C++ and Gallium is in C something will have to make brigde.
> It is a matter of what and how thick that bridge is.

Yea, gallivm is that bridge. It's compiled with C++, but exposes a C interface 
to the rest of Gallium. This is essentially to make all Gallium interfaces 
uniformly C (plus to hide a lot of LLVM complexity).

Of course it's still impossible to write a fully-fledged LLVM driver without 
C++, as the code generator will have to be in C++ (that code generator could 
technically be a separate module, in fact living in a repository of its own).

But the short-term scenario that we're looking at right now is not:
TGSI->LLVM->Hardware
it's
LLVM->TGSI->Hardware.

That's what OpenCL will be doing and that's what we want to do for GLSL as 
well.
So the idea is that the OpenCL compiler compiles C into LLVM IR and the GLSL 
compiler compiles GLSL into LLVM, then we run all the LLVM optimization passes 
on that and then gallivm translates LLVM IR into TGSI.
The benefit of this approach is that our drivers keep working as they did, 
code-generation is still trivial, we still get LLVM optimizations and our 
interface is uniformly C everywhere.

So that'd be step 1, and step 2 would be to experiment with direct hardware 
generation from LLVM. But thanks to this approach our biggest woe (the lack of 
a decent optimization framework for the more and more complicated shaders that 
we're seeing) goes away without any modifications to the drivers at all, which 
I think is fairly neat.

z



Re: [Mesa3d-dev] Mesa (master): gallivm: updates for TGSI changes

2009-07-24 Thread Zack Rusin
On Friday 24 July 2009 10:01:49 José Fonseca wrote:
> On Thu, 2009-07-23 at 16:38 -0700, Zack Rusin wrote:
> > I thought about that and discarded that for the following reasons:
> > 1) it doesn't solve the main/core problem of the representation: how to
> > represent vectors.
>
> Aren't LLVM vector types (http://llvm.org/docs/LangRef.html#t_vector)
> good enough?

Yes, they are, but that's not what we're trying to answer. Looking at 
DCL IN[0]
DCL OUT[0]
MOV OUT[0], IN[0].xzzz

in aos that will be
decl <4 x float> out
decl <4 x float > in
out = shuffle(in, 0, 2, 2, 2);

for soa4 that will be
decl <4 x float> outx
decl <4 x float> outy
decl <4 x float> outz
decl <4 x float> outw
decl <4 x float> inx
decl <4 x float> iny
decl <4 x float> inz
decl <4 x float> inw
outx = inx
outy = inz
outz = inz
outw = inz

for soa16 that will be
decl <16 x float> outx
decl <16 x float> outy
decl <16 x float> outz
decl <16 x float> outw
decl <16 x float> inx
decl <16 x float> iny
decl <16 x float> inz
decl <16 x float> inw
outx = inx
outy = inz
outz = inz
outw = inz

And that's for a trivial mov. Each path obviously generates very different 
code. The code that is currently in gallivm basically creates a new compiler 
for each of these. Which is one way of dealing with it. I personally didn't 
like it at all, but didn't have a better solution for it at the time.

Furthermore it's not just the compilation - the inputs and outputs need to be 
swizzled differently for each of those paths as well, so a preamble and a 
postamble have to be generated for each of them too.
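
To make the AoS case above a bit more concrete, this is roughly what emitting 
that shuffle looks like through the LLVM C bindings (just a sketch; gallivm 
itself goes through the C++ API):

#include <llvm-c/Core.h>

/* emit: out = shuffle(in, 0, 2, 2, 2), i.e. MOV OUT[0], IN[0].xzzz in AoS */
static LLVMValueRef emit_mov_xzzz(LLVMBuilderRef b, LLVMValueRef in)
{
   LLVMValueRef idx[4] = {
      LLVMConstInt(LLVMInt32Type(), 0, 0),
      LLVMConstInt(LLVMInt32Type(), 2, 0),
      LLVMConstInt(LLVMInt32Type(), 2, 0),
      LLVMConstInt(LLVMInt32Type(), 2, 0),
   };
   LLVMValueRef mask = LLVMConstVector(idx, 4);

   return LLVMBuildShuffleVector(b, in, LLVMGetUndef(LLVMTypeOf(in)),
                                 mask, "out");
}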


> The vector width could be a global parameter computed before starting
> the TGSI -> LLVM IR translation, which takes into account not only the
> target platform but the input/output data types (e.g. SSE2 has different
> vector widths for different data types).
>
> For mimd vs simd we could have two variations -- SoA and AoS. Again, we
> could have this as an initial parameter, or two abstract classes derived
> from Instruction, from which the driver would then derive.

Yes, that's pretty much exactly what the code in gallivm does right now. It 
lets you pick the representation (AoS/SoA) and the vector width, and then 
tries to generate the code as it was told. It's not very successful at it 
though =) (mainly because the actual generation paths for one are completely 
different from the other, so if it's working in AoS that doesn't mean anything 
for SoA)

> My suggestion of an abstract Instruction class with virtual methods was
> just for the sake of argument. You can achieve the same thing with a C
> structure of function pointers together with the included LLVM C
> bindings (http://llvm.org/svn/llvm-project/llvm/trunk/include/llvm-c/)
> which appears to fully cover the IR generation interfaces.

To be honest I don't think that using LLVM from C for our purposes is a good 
idea. Mainly because it will be impossible to do because backends (the actual 
code-generators for given hardware) will need to be C++ anyway so we'll again 
end up with a mix of C++ and C. It's just that the interface from C++ is a lot 
nicer than the stuff that was wrapped in C.

> > That wouldn't work because LLVM wouldn't know what to do with them which
> > would defeat the whole reason for using LLVM (i.e. it would make
> > optimization passes do nothing).
>
> Good point. But can't the same argument be done for intrinsics? The
> existing optimization passes don't know what to do with them either.

Essentially they can treat them just as any other instruction with known 
semantics,
e.g. for
<4 x float> madd(<4 x float> a, <4 x float> b)
all a pass needs to know is that it's an instruction that takes two inputs, 
doesn't modify them and returns an output. So optimizations still work. 
Generating assembly /before/ the code generator runs, on the other hand, means 
that there's a black hole in the code that can technically do, well, anything 
really.
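
For example, such an intrinsic could be declared as a plain external function 
that the passes treat as an opaque, side-effect-free call. A sketch through 
the LLVM C bindings (the name is made up, and the attribute helper is the one 
from the C API of that era):

#include <llvm-c/Core.h>

/* declare: <4 x float> @gallium.madd(<4 x float>, <4 x float>) */
static LLVMValueRef declare_madd(LLVMModuleRef mod)
{
   LLVMTypeRef v4f = LLVMVectorType(LLVMFloatType(), 4);
   LLVMTypeRef args[2] = { v4f, v4f };
   LLVMValueRef fn = LLVMAddFunction(mod, "gallium.madd",
                                     LLVMFunctionType(v4f, args, 2, 0));

   /* no memory access and no side effects, so passes are free to reorder,
      CSE or dead-code-eliminate calls to it */
   LLVMAddFunctionAttr(fn, LLVMReadNoneAttribute);
   return fn;
}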

> http://llvm.org/docs/ExtendingLLVM.html strongly discourages extending
> LLVM, and if the LLVM IR is not good enough then the question inevitably
> is: does it make sense to use LLVM at all?

LLVM IR is good enough; I'm not sure what the argument for it not being good 
enough would be.
We're contemplating the use of intrinsics because it's easier to code-generate 
from them, essentially trying to follow the "Gallium should make the process 
of writing drivers easier" mantra. You can perfectly well do pattern matching 
in the code-generator to extract the same or similar information, it just 
makes the code-generators (drivers, in our case) a lot more difficult.

> I know we have been considering using LLVM

Re: [Mesa3d-dev] Mesa (master): gallivm: updates for TGSI changes

2009-07-23 Thread Zack Rusin
On Thursday 23 July 2009 14:50:48 José Fonseca wrote:
> On Thu, 2009-07-23 at 11:14 -0700, Zack Rusin wrote:
> > Before anything else the problem of representation needs to solved. The
> > two step approach that the code in there started on using is again, imho,
> > by far the best but it likely needs a solid discussion to get everyone on
> > the same page.
>
> I don't think that representation is such a big problem. IMO, gallivm
> should be just a library of TGSI -> LLVM IR building blocks. For
> example, the class Instruction should be all virtuals, and a pipe driver
> would override the methods it wants. LLVM IR -> hardware assembly
> backend would then be necessary. If a hardware has higher level
> statements which are not part of LLVM IR, then it should override and
> generate the intrinsics itself, 

I thought about that and discarded it for the following reasons:
1) It doesn't solve the main/core problem of the representation: how to 
represent vectors. Without that we can't generate anything. We are dealing 
with two main architectures here: MIMD (e.g. NVIDIA) and SIMD (e.g. Larrabee), 
with the latter coming in multiple permutations. For MIMD the preferred layout 
will be simple AoS (x,y,z,w); for SIMD it will be vector-wide SoA (so for 
Larrabee that would be (x,x,x,x, x,x,x,x, x,x,x,x, x,x,x,x)). So for SoA we'd 
likely need to scale vectors between at least 4 components (for plain SSE) and 
16 components, and it's not even certain that our vectors would have 4 
components. (There's a small sketch of the two layouts right after this list.)
2) It means that the driver would have to be compiled with a C++ compiler. 
While that's obviously solvable by sprinkling tons of extern "C" everywhere, 
it makes the whole thing a lot uglier.
3) It means that Gallium's public interface is a combination of C and C++. So 
implementing Gallium means dealing with both OO-style C structures (p_context) 
and C++ classes, which quite frankly is just ugly. Interfaces are a lot like 
indentation: if they're not consistent they're difficult to read, understand 
and follow.
So while I do like C++ a lot and would honestly prefer it all over the place, 
mixing languages like that, especially in an interface, is just not a good 
idea.
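
A small sketch of the two layouts from point 1, just to pin the terminology 
down (the struct names are made up):

/* AoS: one short vector per register, natural for an NVIDIA-style MIMD unit */
struct aos_reg {
   float x, y, z, w;
};

/* SoA: one machine vector per component; the width tracks the SIMD width,
   e.g. 4 for plain SSE, 16 for a Larrabee-style unit */
#define SOA_WIDTH 16
struct soa_reg {
   float x[SOA_WIDTH];
   float y[SOA_WIDTH];
   float z[SOA_WIDTH];
   float w[SOA_WIDTH];
};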

> or even better, generate asm statements directly from those methods.

That wouldn't work because LLVM wouldn't know what to do with them which would 
defeat the whole reason for using LLVM (i.e. it would make optimization passes 
do nothing).

> Currently, my main interest for LLVM is to speed up softpipe with the
> TGSI -> SSE2 translation. I'd like to code generate with LLVM the whole
> function to rasterize and render a triangle (texture sampling, fragment
> shading, blending, etc). I'm not particular worried about the
> representation as vanilla LLVM can already represent almost everything
> needed.

That sounds great :) 
For that you don't need gallivm at all though. Or do you want to mix the 
rasterization code with the actual shading, i.e. inject the fragment shader 
into the rasterizer? I'm not sure the latter would win us anything.
If the shader didn't use any texture mapping then it's possible that you could 
get perfect cache coherency when rasterizing very small patches, but texture 
sampling will thrash the caches anyway and it's going to make the whole 
process a lot harder to debug/understand.

z 


--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Mesa (master): gallivm: updates for TGSI changes

2009-07-23 Thread Zack Rusin
On Thursday 23 July 2009 13:58:54 Keith Whitwell wrote:
> > OK.  I wanted to fix any breakages from my recent changes, so
> > trivial/tri is at least running now.
>
> BTW - I don't think this should be removed from master - there are a lot
> of people just waiting to get a bit of time to work on this, myself
> included...

IMHO this should be done on a feature branch. As it stands we just have code 
in there that's obviously broken and while a lot of people would love to work 
on it no one has the time right now. And fixing things that will need to be 
redone seems rather counterproductive.
Before anything else the problem of representation needs to be solved. The 
two-step approach that the code in there started using is again, imho, by far 
the best, but it likely needs a solid discussion to get everyone on the same 
page.

z

--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Mesa (master): gallivm: updates for TGSI changes

2009-07-23 Thread Zack Rusin
On Thursday 23 July 2009 12:59:49 Keith Whitwell wrote:
> Module: Mesa
> Branch: master
> Commit: adc6f8cdfc8ca25d7480a50cfe0f85fdeddbfcfc
> URL:   
> http://cgit.freedesktop.org/mesa/mesa/commit/?id=adc6f8cdfc8ca25d7480a50cfe
>0f85fdeddbfcfc
>
> Author: Keith Whitwell 
> Date:   Thu Jul 23 17:56:41 2009 +0100
>
> gallivm: updates for TGSI changes
>
> make linux-llvm succeeds, but doesn't seem to be working, at least with
> llvm 2.5


I really wouldn't even bother with this right now. 

It's all broken. I first did the AoS paths for the VS and that was working 
fine. Then for the FS I started on SoA, which was never finished, but I did 
make it the default and adjusted a lot of the code to make it so, which broke 
a lot of things that in turn probably were never fixed, leaving the entire 
thing a mess. This stuff simply needs some full-time work and at this point I 
think it might be more beneficial to just remove it all from master.

It will be fully rewritten for the OpenCL work anyway.
As mentioned before, geometry shaders need probably about a week of full-time 
work to get them into a working state. After that I was planning to do the 
tessellation pipeline stages from DX11 in Gallium (they do have interesting 
applications to vector graphics, plus we need the interface ready for them 
anyway) and only after that to move to LLVM. And it's all sequential because 
it's all spare time.

z

--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Mesa OpenVG: Having possible EGL build problems

2009-07-17 Thread Zack Rusin
On Friday 17 July 2009 15:47:07 Nicholas Lowell wrote:
> I pulled the revision just before eglcurrent.c/.h was introduced (because
> that is saying it can't find eglcurrent.c/.h, feel free to help with that
> issue) and a 'make linux-x86-debug' build with EGL_DRIVER="egl_softpipe"
> works.  But this is completely software, right?

That's correct.

> I'm seeing indications that I have DRI capabilities and './configure; make'
> apparently sees it b/c it doesn't build egl_softpipe but egl_glx and
> EGL_i915 is in lib/gallium.  But then I get the GLX: No pbuffer or pixmap
> support error message with eglCreateContext failing when I try to run an
> openvg app.

As mentioned before this is because you don't have a hardware Gallium driver. 
Currently there's no stable and open Gallium driver.

z

--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Mesa OpenVG: Having possible EGL build problems

2009-07-16 Thread Zack Rusin
On Thursday 16 July 2009 17:09:47 Nicholas Lowell wrote:
> make linux-x86: behaves like make linux

Hmm, that's interesting. For me
make linux-x86
cd src/gallium/state_trackers/vega
make
cd ../../../../progs/openvg/demos/
make
EGL_DRIVER="egl_softpipe" ./lion

just works. Can you do a "make linux-x86-debug" build and send the backtrace 
you're getting with lion? Or even better try the progs/openvg/trivial/clear 
program and if that crashes send that backtrace.

z

--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Mesa OpenVG: Having possible EGL build problems

2009-07-16 Thread Zack Rusin
On Thursday 16 July 2009 13:50:17 Nicholas Lowell wrote:
> Hello Zack,
>
> Hopefully this finds you with enough free time to help me out here.  I'm
> trying to build the latest git mesa and additionally the OpenVG library and
> sample progs.  "Trying" being the keyword.  I think my problem is I can't
> get a proper EGL build.  I'll give my system setup which hopefully will
> help.

The documentation on how to build it is in the docs/openvg.html file. Let me 
know if that doesn't work for you. 
Note that unless you have a Gallium hardware driver which supports EGL, you 
want to export EGL_DRIVER="egl_softpipe" to force softpipe rendering, 
otherwise the loaded driver will most likely simply crash.
Also, it's ok to just send these queries to the mesa3d-dev mailing list.

z

--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] [PATCH 0/8] EGL updates

2009-07-16 Thread Zack Rusin
On Thursday 16 July 2009 12:18:15 Chia-I Wu wrote:
> On Thu, Jul 16, 2009 at 03:43:12PM +0800, Chia-I Wu wrote:
> > They are tested with xeglthreads, using egl_softpipe.so and egl_glx.so. 
> > Beyond this series, I plan to work on cleaning up driver choosing, adding
> > more error checking, and adding mutex support (EGL should be
> > thread-safe).
>
> I forgot to mention that with the changes, it is possible to have both a
> current OpenVG context and a current OpenGL context doing the rendering
> at the same time, 

Nice.

> provided
> * libOpenVG.so and libGL.so use different symbol names for
>   st_make_current, etc.
> * egl_softpipe.so is modified to call the right st functions
>
> This is hackish, and I haven't come up with a sane way to do it right..

Yea, I didn't see a good way of doing it either.
Most likely the best approach would be to have static entry points for the 
individual APIs, e.g. vg_create_context, es11_create_context, 
es20_create_context or such (resolved at runtime), each taking a struct with 
function pointers that can be filled in with the respective interfaces.
E.g.
struct egl_context_api {
   void *state_tracker;
   void *(*create_framebuffer)(const void *visual,
                               enum pipe_format colorFormat,
                               enum pipe_format depthFormat,
                               enum pipe_format stencilFormat,
                               uint width, uint height,
                               void *privateData);
   void (*make_current)(struct egl_context_api *st,
                        struct st_framebuffer *draw,
                        struct st_framebuffer *read);
   ...
};
that the state trackers can fill in.

z

--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Mesa (master): tgsi: no need to separately malloc input and output arrays

2009-07-16 Thread Zack Rusin
On Thursday 16 July 2009 08:33:13 Keith Whitwell wrote:
> Module: Mesa
> Branch: master
> Commit: 4e3002b50fcedf3a6db1ac7394077bc3337ccda1
> URL:   
> http://cgit.freedesktop.org/mesa/mesa/commit/?id=4e3002b50fcedf3a6db1ac7394
>077bc3337ccda1
>
> Author: Keith Whitwell 
> Date:   Thu Jul 16 00:23:33 2009 +0100
>
> tgsi: no need to separately malloc input and output arrays
>
> Can now guarantee alignment in the initial allocation of the tgsi exec
> machine.

> +   struct tgsi_exec_vector   Inputs[PIPE_MAX_ATTRIBS];
> +   struct tgsi_exec_vector   Outputs[PIPE_MAX_ATTRIBS];

This isn't ideal for geometry shaders. The issue is that the easiest way to 
shimmy geometry shaders into the TGSI machine is by simply processing 4 
primitives at the same time.
So for geometry shaders the input/output arrays need to hold 
MAX_VERTICES_PER_PRIMITIVE (which is 6) * PIPE_MAX_ATTRIBS vectors.
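
A quick sketch of the sizing that implies (the PIPE_MAX_ATTRIBS value and the 
placeholder vector type are assumptions, purely for illustration):

#define PIPE_MAX_ATTRIBS            32
#define MAX_VERTICES_PER_PRIMITIVE   6   /* triangle with adjacency */

struct tgsi_exec_vector { float chan[4][4]; };  /* stand-in for the real type */

/* what the geometry-shader case needs instead of plain [PIPE_MAX_ATTRIBS] */
static struct tgsi_exec_vector
   gs_inputs [MAX_VERTICES_PER_PRIMITIVE * PIPE_MAX_ATTRIBS],
   gs_outputs[MAX_VERTICES_PER_PRIMITIVE * PIPE_MAX_ATTRIBS];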

z

--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Mesa (mesa_7_5_branch): gallium: compare the actual register, not all the inputs

2009-07-14 Thread Zack Rusin
On Tuesday 14 July 2009 10:55:32 Keith Whitwell wrote:
> On Sat, 2009-07-11 at 10:45 -0700, Zack Rusin wrote:
> > Module: Mesa
> > Branch: mesa_7_5_branch
> > Commit: 1c1307e7c55844f63f7bd7ac02c64f4b936f3c66
> > URL:   
> > http://cgit.freedesktop.org/mesa/mesa/commit/?id=1c1307e7c55844f63f7bd7ac
> >02c64f4b936f3c66
> >
> > Author: Zack Rusin 
> > Date:   Sat Jul 11 13:48:41 2009 -0400
> >
> > gallium: compare the actual register, not all the inputs
> >
> > otherwise we decrement indexes for all registers
> >
> > ---
> >
> >  src/mesa/state_tracker/st_atom_shader.c |2 +-
> >  1 files changed, 1 insertions(+), 1 deletions(-)
> >
> > diff --git a/src/mesa/state_tracker/st_atom_shader.c
> > b/src/mesa/state_tracker/st_atom_shader.c index 5219119..8b3bb5c 100644
> > --- a/src/mesa/state_tracker/st_atom_shader.c
> > +++ b/src/mesa/state_tracker/st_atom_shader.c
> > @@ -139,7 +139,7 @@ find_translated_vp(struct st_context *st,
> >   if (fragInputsRead & (1 << inAttr)) {
> >  stfp->input_to_slot[inAttr] = numIn;
> >  numIn++;
> > -if ((fragInputsRead & FRAG_BIT_FOGC)) {
> > +if (((1 << inAttr) & FRAG_BIT_FOGC)) {
>
> Zack,
>
> Would it have been easier to say something like
>
>if (inAttr == FRAG_ATTRIB_FOGC) {
>
> to achieve this?

yes, most definitely.


--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] ARB_texture_float disabled, but implemented?

2009-06-15 Thread Zack Rusin
On Sunday 14 June 2009 20:29:47 tom fogal wrote:
> My application was failing because we were creating floating
> point textures.  We would get an INVALID_VALUE by specifying a
> ARB_texture_float internal format for glTexImage2D.
>
> The attached two-line diff enables the extension, and my app gets by
> the Mesa check that was failing (dunno about farther; need to run).  It
> seems like the extension is implemented but disabled for some reason,
> though I'm basing that on a cursory look through the code and could be
> wrong.
>
> I haven't tried ATI_texture_float yet, but I'm worried that I don't see
> it in `extensions.c'.
>
> Could anyone comment on any issue[s] with ARB_float_texture in Mesa?

This is likely the most commonly asked question on this list.
In short look at the "IP Status" section of ARB_texture_float. 
If you'd like more details there are numerous discussions related to it in the 
archives.

z


--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


[Mesa3d-dev] OpenVG state tracker is in

2009-05-01 Thread Zack Rusin
Hi,

as mentioned yesterday the OpenVG state tracker is now in the repository.

It's a complete implementation of OpenVG 1.0. Every feature of the standard 
should work. 

It's under the same BSD license as Mesa3D.

docs/openvg.html contains instructions on how to build it.

Just like Mesa's OpenGL state tracker, it's a hardware-accelerated (*1) API on 
top of Gallium.

If there are any questions related to any of this, please let us know.

z

1) Please note that the "hardware accelerated" part comes from the Gallium 
driver and unless you have a Gallium hardware GPU driver it, obviously, won't 
be in fact hardware accelerated :)

--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


[Mesa3d-dev] New code release

2009-04-30 Thread Zack Rusin
Hey,

So that it doesn't come as such a huge surprise, we wanted to let everyone 
know that we'll be releasing some new code that hopefully will make a lot of 
people happy.
We'll start with an OpenVG 1.0 state tracker for Gallium tomorrow. 

The new code will live temporarily in branches to not disrupt the 7.5 release. 
Of course we'll write a bit more about the OpenVG state tracker after the 
push.

z


--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Google Summer of Code

2009-04-09 Thread Zack Rusin
On Thursday 09 April 2009 14:03:20 Ian Romanick wrote:
> Stephane Marchesin wrote:
> > That's my point: video codecs move too fast to be put into silicon,
> > and require too many resources to implement. So we should use shaders
> > and lay everything onto an API that takes care of the hardware
> > details.
>
> This is one case where I agree with keithp about fixed-function
> hardware.  You can always make a fixed-function video decoder that uses
> orders of magnitude less power than a programmable decoder.  How many
> movies do you want to be able to watch on that long flight from LA to
> Sydney?
>
> As a fairly extreme example, the CPU in the iPod can decode MP3.
> However, doing it on the CPU, even that power efficient CPU, uses more
> that 5x the power of doing on the dedicated MP3 decode hardware.  I
> don't know that the difference is as extreme for video decode hardware,
> but even 2x would pretty significant.
>
> I guess the point is that fixed-function decode hardware won't
> disappear, at least not on mobile devices, any time soon.

This isn't a problem for Gallium. As previously discussed, the parts of the 
video pipeline that do appear as fixed-function in GPUs would be exported 
through a Gallium video interface.
The default implementation of that interface would use TGSI and the regular 
Gallium interface (so the 3D pipeline), but if particular hardware supports 
those features the driver could just implement the video interface using its 
fixed-function parts.

z



--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] mesa with llvm?

2009-03-11 Thread Zack Rusin
On Wednesday 11 March 2009 09:41:08 Kamalneet Singh wrote:
> Brian Paul wrote:
> > Kamalneet Singh wrote:
> >> Stephane Marchesin wrote:
> >>> [...]
> >>> What specifically are you interested in ? FWIW all this stuff is quite
> >>> unfinished...
> >>
> >> We have a project to develop OpenGL ES Shading Language compiler. We
> >> want to use llvm, and it makes sense to work with mesa instead of
> >> starting from scratch.. :) So it's great that something works!
> >
> > The LLVM code that's in Mesa now has nothing to do with the shading
> > language compiler.  I don't know if you've discovered that or not.
>
> I didn't know that :) So what is it for? I saw Zack's post
> (http://zrusin.blogspot.com/2007/05/mesa-and-llvm.html), and thought
> perhaps some branch in mesa uses LLVM for GLSL now.. :)

This code was meant to be used as the IR in Mesa. So it was GLSL->tgsi->LLVM 
IR. 
There's a very old branch that has a GLSL compiler that produces LLVM from 
GLSL but it was never finished/integrated. It's here:
http://cgit.freedesktop.org/~zack/mesa.git.old/?h=llvm

> What is the right place to generate LLVM IR? A new pass that operates on
> mesa's IR? From _slang_emit_code?

If you're using Gallium then that code already exists in gallivm (to generate 
LLVM IR from TGSI). If you want to generate LLVM IR directly from GLSL and 
pass that to the driver then we'd need to figure out the type of LLVM 
representation that we'd want. Realistically we'd initially also want an LLVM 
IR -> TGSI backend, simply because the drivers we have right now use TGSI and 
it'd be quicker to initially write that rather than rewrite the drivers.


z


--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] C++ in Mesa?

2009-01-07 Thread Zack Rusin
On Tuesday 06 January 2009 18:28:56 keithw wrote:
> I'm open to ideas on improving the review process.  I do read most
> patches as they are posted in the commit messages & send comments on
> occasion, but I'm sure there's scope for more.
>
> I'm not fond of a work flow where the reviewer is responsible for
> pulling the patch out of email and getting it into git.  I don't know if
> it's a tool problem, but it seems to chew up unreasonable amounts of my
> time whenever I've tried to work that way.  Maybe better tools would
> help, but I'm sceptical.
>
> Have you got a suggestion on how to integrate reviews better with the
> general Mesa process?

I think since we're using Git we could actually try to get the most out of it.

github.com is rather impressive when it comes to services created on top of 
git.

For example take a look at: http://github.com/blog/270-the-fork-queue they 
have a nice videocast showing a feature used to integrate patches. It allows 
you to also comment on commits in a similar way you'd do with reviewboard. I 
find it all quite interesting.

z



--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] C++ in Mesa?

2009-01-06 Thread Zack Rusin
On Tuesday 06 January 2009 16:00:48 keithw wrote:
> On Tue, 2009-01-06 at 04:26 -0800, PLUG GULP wrote:
> > I think if it is limited to using EC++ then that will act as a guideline
> > too.
> >
> > ~Plug.
> >
> > On Tue, Jan 6, 2009 at 12:33 AM, Zack Rusin  wrote:
> > > On Monday 05 January 2009 17:23:40 Ian Romanick wrote:
> > >> 2. Linking with C++ libraries causes problems with applications.
> > >>
> > >> So far, a fair portion of my GLSL compiler work has been re-creating a
> > >> C++-like object heirarchy and management system.  Frankly, the code
> > >> would be much better (for all definitions of better) if I could just
> > >> use C++.
> > >>
> > >> Has issue #2 been resolved?  I recall that, for example, if ID's
> > >> Quake3 binary dynamically linked with a library that dynamically
> > >> linked with a different version of libstdc++, it would explode.  Is
> > >> that still the case?  If this is still a problem, will it affect LLVM
> > >> usage in Mesa?
> > >
> > > LLVM is a bunch of static libs so we can easily impose stdc++ version
> > > on them that Mesa would be fine with. So LLVM will be ok.
> > > If different versions of stdc++ are a worry, I'd suggest writing a
> > > super simple GL app that links to libstdc++5 and then link GL to
> > > libstdc++6 and seeing what happens (even if it burns I honestly think
> > > that a disclaimer saying that 10 year old apps that link to libstdc++5
> > > won't work with newest Mesa without recompiling is not a huge issue)
> > > Oh, and from what you wrote it sounds like, at least right now, you
> > > don't need stdc++.
>
> Unfortunately LLVM is C++ so it's harder to argue that we should exclude
> it.  But I still think we should, mainly because C++ usage always
> spirals out of control into the nastiest steaming pile of poo.
> Everybody always says "oh, of course it can be bad, but we'll stick to a
> lovely subset that is purest gold, it'll make life so good, and we'll
> never, never up the dosage".
>
> But it's a drug that addles the mind, and like it or not, once you start
> you're hooked.  One day it's a little operator overloading, the next
> it's some eminently reasonable STL usage, and before you know it,
> there's all sorts of shit all over the place and no way to escape.

I don't think that's true. Qt is an excellent example. Qt's usage of C++ has 
stayed pretty much the same from the start, and whether one likes C++ or not I 
think everyone can agree that Qt's API is just beautiful. It's simply a matter 
of a well-defined and strict review policy, which we could probably use anyway.

z

--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] Clang in the gallivm Makefile

2009-01-06 Thread Zack Rusin
On Tuesday 06 January 2009 07:11:54 Joakim Sindholt wrote:
> Hello there people, this is my first time posting to a mailing list, so
> please bear with me if I don't get it right.
> To whomever wrote ./src/gallium/auxiliary/gallivm/Makefile, is there a
> specific reason for using clang and not gcc/g++ in the snippit below?
>
> gallivm_builtins.cpp: llvm_builtins.c
>   clang --emit-llvm < $< |llvm-as|opt -std-compile-opts > temp1.bin
>   (echo "static const unsigned char llvm_builtins_data[] = {"; od -txC
> temp1.bin | sed -e "s/^[0-9]*//" -e s"/ \([0-9a-f][0-9a-f]\)/0x\1,/g"
> -e"\$$d" | sed -e"\$$s/,$$/};/") >$@
>   rm temp1.bin
>
> gallivmsoabuiltins.cpp: soabuiltins.c
>   clang --emit-llvm < $< |llvm-as|opt -std-compile-opts > temp2.bin
>   (echo "static const unsigned char soabuiltins_data[] = {"; od -txC
> temp2.bin | sed -e "s/^[0-9]*//" -e s"/ \([0-9a-f][0-9a-f]\)/0x\1,/g"
> -e"\$$d" | sed -e"\$$s/,$$/};/") >$@
>   rm temp2.bin

Yea, because we generate LLVM IR and are currently using Clang-specific syntax 
for vectors.
I checked in the generated files, so unless you touch the source files the 
Makefile shouldn't try to regenerate them.
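
For reference, the Clang-only part is the ext_vector_type extension, which 
gives OpenCL-style vectors with .xyzw component access; GCC's vector_size 
attribute has no swizzles, which is presumably why these files go through 
clang. A trivial sketch (not the actual builtins source):

typedef float float4 __attribute__((ext_vector_type(4)));

static float4 swap_xy(float4 v)
{
   float4 r = v;
   r.xy = v.yx;   /* swizzled reads and writes like this are what gcc can't do */
   return r;
}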

z

--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] C++ in Mesa?

2009-01-05 Thread Zack Rusin
On Monday 05 January 2009 17:23:40 Ian Romanick wrote:
> 2. Linking with C++ libraries causes problems with applications.
>
> So far, a fair portion of my GLSL compiler work has been re-creating a
> C++-like object heirarchy and management system.  Frankly, the code
> would be much better (for all definitions of better) if I could just
> use C++.
>
> Has issue #2 been resolved?  I recall that, for example, if ID's
> Quake3 binary dynamically linked with a library that dynamically
> linked with a different version of libstdc++, it would explode.  Is
> that still the case?  If this is still a problem, will it affect LLVM
> usage in Mesa?

LLVM is a bunch of static libs, so we can easily impose on them whatever 
libstdc++ version Mesa would be fine with. So LLVM will be ok.
If different versions of libstdc++ are a worry, I'd suggest writing a super 
simple GL app that links to libstdc++5, then linking GL to libstdc++6 and 
seeing what happens (even if it burns, I honestly think that a disclaimer 
saying that 10 year old apps that link to libstdc++5 won't work with the 
newest Mesa without recompiling is not a huge issue).
Oh, and from what you wrote it sounds like, at least right now, you don't need 
libstdc++.

z


--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] [LLVMdev] Folding vector instructions

2009-01-01 Thread Zack Rusin
On Wednesday 31 December 2008 09:15:09 Alex wrote:
> Zack Rusin wrote:
> > I think Alex was referring here to a AOS layout which is completely not
> > ready.
> > Actually currently the plan is to have essentially a "two pass" LLVM IR.
> > I wanted the first one to never lower any of the GPU instructions so we'd
> > have intrinsics or maybe even just function calls like gallium.lit,
> > gallium.dot, gallium.noise and such. Then gallium should query the driver
> > to figure out which instructions the GPU supports and runs our custom
> > llvm lowering pass that decomposes those into things the GPU supports.
>
> If I understand correct, that is to say, the gallium will dynamically build
> a lowering pass by querying the capability (instructions supported by the
> GPU)? Instead, isn't it a better approach to have a lowering pass for each
> GPU and gallium simply uses it?

The whole point of Gallium is to make driver development as simple as 
possible. So while it's certainly harder to write this code in a way that 
could be generic it's essentially what Gallium is all about and it's at least 
worth a try. 

> What do you plan to do with SOA and AOS paths in the gallium?

For now we need to figure out whether we need all the layouts or whether one  
is enough for all the backends.

> (1) Will they eventually be developed independently? So that for a
> scalar/SIMD GPU, the SOA will be used to generate LLVM IR, and for a vector
> GPU, AOS is used?

Well, they're all connected, so developing them independently would be hard. 
As mentioned above, depending on what's going to happen either we'll let the 
drivers ask for the layout that they want to work with or decide to use one 
layout everywhere.

> (2) At present the difference between SOA and AOS path is not only the
> layout of the input data. The AOS seems to be more complete for me, though
> Rusin has said that it's completely not ready and not used in the gallium.
> Is there a plan to merge/add the support of function/branch and LLVM IR
> extract/insert/shuffle to the SOA code?

I wrote both so I can tell you they're both far from usable. It looks like 
Stephane and Corbin are rocking right now but the infrastructure code in 
Gallium needs a lot of love. We have a lot of choices to make over the next 
few months and obviously all the paths (assuming those will be "paths" and not 
a "path") will require feature parity.

> By the way, is there any open source frontend which converts GLSL to LLVM
> IR?

Yes, there is at:
http://cgit.freedesktop.org/~zack/mesa.git.old/log/?h=llvm
but it's also not complete.

z

--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev


Re: [Mesa3d-dev] [LLVMdev] Folding vector instructions

2008-12-30 Thread Zack Rusin
On Tuesday 30 December 2008 15:30:35 Chris Lattner wrote:
> On Dec 30, 2008, at 6:39 AM, Corbin Simpson wrote:
> >> However, the special instrucions cannot directly be mapped to LLVM
> >> IR, like
> >> "min", the conversion involves in 'extract' the vector, create
> >> less-than-compare, create 'select' instruction, and create 'insert-
> >> element'
> >> instruction.
>
> Using scalar operations obviously works, but will probably produce
> very inefficient code.  One positive thing is that all target-specific
> operations of supported vector ISAs (Altivec and SSE[1-4] currently)
> are exposed either through LLVM IR ops or through target-specific
> builtins/intrinsics.  This means that you can get access to all the
> crazy SSE instructions, but it means that your codegen would have to
> handle this target-specific code generation.

I think Alex was referring here to an AoS layout, which is completely not 
ready. The currently supported one is the SoA layout, which eliminates scalar 
operations.

> The direction we're going is to expose more and more vector operations
> in LLVM IR.  For example, compares and select are currently being
> worked on, so you can do a comparison of two vectors which returns a
> vector of bools, and use that as the compare value of a select
> instruction (selecting between two vectors).  This would allow
> implementing min and a variety of other operations and is easier for
> the codegen to reassemble into a first-class min operation etc.
>
> I don't know what the status of this is, I think it is partially
> implemented but may not be complete yet.

Ah, that's good to know!

> >> I don't have experience of the new vector instructions in LLVM, and
> >> perhaps
> >> that's why it makes me feel it's complicated to fold the swizzle and
> >> writemask.
>
> We have really good support for swizzling operations already with the
> shuffle_vector instruction.  I'm not sure about writemask.

With SoA they're rarely used (essentially never, unless we "kill" a pixel). 
The layout is [4 x <4 x float>], i.e. {{x,x,x,x}, {y,y,y,y}, {z,z,z,z}, 
{w,w,w,w}}, so with SoA both shuffles and writemasks come down to a simple 
selection of the element within the array (whether that will be good or bad 
is yet to be seen based on the code in the GPU LLVM backends that we'll have).
 
> Sure, it would be very reasonable to make these target-specific
> builtins when targeting a GPU, the same way we have target-specific
> builtins for SSE.

Actually currently the plan is to have essentially a "two pass" LLVM IR. I 
want the first one to never lower any of the GPU instructions, so we'd have 
intrinsics or maybe even just function calls like gallium.lit, gallium.dot, 
gallium.noise and such. Then gallium would query the driver to figure out 
which instructions the GPU supports and run our custom LLVM lowering pass 
that decomposes the rest into things the GPU does support. Essentially I'd 
like to handle as many of the complicated things as possible in Gallium, to 
make the GPU LLVM backends in the drivers as simple as possible. This would 
also make the pattern matching in the generator /a lot/ easier (matching 
gallium.lit vs the 9+ instructions it would be decomposed into) and give us a 
more generic, GPU-independent layer above. But none of that has been done yet; 
I hope to be able to write that code while working on the OpenCL 
implementation for Gallium.
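
To make the "decomposes the rest into things the GPU does support" part 
concrete, here is roughly what a lowering helper for a gallium.dot-style call 
might emit when the GPU has no native dot product (a sketch using the LLVM C 
bindings; none of this code exists yet):

#include <llvm-c/Core.h>

/* replace a gallium.dot(a, b) call with an fmul plus extracts and fadds */
static LLVMValueRef lower_dot4(LLVMBuilderRef b, LLVMValueRef a, LLVMValueRef v)
{
   LLVMValueRef prod = LLVMBuildFMul(b, a, v, "prod");
   LLVMValueRef sum  = LLVMBuildExtractElement(b, prod,
                          LLVMConstInt(LLVMInt32Type(), 0, 0), "x");
   unsigned i;

   for (i = 1; i < 4; i++) {
      LLVMValueRef chan = LLVMBuildExtractElement(b, prod,
                             LLVMConstInt(LLVMInt32Type(), i, 0), "chan");
      sum = LLVMBuildFAdd(b, sum, chan, "sum");
   }
   return sum;   /* the caller would splat this back to <4 x float> if needed */
}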

z

--
___
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev

