Re: [Mesa3d-dev] Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread Keith Whitwell
Ian Romanick wrote:

Before going down that road we'd want to sit down with oprofile and a 
bunch of applications to decide which sets of state we wanted to tune 
for.  IMHO, we'd be better to spend our time writing a highly optimized 
just-in-time compiler for ARB_vertex_program.  Then we could just write 
vertex programs for the different "important" state vectors and let the 
compiler generate the super-loop.  Of course, there are still "issues" 
with vertex programs. :(
Oprofile doesn't do a good job on runtime-generated code, yet.  I guess 
they're getting a bit more stabilized now, so it might be time to bring up the 
idea/issue/problem with them...  I'm not sure what solution there could be, 
especially as oprofile isn't really tied to a single run of a program.

Keith



---
This SF.net email is sponsored by: ValueWeb: 
Dedicated Hosting for just $79/mo with 500 GB of bandwidth! 
No other company gives more support or power for your dedicated server
http://click.atdmt.com/AFF/go/sdnxxaff00300020aff/direct/01/
___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel


Re: [Mesa3d-dev] Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread Keith Whitwell
José Fonseca wrote:
On Fri, Apr 04, 2003 at 10:23:21PM +0100, Keith Whitwell wrote:

The optimization of the vertex api has yielded huge improvements.  Even 
with the runtime-codegenerated versions of these functions in the 
radeon/r200 driver, they *still* dominate viewperf profile runs - meaning 
that *all other optimizations* are a waste of time for viewperf, because 
60% of your time is being spent in the vertex api functions.


I was underestimating its importance then...


Nowadays, vertex arrays are the path to use if you really care about
performance, of course, but a lot of apps still use the regular
per-vertex GL functions.
Except for applications that already exist and use the vertex apis -- of 
which there are many.

And vertex arrays aren't the fastpath any more, but things like 
ARB_vertex_array_object or NV_vertex_array_range.



Now that you mention vertex arrays: for those, the producer would be
different, but the consumer would be the same.
For developing a driver, it's not necessary to touch the tnl code at all - 
even hardware t&l drivers can quite happily plug into the existing 
mechanisms and get OK performance.


For now I'll also be plugging the C++ classes into the existing
T&L code, but in the future I may want to change the way the software
T&L interfaces with the [hardware] rasterizers, since the current
interface makes it difficult to reuse vertices when outputting tri- or
quad-strips, i.e., you keep sending the same vertices over and over
again, even if they are the same between consecutive triangles. 
Honestly, that's just not true.  You can hook these routines out in a bunch of 
different ways.

Look at t_context.h - the tnl_device_driver struct.  There are two tables 
of function pointers, 'PrimTabVerts' and 'PrimTabElts', which hand off whole 
transformed, clipped primitives - tristrips, quadstrips, polygons, etc. - to 
the driver.  Hook these functions out and you can send the vertices to 
hardware however you like.
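As a rough, invented sketch of that hook-out idea: the real tables live in tnl_device_driver (t_context.h) and their entries take a GL context plus start/count/flags, so every name and signature below is a stand-in, not Mesa's actual API.

```c
#include <assert.h>

/* Illustrative only: a driver fills a per-primitive function-pointer
 * table; entries it hooks out send vertices straight to hardware,
 * unhooked entries would fall back to the generic tnl paths. */
typedef void (*prim_func)(int start, int count);

enum { PRIM_TRIANGLES, PRIM_TRI_STRIP, PRIM_QUAD_STRIP, PRIM_POLYGON, PRIM_COUNT };

static int verts_sent;          /* stands in for "vertices emitted to hardware" */

static void hw_tri_strip(int start, int count)
{
    (void)start;
    /* A real driver would format and DMA the strip's vertices here. */
    verts_sent += count;
}

static prim_func prim_tab_elts[PRIM_COUNT];

static void install_driver_hooks(void)
{
    /* Hook out only what the card understands. */
    prim_tab_elts[PRIM_TRI_STRIP] = hw_tri_strip;
}
```

The point is simply that the dispatch happens per whole primitive, so the driver decides how vertices reach hardware.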

Also look at tnl_dd/t_dd_dmatmp.h, as used in mga/mgarender.c and elsewhere -- 
this is an even more direct route to hardware and is very useful if you have a 
card that understands tristrips, etc.  It probably isn't much use for a 
mach64, though.

Clipped triangles are more difficult to handle, we currently call 
tnl->Driver.Render.PrimTabElts[GL_POLYGON] for each clipped primitive.  If you 
can think of a better solution & code it up, I'd be interested to see it.  It 
would be interesting to see some other approaches, but I think this one 
actually ends up not being too bad.

Keith





Re: [Mesa3d-dev] Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread Keith Whitwell
Ian Romanick wrote:
José Fonseca wrote:

On Fri, Apr 04, 2003 at 10:08:36AM -0800, Ian Romanick wrote:

In principle, I think the producer/consumer idea is good.  Why not 
implement known optimizations in it from the start?  We already 
have *working code* to build formatted vertex data (see the radeon & 
r200 drivers), why not build the object model from there?  Each 
concrete producer class would have an associated vertex format.  On 
creation, it would fill in a table of functions to put data in its 
vertex buffer. This could mean pointers to generic C functions, or it 
could mean dynamically generating code from assembly stubs.

The idea is that the functions from this table could be put directly 
in the dispatch table.  This is, IMHO, critically important.

The various vertex functions then just need to call the object's 
produce method.  This all boils down to putting a C++ face on a 
technique that has been demonstrated to work.


I hope that integration of assembly generation with C++ is feasible, but
I see it as an implementation issue, apart from the performance issues,
which according to all who have replied aren't as negligible as I
thought.  The reason is that this kind of optimization is very dependent
on the vertex formats and other hardware details, making the code hard to
reuse - which is exactly what I want to avoid at this stage.


Realistically, either hardware or software uses either 
array-of-structures or structure-of-arrays.  Most hardware uses the 
former.  At that point it becomes a matter of, for a given state vector, 
what's the offset in the structure of an element?  The assembly code in 
the radeon & r200 drivers handles this very nicely.

I do have one question.  Do we really want to invoke the producer on 
every vertex immediately?  In the radeon / r200 drivers this is just 
to copy the whole vertex to a DMA buffer.  Why not generate the data 
directly where it needs to go?  I know that if the vertex format 
changes before the vertex is complete we need to copy out of the 
temporary buffer into the GL state vector, but that doesn't seem like 
the common case.  At the very least, some guys at Intel think 
generating data directly in DMA buffers is the way to go:

http://www.intel.com/technology/itj/Q21999/ARTICLES/art_4.htm


This is a very interesting read. Thanks for the pointer.

It's complicated to know the vertices' positions in the DMA buffer from the
beginning, especially because of clipping, since vertices can be added or
removed, but if I understood correctly, it's still better to do that in
DMA memory and move the vertices around to avoid cache misses. But it can
be very tricky: imagine that clipping generates vertices that no longer fit
in the DMA buffer - what would be done then?


I think the "online driver model" from the paper only works if you have 
a single loop that does all the processing.  Since Mesa uses a pipeline, 
it would be very tricky.  Using the "online driver model" for a card 
w/HW TCL would be a different story.

The thing I found most interesting is the issue of applying the TCL
operations to all the vertices at once, versus one vertex at a time. From
previous discussions on this list it seems that nowadays most
CPU performance is dictated by the cache, so it really seems the latter
option is more efficient, but Mesa implements the former (the stages are even
called "pipeline stages") and changing that would mean a big overhaul of the
TnL module.


This would be very, very, very tricky.  We'd basically need several 
different super-loops depending on the GL state vector.  The super-loops 
would go in the pipeline at the same place where the hardware TCL 
functions go.  If the super-loop could do all the processing, the 
following TCL stages would be skipped.
This sounds like the 'fastpath' stages which were common in drivers based on 
Mesa-3.x.  We had a pipeline stage, supplied by most drivers, that was tuned 
to handle quake3 CVA-style rendering operations.  It was pretty fast, but in 
the end not much faster than Mesa-4.x standard operation.

The fallback hardware tcl processing in the radeon drivers is installed as a 
pipeline stage also.

Keith





Re: [Mesa3d-dev] Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread Keith Whitwell
Brian Paul wrote:
José Fonseca wrote:

On Fri, Apr 04, 2003 at 10:08:36AM -0800, Ian Romanick wrote:

Right now people use things like Viewperf to make systems purchase 
decisions.  Unless the graphics hardware and the rest of the system 
are very mismatched, the immediate API already has an impact on 
performance in those benchmarks.

The performance of the immediate API *is* important to real 
applications.  Why do you think Sun came up with the SUN_vertex 
extension?  To reduce the overhead of the immediate API, of course. :)

[sample code cut]


But this is all of _very_ _little_ importance when compared with the
ability to _write_ a full driver fast, which is what a well-designed
OOP interface gives you. As I said here several times, this kind of
low-level optimization consumes too much development time, with the result
that higher-level optimizations (usually with much more impact on
performance) are never attempted.


In principle, I think the producer/consumer idea is good.  Why not 
implement known optimizations in it from the start?  We already 
have *working code* to build formatted vertex data (see the radeon & 
r200 drivers), why not build the object model from there?  Each 
concrete producer class would have an associated vertex format.  On 
creation, it would fill in a table of functions to put data in its 
vertex buffer. This could mean pointers to generic C functions, or it 
could mean dynamically generating code from assembly stubs.

The idea is that the functions from this table could be put directly 
in the dispatch table.  This is, IMHO, critically important.

The various vertex functions then just need to call the object's 
produce method.  This all boils down to putting a C++ face on a 
technique that has been demonstrated to work.


I hope that integration of assembly generation with C++ is feasible, but
I see it as an implementation issue, apart from the performance issues,
which according to all who have replied aren't as negligible as I
thought.  The reason is that this kind of optimization is very dependent
on the vertex formats and other hardware details, making the code hard to
reuse - which is exactly what I want to avoid at this stage.

I do have one question.  Do we really want to invoke the producer on 
every vertex immediately?  In the radeon / r200 drivers this is just 
to copy the whole vertex to a DMA buffer.  Why not generate the data 
directly where it needs to go?  I know that if the vertex format 
changes before the vertex is complete we need to copy out of the 
temporary buffer into the GL state vector, but that doesn't seem like 
the common case.  At the very least, some guys at Intel think 
generating data directly in DMA buffers is the way to go:

http://www.intel.com/technology/itj/Q21999/ARTICLES/art_4.htm


This is a very interesting read. Thanks for the pointer.

It's complicated to know the vertices' positions in the DMA buffer from the
beginning, especially because of clipping, since vertices can be added or
removed, but if I understood correctly, it's still better to do that in
DMA memory and move the vertices around to avoid cache misses. But it can
be very tricky: imagine that clipping generates vertices that no longer fit
in the DMA buffer - what would be done then?

The thing I found most interesting is the issue of applying the TCL
operations to all the vertices at once, versus one vertex at a time. From
previous discussions on this list it seems that nowadays most
CPU performance is dictated by the cache, so it really seems the latter
option is more efficient, but Mesa implements the former (the stages are even
called "pipeline stages") and changing that would mean a big overhaul of the
TnL module.


On a historical note, the earliest versions of Mesa processed a single 
vertex at a time, instead of operating on arrays of vertices, stage by 
stage.  Going to the latter was a big speed-up at the time.

Since the T&L code is a module, one could implement the single-vertex 
scheme as an alternate module.  It would be an interesting experiment.
For very simple modes, eg. quake/quake2, where there is basically only clipping 
to do, this can work very well.  The 3dfx 'minigl' driver worked this way, 
processing a single vertex at a time & then clipping each triangle once produced.

However, for a full GL pipeline it's not such a good proposition.  One 
difficulty is dealing with fallbacks; if anyone tries to implement this you'll 
see what I mean - you want to 1) throw away intermediate data for performance 
reasons and 2) keep it hanging around in case you need to fall back.

Keith




Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread Keith Whitwell

The thing I found most interesting is the issue of applying the TCL
operations to all the vertices at once, versus one vertex at a time. From
previous discussions on this list it seems that nowadays most
CPU performance is dictated by the cache, so it really seems the latter
option is more efficient, but Mesa implements the former (the stages are even
called "pipeline stages") and changing that would mean a big overhaul of the
TnL module.
Doing it in arrays is better from an instruction cache point of view, and as 
long as the arrays are small enough to fit in cache, there's no penalty from a 
data cache point of view.

That's the point of eg, the code in t_array_api.c which cuts large arrays up 
into 256-vertex chunks for processing by the tnl pipeline.
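That chunking can be sketched as follows - a minimal stand-in, where CHUNK and both functions are illustrative rather than the actual t_array_api.c code:

```c
#include <assert.h>
#include <stddef.h>

/* Cut a large vertex array into fixed-size pieces so each piece's data
 * stays resident in the data cache while the tnl pipeline stages run
 * over it. */
#define CHUNK 256

static size_t process_chunk(size_t start, size_t count)
{
    (void)start;
    /* A real pipeline would transform and clip vertices
     * [start, start+count) here; we just report the count. */
    return count;
}

static size_t draw_array(size_t nr_verts)
{
    size_t done = 0;
    for (size_t start = 0; start < nr_verts; start += CHUNK) {
        size_t count = nr_verts - start;
        if (count > CHUNK)
            count = CHUNK;
        done += process_chunk(start, count);
    }
    return done;
}
```

Each pass over a 256-vertex chunk touches a bounded working set, so later pipeline stages find the previous stage's output still in cache.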

Keith






Re: [Mesa3d-dev] Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread Allen Akin
On Fri, Apr 04, 2003 at 05:13:44PM -0800, Ian Romanick wrote:
|   IMHO, we'd be better to spend our time writing a highly optimized 
| just-in-time compiler for ARB_vertex_program.  Then we could just write 
| vertex programs for the different "important" state vectors and let the 
| compiler generate the super-loop.  ...

Which brings to mind one of my favorite papers on dynamic compilation:

http://citeseer.nj.nec.com/massalin92synthesi.html

|   ...  Of course, there are still "issues" 
| with vertex programs. :(

The ARB IP working group is trying to resolve those, or at least come up
with a way to resolve them.  Nothing cast in concrete yet,
unfortunately.

Allen




Re: [Dri-devel] MGA and lockups during shutdown or switchmode

2003-04-04 Thread Panagiotis Papadakos
Also, I forgot to say that I got a lockup again when I tried to switch to a
VT with the previous patch, and also that I believe that with 2.5.6X
kernels I get lockups sooner than with 2.4.X.

Regards
Panagiotis Papadakos

On Sat, 5 Apr 2003, Panagiotis Papadakos wrote:

> Could this be a kernel problem?
>
> I have been using 2.4.21-preX and 2.5.6X and all show the same
> behaviour for me, with IO-APIC enabled or not.
> It reminds me of the problem I have with my Promise controller,
> which after a while, if I have DMA enabled, completely locks my machine.
>
> What kernel are you using at the moment to just try and test it?
>
> My system is an Athlon 600, on an ASUS K7V with KX133, Matrox G400 and a
> Live!
>
> Regards
>   Panagiotis Papadakos
>
> On Sat, 4 Apr 2003, Michel Dänzer wrote:
>
> > On Fre, 2003-04-04 at 22:38, Eric Anholt wrote:
> > > On Fri, 2003-04-04 at 11:12, Panagiotis Papadakos wrote:
> > > > For some months now I have been experiencing lockups when I switch to a VT,
> > > > change the video mode, or try to shut down the Xserver.
> > > >
> > > > So I applied the following patch, after looking the related radeon patch
> > > > and now I can switch to the VTs or change the videomode without lockups.
> > > > But when I press Ctrl+Alt+Delete, sometimes my machine will lockup before
> > > > kdm starts a new Xserver or it will lockup right away after my monitor
> > > > has received the signal from the new Xserver.
> > > >
> > > > If I kill the kdm process and then restart it everything will be ok. (At
> > > > least when I tried it)
> > > >
> > > > So can anyone please help?
> > > >
> > > > This is the patch:
> > > >
> > > > --- mga_dri.c   2003-04-04 22:02:21.0 +0300
> > > > +++ mga_dri.c_new   2003-04-04 16:26:31.0 +0300
> > > > @@ -1359,6 +1359,7 @@
> > > > if (pMga->irq) {
> > > >drmCtlUninstHandler(pMga->drmFD);
> > > >pMga->irq = 0;
> > > > +  pMga->reg_ien = 0;
> > > > }
> > > >
> > > > /* Cleanup DMA */
> > >
> > > Can anyone explain to me what exactly this patch or the one for radeon
> > > do?  My guess/understanding is that this prevents interrupts from being
> > > reenabled on server reset before the irq handler is readded.
> >
> > That's my understanding as well.
> >
> > > But why does this cause a hang?
> >
> > I'm not sure, maybe some kernels and/or machines don't like the
> > interrupt being enabled without the handler being installed. I couldn't
> > reproduce the problem on my Macs.
> >
> >
> > --
> > Earthling Michel Dänzer   \  Debian (powerpc), XFree86 and DRI developer
> > Software libre enthusiast  \ http://svcs.affero.net/rm.php?r=daenzer
> >
> >
> >
>




Re: [Dri-devel] MGA and lockups during shutdown or switchmode

2003-04-04 Thread Panagiotis Papadakos
Could this be a kernel problem?

I have been using 2.4.21-preX and 2.5.6X and all show the same
behaviour for me, with IO-APIC enabled or not.
It reminds me of the problem I have with my Promise controller,
which after a while, if I have DMA enabled, completely locks my machine.

What kernel are you using at the moment to just try and test it?

My system is an Athlon 600, on an ASUS K7V with KX133, Matrox G400 and a
Live!

Regards
Panagiotis Papadakos

On Sat, 4 Apr 2003, Michel Dänzer wrote:

> On Fre, 2003-04-04 at 22:38, Eric Anholt wrote:
> > On Fri, 2003-04-04 at 11:12, Panagiotis Papadakos wrote:
> > > For some months now I have been experiencing lockups when I switch to a VT,
> > > change the video mode, or try to shut down the Xserver.
> > >
> > > So I applied the following patch, after looking the related radeon patch
> > > and now I can switch to the VTs or change the videomode without lockups.
> > > But when I press Ctrl+Alt+Delete, sometimes my machine will lockup before
> > > kdm starts a new Xserver or it will lockup right away after my monitor
> > > has received the signal from the new Xserver.
> > >
> > > If I kill the kdm process and then restart it everything will be ok. (At
> > > least when I tried it)
> > >
> > > So can anyone please help?
> > >
> > > This is the patch:
> > >
> > > --- mga_dri.c   2003-04-04 22:02:21.0 +0300
> > > +++ mga_dri.c_new   2003-04-04 16:26:31.0 +0300
> > > @@ -1359,6 +1359,7 @@
> > > if (pMga->irq) {
> > >drmCtlUninstHandler(pMga->drmFD);
> > >pMga->irq = 0;
> > > +  pMga->reg_ien = 0;
> > > }
> > >
> > > /* Cleanup DMA */
> >
> > Can anyone explain to me what exactly this patch or the one for radeon
> > do?  My guess/understanding is that this prevents interrupts from being
> > reenabled on server reset before the irq handler is readded.
>
> That's my understanding as well.
>
> > But why does this cause a hang?
>
> I'm not sure, maybe some kernels and/or machines don't like the
> interrupt being enabled without the handler being installed. I couldn't
> reproduce the problem on my Macs.
>
>
> --
> Earthling Michel Dänzer   \  Debian (powerpc), XFree86 and DRI developer
> Software libre enthusiast  \ http://svcs.affero.net/rm.php?r=daenzer
>
>
>
>




Re: [Mesa3d-dev] Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread José Fonseca
On Fri, Apr 04, 2003 at 05:13:44PM -0800, Ian Romanick wrote:
> Realistically, either hardware or software uses either 
> array-of-structures or structure-of-arrays.  Most hardware uses the 
> former.  At that point it becomes a matter of, for a given state vector, 
> what's the offset in the structure of an element?  The assembly code in 
> the radeon & r200 drivers handles this very nicely.

You're forgetting the data type. Perhaps recent hardware only uses
unsigned chars for color and floats for the rest, but on older hardware
(such as Mach64) you have quite a mix of floating point, integer, and
fixed point datatypes... 

> >The thing I found most interesting is the issue of applying the TCL
> >operations to all the vertices at once, versus one vertex at a time. From
> >previous discussions on this list it seems that nowadays most
> >CPU performance is dictated by the cache, so it really seems the latter
> >option is more efficient, but Mesa implements the former (the stages are even
> >called "pipeline stages") and changing that would mean a big overhaul of the
> >TnL module.
> 
> This would be very, very, very tricky.  We'd basically need several 
> different super-loops depending on the GL state vector.  The super-loops 
> would go in the pipeline at the same place where the hardware TCL 
> functions go.  If the super-loop could do all the processing, the 
> following TCL stages would be skipped.

This kind of thing is already done in the vertex buffer construction
with templates whose sections are #ifdef'd out for each "super-loop"
according to the state it's meant for. If we used C++ for this we could
use templates to instantiate classes for all possible combinations of
vertex formats and TCL operations.
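A toy, compile-time version of that #ifdef template idea - all names and the vertex layout here are invented, not Mesa's: the same emit code is specialized for one vertex format by the macros defined when it is compiled.

```c
#include <assert.h>

/* In Mesa the template file is compiled once per format with different
 * macros defined; here a single specialization with DO_TEX0 set shows
 * the idea. */
struct vertex {
    float pos[3];
    float tex0[2];
};

#define DO_TEX0 1

/* Returns the number of floats written for this format. */
static int emit_v3t2(float *out, const struct vertex *v)
{
    int n = 0;
    out[n++] = v->pos[0];
    out[n++] = v->pos[1];
    out[n++] = v->pos[2];
#if DO_TEX0
    out[n++] = v->tex0[0];
    out[n++] = v->tex0[1];
#endif
    return n;
}
```

Compiling the same body with DO_TEX0 undefined would yield a position-only emitter, with no per-vertex branching in either variant.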

> Before going down that road we'd want to sit down with oprofile and a 
> bunch of applications to decide which sets of state we wanted to tune 
> for.  IMHO, we'd be better to spend our time writing a highly optimized 
> just-in-time compiler for ARB_vertex_program.  Then we could just write 
> vertex programs for the different "important" state vectors and let the 
> compiler generate the super-loop.  Of course, there are still "issues" 
> with vertex programs. :(

José Fonseca




Re: [Mesa3d-dev] Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread Ian Romanick
José Fonseca wrote:
On Fri, Apr 04, 2003 at 10:08:36AM -0800, Ian Romanick wrote:
In principle, I think the producer/consumer idea is good.  Why not 
implement known optimizations in it from the start?  We already have 
*working code* to build formatted vertex data (see the radeon & r200 
drivers), why not build the object model from there?  Each concrete 
producer class would have an associated vertex format.  On creation, it 
would fill in a table of functions to put data in its vertex buffer. 
This could mean pointers to generic C functions, or it could mean 
dynamically generating code from assembly stubs.

The idea is that the functions from this table could be put directly in 
the dispatch table.  This is, IMHO, critically important.

The various vertex functions then just need to call the object's produce 
method.  This all boils down to putting a C++ face on a technique that 
has been demonstrated to work.


I hope that integration of assembly generation with C++ is feasible, but
I see it as an implementation issue, apart from the performance issues,
which according to all who have replied aren't as negligible as I
thought.  The reason is that this kind of optimization is very dependent
on the vertex formats and other hardware details, making the code hard to
reuse - which is exactly what I want to avoid at this stage.
Realistically, either hardware or software uses either 
array-of-structures or structure-of-arrays.  Most hardware uses the 
former.  At that point it becomes a matter of, for a given state vector, 
what's the offset in the structure of an element?  The assembly code in 
the radeon & r200 drivers handles this very nicely.

I do have one question.  Do we really want to invoke the producer on 
every vertex immediately?  In the radeon / r200 drivers this is just to 
copy the whole vertex to a DMA buffer.  Why not generate the data 
directly where it needs to go?  I know that if the vertex format changes 
before the vertex is complete we need to copy out of the temporary 
buffer into the GL state vector, but that doesn't seem like the common 
case.  At the very least, some guys at Intel think generating data 
directly in DMA buffers is the way to go:

http://www.intel.com/technology/itj/Q21999/ARTICLES/art_4.htm


This is a very interesting read. Thanks for the pointer.

It's complicated to know the vertices' positions in the DMA buffer from the beginning,
especially because of clipping, since vertices can be added or
removed, but if I understood correctly, it's still better to do that in
DMA memory and move the vertices around to avoid cache misses. But it can
be very tricky: imagine that clipping generates vertices that no longer fit
in the DMA buffer - what would be done then?
I think the "online driver model" from the paper only works if you have 
a single loop that does all the processing.  Since Mesa uses a pipeline, 
it would be very tricky.  Using the "online driver model" for a card 
w/HW TCL would be a different story.

The thing I found most interesting is the issue of applying the TCL
operations to all the vertices at once, versus one vertex at a time. From
previous discussions on this list it seems that nowadays most
CPU performance is dictated by the cache, so it really seems the latter
option is more efficient, but Mesa implements the former (the stages are even
called "pipeline stages") and changing that would mean a big overhaul of the
TnL module.
This would be very, very, very tricky.  We'd basically need several 
different super-loops depending on the GL state vector.  The super-loops 
would go in the pipeline at the same place where the hardware TCL 
functions go.  If the super-loop could do all the processing, the 
following TCL stages would be skipped.

Before going down that road we'd want to sit down with oprofile and a 
bunch of applications to decide which sets of state we wanted to tune 
for.  IMHO, we'd be better to spend our time writing a highly optimized 
just-in-time compiler for ARB_vertex_program.  Then we could just write 
vertex programs for the different "important" state vectors and let the 
compiler generate the super-loop.  Of course, there are still "issues" 
with vertex programs. :(



---
This SF.net email is sponsored by: ValueWeb:
Dedicated Hosting for just $79/mo with 500 GB of bandwidth!
No other company gives more support or power for your dedicated server
http://click.atdmt.com/AFF/go/sdnxxaff00300020aff/direct/01/
___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel


Re: [Mesa3d-dev] Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread José Fonseca
On Fri, Apr 04, 2003 at 05:14:54PM -0700, Brian Paul wrote:
> José Fonseca wrote:
> >The thing I found most interesting is the issue of applying the TCL
> >operations to all the vertices at once versus one vertex at a time. From
> >previous discussions on this list it seems that nowadays most
> >CPU performance is dictated by the cache, so it really seems the latter
> >option is more efficient, but Mesa implements the former (the steps are even
> >called "pipeline stages") and changing that would mean a big overhaul of the
> >TnL module.
> 
> On a historical note, the earliest versions of Mesa processed a single 
> vertex at a time, instead of operating on arrays of vertices, stage by 
> stage.  Going to the latter was a big speed-up at the time.

Yes, and the use of SIMD instructions also favors that approach.
Actually, in that article they chose to process 4 vertices at a time
rather than just one, surely because that's the number that fits in the
SIMD registers.

I think the fact that CPUs got so much faster while buses didn't keep
pace also changed the picture, making uncached memory accesses look
awfully slow compared with everything else.

> Since the T&L code is a module, one could implement the single-vertex 
> scheme as an alternate module.  It would be an interesting experiment.

Indeed.

José Fonseca




Re: [Mesa3d-dev] Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread Brian Paul
José Fonseca wrote:
On Fri, Apr 04, 2003 at 10:08:36AM -0800, Ian Romanick wrote:

Right now people use things like Viewperf to make systems purchase 
decisions.  Unless the graphics hardware and the rest of the system are 
very mismatched, the immediate API already has an impact on performance 
in those benchmarks.

The performance of the immediate API *is* important to real 
applications.  Why do you think Sun came up with the SUN_vertex 
extension?  To reduce the overhead of the immediate API, of course. :)

[sample code cut]


But this is all of _very_ _little_ importance compared with the
ability to _write_ a full driver quickly, which is what a well-designed
OOP interface gives you. As I said here several times, this kind of
low-level optimization consumes too much development time, with the result
that higher-level optimizations (usually with much more impact on
performance) are never attempted.
In principle, I think the producer/consumer idea is good.  Why not 
implement known optimizations in it from the start?  We already have 
*working code* to build formatted vertex data (see the radeon & r200 
drivers), so why not build the object model from there?  Each concrete 
producer class would have an associated vertex format.  On creation, it 
would fill in a table of functions to put data in its vertex buffer. 
This could mean pointers to generic C functions, or it could mean 
dynamically generating code from assembly stubs.

The idea is that the functions from this table could be put directly in 
the dispatch table.  This is, IMHO, critically important.

The various vertex functions then just need to call the object's produce 
method.  This all boils down to putting a C++ face on a technique that 
has been demonstrated to work.


I hope that integrating assembly generation with C++ is feasible, but
I see it as an implementation issue, regardless of the performance issues,
which according to all who have replied aren't as negligible as I
thought.  The reason is that this kind of optimization is very dependent
on the vertex formats and other hardware details and makes the code hard to
reuse - which is exactly what I want to avoid at this stage.

I do have one question.  Do we really want to invoke the producer on 
every vertex immediately?  In the radeon / r200 drivers this is just to 
copy the whole vertex to a DMA buffer.  Why not generate the data 
directly where it needs to go?  I know that if the vertex format changes 
before the vertex is complete we need to copy out of the temporary 
buffer into the GL state vector, but that doesn't seem like the common 
case.  At the very least, some guys at Intel think generating data 
directly in DMA buffers is the way to go:

http://www.intel.com/technology/itj/Q21999/ARTICLES/art_4.htm


This is a very interesting read. Thanks for the pointer.

It's complicated to know the vertices' positions in the DMA buffer from the
beginning, especially because of clipping, since vertices can be added or
removed, but if I understood correctly, it's still better to do that in
DMA memory and move the vertices around to avoid cache misses. But it can
be very tricky: imagine that clipping generates vertices that no longer fit
in the DMA buffer; what would be done then?
The thing I found most interesting is the issue of applying the TCL
operations to all the vertices at once versus one vertex at a time. From
previous discussions on this list it seems that nowadays most
CPU performance is dictated by the cache, so it really seems the latter
option is more efficient, but Mesa implements the former (the steps are even
called "pipeline stages") and changing that would mean a big overhaul of the
TnL module.
On a historical note, the earliest versions of Mesa processed a single vertex 
at a time, instead of operating on arrays of vertices, stage by stage.  Going 
to the latter was a big speed-up at the time.

Since the T&L code is a module, one could implement the single-vertex scheme 
as an alternate module.  It would be an interesting experiment.


I guess my point is that we *can* have our cake and eat it too.  We can 
have a nice object model and have "classic" low-level optimizations. 
The benefit of doing those optimizations at the level of the object model 
is that they only need to be done once for a given vertex format. 
Reusing optimizations sounds like a big win to me! :)


I hope so. But at this point I'll just try to design the objects so they
allow both kinds of implementation. 

Thanks for the feedback.

José Fonseca
-Brian





Re: [Mesa3d-dev] Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread José Fonseca
On Fri, Apr 04, 2003 at 10:23:21PM +0100, Keith Whitwell wrote:
>
> The optimization of the vertex api has yielded huge improvements.  Even 
> with the runtime-codegenerated versions of these functions in the 
> radeon/r200 driver, they *still* dominate viewperf profile runs - meaning 
> that *all other optimizations* are a waste of time for viewperf, because 
> 60% of your time is being spent in the vertex api functions.

I was underestimating its importance then...

> >>Nowadays, vertex arrays are the path to use if you really care about
> >>performance, of course, but a lot of apps still use the regular
> >>per-vertex GL functions.
> 
> Except for applications that already exist and use the vertex apis -- of 
> which there are many.
> 
> And vertex arrays aren't the fastpath any more, but things like 
> ARB_vertex_array_object or NV_vertex_array_range.
> 
> 
> >Now that you mention vertex array, for that, the producer would be
> >different, but the consumer would be the same.
> 
> For developing a driver, it's not necessary to touch the tnl code at all - 
> even hardware t&l drivers can quite happily plug into the existing 
> mechanisms and get OK performance.

For now I'll also be plugging the C++ classes into the existing
T&L code, but in the future I may want to change the way the software
T&L interfaces with the [hardware] rasterizers, since the current
interface makes it difficult to reuse vertices when outputting tri- or
quad-strips, i.e., you keep sending the same vertices over and over
again, even when they are shared between consecutive triangles. 

José Fonseca




Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread José Fonseca
On Fri, Apr 04, 2003 at 10:08:36AM -0800, Ian Romanick wrote:
> Right now people use things like Viewperf to make systems purchase 
> decisions.  Unless the graphics hardware and the rest of the system are 
> very mismatched, the immediate API already has an impact on performance 
> in those benchmarks.
> 
> The performance of the immediate API *is* important to real 
> applications.  Why do you think Sun came up with the SUN_vertex 
> extension?  To reduce the overhead of the immediate API, of course. :)
> 
> [sample code cut]
> 
> >But this is all of _very_ _little_ importance compared with the
> >ability to _write_ a full driver quickly, which is what a well-designed
> >OOP interface gives you. As I said here several times, this kind of
> >low-level optimization consumes too much development time, with the result
> >that higher-level optimizations (usually with much more impact on
> >performance) are never attempted.
> 
> In principle, I think the producer/consumer idea is good.  Why not 
> implement known optimizations in it from the start?  We already have 
> *working code* to build formatted vertex data (see the radeon & r200 
> drivers), so why not build the object model from there?  Each concrete 
> producer class would have an associated vertex format.  On creation, it 
> would fill in a table of functions to put data in its vertex buffer. 
> This could mean pointers to generic C functions, or it could mean 
> dynamically generating code from assembly stubs.
> 
> The idea is that the functions from this table could be put directly in 
> the dispatch table.  This is, IMHO, critically important.
> 
> The various vertex functions then just need to call the object's produce 
> method.  This all boils down to putting a C++ face on a technique that 
> has been demonstrated to work.

I hope that integrating assembly generation with C++ is feasible, but
I see it as an implementation issue, regardless of the performance issues,
which according to all who have replied aren't as negligible as I
thought.  The reason is that this kind of optimization is very dependent
on the vertex formats and other hardware details and makes the code hard to
reuse - which is exactly what I want to avoid at this stage.

> I do have one question.  Do we really want to invoke the producer on 
> every vertex immediately?  In the radeon / r200 drivers this is just to 
> copy the whole vertex to a DMA buffer.  Why not generate the data 
> directly where it needs to go?  I know that if the vertex format changes 
> before the vertex is complete we need to copy out of the temporary 
> buffer into the GL state vector, but that doesn't seem like the common 
> case.  At the very least, some guys at Intel think generating data 
> directly in DMA buffers is the way to go:
> 
> http://www.intel.com/technology/itj/Q21999/ARTICLES/art_4.htm

This is a very interesting read. Thanks for the pointer.

It's complicated to know the vertices' positions in the DMA buffer from the
beginning, especially because of clipping, since vertices can be added or
removed, but if I understood correctly, it's still better to do that in
DMA memory and move the vertices around to avoid cache misses. But it can
be very tricky: imagine that clipping generates vertices that no longer fit
in the DMA buffer; what would be done then?

The thing I found most interesting is the issue of applying the TCL
operations to all the vertices at once versus one vertex at a time. From
previous discussions on this list it seems that nowadays most
CPU performance is dictated by the cache, so it really seems the latter
option is more efficient, but Mesa implements the former (the steps are even
called "pipeline stages") and changing that would mean a big overhaul of the
TnL module.

> I guess my point is that we *can* have our cake and eat it too.  We can 
> have a nice object model and have "classic" low-level optimizations. 
> The benefit of doing those optimizations at the level of the object model 
> is that they only need to be done once for a given vertex format. 
> Reusing optimizations sounds like a big win to me! :)

I hope so. But at this point I'll just try to design the objects so they
allow both kinds of implementation. 

Thanks for the feedback.

José Fonseca




Re: [Dri-devel] Updated Mesa in trunk

2003-04-04 Thread Brian Paul
Ian Romanick wrote:
Brian Paul wrote:

I've updated the Mesa sources on the DRI trunk to 5.0.1 with some 
minor, post-release bug fixes.  I also removed the $Id$ stuff.


I just noticed a couple things in the trunk's Mesa code.

swrast/s_texture.c is missing support for ATI_texture_env_combine3.  All 
of the rest of the support is there (in texstate.c, for example), but 
the actual software rasterization routines are not there.
Done.  I had to back-port that from the Mesa trunk.


Along the same lines, the texmem-0-0-1 branch had the enums for that 
extension in glext.h, but the trunk has them in gl.h.  Where should they 
actually be?
Well, if SGI would update glext.h, that's where they should be.  I've asked Jon 
Leech about it but haven't gotten a reply.  I'm hesitant to edit glext.h, so I've 
put them into gl.h for now.  There should be no conflict if they wind up in 
both places, as long as they're protected with #ifndef extensionName.


I think these are the last two issues before I call the trunk-to-branch 
merge done. :)
That'll be nice.

-Brian





Re: [Mesa3d-dev] Re: [Dri-devel] Re: [forum] Notes from a teleconferenceheld 2003-3-27

2003-04-04 Thread Keith Whitwell
Philip Brown wrote:
On Fri, Apr 04, 2003 at 10:34:05PM +0100, Keith Whitwell wrote:

[Philip Brown writes]
So to truly create something akin to nvidia's UDA libs/interface would
involve porting support for 3d hardware currently handled by DRI over to
Mesa, and making Mesa capable of using it directly, without X.
Kind of like the embedded branch, then.


Yeup.  But with full hardware support, rather than the "limited" 
support that I believe the embedded branch has now.
There's a flag to build either a full or subsetted driver.

Keith





Re: [Mesa3d-dev] Re: [Dri-devel] Re: [forum] Notes from a teleconference held 2003-3-27

2003-04-04 Thread Philip Brown
On Fri, Apr 04, 2003 at 10:34:05PM +0100, Keith Whitwell wrote:
> >[Philip Brown writes]
> > So to truly create something akin to nvidia's UDA libs/interface would
> > involve porting support for 3d hardware currently handled by DRI over to
> > Mesa, and making Mesa capable of using it directly, without X.
> 
> Kind of like the embedded branch, then.

Yeup.  But with full hardware support, rather than the "limited" 
support that I believe the embedded branch has now.

Then those people who dream of having an X server built on top of
a pure 3d foundation, finally have something to swing at, too ;-)






Re: [Dri-devel] Updated Mesa in trunk

2003-04-04 Thread Ian Romanick
Brian Paul wrote:
I've updated the Mesa sources on the DRI trunk to 5.0.1 with some minor, 
post-release bug fixes.  I also removed the $Id$ stuff.
I just noticed a couple things in the trunk's Mesa code.

swrast/s_texture.c is missing support for ATI_texture_env_combine3.  All 
of the rest of the support is there (in texstate.c, for example), but 
the actual software rasterization routines are not there.

Along the same lines, the texmem-0-0-1 branch had the enums for that 
extension in glext.h, but the trunk has them in gl.h.  Where should they 
actually be?

I think these are the last two issues before I call the trunk-to-branch 
merge done. :)





Re: [Dri-devel] Re: [forum] Notes from a teleconference held 2003-3-27

2003-04-04 Thread Keith Whitwell
Philip Brown wrote:
On Fri, Apr 04, 2003 at 10:09:00PM +0100, Keith Whitwell wrote:

Philip Brown wrote:

Are you perhaps envisioning pushing Mesa to evolve into something
like the nvidia  UDA  API? Where there is suddenly a single, unified
cross-hardware/OS platform for all 3d-accel hardware access to program to?
I think that could potentially be a very interesting idea. But it would
probably double the Mesa codebase size, most likely, and so should be
managed VERY carefully. Certainly doable, but it should be
Done Right(tm).
That's not a stated plan, afaik, but nobody would want to rule out such an idea.

And, actually it wouldn't increase the code size of Mesa at all, as Mesa 
already runs on many different platforms & is basically os-neutral.  The 
os-dependent parts tend to be the window system bindings and of course the 
kernel components of the actual drivers.


That last bit is sort of what I'm referring to.
Mesa currently supports only a subset of the video cards currently supported by
DRI.  DRI only works in conjunction with an X server, as far
as I know, because it is dependent on the GLX api, and can't function
through OpenGL alone.
I don't really understand your terms.  'DRI' strictly refers only to the 
infrastructure in XFree86 that allows direct rendering from client processes - 
 it's not Mesa-specific as there are non-mesa GL DRI drivers.

There's been a trend to call the drivers that hook into the DRI as 'DRI 
drivers', but that's been demonstrated not to be a very good term, as it was 
very easy to get those drivers running against native fbdev on the embedded 
branch, without the involvement of an X server or the DRI infrastructure.

So to truly create something akin to nvidia's UDA libs/interface would
involve porting support for 3d hardware currently handled by DRI over to
Mesa, and making Mesa capable of using it directly, without X.
Kind of like the embedded branch, then.

Keith





Re: [Dri-devel] Re: [forum] Notes from a teleconference held 2003-3-27

2003-04-04 Thread Philip Brown
On Fri, Apr 04, 2003 at 10:09:00PM +0100, Keith Whitwell wrote:
> Philip Brown wrote:
> > Are you perhaps envisioning pushing Mesa to evolve into something
> > like the nvidia  UDA  API? Where there is suddenly a single, unified
> > cross-hardware/OS platform for all 3d-accel hardware access to program to?
> > 
> > 
> > I think that could potentially be a very interesting idea. But it would
> > probably double the Mesa codebase size, most likely, and so should be
> > managed VERY carefully. Certainly doable, but it should be
> > Done Right(tm).
> 
> That's not a stated plan, afaik, but nobody would want to rule out such an idea.
> 
> And, actually it wouldn't increase the code size of Mesa at all, as Mesa 
> already runs on many different platforms & is basically os-neutral.  The 
> os-dependent parts tend to be the window system bindings and of course the 
> kernel components of the actual drivers.

That last bit is sort of what I'm referring to.
Mesa currently supports only a subset of the video cards currently supported by
DRI.  DRI only works in conjunction with an X server, as far
as I know, because it is dependent on the GLX api, and can't function
through OpenGL alone.

So to truly create something akin to nvidia's UDA libs/interface would
involve porting support for 3d hardware currently handled by DRI over to
Mesa, and making Mesa capable of using it directly, without X.

Then the dri source tree would become considerably emptier :->





Re: [Mesa3d-dev] Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread Keith Whitwell
José Fonseca wrote:
On Fri, Apr 04, 2003 at 08:48:35AM -0700, Brian Paul wrote:

In general, this sounds reasonable but you also have to consider 
performance.
The glVertex, Color, TexCoord, etc commands have to be simple and fast.  As 
it is now, glColor4f (for example) (when implemented in X86 assembly) is 
just a jump into _tnl_Color4f() which stuffs the color into the immediate 
struct and returns.  Something similar is done in the R200 driver.

If the implementation of _tnl_Color4f() involves a call to 
producer->Color4f() we'd lose some performance.


I know, but my objective is to design a good object interface into which
all drivers may fit and reuse code. When a driver gets to the point
where the producer->Color4f() calls are the main performance bottleneck
(!?) the developer is free to write a tailored version of TnLProducer
that eliminates that extra call:
class TnLProducerFast {

  Vertex current;
  TnLConsumer *consumer;

  TnLProducerFast(TnLConsumer *_consumer) {
    consumer = _consumer;
  }

  void activate() {
    _glapi_setapi(GL_COLOR3f, _Color3f);
    ...
  }

  // Static entry point installed in the dispatch table, so no
  // producer->Color3f() indirection is needed per call.
  static void _Color3f(float r, float g, float b) {
    TnLProducerFast *self = GET_THIS_PTR_FROM_CURRENT_CTX();
    self->current.r = r; self->current.g = g; self->current.b = b;
  }

};

We can even generate this TnLProducerFast automatically from the
original TnLProducer with a template, i.e.,
template < class T > 
class TnLProducerTmpl {

  T tnl;

  void activate() {
    _glapi_setapi(GL_COLOR3f, _Color3f);
    ...
  }

  static void _Color3f(float r, float g, float b) {
    TnLProducerTmpl *self = GET_THIS_PTR_FROM_CURRENT_CTX();
    self->tnl.Color3f(r, g, b); // This call is eliminated if T::Color3f
                                // is inlined
  }
};

typedef TnLProducerTmpl< TnLProducer > TnLProducerFast;

But this is all of _very_ _little_ importance compared with the
ability to _write_ a full driver quickly, which is what a well-designed
OOP interface gives you. As I said here several times, this kind of
low-level optimization consumes too much development time, with the result
that higher-level optimizations (usually with much more impact on
performance) are never attempted.
The optimization of the vertex api has yielded huge improvements.  Even with 
the runtime-codegenerated versions of these functions in the radeon/r200 
driver, they *still* dominate viewperf profile runs - meaning that *all other 
optimizations* are a waste of time for viewperf, because 60% of your time is 
being spent in the vertex api functions.


Nowadays, vertex arrays are the path to use if you really care about
performance, of course, but a lot of apps still use the regular
per-vertex GL functions.
Except for applications that already exist and use the vertex apis -- of which 
there are many.

And vertex arrays aren't the fastpath any more, but things like 
ARB_vertex_array_object or NV_vertex_array_range.


Now that you mention vertex array, for that, the producer would be
different, but the consumer would be the same.
For developing a driver, it's not necessary to touch the tnl code at all - 
even hardware t&l drivers can quite happily plug into the existing mechanisms 
and get OK performance.

Keith





Re: [Dri-devel] MGA and lockups during shutdown or switchmode

2003-04-04 Thread Michel Dänzer
On Fre, 2003-04-04 at 22:38, Eric Anholt wrote:
> On Fri, 2003-04-04 at 11:12, Panagiotis Papadakos wrote:
> > For some months now I have been experiencing lockups when switching to the VTs,
> > changing the video modes, or shutting down the Xserver.
> > 
> > So I applied the following patch, after looking at the related radeon patch,
> > and now I can switch to the VTs or change the videomode without lockups.
> > But when I press Ctrl+Alt+Delete, sometimes my machine will lockup before
> > kdm starts a new Xserver or it will lockup right away after my monitor
> > has received the signal from the new Xserver.
> > 
> > If I kill the kdm process and then restart it everything will be ok. (At
> > least when I tried it)
> > 
> > So can anyone please help?
> > 
> > This is the patch:
> > 
> > --- mga_dri.c   2003-04-04 22:02:21.0 +0300
> > +++ mga_dri.c_new   2003-04-04 16:26:31.0 +0300
> > @@ -1359,6 +1359,7 @@
> > if (pMga->irq) {
> >drmCtlUninstHandler(pMga->drmFD);
> >pMga->irq = 0;
> > +  pMga->reg_ien = 0;
> > }
> > 
> > /* Cleanup DMA */
> 
> Can anyone explain to me what exactly this patch or the one for radeon
> do?  My guess/understanding is that this prevents interrupts from being
> reenabled on server reset before the irq handler is readded.  

That's my understanding as well.

> But why does this cause a hang?

I'm not sure, maybe some kernels and/or machines don't like the
interrupt being enabled without the handler being installed. I couldn't
reproduce the problem on my Macs.


-- 
Earthling Michel Dänzer   \  Debian (powerpc), XFree86 and DRI developer
Software libre enthusiast  \ http://svcs.affero.net/rm.php?r=daenzer





Re: [Dri-devel] Re: [forum] Notes from a teleconference held 2003-3-27

2003-04-04 Thread Keith Whitwell
Philip Brown wrote:
On Fri, Apr 04, 2003 at 08:43:31AM -0700, Jens Owen wrote:

Possible
Future:  Mesa Tree -+--> XFree86 Tree
 - API Focus|- X/3D Integration
 - 3D HW Focus  |- Complete Window System Focus
|
+--> Alternate X Tree
|- Duplicate X/3D Integration
|- Possibly more 3D developer
|  friendly, who knows?
|
+--> FBDev Subset
|- FBDev/3D Integration
|- Embedded Focus
|
+--> DirectFB
|- DFB/3D Integration
V
 Other Window Systems:
 DirectFB, WGL, AGL and
 new ones that haven't
 been invented, yet.


Are you perhaps envisioning pushing Mesa to evolve into something
like the nvidia  UDA  API? Where there is suddenly a single, unified
cross-hardware/OS platform for all 3d-accel hardware access to program to?
I think that could potentially be a very interesting idea. But it would
probably double the Mesa codebase size, most likely, and so should be
managed VERY carefully. Certainly doable, but it should be
Done Right(tm).
That's not a stated plan, afaik, but nobody would want to rule out such an idea.

And, actually it wouldn't increase the code size of Mesa at all, as Mesa 
already runs on many different platforms & is basically os-neutral.  The 
os-dependent parts tend to be the window system bindings and of course the 
kernel components of the actual drivers.

Keith





Re: [Dri-devel] Re: [forum] Notes from a teleconference held 2003-3-27

2003-04-04 Thread Philip Brown
On Fri, Apr 04, 2003 at 08:43:31AM -0700, Jens Owen wrote:
> Possible
> Future:  Mesa Tree -+--> XFree86 Tree
>   - API Focus|- X/3D Integration
>   - 3D HW Focus  |- Complete Window System Focus
>  |
>  +--> Alternate X Tree
>  |- Duplicate X/3D Integration
>  |- Possibly more 3D developer
>  |  friendly, who knows?
>  |
>  +--> FBDev Subset
>  |- FBDev/3D Integration
>  |- Embedded Focus
>  |
>  +--> DirectFB
>  |- DFB/3D Integration
>  V
>   Other Window Systems:
>   DirectFB, WGL, AGL and
>   new ones that haven't
>   been invented, yet.
> 

Are you perhaps envisioning pushing Mesa to evolve into something
like the nvidia  UDA  API? Where there is suddenly a single, unified
cross-hardware/OS platform for all 3d-accel hardware access to program to?


I think that could potentially be a very interesting idea. But it would
probably double the Mesa codebase size, most likely, and so should be
managed VERY carefully. Certainly doable, but it should be
Done Right(tm).

For example, having fully documented APIs written BEFORE doing any code
modifications.

Then having the APIs blessed by different people from at least 3
separate operating systems.

(no, "suse", "redhat", and 'debian" do not count as 3 separate operating
 systems ;-)





Re: [Dri-devel] MGA and lockups during shutdown or switchmode

2003-04-04 Thread Eric Anholt
On Fri, 2003-04-04 at 11:12, Panagiotis Papadakos wrote:
> For some months now I have been experiencing lockups when switching to the
> VTs, changing video modes, or shutting down the X server.
> 
> So I applied the following patch, after looking at the related radeon patch,
> and now I can switch VTs or change the video mode without lockups.
> But when I press Ctrl+Alt+Delete, sometimes my machine will lock up before
> kdm starts a new X server, or it will lock up right away after my monitor
> has received the signal from the new X server.
> 
> If I kill the kdm process and then restart it, everything is OK (at
> least when I tried it).
> 
> So can anyone please help?
> 
> This is the patch:
> 
> --- mga_dri.c   2003-04-04 22:02:21.0 +0300
> +++ mga_dri.c_new   2003-04-04 16:26:31.0 +0300
> @@ -1359,6 +1359,7 @@
> if (pMga->irq) {
>drmCtlUninstHandler(pMga->drmFD);
>pMga->irq = 0;
> +  pMga->reg_ien = 0;
> }
> 
> /* Cleanup DMA */

Can anyone explain to me what exactly this patch, or the one for radeon,
does?  My guess/understanding is that it prevents interrupts from being
re-enabled on server reset before the irq handler is re-added.  But why
does this cause a hang?

--
Eric Anholt[EMAIL PROTECTED]  
http://people.freebsd.org/~anholt/ [EMAIL PROTECTED]





[Dri-devel] MGA and lockups during shutdown or switchmode

2003-04-04 Thread Panagiotis Papadakos
For some months now I have been experiencing lockups when switching to the
VTs, changing video modes, or shutting down the X server.

So I applied the following patch, after looking at the related radeon patch,
and now I can switch VTs or change the video mode without lockups.
But when I press Ctrl+Alt+Delete, sometimes my machine will lock up before
kdm starts a new X server, or it will lock up right away after my monitor
has received the signal from the new X server.

If I kill the kdm process and then restart it, everything is OK (at
least when I tried it).

So can anyone please help?

This is the patch:

--- mga_dri.c   2003-04-04 22:02:21.0 +0300
+++ mga_dri.c_new   2003-04-04 16:26:31.0 +0300
@@ -1359,6 +1359,7 @@
if (pMga->irq) {
   drmCtlUninstHandler(pMga->drmFD);
   pMga->irq = 0;
+  pMga->reg_ien = 0;
}

/* Cleanup DMA */



Regards
Panagiotis Papadakos




Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread Ian Romanick
José Fonseca wrote:
On Fri, Apr 04, 2003 at 08:48:35AM -0700, Brian Paul wrote:

In general, this sounds reasonable but you also have to consider 
performance.
The glVertex, Color, TexCoord, etc commands have to be simple and fast.  As 
it is now, glColor4f (for example) (when implemented in X86 assembly) is 
just a jump into _tnl_Color4f() which stuffs the color into the immediate 
struct and returns.  Something similar is done in the R200 driver.

If the implementation of _tnl_Color4f() involves a call to 
producer->Color4f() we'd lose some performance.


I know, but my objective is to design a good object interface in which
all drivers may fit and reuse code. When a driver gets to the point
where the producer->Color4f() calls are the main performance bottleneck
(!?), the developer is free to write a tailored version of TnLProducer
that eliminates that extra call:
Right now people use things like Viewperf to make system purchase 
decisions.  Unless the graphics hardware and the rest of the system are 
very mismatched, the immediate API already has an impact on performance 
in those benchmarks.

The performance of the immediate API *is* important to real 
applications.  Why do you think Sun came up with the SUN_vertex 
extension?  To reduce the overhead of the immediate API, of course. :)

[sample code cut]

But this is all of _very_ _little_ importance compared with the
ability to _write_ a full driver quickly, which is what a well-designed
OOP interface gives you. As I have said here several times, these kinds of
low-level optimizations consume too much development time, with the
result that higher-level optimizations (usually with much more impact on
performance) are never attempted.
In principle, I think the producer/consumer idea is good.  Why not 
implement known optimizations in it from the start?  We already have 
*working code* to build formatted vertex data (see the radeon & r200 
drivers), so why not build the object model from there?  Each concrete 
producer class would have an associated vertex format.  On creation, it 
would fill in a table of functions to put data in its vertex buffer. 
This could mean pointers to generic C functions, or it could mean 
dynamically generating code from assembly stubs.

The idea is that the functions from this table could be put directly in 
the dispatch table.  This is, IMHO, critically important.

The various vertex functions then just need to call the object's produce 
method.  This all boils down to putting a C++ face on a technique that 
has been demonstrated to work.

I do have one question.  Do we really want to invoke the producer on 
every vertex immediately?  In the radeon / r200 drivers this is just to 
copy the whole vertex to a DMA buffer.  Why not generate the data 
directly where it needs to go?  I know that if the vertex format changes 
before the vertex is complete we need to copy out of the temporary 
buffer into the GL state vector, but that doesn't seem like the common 
case.  At the very least, some guys at Intel think generating data 
directly in DMA buffers is the way to go:

http://www.intel.com/technology/itj/Q21999/ARTICLES/art_4.htm

I guess my point is that we *can* have our cake and eat it too.  We can 
have a nice object model and have "classic" low-level optimizations. 
The benefit of doing those optimizations at the level of the object model 
is that they only need to be done once for a given vertex format. 
Reusing optimizations sounds like a big win to me! :)





Re: [Dri-devel] Fresh DRI driver snapshots - call for testers

2003-04-04 Thread José Fonseca
On Fri, Apr 04, 2003 at 10:13:40AM -0500, Leif Delgass wrote:
> On 4 Apr 2003, Sergey V. Oudaltsov wrote:
> 
> > 
> > > There are new driver snapshots from the trunk (for XFree86 4.3.0) on
> > http://dri.sourceforge.net/snapshots/ . They were built without a glitch
> > but are as yet untested. Please report back success and/or failure.
> > Great!! Does this include mach64 (which is in bleeding-edge subdir)?
> > Also, Leif, what about those cool DRI+Xv snapshots?
> 
> I think we should switch the mach64 snapshots to the mach64-0-0-6-branch 
> now.  

OK, I'll do that then.

> The merge from the trunk is complete, so that branch is now based on 
> XFree86 4.3.0 and Mesa 5.0.1.  I'll be merging in Brian's updates to Mesa 
> from the trunk soon as well.
> 
> I have a new DRI+Xv source patch for this branch available at:
> 
> http://www.retinalburn.net/linux/dri_xv.html
> 
> I'll be adding binaries pretty soon, probably built on RedHat 9.

José Fonseca




Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread José Fonseca
On Fri, Apr 04, 2003 at 08:48:35AM -0700, Brian Paul wrote:
> In general, this sounds reasonable but you also have to consider 
> performance.
> The glVertex, Color, TexCoord, etc commands have to be simple and fast.  As 
> it is now, glColor4f (for example) (when implemented in X86 assembly) is 
> just a jump into _tnl_Color4f() which stuffs the color into the immediate 
> struct and returns.  Something similar is done in the R200 driver.
> 
> If the implementation of _tnl_Color4f() involves a call to 
> producer->Color4f() we'd lose some performance.

I know, but my objective is to design a good object interface in which
all drivers may fit and reuse code. When a driver gets to the point
where the producer->Color4f() calls are the main performance bottleneck
(!?), the developer is free to write a tailored version of TnLProducer
that eliminates that extra call:

class TnLProducerFast {

  Vertex current;
  TnLConsumer *consumer;
  
  TnLProducerFast(TnLConsumer *_consumer) {
    consumer = _consumer;
  }

  void activate() {
     _glapi_setapi(GL_COLOR3f, _Color3f);
     ...
  }
  
  static void _Color3f(float r, float g, float b) {
    TnLProducerFast *self = GET_THIS_PTR_FROM_CURRENT_CTX();
    self->current.r = r; self->current.g = g; self->current.b = b;
  }
  
};

We can even generate automatically this TnLProducerFast from the
original TnLProducer with a template, i.e.,

template < class T >
class TnLProducerTmpl {

  T tnl;

  void activate() {
     _glapi_setapi(GL_COLOR3f, _Color3f);
     ...
  }
  
  static void _Color3f(float r, float g, float b) {
    TnLProducerTmpl *self = GET_THIS_PTR_FROM_CURRENT_CTX();
    self->tnl.Color3f(r, g, b); // This call is eliminated if T::Color3f
                                // is inlined
  }
};

typedef TnLProducerTmpl< TnLProducer > TnLProducerFast;

But this is all of _very_ _little_ importance compared with the
ability to _write_ a full driver quickly, which is what a well-designed
OOP interface gives you. As I have said here several times, these kinds of
low-level optimizations consume too much development time, with the
result that higher-level optimizations (usually with much more impact on
performance) are never attempted.

> Nowadays, vertex arrays are the path to use if you really care about
> performance, of course, but a lot of apps still use the regular
> per-vertex GL functions.

Now that you mention vertex arrays: for those, the producer would be
different, but the consumer would be the same.

José Fonseca




[Dri-devel] Re: [Mesa3d-dev] Re: [forum] Notes from a teleconference held 2003-3-27

2003-04-04 Thread Brian Paul
Jens Owen wrote:
Michel,

You're bringing in issues that affect more than just the X development 
community here, so I'm copying the DRI and Mesa developers.

Michel Dänzer wrote:

On Don, 2003-04-03 at 22:03, Alan Cox wrote:

From the DRI people's point of view, it leads to more work as we'd 
want our drivers to work with both trees -- but that's pretty much 
life, and we'll have to do what we can to minimize the effects on us.


Perhaps a test of Keith's theory is that DRI should be able to be -part-
of, not just work with, his tree.


I think we should first discuss (more) the pros and cons of folding the
DRI tree into other trees. I do find the potential benefits (for merges
in particular) compelling, but there's e.g. the danger of making it 
harder
to get it integrated with other components like e.g. DirectFB or other 
OpenGL implementations.


Folding the X-specific work into an X project makes a lot of sense from a 
technical perspective.  My biggest concern would be losing developer 
momentum by removing this work from a developer friendly project like 
the DRI.

The Mesa specific parts and the supporting kernel driver parts could be 
pushed into the Mesa tree (this has already been done for the embedded 
branch).  Currently, the Mesa project is very focused on the API and the 
full software stack that supports that API in a wide range of windowing 
environments while the DRI project is focused on hardware acceleration 
in the X environment.  It's technically feasible to transition the 
development of the Mesa hardware drivers (currently done in DRI) to the 
Mesa project.  However, we still need to worry about developer momentum 
as two focuses would now be in the Mesa project (API and 3D HW).
That could be a good thing.

First, as it is now, Mesa development is a little like duck hunting - aiming 
ahead of the target.  That is, I try to implement extensions in anticipation 
of upcoming hardware features (texture cube maps, texture combine, texture 
rectangle, shadow maps, vertex/fragment programming, etc).  It's often months 
after I finish an extension in s/w Mesa that we look at hardware implementations.

Sometimes I find that I have to go back and re-do parts of the software 
implementation to make it work for hardware.

If the hardware drivers were in the Mesa tree, the hardware implementation of 
new features could be done more efficiently and sooner.

Secondly, during DRI IRC a few weeks ago we made a point that should probably 
be repeated:  the interface between core Mesa and the h/w drivers is _much_ 
more intricate than the interface between the h/w drivers and XFree86.

Yet the former interface is where we've actually split the code bases!

The embedded Radeon driver project has demonstrated that the driver's ties to X 
aren't too strong and that that dependency can be abstracted away.  The 
interface between the h/w drivers and XFree86/DRI hasn't changed in a long 
time; it's pretty static.

Ideally, if the 3D hardware driver were developed within the Mesa tree, the 
complexity barriers which discourage newbies could be lowered a bit - someone 
could download the Mesa tree, do some coding, compile and immediately try out 
drivers without having to build a whole X tree.

The 3D drivers could also be used outside of X, like the embedded/fbdev Radeon 
driver.


I think the following block diagram illustrates the key areas of 3D 
development focus, and the transition that's being suggested:

Now: Mesa Tree -> DRI Tree -> XFree86 Tree
     - API Focus  - 3D HW Focus   - Complete Window System Focus
                                  - X/3D Integration
Possible
Future:  Mesa Tree -+--> XFree86 Tree
 - API Focus|- X/3D Integration
 - 3D HW Focus  |- Complete Window System Focus
|
+--> Alternate X Tree
|- Duplicate X/3D Integration
|- Possibly more 3D developer
|  friendly, who knows?
|
+--> FBDev Subset
|- FBDev/3D Integration
|- Embedded Focus
|
+--> DirectFB
|- DFB/3D Integration
V
 Other Window Systems:
 DirectFB, WGL, AGL and
 new ones that haven't
 been invented, yet.
I would like to hear the concerns from the developers that support the 
API, 3D HW and X/3D Integration before considering any kind of 
transition.  IMHO, supporting the community of open source developers 
that make 3D happen is much more important than control over any one 
project.


In anticipation of Phil Brown's likely response I'll say "No, doing this would 
not compromise the platform-neutral nature of core Mesa." :)

-Brian



--


Re: [Dri-devel] TnL interface in the OOP/C++ world

2003-04-04 Thread Brian Paul
José Fonseca wrote:
Now that, thanks to Brian, the textures are pretty much taken care of,
I'm moving into the TnL module for the C++ framework.
First, some definitions. "TnL" here is defined as the object [or module]
that handles all the geometric (vertex) data (as opposed to the context,
which handles the state). This data is supposed to be transformed,
clipped, lit and rasterized, but not all of these tasks are performed
by the TnL object itself - actually they are dispatched to the hardware
as much and as soon as possible.
As a special note, the TnL receives vertices but, since usually many of
the vertex properties (color, normals, ...) don't change from one
vertex to the next, in OpenGL you have one API entry point for each property
(glCoord, glColor, etc.). Still, it's _whole_ vertices that it's
receiving.
My proposal is to model the TnL module as a
producer-consumer pair. The producer exposes an API similar to OpenGL's,
updates a "current vertex", and produces vertices from that current
vertex. The consumer receives those vertices. I.e., something like
this:
class Vertex; // Abstract vertex

class TnLConsumer {

  void consume(Vertex *vertex);
};

class TnLProducer {

  Vertex current;
  TnLConsumer *consumer;
  
  TnLProducer(TnLConsumer *_consumer) {
    consumer = _consumer;
  }
  
  void Color3(float r, float g, float b) {
    current.r = r; current.g = g; current.b = b;
  }
  
  void Coord3(float x, float y, float z) {
    current.x = x; current.y = y; current.z = z;
    produce();
  }

  void produce() {
    consumer->consume(&current);
  }
};
What's special about this is that usually there isn't just a single
producer for a given driver, but potentially a myriad of them (each
specialized for a set of hardware vertex formats or a software vertex
format). The same goes for the consumer. The appropriate producer and
consumer are chosen by the context during a glBegin() call.
The reason to separate the consumer and producer and not merge them
together is that when using call lists the producer/consumer won't be
sending the vertices to the card but to memory instead. This is
accomplished by using another consumer/producer which writes/reads the
hardware vertices from memory.
This can be implemented in C++ without touching the current Mesa code, by
wrapping the current TnL code. But if the idea is pleasing we could
move the C TnL interface to this model. This would allow code for direct
hardware vertex generation (as is done in the Radeon embedded driver) to
coexist nicely with code that needs to do some of the TCL operations in
software.
In general, this sounds reasonable but you also have to consider performance.
The glVertex, Color, TexCoord, etc commands have to be simple and fast.  As it 
is now, glColor4f (for example) (when implemented in X86 assembly) is just a 
jump into _tnl_Color4f() which stuffs the color into the immediate struct and 
returns.  Something similar is done in the R200 driver.

If the implementation of _tnl_Color4f() involves a call to producer->Color4f() 
we'd lose some performance.

Nowadays, vertex arrays are the path to use if you really care about 
performance, of course, but a lot of apps still use the regular per-vertex GL 
functions.

-Brian





[Dri-devel] Re: [forum] Notes from a teleconference held 2003-3-27

2003-04-04 Thread Jens Owen
Michel,

You're bringing in issues that affect more than just the X development 
community here, so I'm copying the DRI and Mesa developers.

Michel Dänzer wrote:
On Don, 2003-04-03 at 22:03, Alan Cox wrote: 

From the DRI people's point of view, it leads to more work as we'd want our 
drivers to work with both trees -- but that's pretty much life, and we'll have 
to do what we can to minimize the effects on us.
Perhaps a test of Keith's theory is that DRI should be able to be -part-
of, not just work with, his tree.


I think we should first discuss (more) the pros and cons of folding the
DRI tree into other trees. I do find the potential benefits (for merges
in particular) compelling, but there's e.g. the danger of making it harder
to get it integrated with other components like e.g. DirectFB or other 
OpenGL implementations.


Folding the X-specific work into an X project makes a lot of sense from a 
technical perspective.  My biggest concern would be losing developer 
momentum by removing this work from a developer friendly project like 
the DRI.

The Mesa specific parts and the supporting kernel driver parts could be 
pushed into the Mesa tree (this has already been done for the embedded 
branch).  Currently, the Mesa project is very focused on the API and the 
full software stack that supports that API in a wide range of windowing 
environments while the DRI project is focused on hardware acceleration 
in the X environment.  It's technically feasible to transition the 
development of the Mesa hardware drivers (currently done in DRI) to the 
Mesa project.  However, we still need to worry about developer momentum 
as two focuses would now be in the Mesa project (API and 3D HW).

I think the following block diagram illustrates the key areas of 3D 
development focus, and the transition that's being suggested:

Now: Mesa Tree -> DRI Tree -> XFree86 Tree
     - API Focus  - 3D HW Focus   - Complete Window System Focus
                                  - X/3D Integration
Possible
Future:  Mesa Tree -+--> XFree86 Tree
 - API Focus|- X/3D Integration
 - 3D HW Focus  |- Complete Window System Focus
|
+--> Alternate X Tree
|- Duplicate X/3D Integration
|- Possibly more 3D developer
|  friendly, who knows?
|
+--> FBDev Subset
|- FBDev/3D Integration
|- Embedded Focus
|
+--> DirectFB
|- DFB/3D Integration
V
 Other Window Systems:
 DirectFB, WGL, AGL and
 new ones that haven't
 been invented, yet.
I would like to hear the concerns from the developers that support the 
API, 3D HW and X/3D Integration before considering any kind of 
transition.  IMHO, supporting the community of open source developers 
that make 3D happen is much more important than control over any one 
project.

--
   /\
 Jens Owen/  \/\ _
  [EMAIL PROTECTED]  /\ \ \   Steamboat Springs, Colorado




Re: [Dri-devel] Question about DRI & Xinerama.

2003-04-04 Thread Brian Paul
Vanson Wu wrote:
> Hi All,
> 
> We are trying to set up multi-screen on my system. As we know, the DRI
> module does not support Xinerama yet,
> so it will be software rendering. But when I run glxgears - no matter
> whether we use a dual-adapter or a dual-head system -
> the second screen never displays the glxgears image. Why is that?
> The only exception is the ATI Radeon 8500 series (or above?). I downloaded
> its driver rpm package from ATI's web site.
> It can be set up for dual-display and the second screen still runs glxgears
> correctly (software rendering, of course).
> What should I do if I want to modify the driver to run 3D apps with
> software rendering on the second screen?


I don't really know what driver modifications are needed to support 3D in
Xinerama but there's an alternative.

The DMX project (dmx.sf.net) lets you use a collection of X servers as a
unified display.

The Chromium project (chromium.sf.net) integrates with DMX and lets you do
(hardware) OpenGL rendering to a multi-screen display.

As a concrete example, you could setup a cluster of 16 computers (each with a
graphics card and monitor) to act as one big display which supports 2D via X
and 3D via OpenGL.

I routinely do this with a 2-screen configuration.

-Brian

[PS: I'm cc'ing the Mesa3d-dev list since you posted basically the same
message there.]





Re: [Dri-devel] Fresh DRI driver snapshots - call for testers

2003-04-04 Thread Leif Delgass
On 4 Apr 2003, Sergey V. Oudaltsov wrote:

> 
> > There are new driver snapshots from the trunk (for XFree86 4.3.0) on
> > http://dri.sourceforge.net/snapshots/ . They were built without a glitch
> > but are as yet untested. Please report back success and/or failure.
> Great!! Does this include mach64 (which is in bleeding-edge subdir)?
> Also, Leif, what about those cool DRI+Xv snapshots?

I think we should switch the mach64 snapshots to the mach64-0-0-6-branch 
now.  The merge from the trunk is complete, so that branch is now based on 
XFree86 4.3.0 and Mesa 5.0.1.  I'll be merging in Brian's updates to Mesa 
from the trunk soon as well.

I have a new DRI+Xv source patch for this branch available at:

http://www.retinalburn.net/linux/dri_xv.html

I'll be adding binaries pretty soon, probably built on RedHat 9.

-- 
Leif Delgass 
http://www.retinalburn.net





Re: [Dri-devel] bug in radeon dri driver

2003-04-04 Thread Michel Dänzer
On Fre, 2003-04-04 at 17:52, koen muylkens wrote:
> I think I have found a bug in the radeon DRI drivers.
> I've tested it with:
> the DRI/XFree86 driver of Mandrake 9.1 and the DRI drivers from the DRI website of 
> the 4th of April 2003. Both have the same problem.
> 
> When rendering something in 3D and moving another window on top of the 
> 3D output window, and moving this window around for a while on top of the 
> other screen, X freezes completely. Even CTRL-ALT-Backspace is no use; I have 
> to reboot when this happens. 

Have you tried logging into the machine remotely and seeing if killing
the 3D client or X works?

> It looks like a memory problem.

Why do you think so? I've also seen this with small windows merely
appearing in front of larger ones, I think it's a variation of the
'many cliprects' problem which is supposed to be worked around in the
DRM. I have the impression this was less likely to happen with the M7
than with the M9, but that may well be coincidence.


-- 
Earthling Michel Dänzer   \  Debian (powerpc), XFree86 and DRI developer
Software libre enthusiast  \ http://svcs.affero.net/rm.php?r=daenzer





Re: [Dri-devel] Fresh DRI driver snapshots - call for testers

2003-04-04 Thread Michel Dänzer
On Fre, 2003-04-04 at 16:14, Martin Spott wrote:
> On Fri, Apr 04, 2003 at 03:59:16PM +0200, Michel Dänzer wrote:
> > On Fre, 2003-04-04 at 14:56, Martin Spott wrote: 
> 
> > This reminds me of a question that came up recently: some of the changes
> > recently made to the XFree86 CVS repository touch files that are
> > part of the DRI tree. Does anyone track these changes with the intention of
> > updating the DRI tree accordingly?
> > 
> > Are you thinking of anything in particular? I didn't notice anything but
> > backports of fixes from the DRI tree or cosmetic fixes.
> 
> Lemme see if I can find the posting quickly 
> 
> Changes by: [EMAIL PROTECTED]   03/04/03 14:47:42
> 56. Allow AGPGART support to be enabled for OpenBSD (#A.1684, Brian Feldman).
> [...]
>   3.11  +2 -2  xc/programs/Xserver/hw/xfree86/os-support/linux/lnx_agp.c
> 
> 
> Changes by: [EMAIL PROTECTED]   03/04/03 08:38:52
> 42. Fix memory leaks in ProcXF86VidModeModModeLine and
> ProcXF86VidModeValidateModeLine,
> [...]
>   1.15  +6 -2  xc/programs/Xserver/hw/xfree86/common/xf86VidMode.c
> 
> 
> Changes by: [EMAIL PROTECTED]   03/04/03 08:16:02
> 34. Add a new request to the XF86Misc extension that allows a client
> to send an arbitrary message to the DDX, which in turn can send the
> message to the driver.
> [...]
>   1.92  +10 -1 xc/programs/Xserver/hw/xfree86/drivers/ati/
> 
> 
> I'm sure there are some more.
> I could be wrong, but it appears to me that it _might_ be useful to track
> these changes to minimize the next merge!? I'm not the one who defines how
> to follow the XFree86 tree, I'm just a bit curious.

I think it's quite the opposite, the finer grained you track it, the
more work, increasingly so as the trees diverge. If someone volunteers
to do it (and the people doing the regular merges don't mind), fine, but
I doubt the existing developers are bored enough to do it.


-- 
Earthling Michel Dänzer   \  Debian (powerpc), XFree86 and DRI developer
Software libre enthusiast  \ http://svcs.affero.net/rm.php?r=daenzer





Re: [Dri-devel] Fresh DRI driver snapshots - call for testers

2003-04-04 Thread Martin Spott
On Fri, Apr 04, 2003 at 03:59:16PM +0200, Michel Dänzer wrote:
> On Fre, 2003-04-04 at 14:56, Martin Spott wrote: 

> > This reminds me of a question that came up recently: some of the changes
> > recently made to the XFree86 CVS repository touch files that are
> > part of the DRI tree. Does anyone track these changes with the intention of
> > updating the DRI tree accordingly?
> 
> Are you thinking of anything in particular? I didn't notice anything but
> backports of fixes from the DRI tree or cosmetic fixes.

Lemme see if I can find the posting quickly 

Changes by: [EMAIL PROTECTED]   03/04/03 14:47:42
56. Allow AGPGART support to be enabled for OpenBSD (#A.1684, Brian Feldman).
[...]
  3.11  +2 -2  xc/programs/Xserver/hw/xfree86/os-support/linux/lnx_agp.c


Changes by: [EMAIL PROTECTED]   03/04/03 08:38:52
42. Fix memory leaks in ProcXF86VidModeModModeLine and
ProcXF86VidModeValidateModeLine,
[...]
  1.15  +6 -2  xc/programs/Xserver/hw/xfree86/common/xf86VidMode.c


Changes by: [EMAIL PROTECTED]   03/04/03 08:16:02
34. Add a new request to the XF86Misc extension that allows a client
to send an arbitrary message to the DDX, which in turn can send the
message to the driver.
[...]
  1.92  +10 -1 xc/programs/Xserver/hw/xfree86/drivers/ati/


I'm sure there are some more.
I could be wrong, but it appears to me that it _might_ be useful to track
these changes to minimize the next merge!? I'm not the one who defines how
to follow the XFree86 tree, I'm just a bit curious.

Or are all these changes already part of the DRI tree ?

Martin.
-- 
 Unix _IS_ user friendly - it's just selective about who its friends are !
--




Re: [Dri-devel] Fresh DRI driver snapshots - call for testers

2003-04-04 Thread Michel Dänzer
On Fre, 2003-04-04 at 14:56, Martin Spott wrote: 
> José Fonseca <[EMAIL PROTECTED]> wrote:
> 
> > FYI these snapshots were built by checking out the DRI CVS over the
> > XFree86 4.3.0 source [...]
> 
> This is tough - I didn't think it would work  :-)
> 
> This reminds me of a question that came up recently: Some of the changes
> that recently were made to the XFree86 CVS repository touch files that are
> part of the DRI tree. Does anyone track these changes with the intention to
> update the DRI tree accordingly ?

Are you thinking of anything in particular? I didn't notice anything but
backports of fixes from the DRI tree or cosmetical fixes.


-- 
Earthling Michel Dänzer   \  Debian (powerpc), XFree86 and DRI developer
Software libre enthusiast  \ http://svcs.affero.net/rm.php?r=daenzer





[Dri-devel] bug in radeon dri driver

2003-04-04 Thread koen muylkens

I think I have found a bug in the radeon DRI drivers.
I've tested it with:
the DRI/XFree86 driver of Mandrake 9.1 and the DRI drivers from the DRI website
of the 4th of April 2003. Both have the same problem.

When rendering something in 3D and moving another window on top of the
3D output window, and moving this window around for a while on top of the
other screen, X freezes completely. Even CTRL-ALT-Backspace is no use; I have
to reboot when this happens. It looks like a memory problem.

My hardware: a Dell 610C laptop with a Radeon Mobile M6, with 8 meg of ram.
OS: Linux Mandrake 9.1

BTW, thanks for fixing the texture bug in the previous version of DRI included
in Mandrake 9.0. OpenGL scenes look a lot better now. Keep up the good work.




Re: [Dri-devel] Fresh DRI driver snapshots - call for testers

2003-04-04 Thread Martin Spott
José Fonseca <[EMAIL PROTECTED]> wrote:

> FYI these snapshots were built by checking out the DRI CVS over the
> XFree86 4.3.0 source [...]

This is tough - I didn't think it would work  :-)

This reminds me of a question that came up recently: Some of the changes
that recently were made to the XFree86 CVS repository touch files that are
part of the DRI tree. Does anyone track these changes with the intention to
update the DRI tree accordingly ?

Martin.
-- 
 Unix _IS_ user friendly - it's just selective about who its friends are !
--




Re: [Dri-devel] Fresh DRI driver snapshots - call for testers

2003-04-04 Thread Sergey V. Oudaltsov

> There are new driver snapshots from the trunk (for XFree86 4.3.0) on
> http://dri.sourceforge.net/snapshots/ . They were built without a glitch
> but are yet untested. Please report back success and/or failure.
Great!! Does this include mach64 (which is in bleeding-edge subdir)?
Also, Leif, what about those cool DRI+Xv snapshots?

-- 
Sergey


signature.asc
Description: This is a digitally signed message part


[Dri-devel] DRM PCI API & Mach64

2003-04-04 Thread José Fonseca
The work on the DRM PCI API has been really slow (working on the DRM API
is not the most exciting experience for me...), but it now has enough to
finish Mach64.

If you recall, for Mach64 we need a set of DMA buffers which aren't
mapped to the clients to assure that they aren't tampered before
reaching the hardware - the private buffers.

So what I'll do is set up a circular array for these private buffers.
There is no need for a free list, since these buffers are always used in
[circular] order. (With the public buffers that doesn't happen, because
there's no guarantee that the order in which the buffers are handed out to
the clients is the same order in which the clients dispatch them.) This
makes it much simpler to get a free buffer: just compare a stamp (which is
the value of the ring pointer after processing the buffer) with the current
value of the ring pointer for any buffer we want.

The actual DMA buffers for the entries in the circular array are either
from the new DRM PCI API (the pci_pool_* functions) or from a contiguous
piece of AGP memory.

In case you're wondering, what's missing from the DRM PCI API to make it
more general is:

 - ioctls for exposing the pci_pool_* functions to userspace
 - support for adding maps to these buffers.

(None of this is needed for Mach64 as outlined above.)

 - expose more of the AGP API internally 
 
(This would be nice, but it's more work than I'd like to get into right
now.)


Leif, so far I haven't done anything in mach64-0-0-5-branch (the DRM PCI
API stuff was done on a local checkout of the trunk), so in your opinion
should I make these changes on the mach64-0-0-5-branch or on the
mach64-0-0-6-branch?


José Fonseca


drm_pci.diff.gz
Description: Binary data


drm_pci.h.gz
Description: Binary data


[Dri-devel] Fresh DRI driver snapshots - call for testers

2003-04-04 Thread José Fonseca
There are new driver snapshots from the trunk (for XFree86 4.3.0) on
http://dri.sourceforge.net/snapshots/ . They were built without a glitch
but are yet untested. Please report back success and/or failure.

FYI these snapshots were built by checking out the DRI CVS over the
XFree86 4.3.0 source and building the full server, on a Debian unstable
machine (gcc-3.2.3, glibc-2.3.1, xfree86-4.2.1). So the drivers require
glibc >= 2.3.* and xfree86 >= 4.3.*, which means a recent distro.

José Fonseca




[Dri-devel] Question about DRI & Xinerama.

2003-04-04 Thread Vanson Wu
Hi All,

We are trying to set up multi-screen on my system. As we know, the DRI
module does not support Xinerama yet, so it falls back to software
rendering. But when I run glxgears, no matter whether we use a
dual-adapter system or a dual-head system, the second screen can never
display the glxgears image. Why is that?
The only exception is the ATI Radeon 8500 series (or above?). I downloaded
its driver RPM package from the ATI web site; it can be set up for
dual-display, and the second screen still runs glxgears correctly (with
software rendering, of course).
What should I do if I want to modify the driver so that 3D applications
render in software on the second screen?

with regards,
--
Vanson Wu




[Dri-devel] For your Love, I would...

2003-04-04 Thread Dick Potts



As Seen on NBC, CBS,
CNN and even Oprah!
 
The Health Discovery that Actually
Reverses Aging while Burning Fat,
without Dieting or Exercise!

This Proven Discovery has even been reported
on by the New England Journal of Medicine.

Forget Aging and Dieting Forever!

And it's Guaranteed!

Click Here to Learn How you can Receive a Full
Month's Supply Absolutely FREE!

Would you like to lose weight while you sleep?

No dieting!
No hunger pains!
No Cravings!
No strenuous exercise!

Change Your Life Forever!

1. Body Fat Loss..82% improvement.
2. Wrinkle Reduction.61% improvement.
3. Energy Level.84% improvement.
4. Muscle Strength88% improvement.
5. Sexual Potency.75% improvement.
6. Emotional Stability.67% improvement.
7. Memory.62% improvement.

Get Your FREE 1 Month Supply TODAY!



You are receiving this message as a member of the Opt-In
America List. To remove your email address please

click hereWe honor all remove requests.

