Re: UniChrome DRI bug and fix

2005-08-29 Thread Sven Luther
On Mon, Aug 29, 2005 at 05:02:35PM +0200, Litvinov Vadim Antonovich wrote:
 Hello all,
 I have found a bug in current Mesa CVS: the viaRegion structure in
 via_dri.h is not the same in the Mesa unichrome driver and the via driver
 in X.org CVS. As a result we get an error when trying to use an OpenGL
 program, and Mesa falls back to indirect rendering.
   I include a patch that corrects the problem.
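
Since the structure is shared between the 2D and 3D sides of the DRI, the
two copies must stay field-for-field identical. For illustration only (the
fields below are invented, not the actual via_dri.h contents), such a
structure plus a cheap compile-time guard against exactly this kind of
drift could look like:

  typedef struct {
      unsigned long handle;   /* kernel map handle           */
      unsigned long size;     /* size of the region in bytes */
      void         *map;      /* where the region is mapped  */
  } viaRegion;

  /* fail the build if the layout ever changes unexpectedly */
  typedef char viaRegionSizeCheck[sizeof(viaRegion) == 3 * sizeof(long) ? 1 : -1];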

BTW, does anyone know whether there is also support for the non-built-in
*Chrome graphics chips? There are two others of those around.

Friendly,

Sven Luther





Re: need help writing driver for SiS m650

2005-06-13 Thread Sven Luther
On Mon, Jun 13, 2005 at 12:01:03AM -0400, Vladimir Dergachev wrote:
 
 
 On Sun, 12 Jun 2005, Matt Sealey wrote:
 
 
 Someone explain to me why an organised boycott of SiS graphics chips
 would somehow ENCOURAGE them to help?
 
 If all other things have been tried why not ?
 
 At least the boycott also makes sure that people who follow it don't have 
 hardware we can't write drivers for.

I wouldn't call making sure people don't buy useless hardware a boycott
though, just simple common sense.

Friendly,

Sven Luther





Re: [Linux-fbdev-devel] Redesign of kernel graphics interface

2004-05-17 Thread Sven Luther
On Thu, May 06, 2004 at 05:50:40PM -0700, Jon Smirl wrote:
 --- James Simmons [EMAIL PROTECTED] wrote:
  2) Ben's suggestion that we mount userland inside the kernel during early 
 boot and use a userland library. If we would use a library then it MUST 
 be OpenGL. This would be the forced standard on all platforms. This 
 would mean Mesa would be needed to build the kernel. We could move over 
 Mesa into the kernel like zlib is in the tree right now.
 
 It is not true that it must be OpenGL. The suggestion is for an independent
 library that would support mode setting and cursor control. Actually OpenGL does
 not specify an API for these things, we would need to develop one.
 
 But broader issues are at work. Microsoft has decided to recode all graphics in
 Longhorn to use Direct3D. This was done to get at the performance gains provided
 by D3D and hardware accelerated graphics. For example, a Cairo implementation that
 uses X rendering vs Cairo on OpenGL was benchmarked at a 100:1 speed difference.
 
 A proposal has been made that OpenGL be promoted as the primary base graphics
 API on Linux. Then things like Cairo and the xserver be implemented on top of
 OpenGL.
 
 1) OpenGL is the only fully accelerated API that Linux has. We don't have D3D or
 anything else like it. Fully accelerated interfaces are a pain to build and it
 would be stupid to do another one.

Notice that this is not really true, as there is no free OpenGL
acceleration for any of the newer graphics cards coming out right now.
The fastest graphics card with full free acceleration is the Radeon 9000,
which is now two generations old. This means that there is no
acceleration outside of the x86 world, since neither ATI nor Nvidia are
ready to build their proprietary drivers on anything other than x86.

As long as this doesn't change, stating that we have an accelerated
OpenGL API in Linux is not only dead wrong, but is leading us in a
dangerous direction, where we will depend on a non-free component in the
kernel and where we are going to forget about graphics support on
anything non-x86.

Friendly,

Sven Luther




Re: [Mesa3d-dev] Re: [Linux-fbdev-devel] Redesign of kernel graphics interface

2004-05-17 Thread Sven Luther
On Fri, May 14, 2004 at 10:51:35AM -0700, Jon Smirl wrote:
 Just look at this picture and you can see the trend of 2D vs 3D (coprocessor
 based) graphics.
 http://www.de.tomshardware.com/graphic/20040504/images/architecture.gif
 Within one or two generations the 2D box is going to be gone.

Sorry, but I am not able to view this over the remote link I am on right
now; I will look at it in more detail when I am back home on Sunday.

 If Linux wants to stay current with technology we have to start using the
 coprocessor features of the GPU. Most of the benchmarks I have seen show
 coprocessor vs programmed at 100:1 speed differential. This is also a
 competitive problem, Microsoft and Apple have both decided to go with the GPU
 coprocessor this year. 

Fine with me, I believe that this is also the way to go, but sadly this
is probably not going to happen unless we put some pressure on the
graphics companies, and I somehow doubt that Nvidia will go along with it.
I only wanted to raise the problem, so we don't forget where we come from
and don't create a framework which would depend on non-free parts.

 Lack of free drivers is no reason to ignore the GPU coprocessor. It just means

Well, it probably means non-accelerated drivers, if even that.

 more effort needs to be put into mesa and prying the docs out of the graphics
 chip vendors. If the current open drivers don't work on a non-x86 platform just
 go fix them. All of the necessary data is available. Progress is being made with
 ATI for getting the R300 specs now that the R400 series has shipped.
 

If this is true, then it would be great news. But still, if we want to
go forward with this plan, we have to get at least a partially functional
open-source drm/fbdev module even for the newer models of graphics cards,
and even from players like Nvidia. They can still hide all their stuff in
proprietary userland libraries afterward, but the right thing would be for
this integrated kernel driver to be a real GPLed driver, with source and
all.

Friendly,

Sven Luther




Re: [Dri-devel] R300 specs and drivers.

2003-10-16 Thread Sven Luther
On Thu, Oct 16, 2003 at 05:42:15PM -0400, Adam K Kirchhoff wrote:
 
 I'm curious if Tungsten Graphics has made any attempts to get basic 3D
 specs from ATI for the R300 line of cards?  While it is certainly great that
 ATI is showing a commitment to writing their own 3D drivers for linux,
 there are still other operating systems (and users who insist on open
 source drivers) that would benefit from having the specs available.

Not to speak of non-x86 architectures; especially the new Apple
PowerBooks with Radeon Mobility 9600 would benefit from it.

Friendly,

Sven Luther




Re: [Dri-devel] when I had the choice ....

2003-10-02 Thread Sven Luther
On Thu, Oct 02, 2003 at 10:12:21AM +0200, Thomas Emmel wrote:
 Hello,
 
 we are currently looking for the best graphics cards for linux and
 OpenGL. Can someone give me a hint which of the currently available
 cards are the best to use for extensive 3D applications in technical
 field (CAD, visualization of finite element data etc., no games).

Well, the best supported cards with a free driver would be any with the
Radeon R200 core (8500, 9000, 9100 and 9200). Newer Radeon cards and
Nvidia hardware are only supported by the proprietary drivers, and there
are proprietary servers for another bunch of high-end cards. I think
3Dlabs also announced Linux support for their very high-end Wildcat
cards:

  http://www.3dlabs.com/whatsnew/pressreleases/pr03/03-07-09-linux.htm

But they only provide binary RPMs, so I suppose it targets whatever
XFree86 version Red Hat 7.3 provides.

Friendly,

Sven Luther




[Dri-devel] Re: xfree86, DRI, and multiple heads: thoughts and ideas

2003-08-28 Thread Sven Luther
On Thu, Aug 28, 2003 at 07:31:08AM -0700, Alex Deucher wrote:
 Dualhead...
 
 Right now there is dualhead support for the following cards in xfree86:
 radeon
 matrox
 sis
 via
 chips
 3dlabs (Sven mentioned that he had this quasi-working on the newer
 cards, although I don't know the state of his driver)

Well, the driver is stuck in a no-accel state for the Wildcat VP 560, but
this would be no problem for implementing the dual-head thing in a merged
framebuffer.

I have not been doing much work on it lately, and there are still some
issues with dual head when one head is DVI-connected. My current XFree86
work has to do with the SDK, but I have not had as much time as I wanted
for that either.

Friendly,

Sven Luther




Re: [PATCH] Re: [Dri-devel] Radeon 9000 Mobility (r250 lf) issue

2003-08-01 Thread Sven Luther
On Thu, Jul 31, 2003 at 09:02:08AM -0700, Ian Romanick wrote:
 Keith Whitwell wrote:
 
 Ian Romanick wrote:
 
 Michel Dänzer wrote:
 
 On Wed, 2003-07-30 at 03:06, Ian Romanick wrote:
 
 Here's a patch that should clear some of that up, at least for the 
 R200-family of chips.  I did change the code to include 
 xf86PciInfo.h. In spite of the comment there, it doesn't seem to 
 produce any errors. Is this a safe change to make?  Also, do we 
 really need to check the device ID against R100-family IDs in the 
 R200 driver?
 
 Apparently, people do try to use the wrong drivers on the Mesa embedded
 and whatnot branches...
 
 How can that be?  The user has to select which 3D driver to use (i.e., 
 the 2D driver doesn't select it for them)?  What's to stop someone 
 with an R200 from selecting the MGA driver?
 
 There's no 2D driver.
 
 That makes sense.  Duh.
 
 It would be simple to add some checking to ensure the chipid is 
 recognized by the 3d driver, just hasn't been done yet.
 
 Let me work up a patch that does this in a more generally way.  The 
 current big switch-statement is somewhat unpleasant.Do the embedded 
 drivers have a header file where they get PCI IDs?  I assume that 
 xf86PciInfo.h is not available. :)

Notice that, in order to more easily build 2D drivers from the CVS branch
with the latest released stable driver SDK, it makes more sense to move
the PCI id information out of the xf86PciInfo.h file and into each
individual driver. With the new (well, new as in 4.x or some such)
driver architecture, xf86PciInfo.h is not really needed anymore,
since each driver knows how to detect the cards it supports itself.

Maybe doing something similar for the 3D drivers would be a step in the
right direction, instead of importing the monolithic xf86PciInfo.h file.
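
For illustration, a per-driver id table could look roughly like this (the
ids and names below are made up; each driver would carry its own list
instead of pulling in xf86PciInfo.h):

  #include <stddef.h>

  struct chip_id { unsigned vendor, device; const char *name; };

  static const struct chip_id foo_ids[] = {
      { 0x1234, 0x0001, "FOO 1000" },
      { 0x1234, 0x0002, "FOO 2000" },
  };

  /* return the marketing name if the device is ours, NULL otherwise */
  static const char *foo_match(unsigned vendor, unsigned device)
  {
      for (size_t i = 0; i < sizeof foo_ids / sizeof foo_ids[0]; i++)
          if (foo_ids[i].vendor == vendor && foo_ids[i].device == device)
              return foo_ids[i].name;
      return NULL;
  }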

BTW, what about the drm modules: do they also recognize the hardware, or
do they not care whether they are loaded or not?

Friendly,

Sven Luther




Re: [Dri-devel] spam collection of the past few days

2003-06-16 Thread Sven Luther
On Mon, Jun 16, 2003 at 02:52:45AM +0200, Alexander Stohr wrote:
 In response to the attached list of spam 
 (18 spam e-mails to dri-devel in only 3 days!)
 i have to ask if the dri-devel mailing list 
 can now be set to subscribers-only policy?

Notice that rejecting html-only mail, as well as multi-part mail whose
only part is html, should catch most if not all of those spams.

Friendly,

Sven Luther




Re: [Dri-devel] Reverse engineering Windows Radeon IGP320/340 driver?

2003-06-11 Thread Sven Luther
On Tue, Jun 10, 2003 at 07:36:40PM +0100, Dave Jones wrote:
 On Tue, Jun 10, 2003 at 10:57:36AM -0700, Linus Torvalds wrote:
 
  (In fact the agpgart code
really doesn't handle this concept at all due to the extensive usage
of aperture type macros/typedefs).
   
   Why _is_ that AGP code using those silly thing in the first place?
   
   I actually looked at writing an AGP subdriver without using any of the 
   common AGP infrastructure (just writing the insert_entries() and 
   remove_entries() functions directly, without caring about those broken 
   AGP generic helper functions) and it looked _simpler_ than much of the 
   crap that is there now.
   
   It's sad when the helper functions end up being more bother than help.
 
 I'd toyed with the idea of nuking those, but as we were getting closer
 to 2.6, I figured it was time to slow down. If you feel the gain is
 worth it, I'll tackle it sometime, but to be honest, it seems that
 in the next year or so, AGP will likely be phased out in favour of
 PCI express anyway, so I'm not convinced that agpgart really has much
 of a future past the next 12 months.

And we will all throw out our existing hardware and rush out to buy
PCI-Express-enabled boards, right?

Friendly,

Sven Luther




Re: [Dri-devel] Re: Re: SPAM : DRI Devel ratio

2003-05-30 Thread Sven Luther
On Thu, May 29, 2003 at 11:53:32AM -0400, David Dawes wrote:
 On Thu, May 29, 2003 at 07:34:28AM +0200, Sven Luther wrote:
 On Thu, May 29, 2003 at 12:00:22AM -0400, Mike A. Harris wrote:
  On Wed, 28 May 2003, Sven Luther wrote:
  
  I was being sarcastic, his message was encoded with koi8-r, which, along
  with being html, is one of the indiscriminate reasons people block email
  (and get a good number of false positives)
   
   however, foreign language encoding is separate from html email.
   
   blocking based on foreign language encodings is not such a good idea.
   blocking html is not so bad, though.
  
   You need to block multi-part mails whose only part is html too, though,
   which is not so easy to do, I think.
  
  This filter doesn't catch *everything*, but for the last 6 years 
  or so, it has had zero false positives for me while subscribed to 
  limitless numbers of mailing lists.
  
  :0:
  * ^Content-Type:.*text/html
  HTML
 
 Yep, I have this too, but half the html spam I get passes through it,
 because it is:
 
 Content-Type: multipart/alternative;
 boundary=E_BBFDE6F0B.95CA_CC.D7.
 ...
 This is a multi-part message in MIME format.
 
 --E_BBFDE6F0B.95CA_CC.D7.
 Content-Type: text/html
 Content-Transfer-Encoding: quoted-printable
 ...
 --E_BBFDE6F0B.95CA_CC.D7.--
 
 On the other hand, I don't want to catch emails which have both a text
 and an html section, since those are mostly valid.
 
 The XFree86 mailing list filtering checks for a few different types of
 html-only messages, including a few levels deep of nesting (which I've
 seen in some spam).  It does catch the occasional false-positive, but
 it's fairly rare, and a reasonable tradeoff given its effectiveness.

Are they available somewhere so I can take a look?

 Anyway, I have almost managed to write a sed script doing this, but I am
 not sure whether it is possible to capture the value of the boundary and
 match on it in the address pattern when using sed.
 
 If you're prepared to use perl, there are packages for breaking out the
 mime structure.

I would rather not use Perl; if anything, I would write a small OCaml
program to do it, or maybe extend spamoracle, which I already call. The
execution cost per mail would be lower that way.

Friendly,

Sven Luther
 
 David
 --
 David Dawes
 Founder/committer/developer The XFree86 Project
 www.XFree86.org/~dawes
 
 




[Dri-devel] Re: Re: SPAM : DRI Devel ratio

2003-05-29 Thread Sven Luther
On Thu, May 29, 2003 at 12:00:22AM -0400, Mike A. Harris wrote:
 On Wed, 28 May 2003, Sven Luther wrote:
 
   I was being sarcastic, his message was encoded with koi8-r, which, along
   with being html, is one of the indiscriminate reasons people block email
   (and get a good number of false positives)
  
  however, foreign language encoding is separate from html email.
  
  blocking based on foreign language encodings is not such a good idea.
  blocking html is not so bad, though.
 
 You need to block multi-part mails whose only part is html too, though,
 which is not so easy to do, I think.
 
 This filter doesn't catch *everything*, but for the last 6 years 
 or so, it has had zero false positives for me while subscribed to 
 limitless numbers of mailing lists.
 
 :0:
 * ^Content-Type:.*text/html
 HTML

Yep, I have this too, but half the html spam I get passes through it,
because it is:

Content-Type: multipart/alternative;
boundary=E_BBFDE6F0B.95CA_CC.D7.
...
This is a multi-part message in MIME format.

--E_BBFDE6F0B.95CA_CC.D7.
Content-Type: text/html
Content-Transfer-Encoding: quoted-printable
...
--E_BBFDE6F0B.95CA_CC.D7.--

On the other hand, I don't want to catch emails which have both a text
and an html section, since those are mostly valid.

Anyway, I have almost managed to write a sed script doing this, but I am
not sure whether it is possible to capture the value of the boundary and
match on it in the address pattern when using sed.
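
For illustration, here is a rough C sketch of the same check (MIME is
handled very loosely, and everything here is mine as an example: program,
logic and simplifications; a real filter should use a proper MIME parser).
It reads a mail on stdin and exits 0 only when the message is multipart
with text/html as the sole part type:

  #include <stdio.h>
  #include <string.h>
  #include <strings.h>

  int main(void)
  {
      char line[1024], boundary[256] = "";
      int in_part_header = 0, html_parts = 0, other_parts = 0;

      while (fgets(line, sizeof line, stdin)) {
          char *b = strstr(line, "boundary=");
          if (!boundary[0] && b) {              /* remember the first boundary */
              size_t n;
              b += strlen("boundary=");
              if (*b == '"') b++;
              n = strcspn(b, "\"; \r\n");
              if (n >= sizeof boundary) n = sizeof boundary - 1;
              memcpy(boundary, b, n);
              boundary[n] = '\0';
          }
          if (boundary[0] && line[0] == '-' && line[1] == '-'
              && strncmp(line + 2, boundary, strlen(boundary)) == 0) {
              in_part_header = 1;               /* a new MIME part starts here */
              continue;
          }
          if (in_part_header && strncasecmp(line, "Content-Type:", 13) == 0) {
              if (strstr(line, "text/html")) html_parts++;
              else                           other_parts++;
              in_part_header = 0;
          }
          if (in_part_header && (line[0] == '\n' || line[0] == '\r'))
              in_part_header = 0;               /* end of the part headers */
      }
      return (html_parts > 0 && other_parts == 0) ? 0 : 1;
  }

The plain html-only (non-multipart) case is left to the simple
Content-Type recipe above.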

 I go through the HTML folder occasionally, and all of the stuff 
 is junk spam.  There may be the opportunity for false positives, 
 but in practice over the years, I've yet to see any with my mail 
 load.  Different people's mail usage may vary however...

Yes, I agree. My problem is that I use a Bayesian spam filter
(spamoracle) which learned all the html tags and thus caught valid
mails which had an html part. This has since been corrected, but I
will have to retrain my database anyway.

Friendly,

Sven Luther




Re: [Dri-devel] Re: SPAM : DRI Devel ratio

2003-05-28 Thread Sven Luther
On Wed, May 28, 2003 at 12:38:24AM -0700, Philip Brown wrote:
 On Tue, May 27, 2003 at 09:38:31PM -0700, Russ Dill wrote:
  I was being sarcastic, his message was encoded with koi8-r, which, along
  with being html, is one of the indiscriminate reasons people block email
  (and get a good number of false positives)
 
 however, foreign language encoding is separate from html email.
 
 blocking based on foreign language encodings is not such a good idea.
 blocking html is not so bad, though.

You need to block multi-part mails whose only part is html too, though,
which is not so easy to do, I think.

Friendly,

Sven Luther




Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-27 Thread Sven Luther
On Wed, Mar 26, 2003 at 09:08:51AM -0800, Ian Romanick wrote:
 Michel Dänzer wrote:
 On Mit, 2003-03-26 at 08:45, Philip Brown wrote:
 
 Video mem is a core X server resource, and should be reserved through the
 core server, always.
 
 Actually, I thought we're talking about a scheme where the server is
 only a client of the DRM memory manager.
 
 Yes.  It would be a lot easier if more was implemented in the DRM, but 
 we don't want more in the kernel then is absolutely required.  As it 
 stands, the DRM only implements the mechanism for paging out blocks to 
 secondary storage (i.e., system memory, AGP, etc.).  All of the 
 mechanism for allocating memory to applications and the policy for which 
 blocks get paged and reclaimed happens in user-mode.

Did you ever get to speak with the XFree86 folks about this? It seems
that the new XAA implementation will abstract memory management and let
the (X) driver handle it.

Ideally, you would have the little bit of code in the kernel module and
a library on top of that which could be used by the DRI, but also by the
X driver or even other userland stuff (DirectFB, for example).
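
As a toy illustration of the kind of code such a shared library could
hold (everything below is invented for the example; a real one would also
talk to the DRM for paging blocks in and out), here is a minimal first-fit
allocator for a chunk of video memory:

  #include <stdio.h>
  #include <stdlib.h>

  struct block {
      unsigned long offset, size;
      int           used;
      struct block *next;
  };

  static struct block *heap;

  static void mm_init(unsigned long vram_size)
  {
      heap = calloc(1, sizeof *heap);   /* one big free block */
      heap->size = vram_size;
  }

  /* first-fit allocation, splitting a free block when needed */
  static long mm_alloc(unsigned long size)
  {
      struct block *b;
      for (b = heap; b; b = b->next) {
          if (b->used || b->size < size)
              continue;
          if (b->size > size) {                 /* split off the tail */
              struct block *rest = calloc(1, sizeof *rest);
              rest->offset = b->offset + size;
              rest->size   = b->size - size;
              rest->next   = b->next;
              b->next = rest;
              b->size = size;
          }
          b->used = 1;
          return (long)b->offset;
      }
      return -1;                                /* out of video memory */
  }

  int main(void)
  {
      mm_init(8 * 1024 * 1024);                 /* pretend 8 MB of vram */
      printf("front buffer at %ld\n", mm_alloc(1024 * 768 * 4));
      printf("texture at %ld\n", mm_alloc(256 * 256 * 4));
      return 0;
  }

Linked once into the DRI driver and once into the X server (or exposed
from a single shared object), this is the part that should not be
duplicated.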

 I've been working on a prototype implementation of the user-mode code 
 for the last week.  My current estimation is that the user-mode code 
 will be 3 to 4 times as large as the kernel code.  I should have a 
 pthreads based framework with a mock up of the kernel code ready to 

Would this pthreads-based userland code be usable in the X driver?

Friendly,

Sven Luther




Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-27 Thread Sven Luther
On Wed, Mar 26, 2003 at 12:22:48PM -0800, Ian Romanick wrote:
 Philip Brown wrote:
 So since it is orthogonal, you should have no objections to lowest-level
 allocation of video memory being done by GLX calling xf86Allocate 
 routines, yes?
 (ie: leave the X core code alone)
 
 That is what's currently done.  The goal was two fold.  One (very minor, 
 IMO) goal was to allow the pixmap cache to cooperate with the texture 
 cache.  The other goal was to allow the amount of memory used by the 
 front buffer to be dynamic when the screen mode changes.
 
 I believe this whole thread started off by references to hacking X server
 code to call DRI extension code. That is what I am arguing against, as
 unneccessary. Extension code should call core code, not the other way
 around  (except for API-registered callbacks, of course)
 
 The way to do that is to reproduce code from the 3D driver in the X 
 server.  The memory management code that is in the 3D driver (for doing 
 the allocations and communicating with the DRM) really has to be there. 
  Moving it into the X server would really hurt performance.  There's 
 really only four possible solutions:
 
   1. Have the X server call the code in the 3D driver.
   2. Have the 3D driver call the code in the X server.
   3. Have the code exist in both places.
   4. Leave things as they are.
 
 I'm saying the #2 is unacceptable for performance reasons.  You're 
 saying that #1 unacceptable for software engineering reasons.  We're 
 both saying that #3 is unacceptable for software engineering reasons. 
 Users are saying #4 is unacceptable for performance reasons.  Where does 
 that leave us?

What about #3, but using a common library, so that the same code is
linked into both places?

Friendly,

Sven Luther




Re: [Dri-devel] Server-side GLX / DRI issues

2003-03-27 Thread Sven Luther
On Thu, Mar 27, 2003 at 03:06:03AM +0100, Michel Dänzer wrote:
 On Don, 2003-03-27 at 00:37, Keith Whitwell wrote:
  Ian Romanick wrote:
   Michel Dänzer wrote:
   
   On Mit, 2003-03-26 at 21:22, Ian Romanick wrote:
  
   If the paged memory system is only used when DRI is enabled, does it 
   really matter where the code the X server calls is located?  Could we 
   make the memory manager some sort of API-registered callback?  It 
   would be one that only DRI (and perhaps video-capture extensions) 
   would ever use, but still.
  
  
  
   As far as I understand Mark Vojkovich's comments on the next generation
   XAA, all offscreen memory management is going to be handled via driver
   callbacks.
   
   
   Interesting.  What about on screen?  I mean, are there any plans to 
   re-size the amount of memory used for the front buffer when the screen 
   mode changes?
   
  
  Isn't that the RandR proposal, promoted or developed by core team X-iles?
 
 I'd say it's slightly more than a proposal, as the resize part is
 implemented in 4.3.0. :) I do think dynamic management of everything
 including the front buffer is the long term goal.

I don't believe it frees (do you say that in English?) the on-screen
memory though; I had the impression that it just allocates memory for
the maximum possible screen and uses part of it if you are running a
lower resolution, a bit like the virtual resolution is handled right now.

Friendly,

Sven Luther




Re: [Dri-devel] Dual-head direct 3D rendering working

2003-03-05 Thread Sven Luther
On Wed, Mar 05, 2003 at 11:06:45AM -0500, Jonathan Thambidurai wrote:
 On Tue, 2003-03-04 at 11:36, Michel Dänzer wrote:
   I have not provided a diff because it is quite a hack and very
 system 
   specific, at the moment. Effectively, I forced the virtual size to
 be 
   2048x768, hacked the RADEONDoAdjustFrame() function to fix views as
 I 
   wanted them, used the default cloning stuff to setup the second
 monitor, 
   and removed all the conditionals that were preventing dual-head+DRI
 from 
   working.  
  
  Those apply in this case as well? Are you using two entities?
 
 I should have been more specific: I removed the conditionals that were
 preventing Xinerama+DRI from working.  As I mentioned before, if
 Xinerama is disabled, the desktop ends at 1024, rather than 2048.

BTW, how does this behave with desktop managers, like the GNOME 2.2
desktop for example, which can use Xinerama hints to draw itself
correctly? You don't touch Xinerama at all in the driver, do you?
You just use the Virtual and Viewport options (in the configuration file)
to set up your screen accordingly, right? Does it not cause problems when
you move the viewport inside the virtual space? Or did you disable that?

Friendly,

Sven Luther




Re: [Dri-devel] Re: future of DRI? - why no one plays with Glide3.

2003-03-02 Thread Sven Luther
On Sat, Mar 01, 2003 at 04:56:18PM -0800, Jon Smirl wrote:
 --- Linus Torvalds [EMAIL PROTECTED] wrote:
  A simpler, more direct, infrastructure to the
  low-level driver might help. 
 
 X has served us well for a long time but I just don't
 think it is sufficient to be the standard video
 platform for desktop Linux over the next ten years.
 We're not going to replace X overnight, but we need a
 path to slowly evolve it. I am amazed at the rate of
 change in the kernel, but X hardly seems to change at
 all. How can we speed things up?
 
 I agree that X is very complicated to work on. Mozilla
 has the same problem, everything is connected to
 everything. There is no way to work on a piece of
 Mozilla without working on the whole thing. Mozilla is
 trying to fix this but they still have a long ways to
 go.

What are you talking about? X is pretty simple to write a driver for: it
is well documented, there are multiple examples around, and you only have
to provide a few functions to have it working fine.

As opposed to this, the DRI is quite complex. You first have to write
the X driver initialization code, which includes context swapping, and
then the drm kernel drivers. Both of these really are somewhat complex
things, and you can't really test them until you have the Mesa drivers
in place, and those are a nightmare to write, or even to understand how
they work. And looking at the existing code is not much help, since the
existing drivers are huge.

 For me, a layered approach where each piece can be
 compiled, used and tested independently would make X
 much more manageable.  Something like this:

Yes, but it is not X that is the problem, it is the DRI; for X, things
are quite simple. You follow the DESIGN document: you first set up the
framebuffer code and can already see output from X, then you do the mode
switching, and finally the 2D accel stuff. In each of these phases, you
get immediate feedback from X. And even Xv support is easy to test.
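
Schematically, the phases line up with a handful of driver entry points
(the signatures below are simplified for illustration; the real ones take
XFree86 SDK types such as ScrnInfoPtr):

  /* phase 0: probe -- recognize the chip and claim the PCI entity     */
  static int foo_probe(void)       { /* match PCI ids, allocate pScrn */ return 1; }

  /* phase 1: PreInit -- read the config, validate modes, map the fb   */
  static int foo_pre_init(void)    { /* pick depth/bpp and a mode     */ return 1; }

  /* phase 2: ScreenInit -- program the mode; X output appears here    */
  static int foo_screen_init(void) { /* mode switch, clear the screen */ return 1; }

  /* phase 3: hook the 2D accel (and optionally Xv) into XAA           */
  static void foo_accel_init(void) { /* fill in Sync/Copy/Fill hooks  */ }

Each phase can be brought up and eyeballed before starting the next,
which is exactly why the 2D side gives such quick feedback.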

 Kernel level - fusion of DRM and FB, libDRM
 OpenGL - Mesa + DRI
 Xserver
 rest of X
 
 I'm sure people with more experience on X can divide
 it in a better way, but the key is in dividing it into
 smaller, more digestible chunks. These layers need to
 build and run independently.
 
 The DRI tree has close to 10,000 files in it right now
 and DRI isn't even a complete X tree. That's an awful
 lot of code to read and understand as a single
 package. 

But if you only look at the X drivers, they are quite small, and well
documented.

Friendly,

Sven Luther




[adaplas@pol.net: Re: [Linux-fbdev-devel] Fwd: Re: [Dri-devel] future of DRI?]

2003-03-02 Thread Sven Luther
Hello, ...

BTW, here is a response from Antonino Daplas to Linus's message, which
Jon Smirl forwarded to the fbdev mailing list.

I think it doesn't make much sense to have such a discussion happening
separately on two different mailing lists, where most people involved
only follow one of the two, so I am forwarding the response from Antonino
and also starting a cross-thread (or whatever that is called). I hope it
is OK with all of you.

Friendly,

Sven Luther

- Forwarded message from Antonino Daplas [EMAIL PROTECTED] -

On Sun, 2003-03-02 at 02:42, Jon Smirl wrote: 
 --- Linus Torvalds [EMAIL PROTECTED] wrote:
  From: Linus Torvalds [EMAIL PROTECTED]
  To: Keith Whitwell [EMAIL PROTECTED]
  CC: Jon Smirl [EMAIL PROTECTED], Ian Romanick
  [EMAIL PROTECTED],
  DRI developer's list
  [EMAIL PROTECTED]
  Subject: Re: [Dri-devel] future of DRI?
  Date: Sat, 1 Mar 2003 10:15:06 -0800 (PST)
  
  
  On Sat, 1 Mar 2003, Keith Whitwell wrote:
   
   Interesting you mention it.  This is what Brian & I've done in the Mesa
   embedded branch -- layered the radeon dri driver on top of fbdev.  I can
   also build regular DRI drivers from a minimal tree & sane set of
   makefiles.
  
  Personally, I'd rather see DRI _underneath_ fbdev rather than on top of.
  Since fbdev would require at least to know of (and obey) the DRI locking
  code - and would likely want to use all the same DRI command execution
  for things like blits etc (this is on the assumption that 2d and 3d will
  eventually use the same engine, which is probably a safe assumption).
  
  I _assume_ that what you really mean is that you use fbdev really only
  to set up the screen modes and do things like initialize the graphics
  buffers.
  
  Linus
  
Yes, this is the sanest way.  In my opinion, this is how fbdev and DRI
should operate: 

1. fbdev 
- provide a means to initialize and change the video state. 

- provide pointers to graphics/rendering memory, MMIO, DMA/ringbuffers 

- graphics memory may or may not be available to everyone, but the MMIO
and command buffers will only be available to DRI 

- fbdev must not touch any registers besides those required to
initialize the hardware.  No 2D, no 3D. 

2. fbaa 

- or framebuffer acceleration architecture or whatever you want to call
it.  This will be equivalent to Xfree86's XAA. It provides a 2D
abstraction layer for clients residing in kernel space (ie fbcon). It
will have software versions and optionally accelerated versions.  The
accelerated version has intimate knowledge of the 2D engine, but instead
of accessing the hardware directly, it will rely on DRM to pass commands
to the hardware. 

- in its current form, this will be the fb_imageblit(), fb_copyarea()
and fb_fillrect(). 

3. fbcon 

- this is the console driver that runs on top of fbaa 

4. DRM - will get mmio pointer and command buffers from fbdev and will
generally retain its original functions (interrupt handling, locking,
arbitration, DMA interface, the works).  It must also provide an
interface for fbaa. 

5. Userland apps - should only see the graphics memory pointer via
fbdev.  If they need to access the hardware, they have to go through
DRM. 

Advantages:  

1. fbdev will be secure.  Without access to the MMIO regions, crashing
the chipset is unlikely or at least difficult.  Even malicious blit
commands (blits to/from system memory) will not work.  

2. Single point of entry for hardware access (DRI).  You can run
multiple clients trying to access the hardware simultaneously via DRM.  
And because of DRM's features, it will take care of command
verification, arbitration, locking, context switching, etc.  

3.  Because DRM will handle both 2D and 3D and is pretty much the only
one with hardware access, performance might actually increase.  


Disadvantages: 

1. very linux specific.  Xfree86 was designed to run on different
platforms.  Having one code for linux and another for the rest will be
difficult for XFree86 developers to accept. 

2. this will break fb-based apps that require chipset access, ie
DirectFB. 

3. a lot of code is difficult to implement in kernel space, ie
initialization of secondary cards.  Full video bios access can only be
done, from a practical standpoint, in user space (the Intel 830, for
instance, may require this). 

4. Not all fbdev drivers have a DRI counterpart.  For these chipsets,
fbaa still has to access the hardware directly. 


In linux-2.5, fbcon is already separate from fbdev.  Perhaps in 2.7,
fbdev can be further reduced to a minimal core, moving the rest of the
code to fbaa.  Exporting the mmio regions to userland must be
disallowed. 

Secondly, a module to access DRM services from within the kernel will be
needed. 
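
For illustration, the in-kernel interface such a module might expose so
that fbaa can go through DRM could look roughly like this (every name
below is invented; nothing like it exists yet):

  struct drm_kernel_client;

  /* join the DRM from kernel space, on a given card */
  struct drm_kernel_client *drm_kernel_attach(int card);

  /* take and release the hardware lock shared with userland clients */
  int  drm_kernel_lock(struct drm_kernel_client *c);
  void drm_kernel_unlock(struct drm_kernel_client *c);

  /* queue a verified 2D command buffer for execution */
  int  drm_kernel_submit(struct drm_kernel_client *c,
                         const unsigned int *cmds, unsigned long dwords);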

Any comments? 

Tony  




Re: [Dri-devel] Will work for free

2003-03-01 Thread Sven Luther
On Fri, Feb 28, 2003 at 06:26:35PM -0800, Jon Smirl wrote:
 --- Mike A. Harris [EMAIL PROTECTED] wrote:
   I don't see 100 unpaid hackers hacking feverishly
 
 Since you have the specs, tell me how to reset a
 Rage128 from protected mode so that I can add it to
 the framebuffer driver.
 
 I know about going into real mode and calling C000:3.
 This can't be done from a device driver. vbios.vm86
 does it from the command line and it's a 500K program.
 My application calls for multiple Rage128 in a single
 machine. Only the first one gets reset by the BIOS at
 power on.
 
 I need to know what register to poke to reset the
 card, how to set up its RAM controller, and whatever
 else is needed to do a reset. I even tried
 disassembling the VBIOS to figure this out. The
 necessary info is part of the source of the VBIOS ROM.
 
 Tell me the sequence needed and this unpaid hacker
 will add a reset function to the Rage128 FB driver for free.

BTW, does the int10 and related stuff from the X driver not do this for
you? I agree this would be too late for the fbdev driver, but still, you
could add a register-write dump to the BIOS emulator to see what it
writes to the registers, or something like that.

That said, things like memory timings will be board-dependent.
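
For example, if the emulator funnels all register stores through one
helper, the dump is a one-line addition (the helper name is invented
here; hook whatever write routine the emulator actually uses):

  #include <stdio.h>
  #include <stdint.h>

  static void emu_mmio_write32(volatile uint32_t *mmio, uint32_t reg, uint32_t val)
  {
      fprintf(stderr, "BIOS wrote 0x%08x to register 0x%04x\n", val, reg);
      mmio[reg / 4] = val;              /* perform the real store */
  }

Replaying the resulting register log against the second card would then
be a plausible starting point for a reset function.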

Friendly,

Sven Luther




Re: [Dri-devel] Dual-head (also S3 savage Duoview)

2003-03-01 Thread Sven Luther
Hello, ...

As you may have noticed, I have started a (sub)thread with David Dawes
on this subject on the xfree86 list.

Friendly,

Sven Luther




Re: [Dri-devel] Will work for free

2003-03-01 Thread Sven Luther
On Sat, Mar 01, 2003 at 02:31:30PM +0100, Peter Firefly Lund wrote:
 On Sat, 1 Mar 2003, Sven Luther wrote:
 
   Tell me the sequence needed and this unpaid hacker
   will add a reset function to the Rage128 FB driver for free.
 
  BTW, does the int10 and such stuff from the X driver not do this for you
 
 Only for the first card, I gather.

What use is it then, since the first card will be initialized by the
BIOS anyway? I thought the whole point of it was to be able to
initialize the other cards.

  ? I agree, this would be too late for the fbdev driver, but still, you
  could add a register write dump to the bios emulator to see what it does
  write to the registers or something such.
 
 Of course he could :)  I would do something like that if I were him.  It
 shouldn't be too hard.  A couple of days' work, judging from past reverse
 engineering tasks I have done.

BTW, do you know of a tool for snooping on what a Windows driver writes
to the graphics MMIO registers?

Friendly,

Sven Luther




Re: [Dri-devel] Using DRI to implement 2D X drivers

2003-02-28 Thread Sven Luther
On Thu, Feb 27, 2003 at 02:01:22PM -0800, Jon Smirl wrote:
 --- Sven Luther [EMAIL PROTECTED] wrote:
  Notice that the DRI drivers don't do anything like
  mode setting and
  such, they depend on the X drivers for that. So if
  you take away the X
  driver, you will not be able to get anything
  outputed on your monitor.
  Unless you use the fbdev drivers for example.
 
 It would be simple to lift the mode setting and
 hardware identification code out of the fb drivers and
 add it to the DRM kernel driver.  If you were still
 using the 2D drivers the new code in DRM would just be
 ignored. 

Sure, there was a proposal to merge the fbdev and drm drivers, but the
DRI people did not like it, one of the reasons being that fbdev is
Linux-specific while the drm builds for more than one OS, if I remember
correctly.

There is also DirectFB, which sits on top of fbdev and has an X server
running on top of it, though not DRI-enabled, I think.

  Did you investigate the Berlin project, which, if i
  am not wrong, uses
  OpenGL for drawing, and aims to be a X
  reemplacement. Not that it is
  anything near that point though.


 I'm not really looking for an X alternative.  I was
 just thinking about how to improve X over the next
 five to ten years. Both the Mac and Windows Longhorn
 are using new 3D enabled GUIs. This is more of a
 response to these new GUIs.

I think you should join the discussion about XFree86 5.0 when it happens.

 The goal would be to slowly transform the guts of X
 into something designed for 3D hardware instead of
 what we have now. This would be done such that no
 existing X apps would notice the changes.  Moore's law
 means that everyone is going to have super 3D hardware
 in a couple of years.

Even embedded or handheld systems? And anyway, the way you do 2D and 3D
in hardware is somewhat different, and most hardware has dedicated 2D
circuitry or some such.

 Without starting starting to think about 3D now, what
 will Linux's response to Longhorn be when it ships in
 a year or two?

Also, before you speak about unifying the 2D and 3D drivers
you need to look at how a 3D desktop would work.

Friendly,

Sven Luther




Re: [Dri-devel] Using DRI to implement 2D X drivers

2003-02-28 Thread Sven Luther
On Thu, Feb 27, 2003 at 06:04:36PM -0800, Jon Smirl wrote:
 --- Ian Romanick [EMAIL PROTECTED] wrote:
  Let's see, XFree86 supports 2D for about 50
  different chips, and it 
  supports 3D for about 5.  MS might be in a position
  to cast way support 
  for older hardware, but I don't think that we are.
  
 This is backwards thinking. In five years a Radeon
 9700 is going to cost $10 and be integrated into the
 motherboard. 

Ok, let's look at it differently.

Writing an accelerated 2D driver is quite easy: a week's work at most if
everything works out well and you have the docs available. This would
include Xv accel as well.

Now, writing the 3D driver part is quite a bit more difficult, and it
will not work on every OS XFree86 supports.

Friendly,

Sven Luther




Re: [Dri-devel] Using DRI to implement 2D X drivers

2003-02-28 Thread Sven Luther
On Fri, Feb 28, 2003 at 01:14:09PM +, Alan Cox wrote:
 On Fri, 2003-02-28 at 08:25, Sven Luther wrote:
  Also, before you speak about unifying the 2D and 3D drivers
  you need to look at how a 3D desktop would work.
 
 I would assume roughly like the Apple renders seem to work now, or how
 the opengl accelerated canvas works in E. That bit is hardly rocket 
 science.

So, no 2D windows on the faces of rotating cubes?

I was thinking more of the metaphor side of things than of the technical
one when I asked the question.

But sure, a desktop taskbar where each icon is an animated 3D object
floating a small distance above the taskbar would be neat; maybe the
taskbar could even have a horizontal feel or something like that.

BTW, I like the way Apple does icons that grow when you pass over them;
I guess they will use the same trick for the tabs in their new Safari
browser.

Friendly,

Sven Luther




Re: [Dri-devel] Using DRI to implement 2D X drivers

2003-02-28 Thread Sven Luther
On Fri, Feb 28, 2003 at 03:29:51PM +0100, Michel Dänzer wrote:
 On Fre, 2003-02-28 at 10:11, Felix Kühling wrote:
  
  I think this discussion is getting off track. We have to make clear what
  we are talking about here. From the first mail on this subject I got the
  impression, the goal was
  
  - to implement accelerated 2D primitives using the 3D graphics engine.
  
  This makes a lot of sense, as each transition between usage of the 2D
  and 3D engine has to flush the graphics pipeline (at least on radeon).
  It would both, increase performance and make the interaction between
  Xserver and DRI clients potentially simpler.
 
 Maybe. I'm not sure if the 3D engine can reasonably accelerate the
 traditional X primitives and meet its pixel perfect requirements though.
 
 Also, keep in mind that even the Radeon 3D drivers use the 2D engine for
 things like texture uploads.

Notice that chips like the Permedia 2/3 used their 3D engine for 2D
rendering. Sure, they were chips from 3Dlabs coming from the 3D world
into the 2D one, but still, maybe such an approach will become more
common in the future.

Friendly,

Sven Luther




Re: [Dri-devel] Dual-head (also S3 savage Duoview)

2003-02-27 Thread Sven Luther
On Thu, Feb 27, 2003 at 02:14:37AM +0100, Michel Dänzer wrote:
 On Mit, 2003-02-26 at 18:16, Alex Deucher wrote:
  --- Sven Luther [EMAIL PROTECTED] wrote:
 
 [ video memory management ]
 
   How is it done right now ? Is a part of the onchip memory reserved
   for framebuffer and XAA, and another part free for 3D use ?
  
  Not sure.  I'm not familiar with the memory manager either.  I seem to
  recall some drivers have the (compile time) option of allocating more
  or less to 2D vs 3D.  I believe it was the mga driver, and the issue
  was not having enough memory for Xv cause 3D has reserved too much.  
 
 Some drivers (tdfx and radeon at least) only reserve offscreen memory
 for 3D when it's actually used, the amount is still static though.

Do they get it from the OS memory manager, or do they do another trick ?

Friendly,

Sven Luther




Re: [Dri-devel] Dual-head (also S3 savage Duoview)

2003-02-27 Thread Sven Luther
On Thu, Feb 27, 2003 at 02:12:24AM +0100, Michel Dänzer wrote:
 On Mit, 2003-02-26 at 21:11, Sven Luther wrote:
  
  [...] because the DRI is just rendering to the framebuffer, it doesn't
  know if you are displaying it or not, and doesn't even care. The only
  issue is with size limits of the 3D engine, like Michel said, with the
  Radeon 3D engine being limited to 2Kx2K, which would mean a maximum
  resolution of 1024x768 or 1280x1024 if you stapple the screen
  vertically. I don't know the radeon specs, but i guess it should be
  possible to work around this by changin the base address, at least in
  the vertical stapling case, in the horizontal case, screen stride may
  become a problem.
 
 But then it's no longer a shared framebuffer in so far as the 3D parts
 need to support it explicitly.

Well, sure ...

That said, I am not sure I agree with you here. I don't really know how
the ATI cards work, but as I see it, you have the framebuffer with its
screen stride (is that also limited to 2048, or only the coordinates?),
and then you have the window you are rendering into. I have the feeling
that, provided the screen stride can be big enough, it would suffice to
set the screen base to the top-left corner of the 3D window and then
render into it. This would be part of the context information, or
whatever. We would still be limited to a maximum 2048x2048 OpenGL
window, but then, if the hardware can't handle more, there is no way to
do more.

This would allow this scheme with minimal support from the 3D parts.
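
A small illustration of the idea: re-anchor the engine's origin at the 3D
window, so the rendering coordinates stay within the 2K limit (the
function is mine for illustration; real hardware programs this through
its own surface registers):

  #include <stdint.h>

  /* address of the window's top-left pixel; the 3D engine then renders
   * with coordinates relative to this base, so a window of up to
   * 2048x2048 works anywhere on a larger virtual screen */
  static uint32_t surface_base(uint32_t fb_base, int win_x, int win_y,
                               int stride_bytes, int bytes_per_pixel)
  {
      return fb_base + (uint32_t)win_y * (uint32_t)stride_bytes
                     + (uint32_t)win_x * (uint32_t)bytes_per_pixel;
  }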

Friendly,

Sven Luther




Re: [Dri-devel] Dual-head (also S3 savage Duoview)

2003-02-27 Thread Sven Luther
On Thu, Feb 27, 2003 at 06:58:42PM +0100, Michel Dänzer wrote:
 On Don, 2003-02-27 at 09:33, Sven Luther wrote: 
  On Thu, Feb 27, 2003 at 02:14:37AM +0100, Michel Dänzer wrote:
   On Mit, 2003-02-26 at 18:16, Alex Deucher wrote:
-- Sven Luther [EMAIL PROTECTED] wrote:
   
   [ video memory management ]
   
 How is it done right now ? Is a part of the onchip memory reserved
 for framebuffer and XAA, and another part free for 3D use ?

Not sure.  I'm not familiar with the memory manager either.  I seem to
recall some drivers have the (compile time) option of allocating more
or less to 2D vs 3D.  I believe it was the mga driver, and the issue
was not having enough memory for Xv cause 3D has reserved too much.  
   
   Some drivers (tdfx and radeon at least) only reserve offscreen memory
   for 3D when it's actually used, the amount is still static though.
  
  Do they get it from the OS memory manager, or do they do another trick ?
 
 See RADEONDRITransitionTo{2,3}d() .

Ok, thanks.

They do use the OS memory manager.
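
In outline, those hooks amount to the following (the two function names
are the real ones from the radeon driver; the signatures are simplified
and the bodies are just my paraphrase of what they do):

  static void RADEONDRITransitionTo3d(void)
  {
      /* first 3D client appeared: grab a static-size chunk of
       * offscreen memory for the back and depth buffers */
  }

  static void RADEONDRITransitionTo2d(void)
  {
      /* last 3D client went away: hand the back/depth area back to
       * the offscreen memory manager for pixmap cache / Xv use */
  }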

Friendly,

Sven Luther




Re: [Dri-devel] Using DRI to implement 2D X drivers

2003-02-27 Thread Sven Luther
On Thu, Feb 27, 2003 at 10:46:49AM -0800, Jon Smirl wrote:
 --- Michel D?nzer [EMAIL PROTECTED] wrote:
  Is that what you're looking for?
 
 X has been with us for a long time. I was just thinking
 about doing some experiments with using OpenGL/DRI for
 the base graphics interface. 
 
 The idea would be to bring up DRI/OpenGL standalone
 first and then run the existing X on top of that base
 instead of the 2D drivers. Doing that would allow the
 window manager to be written using OpenGL. It would
 also provide a path for slowly migrating away from X.

Notice that the DRI drivers don't do anything like mode setting and
such; they depend on the X drivers for that. So if you take away the X
driver, you will not be able to get anything output on your monitor,
unless you use the fbdev drivers, for example.

Did you investigate the Berlin project, which, if I am not wrong, uses
OpenGL for drawing and aims to be an X replacement? Not that it is
anywhere near that point yet though.

Friendly,

Sven Luther




Re: [Dri-devel] Dual-head (also S3 savage Duoview)

2003-02-26 Thread Sven Luther
, this was the easy part, what are the problems we are facing ?

  o We need to make Xinerama aware of this.

  o Xv support (and possibly HW cursor). Well this works fine for most
2D drawing and probably 3D drawing, but the video or cursor overlays
will not know about it. We will still need to do those per screen,
which may not be possible on all dual head boards.

  o 3D memory management. Ideally we would use Ian's new memory manager
    for all memory allocation, including the framebuffer, XAA and Xv. I
    will have to look again at Ian's proposal, but I suppose you can
    declare the framebuffer as pinned to the onboard memory or some
    such. If I understood this correctly, this would _not_ work when the
    drm is not supported, because all the memory management is to be
    done in the drm kernel module, right? Also, this would require the
    offscreen memory manager to be adapted.

Well, I hope this covers it; I will now go and reread Ian's proposal and
see how the XAA interaction works out.

Are there things I have missed, or additional ideas?

Friendly,

Sven Luther




Re: [Dri-devel] Updated texmem-0-0-2 design document (24-Feb-03)

2003-02-26 Thread Sven Luther
On Mon, Feb 24, 2003 at 11:24:14AM -0800, Ian Romanick wrote:
 3.14 Interaction with XAA (open)
 
 What would be required to make the memory manager usable by the rest of
 XFree86 for allocating pixmaps and the display buffer?

As I understand it, the interaction is not so much with XAA as with the
offscreen memory manager. Also there is the matter of the framebuffers
per se, and maybe we have to think about the RandR extension too.

As it happens now (in 2D), the driver reserves some FB memory for the
framebuffers, possibly splits the memory between both heads, and of the
remaining memory allocates some to the offscreen memory manager and some
for the DRI, right?

The framebuffer memory can be single-buffered, but also double-buffered.
This is currently set at configuration time, but one could imagine that
in the future we do double buffering only when OpenGL windows are in use,
and are able to turn it on/off dynamically, and thus reclaim the memory
of the second buffer. Also, with RandR it is possible to change the
size of the display, and maybe it would be possible to add the unused
part to the offscreen memory manager, at least temporarily; but you
would have to mark these sections as non-pinnable or some such, since
you would need to reclaim them when switching back to the bigger
display.

Also, maybe it will be possible in the future to dynamically allocate a
second head or something such.

XAA and Xv use the offscreen memory manager for their pixel cache and
for storing the video image that is overlaid on the screen. The video
overlay surface would be pinned while the video is playing, but the
pixel cache can mostly be swapped out as needed.

And you also have to reserve memory for the cursor overlay, and maybe
some for other internal usage (like on-card ring buffers).

I hope I didn't forget anything.

So basically, there are pinned memory areas (framebuffer and Xv overlay
buffer) and other memory areas, which you can mostly handle the same as
textures.

That said, I guess we could simply reserve this memory early on in the
driver initialization: one chunk for the framebuffer, another chunk for
the offscreen memory manager, and a third chunk for the Xv overlay. This
memory would then be marked as pinned, and you could pass it to the
offscreen memory manager without worrying about it; it would be freed
again when we leave X or some such, and this wouldn't really need any
changes in the current 2D drivers.
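
For illustration, the init-time carve-up could look roughly like this
(sizes and names are invented here; a real driver computes them from the
configured mode):

  #include <stddef.h>

  struct carveup { size_t fb, cursor, overlay, offscreen, offscreen_size; };

  static struct carveup reserve_vram(size_t vram, size_t pitch, size_t height)
  {
      struct carveup c;
      size_t off = 0;
      c.fb      = off; off += pitch * height;    /* pinned: front buffer  */
      c.cursor  = off; off += 1024;              /* pinned: cursor image  */
      c.overlay = off; off += 1024 * 1024 * 2;   /* pinned: Xv overlay    */
      c.offscreen      = off;                    /* the rest goes to the  */
      c.offscreen_size = vram - off;             /* offscreen manager/DRI */
      return c;
  }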

That said, this would not help us to dynamically adjust the amount of
memory allocated to 2D or 3D; it would be mostly the same as what is
done today, and not very useful. I need to look more at how the
offscreen memory manager works to say more, but my guess is that really
using the new memory manager for 2D would mean writing an offscreen
memory manager frontend to it, and having XAA and co use that.

Also, you plan to write a small userspace library that can be used by
the DRI drivers to allocate memory. I suppose you don't want to copy
this code once for the DRI driver and once for the 2D drivers, so it
would be best if it were shareable.

Finally, maybe you should also give some thought to people wanting to
use the DRI together with fbdev, or some such, and make the memory
manager library available to fbdev-using apps as well; I haven't
thought much about that though.

Friendly,

Sven Luther




Re: [Dri-devel] Dual-head (also S3 savage Duoview)

2003-02-26 Thread Sven Luther
On Wed, Feb 26, 2003 at 09:16:53AM -0800, Alex Deucher wrote:
 
 --- Sven Luther [EMAIL PROTECTED] wrote:
  How is it done right now ? Is a part of the onchip memory reserved
  for
  framebuffer and XAA, and another part free for 3D use ?
 
 Not sure.  I'm not familiar with the memory manager either.  I seem to
 recall some drivers have the (compile time) option of allocating more
 or less to 2D vs 3D.  I believe it was the mga driver, and the issue
 was not having enough memory for Xv cause 3D has reserved too much.  

Yes, I am also faced with the choice of how to allocate memory,
although I have not yet gotten to DRI support, and it may also depend
on the specific graphics hardware, like Michel said.

   You could add the stuff from the device specific EntRec's to device
   specific Rec's.  then each pscrn would be responsible for not only
   frambuffer base and address but also primary and secondary virtual
   frame buffers and address.  The main framebuffer would hold the
  whole
   contents of the screen and each virtual framebuffer would basically
  be
   a viewport into the total screen.  I haven't had time to think
  this
   through throughly and I'm already starting to have questions... I
   dunno...food for thought I guess.
  
  Notice that the current entity sharing stuff does distinguish between
  primary head and secondary head, so you could just test if you are on
  the primary head, and do the offscreen memory allocation and (double)
  framebuffer reservation on the secondary head (the one that gets
  initialized second), so we know the size of both framebuffers.
  Naturally, this info should be shareable, but since i think it will
  not
  change, it is also ok to have it in the device specific Rec.
 
 Would this work with the current shareable entity stuff?  it seems like
 that would predicate two separate instances of the driver, in this case
 we would only want one, right? one instance driving two heads.

No, I think it would work OK; I would need to test, though, and I
don't have much time for it right now.

When we are in the chipset_probe function, we set the entity as
shareable, allocate the private EntRec, and handle some special cases
there.

Once we are in PreInit, or even in ScreenInit, we can do finer tests
and postpone the memory reservation and such until we have info on
both screens. Nothing is displayed anyway before ModeInit is called,
and both ScreenInit calls happen in succession, if I remember
correctly. Some care and checking are needed, though.

 o Mirrored viewports.
   = We use a mirror flag; both heads will be set to the same
     viewport.

 o Zoomed window.
   = One of the heads will have a viewport corresponding to a subpart
     of the other, with optional zooming if the sizes don't
     correspond.

 Maybe the two latter could be merged, with some clever option
 handling or such. Are there other things I am missing here?
 
 maybe you could make the zoomed mode part of the mirror mode, but
 specify the viewport in the screen section of the XF86Config.

Mmm, and how do you set the zoom values?

You know, I think this could work, as the info in the screen section
is used to call the mode-init function.

There are also X/Y mirrored modes and rotated modes, but I guess not
all hardware can do this, and it would be difficult to implement.

  Also, as a later point, it would be nice if these things could be
  changed dynamically, maybe as a response to some special key stroke
  or
  such like they have on laptops (or do these keystroke work even if
  there
  is no driver support for those ?).
 
 I think several driver authors have brought this up before.  I don't
 think there was a way for the driver to intercept keystrokes.  Thomas
 Winischhofer brought it up on one of the Xfree ML's, but I'm not sure
 if it was ever resolved.  

Ok, ...

  Now, this was the easy part, what are the problems we are facing ?
  
o We need to make Xinerama aware of this.
  
o Xv support (and possibly HW cursor). Well this works fine for
  most
  2D drawing and probably 3D drawing, but the video or cursor
  overlays
  will not know about it. We will still need to do those per
  screen,
  which may not be possible on all dual head boards.
 
 yeah.  Most dualhead boards have two HW cursors, some also have 2
 overlays, so those could be set up screen specifically.  However, for
 cards with one overlay, would it be possible to use the overlay on
 either head?  Say, if more of the window is on head 1 use the overlay
 with that crtc, if more is on head 2, then use it with that crtc, and
 specify head 1 as the default for cases with an even divide.  I know
 most boards with one overlay can  usually choose which head to use it
 on.  In fact the matrox driver for beos works that way.  the source is
 even available.

I don't really know; some allow the overlay only in single-head mode,
I guess.

o 3D memory management. Ideally we would use Ian's new

Re: [Dri-devel] Dual-head (also S3 savage Duoview)

2003-02-26 Thread Sven Luther
On Wed, Feb 26, 2003 at 09:40:18AM -0800, Linus Torvalds wrote:
 
 On Wed, 26 Feb 2003, Sven Luther wrote:
  
  Yes, and you have to divide the fb memory in two, one for each head, or
  something such, and each head will have its separate offscreen memory
  manager, possibly using different screen strides.
 
 Side note: I know that what people are mostly talking about is having two
 separate displays with different contents, but please, if you're thinking
 about this, try to make the solution generic enough that you can have two
 separate displays with the _same_ backing store content at different
 resolutions and different pointers.

Yes, that was the spirit of the proposal; see below for details. You
cannot have different color depths, though.

 Yeah, not all chips support this, but many do (and probably all that
 support multiview support this subset), and it's invaluable for having
 laptops that have small LCD's. In particular, it should be possible to
 have the pointer associated with the LCD, and scroll around on the LCD
 while the CRT output (ie usually a projector) shows the whole picture
 (obviously without scrolling or without any pointer).

See below.

 Right now, as far as I can tell, XFree86 can not do this sanely. You can 
 have two separate X servers for the different outputs, or you can have the 
 exact _same_ output on both CRT controllers, but you can't make the two 
 displays look like separate windows into the same area.

Well, XFree86 does not do it, and there is no way you can configure
it, but the drivers can be made to handle such things, even
dynamically, I think. After all, the Matrox proprietary driver already
does, and I hear the NVIDIA one does too.

 And it really sounds like the DRI dual-head is not that conceptually 
 different from this. The only issue is whether you share the frame buffer 
 or not.

Yes; the DRI just renders to the framebuffer, so it doesn't know
whether you are displaying it or not, and doesn't even care. The only
issue is the size limit of the 3D engine, like Michel said: with the
Radeon 3D engine being limited to 2Kx2K, that would mean a maximum
per-head resolution of 1024x768, or 1280x1024 if you stack the screens
vertically. I don't know the Radeon specs, but I guess it should be
possible to work around this by changing the base address, at least in
the vertical stacking case; in the horizontal case, the screen stride
may become a problem.
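
For the vertical stacking case, the arithmetic is simple enough to
sketch; the numbers below are only an example, not driver code:

#include <stdio.h>

/* Two 1280x1024 heads stacked vertically in one framebuffer: both
 * share a stride of one head's width, so the 3D engine never sees a
 * surface wider than 1280 pixels, and only the second CRTC's base
 * address differs. */
int main(void)
{
    unsigned w = 1280, h = 1024, bpp = 32;
    unsigned stride = w * bpp / 8;                   /* bytes per scanline */
    unsigned long base1 = 0;                         /* head 1 scans from 0 */
    unsigned long base2 = (unsigned long)h * stride; /* head 2 starts below */

    printf("stride %u bytes, head 1 base %lu, head 2 base %lu, total %lu\n",
           stride, base1, base2, base2 + (unsigned long)h * stride);
    return 0;
}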

 So you have several cases:
  - shared framebuffer, shared CRT control
  - shared framebuffer, but separate CRT control (and mouse focus or 
whatnot)

Mmm, didn't think about mouse focus.

  - separate framebuffers, and separate CRT control (and mouse focus)

That is the traditional dual head, where the separate displays can
later be joined via Xinerama.

 Is this what you call mirrored viewports?

Yes, sort of.

  o You can have the traditional dual head, with two separate
    framebuffers, each with only one viewport. This is what is
    currently used, and the only way to do dual head with two
    single-head graphics boards.

  o You can have shared-framebuffer dual head, with two viewports on
    the same framebuffer; this is what was shown in the original
    diagram.

  o Then you can choose to have one viewport be a mirror of the other.
    I believe most dual-head cards boot into such a mode. If one
    screen has a bigger resolution than the other (a 1024x768 LCD
    screen and an 800x600 video projector, for example), then one of
    the modes (usually the bigger one) can be set to be a zoomed
    version of the other, if your hardware supports this.

  o Finally you can have zoomed viewports, where one is the main
    viewport and the second shows a zoomed version of a window (or a
    subset) of the other. I think in the Matrox Windows driver this
    can be set dynamically: you select a window, and it gets zoomed on
    the second head. I don't know, but I suppose the second display
    follows even if the window is resized or some such.

The first is the current situation and cannot be emulated by the
others, but I think a more flexible framework could encompass the
other three. Basically, you would specify the framebuffer size and the
corresponding viewport for each head separately and independently.
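
As a sketch, all three shared-framebuffer cases reduce to one per-head
structure; the names are invented, and mirrored and zoomed outputs are
just particular choices of the source rectangle:

/* Hypothetical per-head viewport into a shared framebuffer. */
struct head_viewport {
    unsigned src_x, src_y;   /* top-left corner inside the framebuffer */
    unsigned src_w, src_h;   /* source rectangle scanned out by this head */
    unsigned out_w, out_h;   /* display mode; != src_w/src_h means zoom */
};

/* Mirrored: both heads carry the same src rectangle.
 * Zoomed:   head 2's src is a sub-rectangle of head 1's, with
 *           out_w/out_h larger than src_w/src_h. */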

Dynamic changing of viewports could maybe be done through an extension
of RandR, which already does something similar for resolution
switching, but I have not looked at it yet.

This would still be somewhat hacky right now, but could be nicely
formalized for 5.0.

Was this what you had in mind, or do you need some other
functionality?

Friendly,

Sven Luther





Re: [Dri-devel] S3TC (again)

2003-02-25 Thread Sven Luther
On Mon, Feb 24, 2003 at 06:44:05PM -0800, Ian Romanick wrote:
 Sven Luther wrote:
 On Mon, Feb 24, 2003 at 09:48:42AM -0800, Ian Romanick wrote:
 
 What about apps that send uncompressed textures into the driver, expect 
 the driver to compress then, and then read the textures back?  According 
 to the spec, the textures app will read-back compressed data.  I don't 
 see anyway to work around that.
 
 Mmm, didn't think about this either.
 
 I think the main problem here is that the extension are badly done, or
 at least in this case. They could be split in a s3tc using extension,
 which would just be able to send s3tc compressed data to a s3tc aware
 hardware, and then you would have the more complete s3tc extension,
 which can do more.
 
 It's not poorly written.  It was written to be orthogonal to the rest of 
 OpenGL.  If you send a texture into the driver as 
 GL_RGB/GL_UNSIGNED_BYTE and ask to read it back as GL_RGB332, the driver 
 reformats it.  Why should compression be any different?

Because the patent limits it.

 I think the problem is that the extension was written by engineers, 
 not lawyers. :)

Yes.

 The idea with the read-back is that you could render to a pbuffer, bind 
 the pbuffer to a texture, and read back the compressed data.  Then the 
 app could re-use the compressed texture later.

Yes, but that would only be useful if the hardware is able to handle
the compressed data.

But again, the problem is that the driver cannot (legally) compress
the data if the hardware cannot do it.

 BTW, what is the use of doing s3tc compression if the graphic card
 cannot handle it ? to save memory usage or something such ?
 
 If the card doesn't support S3TC, it should NOT export the extension.  I 
 don't see what you're getting at.  The only possible exception would be 
 cards that support some other compression format (i.e., Voodoo4 or i830 
 that support FXT1) that silently convert the texture data at load time.

Sure, ...

If the card does not support compression, then it does not export the
extension, and that is it. If on the contrary the card supports both
decompression and compression, then we can export the extension as it
is. But if it only supports decompression, then we have no way to tell
the app that right now.

If the app wants the driver to compress the data, then this means it
already has the data in uncompressed form, and it should either use
its own algorithm (for which it pays a licence or whatever) to
compress the data, or use another format.

That is how it stands now: we either have a licence and export the
full S3TC extension, or we don't have a licence and don't export it,
and the app either will not run or will not use S3TC.

Now, there is also the case where the app pre-compresses its textures,
or has obtained a licence to use S3TC, or whatever, and wants to feed
the compressed data to a card which supports S3TC decompression but
currently has no way of advertising that.

By adding a partial S3TC extension or whatever, signalling to the app
that it can send compressed textures to the board but that the driver
will not do the compression for it, it would be possible for such apps
to take advantage of the hardware without us needing to worry about
the patent.

And I seriously doubt there is any patent against setting the needed
bits in the command registers and moving the compressed data around. I
may be wrong, though, but that would mean the patent infringes on fair
use rights, or some such.

Now, there may be political reasons why we don't want to implement such
a thing, but we cannot hide behind the patent problem.

Friendly,

Sven Luther




Re: [Dri-devel] Dual-head

2003-02-24 Thread Sven Luther
On Mon, Feb 24, 2003 at 07:58:55AM -0700, Jens Owen wrote:
 A short cut to this whole thing would be to work on getting a second 
 head supported on a single X11 screen.  Then 3D comes for free:
 
   http://www.tungstengraphics.com/dri/Simple_Xinerama_DH.txt
 
 This solution provides Xinerama functionality without actually using the 
 Xinerama wrapper.

Mmm, I am curious about this: how does it get handled in the XFree86
configuration file? Also, I guess this would facilitate memory
management in dual-head configurations.

In general, dual-head configuration is pretty poor in the current X
server, at least the documented part. There is no way to specify
mirrored or zoomed window output, or to change the configuration
dynamically, but that would probably not be achievable without an
extension; maybe it could be incorporated into the RandR stuff.

I think Matrox did something like that in their proprietary drivers.

Friendly,

Sven Luther




Re: [Dri-devel] Dual-head

2003-02-24 Thread Sven Luther
On Mon, Feb 24, 2003 at 08:35:13AM -0700, Jens Owen wrote:
 Sven Luther wrote:
 On Mon, Feb 24, 2003 at 07:58:55AM -0700, Jens Owen wrote:
 
 A short cut to this whole thing would be to work on getting a second 
 head supported on a single X11 screen.  Then 3D comes for free:
 
  http://www.tungstengraphics.com/dri/Simple_Xinerama_DH.txt
 
 This solution provides Xinerama functionality without actually using the 
 Xinerama wrapper.
 
 
 Mmm, i am curious about this, how does it get handled in the XFree86
 configuration file.
 
 Possibly by adding a secondary monitor line to the screens section.

Yes, sure, but this cannot do anything like mirroring or window
zooming, only standard static dual head.

 Also, i guess this would facilitate the memory
 management in dual head configurations.
 
 Yes, the driver would need to handle how the memory is shared.

Yes, shared between both heads, but also between 2D and 3D, although I
don't know how the DRI handles this right now.
 
 In general, dual head configuration is pretty poor in the current X
 server, at least in the documented part. There is no way from specifying
 mirrored or zoomed window output, or to be able to change the
 configuration dynamically, but this would probably not be achievable
 without an extension, or maybe incorporated with the RandR stuff. 
 
 If you want things to be dynamic, you will need to stay away from X's 
 notion of a second X11 screen.  That's is static and persistant by 
 definition.  However, a secondary monitor to the primary screen could 
 be as dynamic as you want to make it.

Mmm, but you would need a fixed screen stride that is the sum of both
heads' widths, or some such. I guess using a virtual screen is the way
to go, but there may be problems with how apps (in particular desktop
managers) handle it.
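
The stride arithmetic for the side-by-side case is trivial but worth
spelling out; example numbers only:

#include <stdio.h>

/* Two heads side by side in one virtual screen: the shared stride
 * must span both widths, and head 2's viewport simply starts where
 * head 1's width ends. */
int main(void)
{
    unsigned w1 = 1280, w2 = 1024, bpp = 32;
    unsigned virtual_w = w1 + w2;              /* 2304 pixels */
    unsigned stride = virtual_w * bpp / 8;     /* 9216 bytes per scanline */

    printf("virtual width %u, stride %u bytes, head 2 at x=%u\n",
           virtual_w, stride, w1);
    return 0;
}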

 I think that matrox did something such in their proprietary drivers.
 
 There are other proprietary drivers that have also done similar 
 functionality.  However, I haven't seen anything in open source.

Mmm, I would be interested in working on this; I need to find some
time, though. Are there other people who are interested in this as
well?

Friendly,

Sven Luther




Re: [Dri-devel] Dual-head

2003-02-24 Thread Sven Luther
On Mon, Feb 24, 2003 at 06:05:21PM +0100, Michel Dänzer wrote:
 On Mon, 2003-02-24 at 16:09, Sven Luther wrote:
  On Mon, Feb 24, 2003 at 07:58:55AM -0700, Jens Owen wrote:
   A short cut to this whole thing would be to work on getting a second 
   head supported on a single X11 screen.  Then 3D comes for free:
   
 http://www.tungstengraphics.com/dri/Simple_Xinerama_DH.txt
   
   This solution provides Xinerama functionality without actually using the 
   Xinerama wrapper.
  
  Mmm, i am curious about this, how does it get handled in the XFree86
  configuration file. 
 
 I guess it would have to be handled in the driver until 5.0 or whenever
 the driver model will be rethought.

I have no problem with doing this in the driver, as long as every driver
does it the same way. The drivers will be doing the work anyway.

  Also, i guess this would facilitate the memory management in dual head 
  configurations.
 
 Indeed, as it's essentially a single screen. Integrating this with RandR
 could get interesting to say the least though...

The complicated bits arise when the two screens do not have the same
resolution. Although I guess using a virtual screen is the way to go,
I doubt that current desktop managers will understand that we are not
using part of the screen.

Funny that there is discussion about this on the DRI list, while when
I asked on the XFree86 list some time ago, nobody cared.

Friendly,

Sven Luther




Re: [Dri-devel] Dual-head

2003-02-24 Thread Sven Luther
On Mon, Feb 24, 2003 at 06:35:47PM +0100, Michel Dänzer wrote:
 On Mon, 2003-02-24 at 18:11, Sven Luther wrote:
  On Mon, Feb 24, 2003 at 06:05:21PM +0100, Michel Dänzer wrote:
   On Mon, 2003-02-24 at 16:09, Sven Luther wrote:
On Mon, Feb 24, 2003 at 07:58:55AM -0700, Jens Owen wrote:
 A short cut to this whole thing would be to work on getting a second 
 head supported on a single X11 screen.  Then 3D comes for free:
 
   http://www.tungstengraphics.com/dri/Simple_Xinerama_DH.txt
 
 This solution provides Xinerama functionality without actually using the 
 Xinerama wrapper.

Mmm, i am curious about this, how does it get handled in the XFree86
configuration file. 
   
   I guess it would have to be handled in the driver until 5.0 or whenever
   the driver model will be rethought.
  
  I have no problem with doing this in the driver, as long as every driver
  does it the same way. The drivers will be doing the work anyway.
 
 Duplicating a lot of work in the drivers which would be better
 centralized in the driver independent infrastructure.

Yes, but can it be done differently before 5.0?

  Altough i get using a virtual screen is the way to go, i have doubts that 
  the current desktop manager will understand that we are not using a part 
  of the screen.
 
 That's what Xinerama is for.

Yes, I did see that in your other mails.

  Funny that there is discution about this in the DRI list, while when i
  asked on the xfree86 list some time ago, nobody cared.
 
 Must have missed that, the major motivation behind this would be 3D
 acceleration though.

What about monitor plug & play, or when you want to do a presentation
and plug in the video projector without restarting X?

 Unfortunately, I just recalled a possibly major problem: AFAIK the 3D
 engine can only render to a rectangle up to 2048 pixels wide and high.
 That would be pretty limiting, in particular when the heads are side by
 side.

Mmm, is this a limitation of the DRI, or a limitation of the Radeon 3D
engine? My card can do 8Kx8K, so this would be no problem, and I am
sure future hardware will also go in that direction.

Friendly,

Sven Luther




Re: [Dri-devel] Dual-head

2003-02-24 Thread Sven Luther
On Mon, Feb 24, 2003 at 06:35:47PM +0100, Michel Dänzer wrote:
 On Mon, 2003-02-24 at 18:11, Sven Luther wrote:
  On Mon, Feb 24, 2003 at 06:05:21PM +0100, Michel Dänzer wrote:
   On Mon, 2003-02-24 at 16:09, Sven Luther wrote:
On Mon, Feb 24, 2003 at 07:58:55AM -0700, Jens Owen wrote:
 A short cut to this whole thing would be to work on getting a second 
 head supported on a single X11 screen.  Then 3D comes for free:
 
   http://www.tungstengraphics.com/dri/Simple_Xinerama_DH.txt
 
 This solution provides Xinerama functionality without actually using the 
 Xinerama wrapper.

Mmm, i am curious about this, how does it get handled in the XFree86
configuration file. 
   
   I guess it would have to be handled in the driver until 5.0 or whenever
   the driver model will be rethought.
  
  I have no problem with doing this in the driver, as long as every driver
  does it the same way. The drivers will be doing the work anyway.
 
 Duplicating a lot of work in the drivers which would be better
 centralized in the driver independent infrastructure.

But you could put it in the common directory as a set of helper
functions or some such, which would only need to be called by the
drivers?

And the DDX driver would need to be able to program the video outputs
anyway; what else is really needed? The Xinerama hinting stuff? The
RandR-like dynamic configuration?

What I think is most important is that common configuration options or
some such be defined before every driver goes at it in its own way.

Friendly,

Sven Luther




Re: [Dri-devel] S3TC (again)

2003-02-24 Thread Sven Luther
On Mon, Feb 24, 2003 at 09:48:42AM -0800, Ian Romanick wrote:
 Sven Luther wrote:
 Is there not a way to work around this ?
 
 If the hardware doesn't support s3tc, then the driver simply don't
 advertize the that it can handle s3tc textures, so you would get out of
 the need to decompress the textures in the driver. On the other hand, if
 it is not possible to tell the app that you don't know how to compress
 textures, and are asked for it, then you just send the texture
 uncompressed or something such.
 
 Ideally, there would be a way to tell the apps that you can receive and
 use s3tc compressed textures, but not uncompress them yourself.
 
 What about apps that send uncompressed textures into the driver, expect 
 the driver to compress then, and then read the textures back?  According 
 to the spec, the textures app will read-back compressed data.  I don't 
 see anyway to work around that.

Mmm, didn't think about this either.

I think the main problem here is that the extensions are badly
designed, or at least this one is. They could be split into an
S3TC-using extension, which would just send S3TC-compressed data to
S3TC-aware hardware, and the more complete S3TC extension, which can
do more.

BTW, what is the use of doing S3TC compression if the graphics card
cannot handle it? To save memory usage or some such?

 Being that I'm not a lawyer, I'm not sure that the other work arounds 
 would be legal either.  Given the ambiguities and risk of it all, until 
 we get explicit permission, I don't think any of the distros are likely 
 to distribute ANYTHING with ANY S3TC support enabled.

Sure, but I think there would be no problem with a simple
send-to-the-graphics-card extension, and any claim that this may cause
patent problems is just plain FUD.

Friendly,

Sven Luther




Re: [Dri-devel] S3TC (again)

2003-02-22 Thread Sven Luther
On Fri, Feb 21, 2003 at 03:27:21PM -0800, Ian Romanick wrote:
 Now, if an OpenGL application has a pile of textures already
 compressed with the S3TC algorithm, then I don't understand why the
 dri drivers can't simply offer the S3TC interfaces to the hardware,
 pass the compressed textures to the hardware and let the hardware get
 on with its licensed decompression of the textures as required.
 Likewise, if the OpenGL application passes compressed textures to the
 S3TC API then how it gets hold of the compressed textures in the first
 place is it's own responsibility -- the OpenGL API just passes them
 on.
 
 Look at the ARB_texture_compression and EXT_texture_compression_s3tc
 specs again.  You can specify uncompressed textures and have the driver
 compress the AND you can specify compressed textures and have the driver
 decompress them (to read them back into the application).  For example,
 Quake3 can use the S3's vendor-specific extension (can't remember the
 name of it right now), but it does NOT have ANY textures pre-compressed.
   It expects the driver to do the work.

Is there not a way to work around this?

If the hardware doesn't support S3TC, then the driver simply doesn't
advertise that it can handle S3TC textures, so you avoid the need to
decompress textures in the driver. On the other hand, if it is not
possible to tell the app that you don't know how to compress textures
and you are asked to, then you just send the texture uncompressed or
some such.

Ideally, there would be a way to tell apps that you can receive and
use S3TC-compressed textures, but cannot do the compression or
decompression yourself.

Friendly,

Sven Luther




Re: [Dri-devel] DRIInfoRec, Identifiers and DRI Configuration

2003-02-17 Thread Sven Luther
On Mon, Feb 17, 2003 at 01:35:25AM -0800, Philip Brown wrote:
 On Sun, Feb 16, 2003 at 10:52:34PM -0800, Ian Romanick wrote:
  Philip Brown wrote:
   So, why not do it by PCI vendor/devid ? That sort of information is visible
   from the DRI level, I believe. I think its just another Xserver internal
   call, isnt it?  
  
  So, what happens if I have four identical video cards in my system?
 
 why would you want different configs for them?

Because one is AGP and the other three are PCI?

Because you have different monitors attached to them, or want to run
different apps on them?

Friendly,

Sven Luther





Re: [Dri-devel] Configuration file format survey

2003-01-28 Thread Sven Luther
On Tue, Jan 28, 2003 at 10:37:03AM +, Ian Molton wrote:
 On Tue, 28 Jan 2003 02:55:22 -0600 (CST)
 D. Hageman [EMAIL PROTECTED] wrote:
 
  
   So what are the technical advantages of XML in this case?
  
  Quick List --
  
  *) Text Based - easy to edit.
 
 Text based does NOT imply easy to edit. look at USBsnopys' output. its
 completely illegible.

Well, I think there is a misunderstanding about what 'easy to read'
means. After all, some would say it is OK to use a binary format: you
just need a hex editor and you can modify it to your heart's content.
I personally think XML is not really all that readable, especially
because of the end tags, which maybe are not really needed, although I
don't know exactly what we are going to store.

  *) Extensible, no painting ourselves into a corner. One can easily
  extend the spec without having to rewrite the entire parser.
 
 Also irrelevant. the USERS will never need to do this.

Well, I think this is relevant and a real advantage of using XML.
Maybe the user will not need it, but imagine that each driver needs to
support a different set of parameters: there would be a common set of
parameters, and each driver could extend it to define driver-specific
ones.

Maybe before we continue this XML flamewar, it would be best to define
exactly what we are going to express in this config file. Will it
include only booleans and numeric values, or maybe also matrices, or
even other stuff, such as graphics programs?
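
For what it's worth, reading such a file with the libxml2 already in
the tree is short; the <dri>/<driver>/<option> layout and the file
name below are invented just to show the shape of the code:

#include <libxml/parser.h>
#include <stdio.h>

int main(void)
{
    xmlDocPtr doc = xmlParseFile("drirc.xml");
    if (!doc)
        return 1;
    xmlNodePtr root = xmlDocGetRootElement(doc);
    if (!root) {
        xmlFreeDoc(doc);
        return 1;
    }
    /* Walk <driver> elements, printing each <option name=... value=...>. */
    for (xmlNodePtr drv = root->children; drv; drv = drv->next) {
        if (drv->type != XML_ELEMENT_NODE ||
            xmlStrcmp(drv->name, (const xmlChar *)"driver"))
            continue;
        for (xmlNodePtr opt = drv->children; opt; opt = opt->next) {
            if (opt->type != XML_ELEMENT_NODE)
                continue;
            xmlChar *name = xmlGetProp(opt, (const xmlChar *)"name");
            xmlChar *value = xmlGetProp(opt, (const xmlChar *)"value");
            if (name && value)
                printf("option %s = %s\n", name, value);
            xmlFree(name);
            xmlFree(value);
        }
    }
    xmlFreeDoc(doc);
    return 0;
}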

Friendly,

Sven Luther





Re: [Dri-devel] Configuration file format survey

2003-01-27 Thread Sven Luther
On Mon, Jan 27, 2003 at 06:04:19PM -0600, D. Hageman wrote:
 
 I think you misunderstand.  We aren't replacing the XF86Config file here.  
 This is for DRI specific driver settings with capabilities extending to 
 having special options for individual programs if need be.  
 
 Now if I am mistaken and you did understand ...
 
 Your argument is bogus.  You can't claim that every XML file format leads 
 to unreadable files.   Now, if you have a good *technical* reasons why we 
 shouldn't use XML - I would love to hear them.  
 
 Couple of good reasons to use XML:
 
 *) Parser with validation capabilites already written.
 *) More and more utilities are using it ... fontconfig for example.
 *) bindings for all major languages.
 *) A copy of libxml already exists in the tree if a person doesn't already
have it.
 *) Extensible.
 *) It can be edited with any text editor.

Another disadvantage is that XML parsing is so damn slow.

Friendly,

Sven Luther





Re: [Dri-devel] The next round of texture memory management...

2003-01-20 Thread Sven Luther
On Mon, Jan 20, 2003 at 09:30:50AM -0800, Ian Romanick wrote:
 Sven Luther wrote:
 On Thu, Jan 16, 2003 at 05:33:42PM -0800, Ian Romanick wrote:
 
 1. Single-copy textures
 
 Right now each texture exists in two or three places.  There is a copy 
 in on-card or AGP memory, in system memory (managed by the driver), and 
 in application memory.  Any solution should be able to eliminate one or 
 two of those copies.
 
 ...
 
 BTW, since you are looking into this, have you thought about graphic
 chips which can do MMU like tricks. I am not sure if the current set of
 graphic chips the DRI runs on do this kind of stuff, but they well may
 in the future. I know the gamma drm module use the gamma's virtual
 memory table to not need to do virtual-physical conversion. But more
 importantly to you, altough there is not yet a DRI driver for it, the
 3Dlabs permedia3 can use virtual memory for its textures. That is you
 can basically set up the graphic boards memory as a cache memory, and
 have the the MMU-like unit swap the memory pages from host memory, using
 i suppose its own page replacement algorithm.
 
 The only chips that I know of that support this technology are the 
 various, recent 3dlabs chips.  They have a number of patents on this 
 technology, and, AFAIK, they have no intention of licensing it to anyone 
 for all the tea in China.

Well, sure ...

But that is no reason not to support it; who knows, Creative and
3Dlabs may release a consumer board supporting this next year, and
then there will be lots of those around.

And don't you think other manufacturers may develop their own
technology for this? Virtual memory is hardly so innovative that a
patent could block ATI or NVIDIA from developing something similar;
after all, it has been used in CPUs for years.

 I agree that it is a good idea to keep virtual textures in mind, but, 
 since we don't have any hardware documentation for it, it will be 
 difficult to do more than that.

Mmm, I know of at least three people apart from me who have the docs
for the pm3. Sure, you need to sign an NDA, but who doesn't these
days. I also have the hardware for it and am planning to do DRI work
on it in the future; one of those three people has expressed interest
in having the DRI enabled for the pm3, and I did begin some work on
the gamma + pm3 combo.

That said, there is not really all that much I can contribute to this
discussion, since I am under NDA; but, well, virtual memory for
graphics chips works much the same as virtual memory for CPUs, so I
guess it is easy to take those things into account as well.
 
 
 



Re: [Dri-devel] The next round of texture memory management...

2003-01-20 Thread Sven Luther
On Mon, Jan 20, 2003 at 11:15:18AM -0800, Ian Romanick wrote:
 But that is not reason for not supporting it or something, who knows,
 Creative and 3Dlabs may release a consumer board supporting those next
 year, and there will be lot of those around.
 
 Right, but without documentation, it's hard to know what we would need 
 to do to support it or what we would need to NOT do to prevent 
 supporting it.

Sure, although I believe it just does in hardware some of the things
you plan to do in software, so there should be some way to let the
hardware manage the buffers, the swapping of memory, and the like,
instead of doing it in software.

 And you don't think other manufacturer may develop their own technology
 doing this ? virtual memory is hardly that inovative that a patent can
 block ATI or NVidia from developping a similar issue, after all it is
 used since years in CPUs.
 
 Not to get into too much of a tangent, virtual memory is not novel. 

So there should not be much surprise about what this feature does,
although the pm3 is more than three years old, I think. Simply think
of the feature as what a CPU can do with regard to virtual memory.

 Mmm, i know of at least 3 persons who have the docs for the pm3 apart
 from me, sure, you need to sign an NDA, but who does not these days, i
 also have the hardware for it, and am planning to do DRI work for those
 in the future, and one of those 3 persons has expressed interrest on
 having DRI enabled for the pm3, and i did begin some work for the gamma
 + pm3 combo.
 
 Your input would be very helpful, then.  Is there any chance you could 
 port the gamma driver to use the texmem interface in the texmem-0-0-1 
 branch? :) That's one of the few drivers that I had given up hope on 
 seeing ported.  The other is the tdfx driver.

Well, I could do it, but ...

The current gamma driver was designed back then with the GMX2000 in
mind, which uses a gamma and one or two MX rasterizers. I have an
Appian Jeronimo 2000, with a gamma and two pm3 chips, each rasterizing
for one head. Before porting it to the new texmem branch, I need to
port it to my board, which I have tried to do, but I didn't find
enough time for it ;(((

 That said, sure, there is not really all that much i can contribute to
 this discution, since i am under NDA, but, well, virtual memory for
 graphic chips work all the same as virtual memory for CPUs so i guess it
 is easy to take those things into account also.
 
 Fair enough.  You'll have to be careful. :)

Yes, ...

Friendly,

Sven Luther





Re: [Dri-devel] The next round of texture memory management...

2003-01-18 Thread Sven Luther
On Thu, Jan 16, 2003 at 05:33:42PM -0800, Ian Romanick wrote:
 What follows is the collected requirements for the new DRI memory 
 manager.  This list is the product of several discussions between Brian, 
 Keith, Allen, and myself several months ago.  After the list, I have 
 included some of my thoughts on the big picture that I see from these 
 requirements.
 
 1. Single-copy textures
 
 Right now each texture exists in two or three places.  There is a copy 
 in on-card or AGP memory, in system memory (managed by the driver), and 
 in application memory.  Any solution should be able to eliminate one or 
 two of those copies.
...

BTW, since you are looking into this, have you thought about graphics
chips that can do MMU-like tricks? I am not sure whether the current
set of graphics chips the DRI runs on does this kind of thing, but
they well may in the future. I know the gamma DRM module uses the
gamma's virtual memory table to avoid virtual-to-physical conversion.
But more importantly for you, although there is not yet a DRI driver
for it, the 3Dlabs Permedia3 can use virtual memory for its textures.
That is, you can basically set up the graphics board's memory as a
cache and have the MMU-like unit swap memory pages in from host
memory, using, I suppose, its own page replacement algorithm.
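
Conceptually it is ordinary demand paging; here is a toy model, with
all names invented (the real chip would do this in silicon with its
own replacement policy):

#include <stdbool.h>
#include <stdint.h>

#define TEX_PAGE_SHIFT 12                 /* assume 4 KiB texture pages */
#define TEX_PAGE_MASK  ((1u << TEX_PAGE_SHIFT) - 1)

struct tex_pte {
    uint32_t card_page;                   /* page frame in on-card memory */
    bool resident;                        /* currently cached on the card? */
};

/* Translate a virtual texture address; on a miss, call back into the
 * driver to copy the page in from host memory and update the entry. */
static uint32_t tex_translate(struct tex_pte *pt, uint32_t virt,
                              uint32_t (*page_in)(uint32_t vpage))
{
    uint32_t vpage = virt >> TEX_PAGE_SHIFT;

    if (!pt[vpage].resident) {            /* the "page fault" case */
        pt[vpage].card_page = page_in(vpage);
        pt[vpage].resident = true;
    }
    return (pt[vpage].card_page << TEX_PAGE_SHIFT) | (virt & TEX_PAGE_MASK);
}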

Friendly,

Sven Luther






Re: [Dri-devel] DRI/DRM/stuff overview?

2002-12-14 Thread Sven Luther
On Sat, Dec 14, 2002 at 09:42:58AM +0300, Samium Gromoff wrote:
  One unrelated question: is there any 3Dlabs permedia chipset work going
  anywhere around?

I began work on this some time ago, but could not finish it for lack
of time.

Alan put my changes in the tdlabs-0-0-1-branch, so you could look
there.

It contains some (untested) changes to the gamma DRM module, so it can
handle either the gamma or the pm2/3/4 for the DMA stuff.

I also changed the glint_dri stuff in the X glint driver, so it can
handle either the gamma + MX, the gamma + pm3, the pm3 alone or the pm2
alone.

I was not able to test this, since I did not have the time to work on
the library driver; it should work, though (as much as any glint/gamma
DRI stuff works, which I think nobody has really tested for years, or
at least months).

There is also Måns Rullgård [EMAIL PROTECTED], who has expressed
interest in making the DRI work on his PCI-based Permedia3 board (he
has an Alpha box), but I think he is doing video stuff right now.

I am interested in doing DRI work for the gamma + dual pm3 myself, but
sadly don't have the time right now.

Friendly,

Sven Luther





Re: [Dri-devel] DRM Kernel Questions

2002-12-12 Thread Sven Luther
On Thu, Dec 12, 2002 at 07:14:45PM -0800, Philip Brown wrote:
 On Thu, Dec 12, 2002 at 01:02:30PM +, Alan Cox wrote:
  ...
  It takes two to tango so its not just what I need its also what do they
  need.
  
  What I would like to see would be:
  
  A single definitive source for the DRM code, one where contributions go
  back from Linux, from *BSD, from core XFree86 as well as from the DRI
  project.
 
 
 May I suggest that the best way to do that, is to keep the kernel DRM code,
 as a **SEPARATE PROJECT**, at least on the source code repository level.
 
 IMO, there should be a separate repository, or at least a separate
 directory at the same level as the top-level xc.
 
 The only thing from the driver that really belongs buried in the xfree86
 server code, is a single, os-neutral copy of drm.h, from whatever version
 of DRM that branch of xfree86 is officially supporting.
 
 
 Once you have achieved that separation, you have something actually
 resembling a formal API between user-level and kernel driver level.
 That is the only way things are going to get cleaned up, process-wise.
 Not to mention greatly aiding kernel coding efforts for non-linux
 platforms.

And there are also people wanting to use the DRM outside of XFree86,
maybe even outside of the DRI; I'm not sure, though.

Friendly,

Sven Luther





Re: [Dri-devel] Future Look: OpenGL 2.0

2002-11-13 Thread Sven Luther
On Wed, Nov 13, 2002 at 04:28:05AM +0100, Dieter Nützel wrote:
 http://www.xtremepccentral.com/articles/opengl2/

It doesn't really say much more than the original OpenGL 2.0
presentation, which can be found at:

http://www.3dlabs.com/support/developer/ogl2/index.htm

Friendly,

Sven Luther





Re: [Dri-devel] libGL{U,w}

2002-11-07 Thread Sven Luther
On Thu, Nov 07, 2002 at 05:26:40PM +0100, Michel Dänzer wrote:
 On Don, 2002-11-07 at 16:56, Keith Whitwell wrote:
  Michel Dänzer wrote:
   These no longer get built by default. Any objections against the
   attached patch?
  
  Actually if they're not built, I think we should ditch them from cvs.  We're 
  not working on them.
 
 In that case I'd vote again for removing unused drivers etc. as well.

Err, no, please; I still have plans to work on the gamma driver, but
don't have much time right now. (I guess it would be the prime
candidate for removal, wouldn't it?)

Friendly,

Sven Luther





Re: [Dri-devel] Radeon 8500, XFree86 CVS vs DRI..

2002-09-19 Thread Sven LUTHER

On Wed, Sep 18, 2002 at 11:42:35PM -0700, Linus Torvalds wrote:
 
 Is there any reason why the DRI tree isn't tracking the XFree86 CVS tree
 more? On my Radeon 8500, the DRI tree apparently still doesn't do the Xv
 extension correctly, even though XFree86 CVS has done it for ages (thanks
 to Keith for getting the relevant bits off Gatos). So I have to have two
 different X servers, depending on whether I want to watch DVD's or whether
 I want to check 3D behaviour.
 
 (The XFree86 CVS tree also has that funky red-cursor-with-a-shadow thing, 
 which I've not yet decided if I like or dis-like ;)

Because, as I understand it, the development cycle of XFree/DRI is as
follows:

  o XFree does a new release.
  o At this point DRI and XFree are in sync, so DRI development is
  done in the DRI CVS, based on the last released XFree tree.
  o XFree development is done in the XFree CVS.
  o Sometime near the end of the XFree development cycle, the two
  trees are synced by hand, once the sync can be done in a
  satisfactory way.
  o XFree does a new release, and it all begins again.

I think one of the reasons for this is that the DRI tree is not
complete and needs XFree to build and work correctly, and it is easier
for people building from DRI CVS to have the 4.2.0 tarball installed
and build against that.

I guess people will try to merge useful fixes from the XFree tree into
the DRI tree if they feel they need them, thus making things easier
for the folks doing the final sync.

Friendly,

Sven Luther





[Dri-devel] Re: APT-able Debian DRI CVS packages ready for testing.

2001-04-02 Thread Sven LUTHER

On Fri, Mar 30, 2001 at 01:58:19PM -0600, Thomas E. Vaughan wrote:
 On Fri, Mar 30, 2001 at 06:09:32AM -0800, Marc Wilson wrote:
 
  Ok, perhaps I'm really dense, but what's the advantage of using this over
  just using Branden's X debs?  I have 3d acceleration and working DRI for
  both my V3 and my G450 as it stands, without adding any further software.
  
  And I can turn it on or off as I want (if I need 24 bit for something,
  for example).
  
  So, what's the scoop?
 
 Well, I develop a large OpenGL visualization application for weather radar
 volume data, and my G400 card would freak out and paint garbage all over
 the screen whenever I loaded up some big textures.  So I was waiting for
 someone to get the latest DRI stuff aptable, and I am grateful.  My recent
 workaround has been to use the utah-glx stuff, but then I didn't get
 antialiased fonts in KDE.

Does the Matrox DRI stuff still freeze on SMP boxes, as was previously
the case?

Friendly,

Sven Luther
