Re: [Dri-devel] my X-Kernel question

2001-10-22 Thread Peter Surda

On Mon, Oct 22, 2001 at 02:27:23AM -0400, [EMAIL PROTECTED] wrote:
 The biggest reason against this is that X (as it is now) supports not only
 Linux but many other OSes: in particular the BSDs and Solaris. Moving
 stuff into the Linux kernel creates a fork of the drivers, which is
 undesirable.
That's a lame excuse. I'm using Linux so I won't suffer from Windows; why
should I suffer because of BSD or Solaris?

Rant
About the precise vsync thingy we're talking about on xpert: we need kernel
support anyway. So instead of calling an in-kernel video driver lame and
uncool and adding a strange, inflexible function god-knows-where, why
shouldn't we move the whole driver structure into the kernel? Drivers for
every other device type live in the kernel. What would the
anti-video-in-kernel guys think if I claimed that network cards should have
userspace drivers in some sort of uber-daemon, and that an app wanting to
make a TCP connection should contact this uber-daemon? I don't want to have
StarOffice in the kernel, but the DRIVER STRUCTURE. For a great UI, we need
DMA, vsync and devices communicating with each other directly or with little
overhead. Why insist on doing this in userspace? The reason to put it into
the kernel isn't speed; it's that it is much easier to add and maintain
drivers, add functionality, share code and do fancy stuff there. DRI is a
very good example of what I mean.
/Rant

Short explanation of the precise vsync thingy: for fluent video playback it
is necessary to precisely coordinate the number of frames the monitor
displays. It is very visible on a TV. When I have a 25fps video, it should be
EXACTLY one frame of data == one frame on the TV. Currently, I can tell the
card (ATI) to blit on vsync (so it won't tear), but I can't tell it not to
miss a frame, or to block until vsync. This results in visible jumps when
suddenly the same picture stays on screen for double the duration of the
others; it sucks, and I can't do anything about it without SOME kernel
support. Telling the X server to poll for vsync and eat CPU is lame.
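
For illustration, a minimal sketch of what "block until vsync" could look
like from user space, assuming a framebuffer driver that implements the
FBIO_WAITFORVSYNC ioctl (a driver-specific extension at this point; most
drivers don't offer it):

/* Block until the next vertical retrace instead of polling. The
 * process sleeps in the kernel and is woken from the driver's vblank
 * interrupt handler, so no CPU is burned waiting. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#ifndef FBIO_WAITFORVSYNC
#define FBIO_WAITFORVSYNC _IOW('F', 0x20, unsigned int)
#endif

int main(void)
{
    unsigned int crtc = 0;           /* primary display */
    int frame;
    int fd = open("/dev/fb0", O_RDWR);

    if (fd < 0) {
        perror("open /dev/fb0");
        return 1;
    }
    for (frame = 0; frame < 25; frame++) {
        if (ioctl(fd, FBIO_WAITFORVSYNC, &crtc) < 0) {
            perror("FBIO_WAITFORVSYNC");
            break;
        }
        /* blit exactly one decoded video frame here */
    }
    close(fd);
    return 0;
}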

Bye,

Peter Surda (Shurdeek) [EMAIL PROTECTED], ICQ 10236103, +436505122023

--
   Disc space - The final frontier.



Re: [Dri-devel] my X-Kernel question

2001-10-22 Thread Alan Cox

 we move the whole driver structure to kernel? Drivers for every other device

Not really. 

 STRUCTURE. For a great UI, we need DMA, vsync and devices communicating with
 each other directly or with little overhead. Why insist on doing this in

A video driver has to have extremely good latency; syscalls are overhead that
you generally do not want. There are specific things you want kernel help
with: AGP management (and thus AGP DMA), context switching for DRI, and maybe
some day interrupt handling for video vsync events and wiring them into
the XSync extension.

The rest is a bit questionable as a kernel space candidate, but if you 
want it in kernel go ahead - XFree86 supports both models. 
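
As a concrete sketch of that last point: a DRM-level vblank wait could look
like the following from user space. This assumes libdrm's drmWaitVBlank()
wrapper and a driver that actually implements the vblank ioctl, which not
every driver does.

/* Sleep until the next vertical blank via the DRM, then read back the
 * vblank counter -- the kind of event an XSync extension counter could
 * be wired to. Assumes drm_fd came from drmOpen(). */
#include <stdio.h>
#include <xf86drm.h>

int wait_one_vblank(int drm_fd)
{
    drmVBlank vbl;

    vbl.request.type = DRM_VBLANK_RELATIVE; /* count from "now" */
    vbl.request.sequence = 1;               /* wake after one vblank */
    vbl.request.signal = 0;                 /* no signal delivery */
    if (drmWaitVBlank(drm_fd, &vbl) != 0) {
        fprintf(stderr, "vblank wait failed\n");
        return -1;
    }
    /* The driver filled in the current counter and a timestamp. */
    return (int)vbl.reply.sequence;
}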

___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Xpert]Re: [Dri-devel] my X-Kernel question

2001-10-22 Thread Dr Andrew C Aitchison

On Mon, 22 Oct 2001, Peter Surda wrote:

 On Mon, Oct 22, 2001 at 05:48:56AM +0100, MichaelM wrote:
 Would you consider it a good idea to make DRI part of the source of a
 kernel? Direct 3D graphics supported from the boot sequence.
 Hmm, I thought DRI was already part of the kernel? Perhaps you meant the DRM part of it.

 I'm really concerned about your answer. There was a whole thread on
 the linux-kernel mailing list about the hypothesis of the release of
 an X-Kernel, a kernel which would include built-in desktop support.
 I think it is a great idea to have a kernel implementation of the X server.
 But it would have to be more modular than the current XF86, and also have a
 highly flexible structure, so that adding new types of devices and
 functionality wouldn't pose problems. I think this is currently XF86's
 biggest drawback.

XFree86 can run on top of the framebuffer (fbdev I think, but maybe
vesafb or something else - I haven't been keeping up).
Last time I looked there was a specific accelerated framebuffer interface
for MGA cards, so there may be a problem making the interface sufficiently
general for acceleration on all cards.
Provided that this can be done, it seems to me that fbdev + DRI could
be the basis for a kernel level graphics driver, with a user level X
server on top.
I believe I read that SGI Irix works like this (or did once), and
I believe it is also the model that GGI is aiming for.
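
To make the fbdev idea concrete, here is a minimal sketch of the
device-independent side of that interface: querying the current video mode
from /dev/fb0 with the standard fbdev ioctl.

/* Read the current mode through the framebuffer device. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    struct fb_var_screeninfo var;
    int fd = open("/dev/fb0", O_RDONLY);

    if (fd < 0) {
        perror("open /dev/fb0");
        return 1;
    }
    if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0) {
        perror("FBIOGET_VSCREENINFO");
        close(fd);
        return 1;
    }
    printf("%ux%u, %u bpp\n", var.xres, var.yres, var.bits_per_pixel);
    close(fd);
    return 0;
}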

However, moving all the hardware drivers from the X server to the
kernel would be a big job (it took 3-4 years to move them all from
XFree86 3.3 to 4.x). Even if this kernel graphics system worked on
Linux and the *BSD OSes, XFree86 runs on another dozen Unixes,
not to mention OS/2 and Win32, and possibly other non-Unix platforms.

I think that most active developers would find that they had to
concentrate on either this kernel based graphics, or the platform
neutral user level XFree86. Dividing development like this would be
bad for both projects.

 Oh and one more thing: the driver should autodetect if it is running on
 the same videocard as the virtual terminal stuff, so that the first card
 will simply open a new VT but secondary card will run independently of
 this VT stuff. This would finally allow a decent way to concurrently run
 2 separate X sessions on the same machine using local hardware.

I'm convinced that the solution to that is for the kernel VT support to
support multiple sessions. Then the user-level X server can just take over
a single VT session (possibly via fbdev).
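
For reference, taking over a VT with the existing kernel interfaces looks
roughly like this (standard ioctls from linux/vt.h; approximately what an X
server does at startup, with error handling trimmed):

/* Ask the VT layer for the first free virtual terminal and switch
 * the console to it. */
#include <fcntl.h>
#include <linux/vt.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int vtno;
    int fd = open("/dev/tty0", O_RDWR);

    if (fd < 0) {
        perror("open /dev/tty0");
        return 1;
    }
    if (ioctl(fd, VT_OPENQRY, &vtno) < 0 || vtno < 0) {
        fprintf(stderr, "no free VT\n");
        close(fd);
        return 1;
    }
    printf("first free VT: %d\n", vtno);
    ioctl(fd, VT_ACTIVATE, vtno);     /* switch the console to it */
    ioctl(fd, VT_WAITACTIVE, vtno);   /* block until the switch is done */
    close(fd);
    return 0;
}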

-- 
Dr. Andrew C. Aitchison Computer Officer, DPMMS, Cambridge
[EMAIL PROTECTED]   http://www.dpmms.cam.ac.uk/~werdna






[Dri-devel] test

2001-10-22 Thread Rajiv Malik

test!

Sorry, list, but messages were not coming to me.




Re: [Dri-devel] my X-Kernel question

2001-10-22 Thread Jeffrey W. Baker



On Mon, 22 Oct 2001, Peter Surda wrote:

 On Sun, Oct 21, 2001 at 10:01:33PM -0700, Jeffrey W. Baker wrote:
  Send us a mail that isn't from a windows machine, and you might get an
  interesting discussion.  As it stands, I can barely tell what you are going
  on about.

 Dude, I think that Outlook is crap too; I had to administer a couple
 of them for a year and it was a nightmare. But that isn't a reason to
 flame. Any decent mail client (such as the mutt I'm using) can display
 mails with lines longer than 72 chars and HTML attachments without
 hassle. I'm pretty sure there is a way to tell your pine to do that as
 well. If there isn't, use the source and make it so :-).

There is, but that isn't what I'm talking about.  I don't want that
pointless wankerfest to spread from linux-kernel to every other mailing
list I am on.






Re: [Dri-devel] Mach64: mach64-0-0-2-branch created and updated

2001-10-22 Thread Malte Cornils

Manuel Teira wrote:
 If you find any problem compiling the new branch, please let me know.

OK, let me see. With regards to that libXau problem: it's sufficient to
just copy /usr/X11R6/lib to /usr/X11R6-DRI/lib; the rest of the tree
isn't necessary. Otherwise, I followed the DRI compilation guide under
Documentation.

The build (or rather, the make install) failed until I removed tdfx
from line 821 in file
X11R6-DRI/build/xc/lib/GL/mesa/src/drv/Makefile.

The instructions for making the nls stuff seem to be outdated, since
there no longer is any xc/nls in CVS.

Adding /usr/X11R6-DRI/lib to ld.so.conf doesn't help for libGL and libGLU,
since those already exist from any previous X installation in /usr/lib,
and /usr/lib is implicitly given preference over anything from ld.so.conf.
I had to move the old ones away and symlink/copy over the new ones.

Unfortunately, I have a PCI Mach64; modprobe mach64 failed without a
helpful error message since agpgart wasn't loaded into the kernel. After
modprobing agpgart and then mach64 (that last one is probably also
handled automagically at X startup), glxinfo reported direct rendering
enabled. And it was; small differences in the display of 3D apps showed
that. However, performance was about as slow as software rendering; at
least for gltron, I got about the same average fps as with software Mesa.

That is probably because my card is not an AGP variant (although my
mainboard does have a - currently empty - AGP slot).

That's about it - I tested 3D with gears, gltron and blender, and all
worked with a few glitches (not important right now).

So, I hope you'll find my report useful. It certainly was fun for
me, believe it or not.

Thanks for the great work so far,
Yours Malte #8-)




Re: [Dri-devel] my X-Kernel question

2001-10-22 Thread Daryll Strauss

On Mon, Oct 22, 2001 at 05:48:56AM +0100, MichaelM wrote:

 Would you consider it a good idea to make DRI part of the source of a
  kernel? Direct 3d graphics supported from the boot sequence.

 I'm really concerned about your answer. There was a whole thread on
  the linux-kernel mailing list about the hypothesis of the release of
  an X-Kernel, a kernel which would include built-in desktop
  support. Most people answered no, this would be ridiculous; others said
  yes, but hardware manufacturers are too unhelpful, so this would be a
  totally unstable release. Others said.. other various things.

 So, what do you think?

No, I don't think it is a good idea. Kernels should provide the minimum
layer needed to securely and efficiently implement solutions in user
space. The DRI has a kernel component to access the graphics
hardware. The rest of OpenGL is in user space.

There are lots of advantages to doing it this way:
  1) The kernel remains small. No wasted memory. Fewer security
 problems.

  2) You can layer different graphics systems on top of the same
 kernel interface. (For example, the Xv guys wanting to use it.)

  3) It's easier to change, debug, etc.

There's essentially no advantage to having X or OpenGL in the
kernel. Do you really need 3D during boot? I'd say no. It can wait until
you mount a file system. If you want to get graphics running earlier in
the boot sequence, go right ahead and work on that. 

- |Daryll





Re: [Dri-devel] FW: [BUG] Linux-2.4.12 does not build (Sparc-64 DRM)

2001-10-22 Thread Brian Paul


Jeff's in the process of moving from Colorado to Oklahoma.  I'm sure
he'll tend to this when he gets settled in.

-Brian

--- Leif Sawyer [EMAIL PROTECTED] wrote:
 Don't know if this will get through or not, but since Jeff doesn't seem
 to (want to?) respond directly, perhaps somebody on this list can take
 a look at this issue.
 
 
 -Original Message-
 From: David S. Miller [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, October 11, 2001 4:07 PM
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED];
 [EMAIL PROTECTED]
 Subject: Re: [BUG] Linux-2.4.12 does not build (Sparc-64  DRM)
 
 
From: Leif Sawyer [EMAIL PROTECTED]
Date: Thu, 11 Oct 2001 15:52:01 -0800
 
Just a quick bug report -- I haven't had time
to track this one down yet.

Enabling DRM/DRI support on a Sparc64 kernel
with Creator/Creator3D graphics does not build
correctly:

 I've tried to contact the DRM folks (specifically Jeff Hartman) on
 many occasions (at least 3 times) about the fact that using
 virt_to_bus/bus_to_virt generically in the DRM broke the build on
 several platforms.
 
 As stated often, virt_to_bus/bus_to_virt are deprecated interfaces.
 Yet they are used explicitly in the debugging macros.
 
 Not only has it not been fixed, all of my queries to Jeff have fallen
 on deaf ears and I get no response whatsoever.
 
 Franks a lot,
 David S. Miller
 [EMAIL PROTECTED]
 






RE: [Dri-devel] my X-Kernel question

2001-10-22 Thread Sottek, Matthew J

I'm really concerned about your answer. There was a whole thread
on the linux-kernel mailing list about the hypothesis of the
release of an X-Kernel, a kernel which would include built-in
desktop support. Most people answered no, this would be
ridiculous; others said yes, but hardware manufacturers are
too unhelpful, so this would be a totally unstable
release. Others said.. other various things.

 So, what do you think?

Linux is badly in need of some sort of sane kernel graphics
architecture, but certainly the answer is not a kernel version of
X. In order to do a good driver model you need both a kernel API
and a client API. The client API is implemented via a library that
is flexible enough to handle differing kernel APIs. (This is how
libGL works.) The client API, OpenGL, is the same for everyone, but
the kernel-library API can be hardware-dependent. As Daryll said,
the kernel driver should provide the leanest possible interface to
the hardware; the library should then smooth out the hardware
differences into a common API.
  So putting the X API in the kernel isn't a good idea, just as
putting the OpenGL API in the kernel isn't a good idea. Daryll
said as much here:

No, I don't think it is a good idea. Kernels should provide
the minimum layer needed to securely and efficiently implement
solutions in user space. The DRI has a kernel component to
access the graphics hardware. The rest of OpenGL is in user
space.

I do want to argue that the kernel has another role just as important
as security: resource allocation. Video resource allocation is handled
via the DRM with locking, but there is no kernel-level resource
allocation for video memory, modes, etc.
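
For the record, this is roughly what that existing DRM locking looks like
from the user-space side, sketched with libdrm's wrappers; the driver name
"mach64" is only an example, and error handling is trimmed.

/* Take the single hardware lock around direct register/DMA access,
 * the way DRI clients do. All calls are libdrm wrappers. */
#include <stdio.h>
#include <xf86drm.h>

int touch_hardware(void)
{
    drm_context_t ctx;
    int fd = drmOpen("mach64", NULL);   /* example driver name */

    if (fd < 0) {
        fprintf(stderr, "drmOpen failed\n");
        return -1;
    }
    if (drmCreateContext(fd, &ctx) != 0) {
        drmClose(fd);
        return -1;
    }
    drmGetLock(fd, ctx, 0);  /* only one context owns the hardware */
    /* ... program registers or queue DMA here ... */
    drmUnlock(fd, ctx);
    drmDestroyContext(fd, ctx);
    drmClose(fd);
    return 0;
}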

I really think that the concept of the framebuffer (the concept, not the
implementation) and the concept of the DRM need to be combined, such
that we have the following:

#1 A kernel API for mode setting, mmapping of the framebuffer, and
video memory management.

#2 A kernel API for only the most basic drawing, i.e. blit and
data copy.

#3 A framework to allow the implementation of the other
hardware-specific functions - basically the DRM - so that higher-level
interfaces (Mesa and X) can use them. (A hypothetical sketch of #1 and
#2 follows below.)
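
For illustration only, here is one way #1 and #2 could be expressed as a
device-independent ioctl interface. Every name below (the kgfx_* structs,
the 'G' ioctl numbers) is invented for this sketch; no such API exists.

/* Hypothetical kernel graphics interface covering #1 and #2 above. */
#include <linux/ioctl.h>

struct kgfx_mode {                   /* #1: mode setting */
    unsigned int xres, yres;
    unsigned int bpp;
    unsigned int refresh;            /* Hz */
};

struct kgfx_alloc {                  /* #1: video memory management */
    unsigned int size;               /* in:  bytes requested */
    unsigned int handle;             /* out: buffer handle */
    unsigned int offset;             /* out: mmap offset into aperture */
};

struct kgfx_blit {                   /* #2: most basic drawing */
    unsigned int src_handle, dst_handle;
    unsigned int src_x, src_y, dst_x, dst_y;
    unsigned int width, height;
};

#define KGFX_SET_MODE _IOW('G', 0x01, struct kgfx_mode)
#define KGFX_ALLOC    _IOWR('G', 0x02, struct kgfx_alloc)
#define KGFX_BLIT     _IOW('G', 0x03, struct kgfx_blit)

/* #3 would remain device-specific, exactly as the DRM is today. */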

Daryll wrote:
1) The kernel remains small. No wasted memory. Fewer security
 problems. 
2) You can layer different graphics systems on top of the same
 kernel interface. (For example, the Xv guys wanting to use it.)
3) It's easier to change, debug, etc.

Allowing resource management (via a common API) and drawing
(via a device-specific API) makes all 3 of these things better than
they are today.
 1) The kernel remains small. Only a little code is added, since a lot
of people have the DRM and framebuffer already, and the added size is
as small as possible. Security is much improved: having a huge setuid
root binary that accepts remote connections is not a good security
model. XFree is pretty good about keeping security tight, but the
model is broken from the beginning.

2) You cannot layer anything on top of what we have today. You have
to totally reimplement a 2D driver with complete mode setting,
drawing and memory management; only then can you play nice with the
3D interfaces in the DRM. If hardware-specific drawing APIs were
in the kernel then everyone could layer on top of them: X, Mesa,
and any new graphics library, all without reimplementing the basics.

3) It is easier for everyone writing graphics applications if they
don't have to debug drivers. Having drivers in 3 places already
(framebuffer, DRM, XFree) plus any other upcoming APIs isn't
helping.

There's essentially no advantage to having X or OpenGL in the
kernel. Do you really need 3D during boot? I'd say no. It can
wait until you mount a file system. If you want to get graphics
running earlier in the boot sequence, go right ahead and work
on that. 

Most of the replies have been addressing why putting X in the
kernel is a bad idea without addressing the real (unstated)
problem. Linux doesn't have a graphics architecture that handles
the basic needs that should be provided by a kernel. As a result
the basics get reimplemented in incompatible ways every time
someone tries something new.

 In my opinion the DRM should become _the_ interface for graphics
on Linux (and other kernels). The kernel should use DRM interfaces
for console drawing, and user libraries should only access the
device through the DRM.

 -Matt




Re: [Dri-devel] Mach64: mach64-0-0-2-branch created and updated

2001-10-22 Thread Manuel Teira

On Monday 22 October 2001 17:52, Malte Cornils wrote:
 Manuel Teira wrote:
  If you find any problem compiling the new branch, please let me know.

 OK, let me see. With regards to that libXau problem: it's sufficient to
 just copy /usr/X11R6/lib to /usr/X11R6-DRI/lib; the rest of the tree
 isn't necessary. Otherwise, I followed the DRI compilation guide under
 Documentation.

OK, this is just an issue derived from the trimming of the DRI trunk, I hope.

 The build (or rather, the make install) failed until I removed tdfx
 from line 821 in file
 X11R6-DRI/build/xc/lib/GL/mesa/src/drv/Makefile.

Did you get errors related to the Glide library?
Perhaps you should comment out the line:
#define HasGlide3 YES
in the host.def file.
Or perhaps it would be good to comment it out in our mach64 branch.


 The instructions for making the nls stuff seem to be outdated, since
 there no longer is any xc/nls in CVS.

 Adding /usr/X11R6-DRI/lib to ld.so.conf doesn't help for libGL and
 libGLU, since those already exist from any previous X installation in
 /usr/lib, and /usr/lib is implicitly given preference over anything
 from ld.so.conf. I had to move the old ones away and symlink/copy over
 the new ones.
What I did for the tests was:
export LD_PRELOAD=/usr/X11R6-DRI/lib/libGL.so

or

export LD_LIBRARY_PATH=/usr/X11R6-DRI/lib



 Unfortunately, I have a PCI Mach64; modprobe mach64 failed without a
 helpful error message since agpgart wasn't loaded into the kernel.
 After modprobing agpgart and then mach64 (that last one is probably
 also handled automagically at X startup), glxinfo reported direct
 rendering enabled. And it was; small differences in the display of
 3D apps showed that. However, performance was about as slow as
 software rendering; at least for gltron, I got about the same
 average fps as with software Mesa.

 That is probably because my card is not an AGP variant (although my
 mainboard does have a - currently empty - AGP slot).

I don't know. We are not using any AGP features just now. What processor
does your computer have? I'm getting about 215-220 fps in hardware mode and
no more than 100 (not exactly) in software mode.

 That's about it - I tested 3D with gears, gltron and blender, and all
 worked with a few glitches (not important right now).

 So, I hope you'll find my report useful. It certainly was fun for
 me, believe it or not.

Thank you for your report.






Re: [Dri-devel] Mach64: mach64-0-0-2-branch created and updated

2001-10-22 Thread Malte Cornils

Manuel Teira wrote:
 Did you get errors related to the Glide library?
 Perhaps you should comment out the line:
 #define HasGlide3 YES
 in the host.def file.
 Or perhaps it would be good to comment it out in our mach64 branch.

Oops, that's likely the problem. I got so used to configure-like
scripts determining what I have installed that I just skipped the
Glide stuff in host.def. This might actually help, yes. :-)

 What I did for the tests was:
 export LD_PRELOAD=/usr/X11R6-DRI/lib/libGL.so

OK, sure, that'll work. 

  That is probably because my card is not an AGP variant (although my
  mainboard does have a - currently empty - AGP slot).
 
 I don't know. We are not using any AGP features just now. What processor
 does your computer have? I'm getting about 215-220 fps in hardware mode
 and no more than 100 (not exactly) in software mode.

Ah, this is gears fps now, not gltron, right? OK, gears does 160 fps in
software on my Duron 800, while with Mach64 acceleration it does 260.
gltron does 5-15 on mach64 and 5-15 on plain Mesa too, although it
subjectively seems to be a bit jerkier. With the old Utah code I got
more (at least 20 fps, but on a K6-2 333), but that can wait. I'm more
concerned about glxgears: in software mode, it shows the three gears
moving; in hardware mode, it just shows a huge close-up of the red one
moving. Strange, since gltron looks almost equivalent under both modes,
with hardware having a bit better texture filtering IMHO. BTW, why does
mach64 module insertion fail when agpgart isn't loaded, if it doesn't
use any AGP features?

HTH, Yours Malte #8-)

PS: no need to Cc me, I'm on this list.




Re: [Dri-devel] Mach64: mach64-0-0-2-branch created and updated

2001-10-22 Thread Carl Busjahn

Hello,
I've done some testing on my machine with the CVS branch that Manuel put
together so well, and I find that it works great. I get about 27 fps in
gltron, but this is a K6 550MHz. Quake 3 was even nearly playable in
548x380 (or whatever that mode is). I just saw now that you are using
a PCI card. Perhaps I can get this code to my friend with an iMac :-)
(anyone know where I can get a PPC cross compiler?)

The Mach64 driver calls for agpgart, which is why it's failing. I
suppose you could take out that call for machines with PCI cards, but
agpgart won't mess up machines even without AGP chipsets. What kind of
speed do you get with Pulsar? You might also want to look at the CPU
utilization. My frame rate in Pulsar runs from 45-90 fps in a window at
1024x768, but CPU utilization seems to be greater at the slower frame
rates. When I use the -root option it runs at about 35 fps (again at
1024x768), though when using the -texture option it's pretty solid at
45 fps with the CPU at about 50%. Without the -fps option it seems much
smoother.

By the way, in comparison, my setup doesn't get over 200 fps in
glxgears, and the CPU is at 100%.




RE: [Dri-devel] my X-Kernel question

2001-10-22 Thread Derrik Pates

On Mon, 22 Oct 2001, Sottek, Matthew J wrote:

 The basic idea in the framebuffer is fine, but the implementation
 isn't very good. It has grown out of console functions rather than
 being designed from a graphics driver perspective.

Not to burst anyone's bubble here, guys, but there are shades of GGI going
on. Do you really want to dredge up all this debate, covering (pretty much)
the same points that were addressed then? I'm not saying your ideas are bad,
and I'm not passing judgment on the ideas behind either what you're
discussing here or GGI, but the fact is this has all been hashed out before,
and it went nowhere, since Linus wouldn't permit the kernel-side code in the
mainline Linux kernel anyway.

Derrik Pates  |   Sysadmin, Douglas School   |#linuxOS on EFnet
[EMAIL PROTECTED] | District (dsdk12.net)|#linuxOS on OPN

