Re: [ANNOUNCE] xf86-video-intel 2.5.99.2

2009-01-16 Thread Tino Keitel
On Fri, Jan 16, 2009 at 09:56:19 +0200, Vasily Khoruzhick wrote:
 On Friday 16 January 2009 05:20:17 Giovanni Masucci wrote:
 
  If I can ask, are these 6 patches going to enter the next 2.6.28.x
  releases, or will they just be in 2.6.29?
 
 Just out of curiosity, has anybody got this driver working stably and fast
 on a GMA950 with the 2.6.28 kernel (with these 6 patches)?
 
 I've just tried xf86-video-intel 2.6.0, xorg-server-1.5.99.901 and
 mesa-7.3_rc2, and still got artefacts with UXA (same as on
 http://fenix-fen.at.tut.by/screen-3.png), and the xserver hangs (with no way
 to stop it except restarting the whole system) after using 3D for ~2-3 mins
 (with wine even faster :))

I used mesa from the intel-2008-q4 branch, Xserver 1.5.3, the drm-intel
2.6.28 kernel and a libdrm from git somewhere after 2.4.3.

I have slow 3D which looks like software rendering in googlemaps,
quake3 and neverball, and glxgears also doesn't look that smooth.  But
glxinfo reports direct rendering.
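
A quick way to tell whether Mesa has actually fallen back to software
rendering, even though direct rendering is reported, is to check the
renderer string (assuming a stock glxinfo):

    glxinfo | grep "direct rendering"
    glxinfo | grep "OpenGL renderer"

If the renderer line names a software rasterizer instead of the Intel
chipset, the slow 3D above is software rendering after all.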

 With EXA (and DRI1) I got a message like No MTRR for 0xc000 in dmesg every

I didn't see that with the drm-intel kernel.

 time the xserver starts, and 3D performance is terrible (7-10fps in Quake3)

glxgears shows a black screen with some sporadic coloured artifacts. I
tried to use other 3D apps with EXA (neverball, quake3) but they froze
the Xserver.

Regards,
Tino


Bug in interaction between freeglut and mesa-7.3-rc2

2009-01-16 Thread Florian Echtler
Hello everyone,

I've found a bug in the interaction between freeglut and mesa. This
applies to mesa 7.2 and 7.3-rc2 and occurs both with freeglut-current
and freeglut-2.6.0-rc1. I'm using the radeon driver on a Radeon Mobility
X1400.

The bug causes a segfault in libGL.so, though even with debug info
enabled, I can't see the exact location. However, I can work around the
bug when, in freeglut, I replace 

glXMakeContextCurrent(
    fgDisplay.Display,
    window->Window.Handle,
    window->Window.Handle,
    window->Window.Context
);

with 

glXMakeCurrent(
    fgDisplay.Display,
    window->Window.Handle,
    window->Window.Context
);

According to the GLX spec, these two calls should be equivalent.
However, the first one causes a segfault, the second one doesn't. So I
suppose this is really a bug in mesa somewhere. Is this information
sufficient to fix it, or do you need anything else?
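(For reference, a minimal sketch of the runtime check freeglut could do
instead; this is illustrative pseudo-freeglut, not actual freeglut code:

    int glx_major = 0, glx_minor = 0;
    glXQueryVersion(fgDisplay.Display, &glx_major, &glx_minor);
    if (glx_major > 1 || (glx_major == 1 && glx_minor >= 3))
        glXMakeContextCurrent(fgDisplay.Display,
                              window->Window.Handle,
                              window->Window.Handle,
                              window->Window.Context);
    else
        glXMakeCurrent(fgDisplay.Display,
                       window->Window.Handle,
                       window->Window.Context);

i.e. only take the GLX 1.3 entry point when the server actually offers
GLX 1.3.)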

Thanks, Yours,
Florian
-- 
0666 - Filemode of the Beast



[Fwd: Dual Monitor problem]

2009-01-16 Thread Ferenc Vajda

Hi,

I don't know if I am in the right place. I need some help setting up my dual
monitor system, which does not really work on my computer.

I have spent several hours trying to set up my notebook to use an
additional external monitor, but only mirroring works (and only at
1280x1024). What I need is two separate desktops where I can move the cursor
and windows between the two screens (a large desktop would also be
acceptable, but two separate ones would be better).

Parameters:
  HP Compaq 6710b -- video: Intel GMA X3100 (resolution 1680x1050)
+ external 1280x1024 LCD (Belinea ...)
  Linux: Fedora 9

Experiments with xorg.conf (the original and current xorg.conf are attached
below)
  - added to ServerLayout
Screen  0  "Laptop Screen" 0 0
Screen  1  "External Screen" RightOf "Laptop Screen"
  - created devices
  Section "Device"
      Identifier  "Primary Videocard"
      BusID       "PCI:0:2:0"
      Driver      "intel"
      Option      "MonitorLayout" "LVDS,CRT"
      Option      "MetaModes" "1680x1050-1280x1024"
      Screen      0
      Option      "MergedFB" "off"
  EndSection

  Section "Device"
      Identifier  "Secondary Videocard"
      BusID       "PCI:0:2:0"
      Driver      "vesa"
      Screen      1
      Option      "MergedFB" "off"
  EndSection
  - if I used "intel" for the Secondary Videocard, it did not work.
  - if I changed the BusID of the Secondary card, nothing happened.
  - I also tried to use the same Device for the two screens; that also mirrored.
  - I defined two monitors
  Section "Monitor"
      Identifier "Monitor 1"
  EndSection
  Section "Monitor"
      Identifier "Monitor 2"
  EndSection
  - and screens
  Section "Screen"
      Identifier "Laptop Screen"
      Device     "Primary Videocard"
      Monitor    "Monitor 1"
      DefaultDepth 24
      SubSection "Display"
          Viewport 0 0
          Depth    24
          Modes    "1680x1050@60"
      EndSubSection
  EndSection
  Section "Screen"
      Identifier "External Screen"
      Device     "Secondary Videocard"
      Monitor    "Monitor 2"
      DefaultDepth 24
      SubSection "Display"
          Viewport 0 0
          Depth    24
          Modes    "1280x1024"
      EndSubSection
  EndSection
  - I tried to modify the modes (and device MetaModes) to 1024x768 for
both. As a result, the screens were mirrored at 1280x1024 (as if I had not
done anything). (Note: if I write nonsense into xorg.conf, X does not start,
so the file is being parsed.)
  - the following was also added
  Section "DRI"
      Mode 0666
  EndSection

The default xorg.conf and my modified one
---
# Xorg configuration created by pyxf86config

Section "ServerLayout"
    Identifier  "Default Layout"
    Screen  0   "Screen0" 0 0
    InputDevice "Keyboard0" "CoreKeyboard"
EndSection

Section "InputDevice"
    # keyboard added by rhpxl
    Identifier "Keyboard0"
    Driver     "kbd"
    Option     "XkbModel" "pc105"
    Option     "XkbLayout" "hu"
EndSection

Section "Device"
    Identifier "Videocard0"
    Driver     "intel"
EndSection

Section "Screen"
    Identifier   "Screen0"
    Device       "Videocard0"
    DefaultDepth 24
    SubSection "Display"
        Viewport 0 0
        Depth    24
    EndSubSection
EndSection
---
# Modified to dual screen

Section "ServerLayout"
    Identifier  "Default Layout"
    Screen  0   "Laptop Screen" 0 0
    Screen  1   "External Screen" RightOf "Laptop Screen"
    InputDevice "Keyboard0" "CoreKeyboard"
EndSection

Section "InputDevice"
    # keyboard added by rhpxl
    Identifier "Keyboard0"
    Driver     "kbd"
    Option     "XkbModel" "pc105"
    Option     "XkbLayout" "hu"
EndSection

Section "Device"
    Identifier  "Primary Videocard"
    BusID       "PCI:0:2:0"
    Driver      "intel"
    Option      "MonitorLayout" "LVDS,CRT"
    Option      "MetaModes" "1680x1050-1280x1024"
    Screen      0
    Option      "MergedFB" "off"
EndSection

Section "Device"
    Identifier  "Secondary Videocard"
    BusID       "PCI:0:2:0"
    Driver      "vesa"
    Screen      1
    Option      "MergedFB" "off"
EndSection

Section "Monitor"
    Identifier "Monitor 1"
#   Option "DPMS"
EndSection

Section "Monitor"
    Identifier "Monitor 2"
#   Option "DPMS"
EndSection

Section "Screen"
    Identifier "Laptop Screen"
    Device     "Primary Videocard"
    Monitor    "Monitor 1"
    DefaultDepth 24
    SubSection "Display"
        Viewport 0 0
        Depth    24
        Modes    "1680x1050@60"
    EndSubSection
EndSection

Section "Screen"
    Identifier "External Screen"
    Device     "Secondary Videocard"
    Monitor    "Monitor 2"
    DefaultDepth 24
    SubSection "Display"
        Viewport 0 0
        Depth    24
        Modes    "1280x1024"
    EndSubSection
EndSection

Section "DRI"
    Mode 0666
EndSection
---

If I set screen 

Re: Bug in interaction between freeglut and mesa-7.3-rc2

2009-01-16 Thread Florian Echtler
  According to the GLX spec, these two calls should be equivalent.
  However, the first one causes a segfault, the second one doesn't. So I
  suppose this is really a bug in mesa somewhere.
 No. The former is only supported as of GLX 1.3, but your setup only
 supports GLX 1.2.
I'm sorry, I just saw that I already cross-posted this to xorg once (last
fall). I forgot about the GLX 1.2/1.3 difference, but why does this function
exist & segfault then? Shouldn't it rather return an error when GLX 1.3 is
unsupported?

Yours, Florian
-- 
0666 - Filemode of the Beast



Re: Bug in interaction between freeglut and mesa-7.3-rc2

2009-01-16 Thread Michel Dänzer
On Fri, 2009-01-16 at 14:57 +0100, Florian Echtler wrote:
   According to the GLX spec, these two calls should be equivalent.
   However, the first one causes a segfault, the second one doesn't. So I
   suppose this is really a bug in mesa somewhere.
  No. The former is only supported as of GLX 1.3, but your setup only
  supports GLX 1.2.
 I'm sorry, I just saw that I already cross-posted this to xorg once (last
 fall). I forgot about the GLX 1.2/1.3 difference, but why does this
 function exist & segfault then?

It exists because the same code could support GLX 1.3 with different
drivers.

 Shouldn't it rather return an error when GLX 1.3 is unsupported? 

I'm not sure about that, but in general I think the result of trying to
use unsupported GL(X) functionality is undefined. It could do nothing,
or it could eat your kitten, or...


To me, the real question here is why such a glaring bug in freeglut
hasn't been fixed in such a long time.


-- 
Earthling Michel Dänzer   |http://www.vmware.com
Libre software enthusiast |  Debian, X and DRI developer

RE: [Intel-gfx] [ANNOUNCE] xf86-video-intel 2.6.0

2009-01-16 Thread Cliff Lawson
Gordon,

As i915.ko is part of the kernel development tree now (rather than being
built out of the libdrm component), can you confirm whether kernel
2.6.28-rc8 is a late enough version to work with this release, or should a
later kernel version be used?

Cliff Lawson

-Original Message-
From: xorg-boun...@lists.freedesktop.org
[mailto:xorg-boun...@lists.freedesktop.org] On Behalf Of Jin, Gordon
Sent: 15 January 2009 09:09

I've put the related component info and known bugs at
http://intellinuxgraphics.org/2008Q4.html.

Gordon



Re: [ANNOUNCE] xf86-video-intel 2.5.99.2

2009-01-16 Thread Keith Packard
On Fri, 2009-01-16 at 09:56 +0200, Vasily Khoruzhick wrote:

 I've just tried xf86-video-intel 2.6.0, xorg-server-1.5.99.901 and
 mesa-7.3_rc2, and still got artefacts with UXA (same as on
 http://fenix-fen.at.tut.by/screen-3.png), and the xserver hangs (with no way
 to stop it except restarting the whole system) after using 3D for ~2-3 mins
 (with wine even faster :))

Eric and I found some rather significant 915/945 X/3D interaction
problems yesterday that were leading to hardware lockups. He's hoping to
finish up some fixes for that today.

The core issue was that with DRI2, there isn't any hardware lock that
the X server holds between batch buffers, so the 2D drawing code needs
to be more careful about keeping setup code and the associated rendering
code in the same batch buffer.

-- 
keith.pack...@intel.com



Re: Fedora 10: Trouble installing libdrm (2.4.4) for latest Intel driver (2.6)

2009-01-16 Thread Dan Nicholson
On Fri, Jan 16, 2009 at 8:10 AM, Joe Smith stop...@yahoo.com wrote:
 I'm having trouble installing the new Intel driver at
 http://intellinuxgraphics.org/2008Q4.html.
 This is xf86-video-intel 2.6.0 for an Intel GME965 chipset.
 I'm running Fedora 10, 2.6.27.9-159.fc10.i686.

 I think it all stems from an incomplete libdrm installation, due to a
 missing build directory:

 In libdrm-2.4.4 directory:
 # more README

 By default, libdrm and the DRM header files will install into
 /usr/local/.
 If you want to install this DRM to replace your system copy, say:
 ./configure --prefix=/usr --exec-prefix=/
 Then,
 make install

 To build the device-specific kernel modules:
 cd linux-core/
 make
 cp *.ko /lib/modules/VERSION/kernel/drivers/char/drm/
(where VERSION is your kernel version: uname -r)
 Or,
 cd bsd-core/
 make
 copy the kernel modules to the appropriate place


 I did part one and everything completed normally.  Actually, I added make to
 the process because that's the usual order of operations.
 ./configure --prefix=/usr --exec-prefix=/
 make
 make install

 However, I can't do part 2 because there is no directory called linux-core.
 I did see a directory called shared-core. Here's the listing:

The libdrm tarball doesn't contain the kernel modules. That README is
really intended for a checkout of the drm git repository, which contains
both the library and the kernel modules. It may be too late now, but you
might not want to replace your system libdrm. You can easily stuff it in
/usr/local or /opt or $HOME.
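
For the full tree with the kernel modules, clone the drm repository
instead; something like this (repo URL from memory, and the $HOME/gfx
prefix is only an example):

    git clone git://anongit.freedesktop.org/git/mesa/drm
    cd drm
    ./autogen.sh --prefix=$HOME/gfx
    make && make install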

 I rebooted anyway (who knows?) and don't see the new version:

 # dmesg | grep drm
 [drm] Initialized drm 1.1.0 20060810
 [drm] Initialized i915 1.6.0 20080730 on minor 0

If you follow the intel guide, you should install a newer kernel which
has the updated drm modules.

 But I pushed ahead anyway, and tried installing the Intel driver:

 In xf86-video-intel-2.6.0 directory:
 # ./configure
 .
 checking for DRM... configure: error: Package requirements (libdrm >= 2.4.3)
 were not met:
 Requested 'libdrm >= 2.4.3' but version of libdrm is 2.4.0

The important thing is the pkgconfig file. You need to update the
PKG_CONFIG_PATH environment variable to point to the directory where your
freshly installed libdrm.pc lives. Intel has a guide for building, but it
has some misinformation. You might want to see the Xorg wiki, which is more
in depth. Here are both pages.

http://intellinuxgraphics.org/install.html
http://wiki.x.org/wiki/Development/git

The bit about building libdrm with --prefix=/usr --exec-prefix=/ is not
correct. This will put the libraries and pkgconfig file in /lib and
/lib/pkgconfig. pkg-config does not look there by default, only in
/usr/lib/pkgconfig. But, again, it would probably be better to just install
the components outside of the system directories.
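
For example, with libdrm installed under /opt/gfx (path purely an
example), something like:

    export PKG_CONFIG_PATH=/opt/gfx/lib/pkgconfig:$PKG_CONFIG_PATH
    export LD_LIBRARY_PATH=/opt/gfx/lib:$LD_LIBRARY_PATH
    pkg-config --modversion libdrm    # should now report 2.4.4

makes both the configure check and the runtime linker find the new
library.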

--
Dan


compiling COMPIZ from GIT fails

2009-01-16 Thread Florian Lier

Hello everybody,

my new X from git runs great (XI2 too) ... today I wanted to give Sam
Spilsbury's compiz patch a try. AFAIK this means compiling a compiz
instance from git into my xserver directory.


The problem is that everything compiles except for compiz :(
I'm using a small customized script to compile the Xserver (see attachment).

This is where the compile process stops:

autoreconf: Entering directory `.'
autoreconf: configure.ac: not using Gettext
autoreconf: running: aclocal -I/home/fl0/mpxcompiz/share/aclocal
configure.ac:197: warning: macro `AM_GCONF_SOURCE_2' not found in library
autoreconf: configure.ac: tracing
autoreconf: running: libtoolize --install --copy
libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.ac and
libtoolize: rerunning libtoolize, to keep the correct libtool macros 
in-tree.

libtoolize: Consider adding `-I m4' to ACLOCAL_AMFLAGS in Makefile.am.
configure.ac:197: warning: macro `AM_GCONF_SOURCE_2' not found in library
autoreconf: running: /usr/bin/autoconf
autoreconf: running: /usr/bin/autoheader
autoreconf: running: automake --add-missing --copy --no-force
configure.ac:21: installing `./missing'
gtk/gnome/Makefile.am: installing `./depcomp'
kde/window-decorator-kde4/Makefile.am:41: `%'-style pattern rules are a 
GNU make extension
kde/window-decorator/Makefile.am:33: `%'-style pattern rules are a GNU 
make extension
kde/window-decorator/Makefile.am:36: `%'-style pattern rules are a GNU 
make extension
kde/window-decorator/Makefile.am:39: `%'-style pattern rules are a GNU 
make extension
metadata/Makefile.am:82: GCONF_SCHEMAS_INSTALL does not appear in 
AM_CONDITIONAL
metadata/Makefile.am:45: patsubst 
%.xml.in,compiz-%.schemas,$(xml_in_files: non-POSIX variable name

metadata/Makefile.am:45: (probably a GNU make extension)
metadata/Makefile.am:48: `%'-style pattern rules are a GNU make extension
metadata/Makefile.am:58: patsubst %.xml.in,compiz-%.kcfg,$(xml_in_files: 
non-POSIX variable name

metadata/Makefile.am:58: (probably a GNU make extension)
metadata/Makefile.am:62: `%'-style pattern rules are a GNU make extension
metadata/Makefile.am:63: subst compiz-,,$*: non-POSIX variable name
metadata/Makefile.am:63: (probably a GNU make extension)
autoreconf: automake failed with exit status: 1

I can't figure out what the problem is.
Maybe someone can help me; maybe it's a problem with the script?

P.S.: I'm running Ubuntu 8.10 (graphics card: Nvidia 8800 GT)


thx in advance, cheers Florian


[attachment: xscript.sh (Bourne shell script)]


Re: Proper way to enable port access tracing with current xserver

2009-01-16 Thread Alex Deucher
On Fri, Jan 16, 2009 at 11:40 AM, Alex Villacís Lasso
a_villa...@palosanto.com wrote:
 Alex Deucher wrote:
 On Thu, Jan 15, 2009 at 4:53 PM, Alex Villacís Lasso
 a_villa...@palosanto.com wrote:

 Alex Deucher wrote:

 On Thu, Jan 15, 2009 at 3:10 PM, Alex Villacís Lasso
 a_villa...@palosanto.com wrote:


 I am trying to enable I/O port tracing on current xserver head on my home
 machine (Linux 2.6.28 on x86 Pentium 4, 32-bit, ProSavageDDR-K as primary
 card, Oak OTI64111 as secondary card) in order to learn about the register
 initialization done by the video BIOS of both the Savage and the Oak
 chipsets:

 * For savage, I want to eventually see the POST port accesses as they occur
 in VESA, so that the current driver can do the same port enabling in the
 case of a Savage as secondary card. Currently, the xorg driver can
 initialize a secondary savage without BIOS (but see below for a caveat),
 but the colors are washed out and horrible artifacts appear on any attempt
 to accelerate operations. The same issue happens with the savagefb kernel
 framebuffer driver.
 * For oak, I want to peek at the register initialization for mode switching
 in VESA, in order to gain a better understanding towards writing a driver
 for the chipset.


 http://people.freedesktop.org/~airlied/xresprobe-mjg59-0.4.21.tar.gz

 This will dump io accesses when you execute bios code using the
 included x86 emulator.

 Alex



  From a quick skim over the contents of the file, I see an x86emu
 directory. I think I have seen a directory with that name in the xserver
 sources. Is it safe to switch to x86emu on 32-bit x86 in the xserver
 source? Or do I have to keep some special consideration in mind?


 We already do.  The xserver uses x86emu by default now on x86.

 Alex


 That is a bit weird. I had to explicitly enable x86emu with a configure
 switch before I could get an actual port trace. Maybe I should force
 vm86 back at home and see what happens. Why was this change made? I would
 have thought only non-PC architectures and x86_64 would need this. Why
 also on i386?

http://lists.freedesktop.org/archives/xorg-commit/2008-December/019092.html


  From what I glean from the traces, it seems that using VESA to start up
 the primary Savage chipset works correctly. However, when trying to
 initialize the Oak chipset as secondary (just that one, without
 reference to the primary Savage chipset), it ends up in a loop of
 in(3da) = ff and hangs. Interestingly, I saw no hint that the Savage
 chipset was ever moved out of the legacy VGA mapping in order to
 initialize the Oak chipset via POST. Which ties back to my previous
 question: what measures (if any) are supposed to be taken by the xserver
 in order to hand over the legacy VGA ports to a secondary chipset that
 needs access to them for POST, when run with a different chipset as
 primary? As in, Savage is mapped to legacy VGA, I want to POST the Oak
 chipset, which needs a mapping to the VGA ports too, so what should the
 xserver do?

The bridge chipset needs to route VGA to the proper card.  Pre-1.5
xservers used to handle this; libpciaccess does not yet, AFAIK.

Alex


Re: [PATCH 0/4] Cursor's update inside kernel only

2009-01-16 Thread Jesse Barnes
On Monday, January 5, 2009 12:55 pm Tiago Vignatti wrote:
 Right now a thing that is annoying me is how other cursors, sw rendered,
 could be implemented. I want to avoid two different sets of the same code
 in different contexts. IMHO conceptually all these cursor update things
 must be in-kernel. Painting a cursor image seems to be quite hard as we
 have to save/restore areas under the cursor. I remember that anholt had an
 idea concerning this, but I do not remember details.

I really like the idea of having this in the kernel for latency reasons, but
yeah, we do need to solve the sw case as well as implementing some
acceleration code.  OTOH it might be reasonable to push the problem of
multiple, large, and/or funky format cursors out to userspace, since those
are relatively uncommon (64x64 32-bit ARGB ought to be enough for everybody,
right? :).

-- 
Jesse Barnes, Intel Open Source Technology Center


Re: Proper way to enable port access tracing with current xserver

2009-01-16 Thread Alex Deucher
On Fri, Jan 16, 2009 at 3:21 PM, Alex Villacís Lasso
a_villa...@palosanto.com wrote:
 Alex Deucher wrote:

  From what I glean from the traces, it seems that using VESA to start up
 the primary Savage chipset works correctly. However, when trying to
 initialize the Oak chipset as secondary (just that one, without
 reference to the primary Savage chipset), it ends up in a loop of
 in(3da) = ff and hangs. Interestingly, I saw no hint that the Savage
  chipset was ever moved out of the legacy VGA mapping in order to
 initialize the Oak chipset via POST. Which ties back to my previous
 question: what measures (if any) are supposed to be taken by the xserver
 in order to hand over the legacy VGA ports to a secondary chipset that
 needs access to them for POST, when run with a different chipset as
 primary? As in, Savage is mapped to legacy VGA, I want to POST the Oak
 chipset, which needs a mapping to the VGA ports too, so what should the
 xserver do?


 The bridge chipset needs to route VGA to the proper card.  Pre-1.5
 xservers used to handle this; libpciaccess does not yet, AFAIK.

 Alex


 Then, this is technically a regression.

 So, let's say I want to add this support back. Is this squarely a
 libpciaccess change, or do I have to place this in the xserver (since
 this is specifically a VGA requirement)? Is anyone currently working on
 this, by any chance (just to avoid duplicating work)? Any suggestions on
 the proper API to expose from libpciaccess? Are there any official docs
 on bridge chipsets, besides the actual source code of old xservers?

Ideally it would be done in the kernel to be used with libpciaccess.
Tiago and Paulo have been working on it:
http://lists.freedesktop.org/archives/xorg/2007-October/029507.html
I'm not sure what the current status is.  As for documents, your best
bet is probably hw reference guides from the vendors who make the PCI
chipsets (AMD/VIA/Intel/etc.).  Most are available.

Alex


Re: compiling COMPIZ from GIT fails

2009-01-16 Thread Christoph Berg
Hello Florian,

well this question has nothing to do with Xorg. You should have asked on the
compiz mailing list. But nevertheless...

--snip--
 configure.ac:197: warning: macro `AM_GCONF_SOURCE_2' not found in library
--snip--
 metadata/Makefile.am:82: GCONF_SCHEMAS_INSTALL does not appear in

These lines tell you that you are missing the m4 macros from the gconf
package. I guess installing gconf-devel, or whatever it is called on Ubuntu,
should solve your problem.
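
On Ubuntu that would be something along these lines (package name from
memory, so double-check it):

    sudo apt-get install libgconf2-dev
    # re-run autoreconf so aclocal can find the gconf m4 macros
    autoreconf -fi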

Fine regards,
  Christoph





Re: compiling COMPIZ from GIT fails

2009-01-16 Thread Florian Lier

Hey,

yes, you're right ... sorry.

I solved this problem, but some new ones came up; I'll ask on the compiz list.

cheers, flo

Christoph Berg wrote:

Hello Florian,

well this question has nothing to do with Xorg. You should have asked on the
compiz mailing list. But nevertheless...

--snip--
configure.ac:197: warning: macro `AM_GCONF_SOURCE_2' not found in library
--snip--
metadata/Makefile.am:82: GCONF_SCHEMAS_INSTALL does not appear in

These lines tell you that you are missing the m4 macros from the gconf
package. I guess installing gconf-devel, or whatever it is called on Ubuntu,
should solve your problem.


Fine regards,
  Christoph


  







Re: KMS with Xorg on G45 failing

2009-01-16 Thread Jesse Barnes
On Tuesday, January 6, 2009 5:41 pm Mike Lothian wrote:
 2009/1/7 Mike Lothian m...@fireburn.co.uk:
  2009/1/7 Jesse Barnes jbar...@virtuousgeek.org:
  On Tuesday, January 6, 2009 3:53 pm Mike Lothian wrote:
  2009/1/6 Jesse Barnes jbar...@virtuousgeek.org:
   On Tuesday, January 6, 2009 2:35 pm Mike Lothian wrote:
   Hi there
  
   Thought I'd send in some info about Xorg with a KMS kernel, libdrm
   and xf86-video-intel
  
   I've attached my dmesg and Xorg.0.log
  
    The only outputs are the LVDS 15" screen, a disconnected VGA out, and
    HDMI connectors
  
   I'll quite happily submit a bug but thought the code was a little
   too new to harp on about especially if this is a known issue
  
   Let me know if I can help further
  
   Looks like the 3D driver init failed... Maybe you need to rebuild
   your DRI driver as well against the same libdrm?
 
  I rebuild libdrm, mesa and xf86-video-intel each time; is there
  something else I'm missing?
 
  This one looks like a Mesa mismatch somehow, so try rebuilding that too.
 
  --
  Jesse Barnes, Intel Open Source Technology Center
 
  OK I've recompiled libdrm, mesa and xf86-video-intel using their master
  trees and recompiled the head drm-intel-next branch of the kernel; the
  versions I was using before were only a few hours old
 
  Good news: X starts. Bad news: the screen goes black once KDE starts
  to load; I'm pretty sure it's when the compositing is started
 
  Another strange thing happened: normally with KMS all my VTs are at
  the laptop's native resolution; now they're in the top left corner at
  what looks like 800x600, but not stretched
 
  Also, when booting from a non-KMS but regular GEM kernel, UXA doesn't
  work now. I've had to switch back to EXA
 
  I'm attaching my dmesg and Xorg log from the KMS retry
 
  Could any of the problems be caused by:
 
  [drm:i915_get_vblank_counter] *ERROR* trying to get vblank count for
  disabled pipe 0
 
 
  Cheers
 
  Mike

 Oh yes, and when X started it went back to the native resolution

The 800x600 but not stretched issue sounds like it could be related to the
initial_config bug I fixed this week.  Can you give Dave's latest drm-next
tree a try?  It fixes that bug (among others).  Also the 2D driver needs a
few fixes we're still working on.  The intel-...@lists.freedesktop.org list
is a good place to watch for those.


-- 
Jesse Barnes, Intel Open Source Technology Center


Re: Fedora 10: Trouble installing libdrm (2.4.4) for latest Intel driver (2.6)

2009-01-16 Thread Joe Smith
Dan,

Thanks for the info.  While I was waiting, I tried installing the latest libdrm 
rpm from RPM Fusion or Rawhide, not sure which.  It was tagged 2.4.3 fc11, but 
I have fc10.  I assumed (ha ha) this was ok because the package manager let me 
do it.  I know, a slippery slope.  Anyway, X didn't start after that, and I 
couldn't easily get it back.  So I'm reinstalling now.

So the question is, can I install rpms tagged for fc11 on fc10?  If so, then 
perhaps X crashed because I had partially installed libdrm 2.4.4 from source, 
and a conflict developed...?

In the meantime, I'll use the sites you provided.  But fc11 rpms would be 
easier/faster, if that's allowed!

Thanks.


--- On Fri, 1/16/09, Dan Nicholson dbn.li...@gmail.com wrote:

 From: Dan Nicholson dbn.li...@gmail.com
 Subject: Re: Fedora 10: Trouble installing libdrm (2.4.4) for latest Intel 
 driver (2.6)
 To: stop...@yahoo.com
 Cc: xorg@lists.freedesktop.org
 Date: Friday, January 16, 2009, 1:39 PM
 On Fri, Jan 16, 2009 at 8:10 AM, Joe Smith
 stop...@yahoo.com wrote:
  I'm having trouble installing the new Intel driver
 at
  http://intellinuxgraphics.org/2008Q4.html.
  This is xf86-video-intel 2.6.0 for an Intel GME965
 chipset.
  I'm running Fedora 10, 2.6.27.9-159.fc10.i686.
 
  I think it all stems from an incomplete libdrm
 installation, due to a
  missing build directory:
 
  In libdrm-2.4.4 directory:
  # more README
 
  By default, libdrm and the DRM header files will
 install into
  /usr/local/.
  If you want to install this DRM to replace your
 system copy, say:
  ./configure --prefix=/usr --exec-prefix=/
  Then,
  make install
 
  To build the device-specific kernel modules:
  cd linux-core/
  make
  cp *.ko
 /lib/modules/VERSION/kernel/drivers/char/drm/
 (where VERSION is your kernel version:
  uname -r)
  Or,
  cd bsd-core/
  make
  copy the kernel modules to the appropriate
 place
 
 
  I did part one and everything completed normally. 
 Actually, I added make to
  the process because that's the usual order of
 operations.
  ./configure --prefix=/usr --exec-prefix=/
  make
  make install
 
  However, I can't do part 2 because there is no
 directory called linux-core.
  I did see a directory called shared-core.  Here's
 the listing:
 
 The libdrm tarball doesn't contain the kernel modules.
 That README is
 more for the intention of checking out the drm git
 repository, which
 has both the library and kernel modules. It may be too late
 now, but
 you might not want to replace your system libdrm. You can
 easily stuff
 it in /usr/local or /opt or $HOME.
 
  I rebooted anyway (who knows?) and don't see the
 new version:
 
  # dmesg | grep drm
  [drm] Initialized drm 1.1.0 20060810
  [drm] Initialized i915 1.6.0 20080730 on minor 0
 
 If you follow the intel guide, you should install a newer
 kernel which
 has the updated drm modules.
 
  But I pushed ahead anyway, and tried installing the
 Intel driver:
 
  In xf86-video-intel-2.6.0 directory:
  # ./configure
  .
  checking for DRM... configure: error: Package requirements
  (libdrm >= 2.4.3) were not met:
  Requested 'libdrm >= 2.4.3' but version of libdrm is 2.4.0
 
 The important thing is the pkgconfig file. You need to
 update the
 PKG_CONFIG_PATH environment variable to point to the
 directory where
 your freshly installed libdrm.pc is installed. Intel has a
 guide for
 building, but it has some misinformation. You might want to
 see the
 Xorg wiki, which is more in depth. Here's both pages.
 
 http://intellinuxgraphics.org/install.html
 http://wiki.x.org/wiki/Development/git
 
  The bit about building libdrm with --prefix=/usr
 --exec-prefix=/ is not
 correct. This will put the libraries and pkgconfig file in
 /lib and
 /lib/pkgconfig. pkg-config does not look there by default,
 only
 /usr/lib/pkgconfig. But, again, it would probably be better
 to just
 install the components outside of the system directories.
 
 --
 Dan


  



Re: Proper way to enable port access tracing with current xserver

2009-01-16 Thread Alex Villacís Lasso
Alex Deucher wrote:
 On Fri, Jan 16, 2009 at 3:21 PM, Alex Villacís Lasso
 a_villa...@palosanto.com wrote:
 Alex Deucher wrote:
 
  From what I glean from the traces, it seems that using VESA to start up
 the primary Savage chipset works correctly. However, when trying to
 initialize the Oak chipset as secondary (just that one, without
 reference to the primary Savage chipset), it ends up in a loop of
 in(3da) = ff and hangs. Interestingly, I saw no hint that the Savage
  chipset was ever moved out of the legacy VGA mapping in order to
 initialize the Oak chipset via POST. Which ties back to my previous
 question: what measures (if any) are supposed to be taken by the xserver
 in order to hand over the legacy VGA ports to a secondary chipset that
 needs access to them for POST, when run with a different chipset as
 primary? As in, Savage is mapped to legacy VGA, I want to POST the Oak
 chipset, which needs a mapping to the VGA ports too, so what should the
 xserver do?

 
  The bridge chipset needs to route VGA to the proper card.  Pre-1.5
  xservers used to handle this; libpciaccess does not yet, AFAIK.

 Alex


   
  Then, this is technically a regression.

  So, let's say I want to add this support back. Is this squarely a
  libpciaccess change, or do I have to place this in the xserver (since
  this is specifically a VGA requirement)? Is anyone currently working on
  this, by any chance (just to avoid duplicating work)? Any suggestions on
  the proper API to expose from libpciaccess? Are there any official docs
  on bridge chipsets, besides the actual source code of old xservers?
 

 Ideally it would be done in the kernel to be used with libpciaccess.
 Tiago and Paulo have been working on it:
 http://lists.freedesktop.org/archives/xorg/2007-October/029507.html
 I'm not sure what the current status is.  As for documents, your best
 bet is probably hw reference guides from the vendors who make the PCI
 chipsets (AMD/VIA/Intel/etc.).  Most are available.

 Alex

   
I have collected a few patches. However, the referenced repositories are
now 404. Would it be acceptable to skip a kernel implementation for now
and instead just modify xserver/libpciaccess to provide a level of
functionality similar to pre-1.5?

-- 
perl -e '$x=2.4;print sprintf("%.0f + %.0f = %.0f\n",$x,$x,$x+$x);'



Re: [Fwd: Dual Monitor problem]

2009-01-16 Thread Marius Gedminas
On Fri, Jan 16, 2009 at 12:43:06PM +0100, Ferenc Vajda wrote:
 I don't know if I am at the right place.

You are.

 I need some help setting my
 dual monitor system, which does not really work on my computer.
 
 I have spent several hours trying to set up my notebook to use an
 additional external monitor, but only mirroring works (and only at
 1280x1024). What I need is two separate desktops where I can move the
 cursor and windows between the two screens (a large desktop would also be
 acceptable, but two separate ones would be better).

What's the difference between a single large desktop and two separate
desktops, if you can drag windows between them?  Separate workspace
switching?  I'm afraid I don't know of any window manager that would
support that.

 Parameters:
   HP Compaq 6710b -- video: Intel GMA X3100 (resolution 1680x1050)
 + external 1280x1024 LCD (Belinea ...)
   Linux: Fedora 9

 Experiences with xorg.conf (original and current xorg.conf are attached 
 below)
   - added to ServerLayout
 Screen  0  Laptop Screen 0 0
 Screen  1  External Screen rightof Laptop Screen

This kind of old-style static multi-screen configuration is no longer
supported.  See http://intellinuxgraphics.org/dualhead.html for details
about the new configuration options.

I'd suggest going back to the original xorg.conf and adding just a
single tweak to it: in section Screen, add a SubSection Display,
with a Virtual line specifying the largest total desktop size (summed
over all monitors) that you're going to use.  Then you can use xrandr
from the command line or, e.g., GNOME's Screen Resolution applet to drag
your monitors around to a desired configuration.
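
For example (sizes assume the 1680x1050 panel plus the 1280x1024 LCD side
by side; output names are typically LVDS and VGA on this chipset, check
"xrandr -q"):

Section "Screen"
    Identifier   "Screen0"
    Device       "Videocard0"
    DefaultDepth 24
    SubSection "Display"
        Depth   24
        Virtual 2960 1050
    EndSubSection
EndSection

and then, once X is running:

    xrandr --output VGA --auto --right-of LVDS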

HTH,
Marius Gedminas
-- 
If Linux doesn't have the solution, you have the wrong problem.



Font rendering problem in 1.6 branch (fine in 1.5 and master)

2009-01-16 Thread Jeremy Huddleston
Has anyone noticed oddities in font rendering (or perhaps even just  
font selection) in the 1.6 branch?  It looks fine in 1.5 and in  
master, but 1.6 shows something different.  I looked through the list  
of nominated patches for 1.6 and nothing jumped out as fixing this.   
Does someone have a patch they want to nominate that addresses this  
issue, or do I need to go digging to find it...


[inline attachments: 1.5.png, 1.6.png, master.png]




Re: Font rendering problem in 1.6 branch (fine in 1.5 and master)

2009-01-16 Thread Xavier Bestel
On Friday 16 January 2009 at 14:22 -0800, Jeremy Huddleston wrote:
 Has anyone noticed oddities in font rendering (or perhaps even just  
 font selection) in the 1.6 branch?  It looks fine in 1.5 and in  
 master, but 1.6 shows something different.  I looked through the list  
 of nominated patches for 1.6 and nothing jumped out as fixing this.   
 Does someone have a patch they want to nominate that addresses this  
 issue, or do I need to go digging to find it...

Are you sure it's not just the DPI which is different?
Maybe you should compare logfiles or xdpyinfo outputs.
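
Something like

    xdpyinfo | grep -B1 resolution

against each server shows the reported screen dimensions and DPI side by
side.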

Xav



Re: Font rendering problem in 1.6 branch (fine in 1.5 and master)

2009-01-16 Thread Jeremy Huddleston

On Jan 16, 2009, at 14:27, Xavier Bestel wrote:

 On Friday 16 January 2009 at 14:22 -0800, Jeremy Huddleston wrote:
 Has anyone noticed oddities in font rendering (or perhaps even just
 font selection) in the 1.6 branch?  It looks fine in 1.5 and in
 master, but 1.6 shows something different.  I looked through the list
 of nominated patches for 1.6 and nothing jumped out as fixing this.
 Does someone have a patch they want to nominate that addresses this
 issue, or do I need to go digging to find it...

 Are you sure it's not just the DPI which is different ?
 Maybe you should compare logfiles or xdpyinfo outputs.

It's 96dpi in all three cases.



Re: Font rendering problem in 1.6 branch (fine in 1.5 and master)

2009-01-16 Thread Jeremy Huddleston
Ah... I figured it out... It has to do with --enable-builtin-fonts going
from no by default in 1.5, to yes by default in 1.6, to being removed in
master.

If this is being punted in master, and has been off in previous releases,
should we turn it back off in 1.6 by reverting
385943e0e97463ce4681a9b6a4a40d7e3c91e51e?

http://cgit.freedesktop.org/xorg/xserver/commit/?id=385943e0e97463ce4681a9b6a4a40d7e3c91e51e
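
That is, on the 1.6 branch (assuming the commit still reverts cleanly
there):

    git revert 385943e0e97463ce4681a9b6a4a40d7e3c91e51e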

On Jan 16, 2009, at 14:37, Jeremy Huddleston wrote:


 On Jan 16, 2009, at 14:27, Xavier Bestel wrote:

 On Friday 16 January 2009 at 14:22 -0800, Jeremy Huddleston wrote:
 Has anyone noticed oddities in font rendering (or perhaps even just
 font selection) in the 1.6 branch?  It looks fine in 1.5 and in
 master, but 1.6 shows something different.  I looked through the  
 list
 of nominated patches for 1.6 and nothing jumped out as fixing this.
 Does someone have a patch they want to nominate that addresses this
 issue, or do I need to go digging to find it...

 Are you sure it's not just the DPI which is different ?
 Maybe you should compare logfiles or xdpyinfo outputs.

 It's 96dpi in all three cases.




Re: Font rendering problem in 1.6 branch (fine in 1.5 and master)

2009-01-16 Thread Dan Nicholson
On Fri, Jan 16, 2009 at 2:51 PM, Jeremy Huddleston
jerem...@freedesktop.org wrote:
 Ah... I figured it out...  It has to do with --enable-builtin-fonts
 going from no by default in 1.5 to yes by default in 1.6 to being
 removed in master.

 If this is being punted in master, and has been off in previous
 releases, should we turn it back off in 1.6 by reverting
 385943e0e97463ce4681a9b6a4a40d7e3c91e51e ?

 http://cgit.freedesktop.org/xorg/xserver/commit/?id=385943e0e97463ce4681a9b6a4a40d7e3c91e51e

What about backporting 49b93df8a3002db7196aa3fc1fd8dca1c12a55d6 to 1.6?

http://cgit.freedesktop.org/xorg/xserver/commit/?id=49b93df8a3002db7196aa3fc1fd8dca1c12a55d6
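
That is, roughly (branch name as I recall it, and assuming the commit
applies cleanly):

    git checkout server-1.6-branch
    git cherry-pick -x 49b93df8a3002db7196aa3fc1fd8dca1c12a55d6

where -x records the original commit id in the backported commit message.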

--
Dan


Re: [Fwd: Dual Monitor problem]

2009-01-16 Thread Alex Deucher
On Fri, Jan 16, 2009 at 5:38 PM, Yan Seiner y...@seiner.com wrote:

 On Fri, January 16, 2009 2:21 pm, Marius Gedminas wrote:
 On Fri, Jan 16, 2009 at 12:43:06PM +0100, Ferenc Vajda wrote:

 Experiences with xorg.conf (original and current xorg.conf are attached
 below)
   - added to ServerLayout
 Screen  0  Laptop Screen 0 0
 Screen  1  External Screen rightof Laptop Screen

 This kind of old-style static multi-screen configuration is no longer
 supported.  See http://intellinuxgraphics.org/dualhead.html for details
 about the new configuration options.


 Is this only for Intel or is it generic?

All RandR 1.2 capable drivers.

Alex


Re: Current support and roadmap for discrete graphics card hot switching

2009-01-16 Thread William Tracy
On Fri, Jan 16, 2009 at 8:45 AM, Albert Vilella avile...@gmail.com wrote:
 I think a user logout-login, which at least in Ubuntu corresponds to a gdm
 restart nowadays, is a much leaner option than a cold reboot of the system.
 You only lose the open windows, but all services, like the connection to
 the internet, etc., are kept alive, so it's better than a reboot.

Just thinking out loud here: If desktop session management were good
enough, even open windows could be persisted.

Even better would be if there were a mechanism to transparently
disconnect an app from one X session, wait for X to restart, and then
attach it to the new session. Probably doable at the toolkit level,
but that doesn't help with all the zillions of apps written against
legacy toolkits.

Random idea: There are already several special-purpose X servers that
run on top of Xorg supporting special magic like hardware compositing.
What if there were a server that could dynamically dispatch to/from
different Xorg instances? It would notice when Xorg dies, and stop
sending it events. When a new Xorg launches, it would send a series of
new window commands, and attach all of its clients to those windows.

Right now I'm assuming that both cards would support equivalent
resolutions and color depths. If not, then never mind. :-P

Anyway, I agree that restarting the server is less painful than a full reboot.

-- 
William Tracy
afishion...@gmail.com -- wtr...@calpoly.edu
Vice President, Cal Poly Linux Users' Group
http://www.cplug.org

I disapprove of what you say, but I will defend to the death your
right to say it.
-- Evelyn Beatrice Hall, frequently mis-attributed to Voltaire


Re: Bug in interaction between freeglut and mesa-7.3-rc2

2009-01-16 Thread Ian Romanick
On Fri, 2009-01-16 at 12:14 +0100, Florian Echtler wrote:

 I've found a bug in the interaction between freeglut and mesa. This
 applies to mesa 7.2 and 7.3-rc2 and occurs both with freeglut-current
 and freeglut-2.6.0-rc1. I'm using the radeon driver on a Radeon Mobility
 X1400.
 
 The bug causes a segfault in libGL.so, though even with debug info
 enabled, I can't see the exact location. However, I can work around the
 bug when, in freeglut, I replace 

Why would you do that?  There is no improvement to freeglut and no
benefit to its users.  If the read drawable and the drawing drawable are
always the same, just use glXMakeCurrent.  You already have a tool that
does *exactly* the job you need done.  Why change to a tool that is
intended to suit a different need?

 glXMakeContextCurrent(
     fgDisplay.Display,
     window->Window.Handle,
     window->Window.Handle,
     window->Window.Context
 );
 
 with 
 
 glXMakeCurrent(
     fgDisplay.Display,
     window->Window.Handle,
     window->Window.Context
 );
 
 According to the GLX spec, these two calls should be equivalent.

They are morally equivalent.  However, glXMakeContextCurrent requires
that the drawables be created by one of glXCreateWindow,
glXCreatePixmap, or glXCreatePbuffer.

See:

http://www.opengl.org/sdk/docs/man/xhtml/glXMakeContextCurrent.xml
http://www.opengl.org/sdk/docs/man/xhtml/glXMakeCurrent.xml

See also this thread on the Mesa list:

http://marc.info/?l=mesa3d-devm=123211968809340w=2
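
In other words, for glXMakeContextCurrent the drawable has to come from
the GLX 1.3 entry points. A minimal sketch (FBConfig selection omitted,
all names illustrative):

    /* fbconfig must be the GLXFBConfig the X window was created
     * against; choose it with glXChooseFBConfig (omitted here). */
    GLXWindow glxwin = glXCreateWindow(dpy, fbconfig, xwin, NULL);
    GLXContext ctx = glXCreateNewContext(dpy, fbconfig, GLX_RGBA_TYPE,
                                         NULL, True);
    glXMakeContextCurrent(dpy, glxwin, glxwin, ctx);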

 However, the first one causes a segfault, the second one doesn't. So I
 suppose this is really a bug in mesa somewhere. Is this information
 sufficient to fix it, or do you need anything else?

See:

http://bugs.freedesktop.org/show_bug.cgi?id=19625




Input transformations: Compiz

2009-01-16 Thread Bipin George Mathew
I was looking at a way to do input transformation at the window manager
level and I came across this thread:
http://lists.freedesktop.org/archives/compiz/2007-February/001351.html
Have there been any modifications to David's approach/XServer patch?
I was actually looking into an approach suggested in the same thread by
Felix: a WM-only approach, which has its advantages. Though he does mention
an obstacle, that client pointer grabs are unavailable; I'm not sure what
that means. Does this approach require anything beyond what
XGrabPointer/XGrabButton can do?

Bipin

Re: Input transformations: Compiz

2009-01-16 Thread Chris Ball
Hi,

I was looking at a way to do input transformation at the window
manager level and I came across this thread -
http://lists.freedesktop.org/archives/compiz/2007-February/001351.html
Have there been any modifications to David's approach/XServer
patch?

I think the server patch is unchanged, but input transformations are now
working via Sam Spilsbury's patches to compiz:

http://smspillaz.wordpress.com/2008/11/08/weekend-work/
http://smspillaz.wordpress.com/2008/11/01/before-murphys-law-ensures-the-contrary/
http://smspillaz.wordpress.com/2008/10/21/input-redirection-update/

- Chris.
-- 
Chris Ball   c...@laptop.org


Disable screen blanking on VT switch

2009-01-16 Thread Connor Behan
I have two video cards, each hooked up to a different monitor. The AGP
card driving my laptop screen is the boot display device which shows the
console, and I've configured xorg to use the PCI card driving my external
monitor when I start it. I can use Ctrl+Alt+Fn to switch between X on
the big screen and a ttyn on the small screen, but whenever I do so, the
inactive screen is, not surprisingly, blank. I'd like to be able to see X
persist on my external monitor when a text console is in the foreground,
and eventually to see the output of a background tty on my laptop screen
when tty7 is in the foreground.

I know the second direction sounds harder... people have been asking 
questions about it since 2002 and the only solutions I've seen have been 
unstable kernel patches that are now dead. But I am naive enough to 
think that X could be configured to not blank the screen even when its 
own console is switched away from. After all, it's still running. Do you 
know of a way to do this? I've tried all combinations of tty and vt 
related arguments to X but haven't got it to work yet.

One thing that I think might work is initializing a framebuffer for
both cards and then using con2fb to move a particular console to the
other framebuffer before starting X. I can't try this, however, because
all the framebuffer drivers I've tried only create a framebuffer device for
the primary AGP card. So do you know how to disable screen blanking when
switching VTs, or how to use multiple monitors with a framebuffer driver?
Thanks a lot.