Re: The PCI device 0x10de087d (ION) at 02@00:00:0 has a kernel module claiming it.

2011-11-22 Thread Aaron Plattner
Check the output of lsmod for modules that can claim that device. 
It's probably either nvidia or nouveau.  I think you can find out 
for sure by running


  readlink /sys/bus/pci/devices/0000:02:00.0/driver

Unload the conflicting driver and try again.
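A minimal sketch of the whole check-and-unload sequence (the bus ID 0000:02:00.0 is derived from the log's "02@00:00:0"; the module name is whatever the readlink reports, not a given):

```shell
dev=0000:02:00.0   # bus ID from the Xorg log ("02@00:00:0" -> bus 2, slot 0)

if [ -e "/sys/bus/pci/devices/$dev/driver" ]; then
    mod=$(basename "$(readlink "/sys/bus/pci/devices/$dev/driver")")
    echo "claimed by: $mod"
    # Unload it (needs root; use the module name printed above,
    # typically nvidia or nouveau):
    # modprobe -r "$mod"
else
    echo "no kernel module bound to $dev"
fi
```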

On 11/22/2011 02:58 AM, Anatolii Ivashyna wrote:

Hi all, I use an ION nettop, and I get this message in boot.log when using
the xorg7-nv driver.

Linux: Thinstation, kernel ver: 3.0.9TS

Markers: (--) probed, (**) from config file, (==) default setting,
         (++) from command line, (!!) notice, (II) informational,
         (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: /var/log/Xorg.0.log, Time: Tue Nov 22 12:48:34 2011
(++) Using config file: /etc/X11/x.config-0
(==) Using config directory: /etc/X11/xorg.conf.d
(EE) NV: The PCI device 0x10de087d (ION) at 02@00:00:0 has a kernel
module claiming it.
(EE) NV: This driver cannot operate until it has been unloaded.
(EE) No devices detected.

Please help!

All the best,
Anatolii Ivashyna



___
xorg@lists.freedesktop.org: X.Org support
Archives: http://lists.freedesktop.org/archives/xorg
Info: http://lists.freedesktop.org/mailman/listinfo/xorg
Your subscription address: arch...@mail-archive.com


Re: Xlib: extension NV-GLX missing on display

2011-09-28 Thread Aaron Plattner

On 09/28/2011 01:43 PM, Alan Coopersmith wrote:

On 09/28/11 12:38 PM, al...@verizon.net wrote:

   Each and every time I go back to the command line after using
   a browser in the X11 environment, I find one or multiple messages,
Xlib:  extension NV-GLX missing on display :0.0
   on the console.
   They come from libXext.

   Not life-threatening but very annoying.
   Nonsensical for an Intel video device par excellence.

   In hopes of getting rid of the messages, I upgraded my original
   Xorg-7.6 package with the Server, Intel and Evdev drivers and
   libXext to the (latest) versions shown above.
   To no avail.

   What could be the solution?


Make sure your libGL.so is the Mesa version, not the one from the
nvidia proprietary drivers.
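Two quick ways to check which libGL is actually in use (a sketch; glxinfo is just an example GL client, and the exact paths vary by distribution):

```shell
# Which libGL does the dynamic linker know about?
ldconfig -p 2>/dev/null | grep libGL.so || echo "no libGL registered"

# Which libGL does a GL client actually map at runtime?
ldd "$(command -v glxinfo)" 2>/dev/null | grep -i libgl || echo "glxinfo not installed"
```

The resolved path tells you whether Mesa's or the NVIDIA-installed libGL.so is being picked up.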


You're supposed to be able to use the NVIDIA libGL with indirect 
non-NVIDIA GLX contexts, so the fact that it's emitting this warning is 
a bug in the NVIDIA driver.  We've filed internal bug number 805693.


-- Aaron


Re: rminitadapter failed

2011-08-19 Thread Aaron Plattner
On Fri, Aug 19, 2011 at 01:18:08AM -0700, Sapfeer wrote:
 Hi all!

 I'm stuck with a possible X server error - when I boot the OS it works
 for a while, 5-15 minutes, and then X crashes with the message
 'rminitadapter failed'. I start it again after several minutes and it
 works fine for some time, then crashes again. I can't provide more
 details right now - I'll be able to post X server logs a bit later. Can
 anyone at least briefly explain what is going on, and whether this
 problem relates to the X server rather than to my video card?

It sounds like you're using the NVIDIA driver and that the kernel module is
failing to initialize your graphics card.  I would suggest looking at
/var/log/syslog or /var/log/messages for errors.  If you need more
assistance, please run nvidia-bug-report.sh and send the resulting
nvidia-bug-report.log.gz file to linux-b...@nvidia.com.

Sincerely,
Aaron


[ANNOUNCE] xts 0.99.0

2011-05-24 Thread Aaron Plattner
This is the first modular release of the X Test Suite.  It's
designed to test an X server and its drivers against the X11
specification.  If you've ever used the old CVS versions of this
suite, this is a significant rewrite of the build system thanks
to Dan Nicholson, Peter Hutterer, and a few others.  See the
README in the tarball for information about how to run the test
suite, or see http://xorg.freedesktop.org/wiki/BuildingXtest for
some woefully out of date additional information.

It's not expected that all the tests will pass.  This release is
simply to set a baseline against which future test runs can be
compared.  Patches to fix warnings in the build or the tests
themselves would be very much appreciated.

Since this is the first-ever release of XTS at its new git home,
I've omitted the changelog because it's very long.  If you're
interested in the complete list, you can browse it here:
http://cgit.freedesktop.org/xorg/test/xts/commit/?id=xts-0.99.0

-- Aaron


git tag xts-0.99.0

http://xorg.freedesktop.org/releases/individual/test/xts-0.99.0.tar.bz2
SHA1: a947804e91cf619c7cc226ca0bd1c9afc68ffac8  xts-0.99.0.tar.bz2

http://xorg.freedesktop.org/releases/individual/test/xts-0.99.0.tar.gz
SHA1: dfb754bd2afd460621a7e7b8e99f41d2d820c625  xts-0.99.0.tar.gz
___
xorg-announce mailing list
xorg-announce@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg-announce




Re: dummy driver and maximum resolutions, config hacks via LD_PRELOAD, etc

2011-04-06 Thread Aaron Plattner
On Wed, Apr 06, 2011 at 02:37:52PM -0700, Antoine Martin wrote:
 On 04/06/2011 09:13 PM, Adam Jackson wrote:
  On 4/6/11 6:51 AM, Antoine Martin wrote:
  
  1) I can't seem to make it use resolutions higher than 2048x2048 which
  is a major showstopper for me:
  Virtual height (2560) is too large for the hardware (max 2048)
  Virtual width (3840) is too large for the hardware (max 2048)
 
  Seems bogus to me; I've tried giving it more RAM, giving it a very wide
  range of vsync and hsync, adding modelines for these large modes, etc.
  No go.
  
  It is bogus, the driver has an arbitrary limit.  Look for the call to
  xf86ValidateModelines in the source, and compare that to (for example)
  what the vesa driver does.
 Here's a patch which constifies the hard-coded limits and increases them
 to more usable values (4096x4096). I've tested it on Fedora 14 and it
 allows me to allocate much bigger virtual screens.
 
 diff --git a/src/dummy_driver.c b/src/dummy_driver.c
 index 804e41e..05450d5 100644
 --- a/src/dummy_driver.c
 +++ b/src/dummy_driver.c
 @@ -85,6 +85,9 @@ static Bool dummyDriverFunc(ScrnInfoPtr pScrn, xorgDriverFuncOp op,
  #define DUMMY_MINOR_VERSION PACKAGE_VERSION_MINOR
  #define DUMMY_PATCHLEVEL PACKAGE_VERSION_PATCHLEVEL
 
 +#define DUMMY_MAX_WIDTH 4096
 +#define DUMMY_MAX_HEIGHT 4096

4096 is low.  Modern GPUs go up to at least 16kx16k, and I think you can
get away with X screens at the protocol level up to 32kx32k, though I
vaguely recall there being some restriction against that.
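For reference, once the driver's limit is raised, a big virtual screen is just a matter of configuration. A hypothetical xorg.conf fragment for the dummy driver (the RAM and mode values here are illustrative assumptions, not tested values):

```
Section "Device"
    Identifier "dummy"
    Driver     "dummy"
    VideoRam   262144        # KiB; a 4096x4096 32-bpp framebuffer needs 64 MiB
EndSection

Section "Screen"
    Identifier   "screen"
    Device       "dummy"
    DefaultDepth 24
    SubSection "Display"
        Depth   24
        Virtual 4096 4096
    EndSubSection
EndSection
```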

 4) Acceleration... Now this last bit really is a lot more far fetched,
 maybe I am just daydreaming.  Wouldn't it be possible to use real
 graphics cards for acceleration, but without dedicating it to a single
 Xdummy/Xvfb instance?  What I am thinking is that I may have an
 under-used graphics card in a system, or even a spare GPU (secondary
 card) and it would be nice somehow to be able to use this processing
 power from Xdummy instances. I don't understand GEM/Gallium kernel vs X
 server demarcation line, so maybe the card is locked to a single X server
 and this is never going to be possible.

Not with the dummy driver, but real drivers can do that if they have the
functionality.  For example [shameless plug], you can use the
UseDisplayDevice none option with the NVIDIA driver:

ftp://download.nvidia.com/XFree86/Linux-x86/270.30/README/xconfigoptions.html#UseDisplayDevice


Re: glXSwapBuffers very slow, potential problems?

2010-10-06 Thread Aaron Plattner
On Wed, Oct 06, 2010 at 07:20:11AM -0700, Roland Plüss wrote:
 On 10/05/2010 06:48 PM, Aaron Plattner wrote:
  On Tue, Oct 05, 2010 at 08:05:47AM -0700, Roland Plüss wrote:
  I'm running here into a heavy performance problem on both ATI and nVidia 
  cards and the culprit is the glXSwapBuffers call. I've got here two 
  performance readings showing the problem.
 
  The first reading is in a case where performance is somewhat okay.
 
  II [OpenGL] OpenGL Timer: BeginFrame: Run Optimizers = 3µs
  II [OpenGL] OpenGL Timer: BeginFrame: Make Current = 57µs
  II [OpenGL] OpenGL Timer: BeginFrame: Activate primary GC = 7µs
  II [OpenGL] OpenGL Timer: EndFrame: Entering = 2312µs
  II [OpenGL] OpenGL Timer: EndFrame: Activate primary GC = 28µs
  II [OpenGL] OpenGL Timer: EndFrame: Flush = 27µs
  II [OpenGL] OpenGL Timer: EndFrame: Swap Buffers = 4238µs
  II [OpenGL] OpenGL Timer-Total End Frame = 6694µs
 
  EndFrame: Entering is the time for all rendering for the window
  (hence the time between leaving BeginFrame and entering EndFrame
  calls). The flush there is only to make sure it is a problem with
  glXSwapBuffers. 4ms for a swap I would consider a bit high if the
  rendering itself is done in 3ms but maybe this is normal, I don't
  know. But when I show/hide the window to switch to another window
  rendering the same 3D scene (for testing purpose) and switching back
  (always only one of the two windows visible, aka mapped to the screen)
  performance breaks down horribly.
 
  Rendering on the GPU is queued by the rendering commands, and is processed
  asynchronously.  This means that you can't assume that just because all of
  your rendering commands have returned, that the rendering is actually
  complete.  Also, glXSwapBuffers will queue a swap to occur when the
  rendering is complete, but most GL implementations have code to prevent the
  CPU from getting too far ahead.  In particular, at least on our driver, if
  there's a previous swap still pending, glXSwapBuffers won't return until
  that swap is complete.  This means that if your rendering is slow on the
  GPU for some reason (e.g. because you have a pixel shader that takes ages
  to complete), you'll see that time show up in the SwapBuffers call for the
  next frame.
 
 Ah yeah, I thought glFlush did an implicit glFinish, but it looks like
 it doesn't.

No, it's the other way around: Finish does an implicit Flush (section 5.2
in OpenGL 4).

 Adding a glFinish instead turns the swap into 120µs or so, which sounds
 fine. Looks like the problem seems to be elsewhere. Placing glFinish at
 various places for testing it looks like glClear (0.18ms raised up to
 6ms) and some FBO activation (0.14ms raised up to 6ms) go through the
 roof. No idea why hiding/showing a window causes them to suddenly spike
 from fast to horribly slow (especially since you need clearing for shadow
 map sooner or later).

  SwapBuffers from a separate thread should work fine, but you may have
  trouble making sure everything is synchronized correctly.  Also, it's
  almost certainly barking up the wrong tree.

 Makes sense. Would be nice if there is a call like in DirectX where you
 can check if rendering is done already so you could skip the actual
 rendering for a frame while still having the game logic updates done in
 that frame.

I think you can do that with GL_ARB_sync:
http://www.opengl.org/registry/specs/ARB/sync.txt
(or the core sync objects in OpenGL 4, which are described in section 5.3,
coincidentally right after the section for Flush and Finish).
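A sketch of that pattern with fence syncs (assumes an OpenGL 3.2+ context is current; render_frame and update_game_logic are hypothetical application functions, and error handling is omitted):

```c
#include <GL/gl.h>

/* After submitting one frame's rendering, drop a fence into the
 * command stream: */
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

/* On a later frame, poll the fence without blocking (a timeout of 0
 * makes this a pure status query): */
GLenum status = glClientWaitSync(fence, 0, 0);
if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED) {
    glDeleteSync(fence);
    render_frame();                  /* GPU caught up: render again */
    fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
} else {
    update_game_logic();             /* GPU busy: skip this frame's draw */
}
```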


Re: glXSwapBuffers very slow, potential problems?

2010-10-05 Thread Aaron Plattner
On Tue, Oct 05, 2010 at 08:05:47AM -0700, Roland Plüss wrote:
 I'm running here into a heavy performance problem on both ATI and nVidia 
 cards and the culprit is the glXSwapBuffers call. I've got here two 
 performance readings showing the problem.
 
 The first reading is in a case where performance is somewhat okay.
 
 II [OpenGL] OpenGL Timer: BeginFrame: Run Optimizers = 3µs
 II [OpenGL] OpenGL Timer: BeginFrame: Make Current = 57µs
 II [OpenGL] OpenGL Timer: BeginFrame: Activate primary GC = 7µs
 II [OpenGL] OpenGL Timer: EndFrame: Entering = 2312µs
 II [OpenGL] OpenGL Timer: EndFrame: Activate primary GC = 28µs
 II [OpenGL] OpenGL Timer: EndFrame: Flush = 27µs
 II [OpenGL] OpenGL Timer: EndFrame: Swap Buffers = 4238µs
 II [OpenGL] OpenGL Timer-Total End Frame = 6694µs
 
 EndFrame: Entering is the time for all rendering for the window (hence the 
 time between leaving BeginFrame and entering EndFrame calls). The flush there 
 is only to make sure it is a problem with glXSwapBuffers. 4ms for a swap I 
 would consider a bit high if the rendering itself is done in 3ms but maybe 
 this is normal, I don't know. But when I show/hide the window to switch to 
 another window rendering the same 3D scene (for testing purpose) and 
 switching back (always only one of the two windows visible, aka mapped to the 
 screen) performance breaks down horribly.

Rendering on the GPU is queued by the rendering commands, and is processed
asynchronously.  This means that you can't assume that just because all of
your rendering commands have returned, that the rendering is actually
complete.  Also, glXSwapBuffers will queue a swap to occur when the
rendering is complete, but most GL implementations have code to prevent the
CPU from getting too far ahead.  In particular, at least on our driver, if
there's a previous swap still pending, glXSwapBuffers won't return until
that swap is complete.  This means that if your rendering is slow on the
GPU for some reason (e.g. because you have a pixel shader that takes ages
to complete), you'll see that time show up in the SwapBuffers call for the
next frame.

glFlush doesn't wait for the rendering to complete either, it just ensures
that any queued rendering is actually sent to the card.  glXSwapBuffers is
specified as doing an implicit glFlush, so calling it yourself is
redundant.  If you want to see how long your rendering is actually taking
on the GPU, try using the GL_ARB_timer_query extension.
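For reference, a GL_ARB_timer_query sketch (assumes a context exposing the extension is current; illustrative, not tested here):

```c
#include <GL/gl.h>
#include <stdio.h>

/* Inside the render loop: */
GLuint query;
glGenQueries(1, &query);

glBeginQuery(GL_TIME_ELAPSED, query);
/* ... issue this frame's rendering commands ... */
glEndQuery(GL_TIME_ELAPSED);

/* Fetching GL_QUERY_RESULT waits for the bracketed commands to finish
 * on the GPU, so in real code read it back a frame or two later. */
GLuint64 ns = 0;
glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns);
printf("GPU time for the frame: %.3f ms\n", ns / 1e6);
```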

 II [OpenGL] OpenGL Timer: BeginFrame: Run Optimizers = 4µs
 II [OpenGL] OpenGL Timer: BeginFrame: Make Current = 59µs
 II [OpenGL] OpenGL Timer: BeginFrame: Activate primary GC = 14µs
 II [OpenGL] OpenGL Timer: EndFrame: Entering = 2560µs
 II [OpenGL] OpenGL Timer: EndFrame: Activate primary GC = 37µs
 II [OpenGL] OpenGL Timer: EndFrame: Flush = 45µs
 II [OpenGL] OpenGL Timer: EndFrame: Swap Buffers = 66642µs
 II [OpenGL] OpenGL Timer-Total End Frame = 69401µs
 
 As is visible here, swap buffers suddenly eats 66ms!
 
 This is on the ATI system. On the nVidia system there is no difference 
 between the two situations but swap buffer always consumes 48ms. Any ideas 
 what can cause glXSwapBuffer to horribly slow down like this? The DE is KDE4 
 with compositing present but disabled (so it should not have any influence).
 
 I read somewhere that somebody tried placing glXSwapBuffers in a separate 
 thread. Would this not cause troubles with the X-server? (meaning, is X 
 thread safe?). But even if this would work 68ms for a swap is brutal.

SwapBuffers from a separate thread should work fine, but you may have
trouble making sure everything is synchronized correctly.  Also, it's
almost certainly barking up the wrong tree.


Re: xrandr -o left at wrong monitor in dual-monitor configuration always messes up both monitor displays

2010-09-26 Thread Aaron Plattner
On Sun, Sep 26, 2010 at 10:00:22AM -0700, ddrea...@ms93.url.com.tw wrote:
 My Equipment

  *   ASUS M51SN with nVIDIA 9500M
  *   Ubuntu 10.04
  *   Current nVIDIA proprietary driver
  *   2 Monitors: one with the laptop, another a SAMSUNG 24" supporting HDMI

 I have configured the two monitors to use separate X screens using
 nvidia-settings.

 xrandr -q indicates the monitor with the laptop as screen 0 and the
 SAMSUNG as screen 1.

 The 24" display can only be successfully rotated by opening a terminal in
 that 24" monitor with the command xrandr -o left.

 Adding --screen 1 to the command above works fine.


 However, when opening a terminal on the laptop (screen 0), entering the
 above command with the --screen 1 option always messes up both screens,
 so that I must restart X.

Are you running Compiz?  I have a similar problem where Compiz appears to
get confused about the layout of a pair of screens when you rotate one of
them.  The terminal you used to run xrandr should still have the keyboard
focus, so try running killall compiz (or possibly killall compiz.real)
to see if your screens get fixed.  If you start compiz again with the
screen already rotated then it seems to work.

I haven't gotten around to tracking down exactly where the bug is,
unfortunately.

-- Aaron


Re: Howto create a window with ARGB32-visual?

2010-09-24 Thread Aaron Plattner
On Fri, Sep 24, 2010 at 05:25:05AM -0700, Clemens Eisserer wrote:
 Hi,
 
 I would like to create a Window with ARGB32-visual to reproduce a bug
 I experience
 with shaped ARGB32 windows when not using a composition manager,
 however I always get a BadMatch error.
 
 My attempt was to find a 32-bit visual and pass it to XCreateWindow:
 
 XVisualInfo info;
 int cnt;
 XVisualInfo *visInfos = XGetVisualInfo(display, 0, NULL, &cnt);
 
 while(cnt-- > 0) {
   if(visInfos[cnt].depth == 32) {
     info = visInfos[cnt];
   }
 }
 
 XCreateWindow(display, root, 0, 0, 200, 200, 0, 32, InputOutput,
 info.visual, 0, NULL);
 
 Any idea what's wrong here?

You need to create a colormap and I think also specify a border pixel value
and pass those in via the attribute structure parameter to XCreateWindow.
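A sketch of that recipe (assumes `display` and a depth-32 `XVisualInfo` named `info` from the code above; untested here):

```c
#include <X11/Xlib.h>

/* The colormap must match the 32-bit visual, and border_pixel must be
 * set explicitly because the window's depth differs from its parent's
 * -- otherwise XCreateWindow returns BadMatch. */
XSetWindowAttributes attrs;
attrs.colormap = XCreateColormap(display, DefaultRootWindow(display),
                                 info.visual, AllocNone);
attrs.border_pixel = 0;

Window win = XCreateWindow(display, DefaultRootWindow(display),
                           0, 0, 200, 200, 0, 32, InputOutput,
                           info.visual, CWColormap | CWBorderPixel,
                           &attrs);
```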

 Thank you in advance, Clemens


[ANNOUNCE] libvdpau 0.4.1

2010-09-08 Thread Aaron Plattner

Sorry I haven't released one of these in a long while.  This minor update
just changes a few small, but important, documentation details.

-- Aaron



Aaron Plattner (1):
  Bump version to 0.4.1

Anssi Hannula (1):
  vdpau.h: Clarify video mixer field amount recommendation

Stephen Warren (2):
  vpdau.h: Fix typo and clarify wording.
  More doc issues pointed out by Xine authors.

git://anongit.freedesktop.org/~aplattner/libvdpau
http://cgit.freedesktop.org/~aplattner/libvdpau

git tag: libvdpau-0.4.1

http://people.freedesktop.org/~aplattner/vdpau/libvdpau-0.4.1.tar.gz
MD5: 8e1f0639bea4e4a842aee93ab64406cc  libvdpau-0.4.1.tar.gz
SHA1: d09deff305e4927bc96fca8383f991d137e64d45  libvdpau-0.4.1.tar.gz


Re: Exclusive Fullscreen Mode

2010-09-06 Thread Aaron Plattner
On Mon, Sep 06, 2010 at 04:29:22PM -0700, Tomas Carnecky wrote:
 On 9/6/10 8:51 PM, Roland Plüss wrote:
  1) What about changing resolution?
  SDLMAME as far as I know changes the resolution while full screen and
  some games do this too (typically commercial ones like UT*, Quake* and
  company). I read somewhere that only root is allowed to change
  resolution on-the-fly. Another place states only resolutions in the
  xorg.conf are valid. What's the ground truth in this case? Can one
  change resolution dynamically from a client application?

 Any user can change the resolution. The modern way is to use randr, on
 older servers you need to use xf86vidmode. Proprietary nvidia drivers
 don't use either, they have their own API for doing that.

Sorry to be pedantic about it, but this is false.  The xf86vidmode
extension is supported, and RandR (1.1) is how you switch between TwinView
metamodes.  Typically, the RandR and VidMode mode lists will be populated
with the various single-screen modes that got validated when the server
started.

You can manipulate the mode lists using the NV-CONTROL extension, but
actual screen resolution changes are effected through one of those two
extensions.
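So from any client, a dynamic switch is just a RandR request, e.g. (the mode name is an example; pick one the server actually lists):

```shell
# List the modes the server validated at startup:
xrandr -q 2>/dev/null || echo "no X display available"

# Switch to one of the listed modes (RandR 1.1 style), then back to the default:
# xrandr -s 1280x1024
# xrandr -s 0
```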

-- Aaron


Re: Getting crazy over transparency

2010-08-02 Thread Aaron Plattner
On Mon, 2010-08-02 at 09:37 -0700, Rashid wrote:
 I don't know why, but I can't manage to blit transparent images with
 Xlib. DAMN, with SDL it was quite easy. I don't know if someone wants to
 help me, but I've been trying for days and slowly going mad...
 
 Can someone have a quick look at why it isn't working? skullmask.xpm is
 a 1-bit image. Use ./built for building (make is for cross compiling).
 
 The whole stuff is in the zip archive. I would be really happy if
 someone would help me.

It looks like XpmReadFileToPixmap is producing a pixmap with the wrong
depth, which is being rejected by XSetClipMask.  It works for me if I
change the call to this:

XpmAttributes attr;
attr.depth = 1;
attr.valuemask = XpmDepth;
if (XpmReadFileToPixmap(mainwindow->display, mainwindow->window,
                        "skullmask.xpm", &skullImage.clipmask,
                        &maskshade, &attr)) {
    printf("Error reading file (XpmReadFileToPixmap)\n");
    exit(1);
}


[ANNOUNCE] xf86-video-nv 2.1.18

2010-07-30 Thread Aaron Plattner

It's been a while since the last nv driver release, so there have been quite a
number of accumulated changes.  This update adds a pile of new product names,
fixes a couple of bugs, and cleans up a lot of old code (thanks to Gaetan).  It
now also refuses to load if a kernel module is bound to the device, which will
prevent it from interfering with the nouveau module's framebuffer driver.

-- Aaron


Aaron Plattner (10):
  Bug #24787: Don't crash if LVDS initialization fails
  Update MCP6x supported products.
  Revert "Refuse to bind to a device which has kernel modesetting active."
  Depend on libpciaccess >= 0.10.7.
  Fix the kernel driver error reporting to be a little more verbose.
  Bring NVKnownChipsets up to date.
  Unlike for lspci and the kernel, X bus IDs are decimal instead of hex.
  Add PCI table entries for the GeForce 315
  Add a missing  }, pointed out by Johannes Obermayr.
  nv 2.1.18

Ben Skeggs (1):
  nv: refuse to load if there's a kernel driver bound to the device already

Gaetan Nadon (13):
  COPYING: update file with Copyright notices from source code.
  The /compat code is no longer required.
  config: upgrade to util-macros 1.8 for additional man page support
  config: update AC_PREREQ statement to 2.60
  config: remove AC_PROG_CC as it overrides AC_PROG_C_C99
  config: remove unrequired AC_HEADER_STDC
  config: remove unrequired AC_SUBST([XORG_CFLAGS])
  config: complete AC_INIT m4 quoting
  config: replace deprecated AM_CONFIG_HEADER with AC_CONFIG_HEADERS
  config: replace deprecated AC_HELP_STRING with AS_HELP_STRING
  config: replace deprecated use of AC_OUTPUT with AC_CONFIG_FILES
  config: add comments for main statements
  Remove RANDR_12_INTERFACE checking, always defined.

Marcin Slusarz (1):
  Refuse to bind to a device which has kernel modesetting active.

Markus Strobl (1):
  Bug #19817: Add support for GeForce 7025 and 7050.

Tiago Vignatti (1):
  Don't use libcwrappers for calloc and free

git tag: xf86-video-nv-2.1.18

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.18.tar.bz2
MD5:  b12f0b2114849d1a542d8084732573d3  xf86-video-nv-2.1.18.tar.bz2
SHA1: d35b2fa5a26a507a9cc95b69243d9fd0c0f32aa2  xf86-video-nv-2.1.18.tar.bz2

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.18.tar.gz
MD5:  d6952f4ac1c0de31eed326e709e2c1f4  xf86-video-nv-2.1.18.tar.gz
SHA1: 1a9054cd1a5bc0b70da6d81f248a7ffcd3f29811  xf86-video-nv-2.1.18.tar.gz


[PATCH xcb] Check for POLLERR, POLLHUP, and POLLNVAL in _xcb_conn_wait.

2010-07-11 Thread Aaron Plattner
Fixes the XTS XConnectionNumber test.

This test creates a display connection, closes its file descriptor, tries to
send a no-op, and then expects an error.  XCB just goes into an infinite loop
because poll() returns immediately with POLLNVAL set.
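The failure mode is easy to demonstrate outside XCB; a standalone sketch (not XCB code):

```c
#include <poll.h>
#include <unistd.h>

/* Returns 1 if poll() flags POLLNVAL for a descriptor that was closed
 * behind its back -- the situation the XConnectionNumber test provokes. */
static int poll_reports_nval(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return 0;
    close(fds[1]);
    close(fds[0]);                 /* fds[0] is now an invalid descriptor */

    struct pollfd pfd = { .fd = fds[0], .events = POLLIN };
    int n = poll(&pfd, 1, -1);     /* returns at once, despite timeout -1 */
    return n == 1 && (pfd.revents & POLLNVAL);
}
```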

Signed-off-by: Aaron Plattner aplatt...@nvidia.com
---
 src/xcb_conn.c |   10 ++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/src/xcb_conn.c b/src/xcb_conn.c
index 50a662b..6a07730 100644
--- a/src/xcb_conn.c
+++ b/src/xcb_conn.c
@@ -337,6 +337,16 @@ int _xcb_conn_wait(xcb_connection_t *c, pthread_cond_t *cond, struct iovec **vec
         if(FD_ISSET(c->fd, &wfds))
 #endif
             ret = ret && write_vec(c, vector, count);
+
+#if USE_POLL
+        if((fd.revents & (POLLERR |
+                          POLLHUP |
+                          POLLNVAL)))
+        {
+            _xcb_conn_shutdown(c);
+            ret = 0;
+        }
+#endif
     }
 
 if(count)
-- 
1.7.0.4




Re: xorg-server 1.8.99.904 doesn't support mi overlay either?

2010-07-02 Thread Aaron Plattner

On Fri, Jul 02, 2010 at 09:14:39AM -0700, diblidabliduu wrote:
   Hi xorg-server folks,

 I just tried the xorg-server RC4 today and I get the same error message
 as with RC2:

 dlopen: /usr/lib/xorg/modules/drivers/nvidia_drv.so: undefined symbol:
 WindowTable

 So is this about the mi overlay support?

No, this is one of the things broken by the change from video driver ABI 7
to ABI 8.  If it had loaded, the driver would have spit out a giant error
about ABI 8 not being supported and then refused to initialize.

Support for the new ABI is going through testing now, and a beta release that
supports it will hopefully be out soon.

Sincerely,
Aaron


Re: Video overlay - Window updating

2010-03-11 Thread Aaron Plattner
On Mon, Mar 08, 2010 at 04:41:46AM -0800, Iban Rodriguez wrote:
   Good Morning,
 
   I have a question about video overlay and window repainting. I have
 a project in which I need to show a video on a non-rect part of an
 application window. After some attempts I decided to play the
 video on one window (window 1) and put the application window
 (window 2) over it, where the zone I want the video to show through is
 painted using the color key of the video overlay. With this
 configuration the video should be shown as if it were embedded in
 window 2. However, it works only in some situations, which I describe below:
 
   1.- If window 1 (video) is completely covered by window 2, the video
 doesn't show.
   2.- If window 1 (video) is not completely covered by window 2 (app)
 on the left, the video is only shown on the part that is not covered.
   3.- If window 1 (video) is not completely covered by window 2 (app)
 on the bottom, the video is only shown on the part that is not covered.
   4.- If window 1 (video) is not completely covered by window 2 (app)
 on the bottom and on the left simultaneously, the video is shown on
 the parts not covered and also on the parts of window 2 (app) which
 are painted using the color key of the video overlay.

  For my project, I need to reproduce the behaviour of case 4 in the
 situation of case 1 but I don't know how to do it. I have tried it
 with and without a window manager with the same result. I don't know
 very much about the X server but the problem seems to be that when it
 needs to update a window, it repaints the minimum rectangle which
 cover all pixels that need to be updated so it only shows the video in
 case 4 where the minimum rectangle is the entire window 1 (because of
 the parts not covered). So my question is, is there a solution for
 this problem? Can I tell the X server that some windows must be always
 completely repainted? Is there any other way for managing the video
 overlay that avoids this problem?

The problem is that this is not a valid use of the overlay.  Among other
things, you can't assume that the driver really uses a hardware overlay
rather than just faking it.  I suspect what's happening here is that the server is
clipping the rendering against the occluding window, and simply skips it
for the parts that it thinks are not visible.  The X server doesn't know
that the hardware would let the video show through into parts of the
occluding window.

If you want to overlay stuff on top of the video, you'll need to either
render it into the original video stream before sending it to Xv or use
something that explicitly supports sub-pictures, like VDPAU or XvMC.

-- Aaron


Re: State of Zaphod dual screen support

2010-03-08 Thread Aaron Plattner
On Mon, Mar 08, 2010 at 08:38:04AM -0800, David Mohr wrote:
 On Wed, Mar 3, 2010 at 1:24 PM, Aaron Plattner aplatt...@nvidia.com wrote:
  On Tue, 2010-03-02 at 22:24 -0800, David Mohr wrote:
  On Mon, Mar 1, 2010 at 6:07 PM, David Mohr damaili...@mcbf.net wrote:
   On Mon, Mar 1, 2010 at 3:38 PM, Peter Hutterer 
   peter.hutte...@who-t.net wrote:
   On Mon, Mar 01, 2010 at 12:41:02AM -0700, David Mohr wrote:
   On Sun, Feb 28, 2010 at 11:37 PM, Peter Hutterer
   peter.hutte...@who-t.net wrote:
On Sun, Feb 28, 2010 at 11:29:12PM -0700, David Mohr wrote:
I'm part of the minority who currently uses a Zaphod-style dual monitor
setup with separate X screens for every monitor. When I recently
upgraded from 7.4 to 7.5, some utilities I adopted[1], which
manipulate the mouse cursor, started malfunctioning. My two X screens
are set up to be apart so that the mouse does not pass between them,
and I use my utilities to move the mouse between the two screens. But
with 7.5 every now and then a condition is triggered where the mouse
cursor will just continually jump from screen to screen, keeping 
the X
server at 100% CPU. I cannot even shut it down using
CTRL-ALT-Backspace.
   
I've noticed comments in other threads on this mailing list that
Zaphod mode is not really supported any more (for completeness' 
sake,
I'm using the binary Nvidia drivers). So my question is, is there
value in trying to track down the bug in Xorg which causes the mouse
to jump back and forth?
   
yes. I've seen this myself and I have not yet identified the issue. 
it's a
server bug and unrelated to the binary driver. If you can help track 
this
issue down, it would be much appreciated.
  
   Ok. Unfortunately I have not been able to find reliable conditions for
   triggering the bug. I'll try again and see what I can find.
  
   i found using a wacom tablet with a xinerama setup and then switching back
   and forth triggers it eventually. the problem is the eventually bit...
  
   Yes, it's similar for me. One of the tools I use switches the mouse
   over when it hits the edge of the screen, so it's warping the pointer
   relatively often. I can't reproduce the problem reliably, but if I
   keep going back and forth it doesn't take very long to trigger it.
  
   Is there any way to get good information out of the running X instance
   once the bug has been triggered? I can only think of sending a signal
   to get a core dump, but then I'm not sure how much useful information
   that would contain.
  
   once it happens, gdb in and single-stepping may be the only approach. a
   backtrace would be great already, just to make sure if you're seeing the
   same problem as I am.
  
   Ugh. Here the trouble begins. When I attach to the process with gdb,
   it tells me it's in nvidia_drv.so, which of course doesn't have
   debugging symbols. So I can't get a useful backtrace or start to
   single step.
 
  I tried it a second time and again was only able to break in
  nvidia_drv.so. I'm wondering if I installed all the right debugging
  packages. I use debian, so I installed xserver-xorg-core-dbg. Is that
  sufficient?
 
  I don't have much experience with how gdb behaves if there are no
  debugging symbols available in _part_ of the program. Could it be that
  I can inspect the X server by setting a breakpoint somewhere and then
  continuing?
  If so, what would be a good place to put a breakpoint (I have no clue
  about X internals)?
 
  Since the cursor is actively changing, you could try putting a
  breakpoint on xf86SetCursor.
 
 Thanks Aaron, the xf86SetCursor suggestion worked out. Unfortunately I
 must admit I'm not sure how to proceed, since the execution path
 constantly leads into the closed nvidia_drv module... Here is an
 excerpt from my gdb session, maybe this is some help for Peter to
 decide whether it's the same bug or not.
 
 (gdb) break xf86SetCursor
 Breakpoint 1 at 0x8185b64: file
 ../../../../hw/xfree86/ramdac/xf86HWCurs.c, line 115.
 (gdb) cont
 Continuing.
 
 Breakpoint 1, xf86SetCursor (pScreen=0x925a730, pCurs=0x0, x=0, y=0)
 at ../../../../hw/xfree86/ramdac/xf86HWCurs.c:115
 115 {
 (gdb) n
 116 xf86CursorScreenPtr ScreenPriv =
 (xf86CursorScreenPtr)dixLookupPrivate(
 (gdb) n
 121 if (pCurs == NullCursor) {
 (gdb)
 116 xf86CursorScreenPtr ScreenPriv =
 (xf86CursorScreenPtr)dixLookupPrivate(
 (gdb)
 118 xf86CursorInfoPtr infoPtr = ScreenPriv->CursorInfoPtr;
 (gdb)
 121 if (pCurs == NullCursor) {
 (gdb)
 122 (*infoPtr->HideCursor)(infoPtr->pScrn);
 (gdb)
 0xb5ffb8f0 in ?? () from /usr/lib/xorg/modules/drivers/nvidia_drv.so
 (gdb) n
 Cannot find bounds of current function
 (gdb) cont
 Continuing.
 
 Breakpoint 1, xf86SetCursor (pScreen=0x922ba58, pCurs=0x9653e38, x=7, y=409)
 at ../../../../hw/xfree86/ramdac/xf86HWCurs.c:115
 115 {
 (gdb) break xf86HWCurs.c

[ANNOUNCE] xf86-video-nv 2.1.17

2010-03-08 Thread Aaron Plattner

This release of the basic NVIDIA driver adds support for RandR 1.2's new-style
gamma control, which also fixes a problem on xserver 1.7 where the screen may be
black or strange colors.

It also adds support for MCP7x-based motherboard GPUs.

- -- Aaron


Aaron Plattner (6):
  Bug #26612: Separate LUTs per output.
  G80: Log unrecognized outputs
  Bug #19545: Add support for MCP7x-based integrated GPUs.
  G80: Cast register reads before shifting them to avoid truncation to 32 
bits.
  More products
  nv 2.1.17

Alan Coopersmith (1):
  Update Sun license notices to current X.Org standard form

Gaetan Nadon (1):
  configure.ac: remove unused sdkdir=$(pkg-config...) statement

git tag: xf86-video-nv-2.1.17

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.17.tar.bz2
MD5: 4401c7b956e60a6d7de68ca6a8ec05d0  xf86-video-nv-2.1.17.tar.bz2
SHA1: 9f165c085e6420470191a544404066524f2f1c61  xf86-video-nv-2.1.17.tar.bz2

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.17.tar.gz
MD5: f3da828bffaa43daea9cf52b1f08b821  xf86-video-nv-2.1.17.tar.gz
SHA1: b5454cb3e64aa93d36f13ae5f0a489ec4e1fd33b  xf86-video-nv-2.1.17.tar.gz
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


[ANNOUNCE] libvdpau 0.4

2010-01-28 Thread Aaron Plattner

I'm pleased to announce the release of libvdpau 0.4.  In addition to a
couple of documentation updates, this release adds infrastructure for
libvdpau to query the VDPAU back-end driver name from the X server using
version 1.2 of the DRI2 extension.

Support for the new DRI2 protocol is on track to be in the xserver 1.8
series.  This protocol will allow libvdpau to load the appropriate driver
back-end on a per-X-screen basis, which in turn will allow VDPAU to operate
correctly on X servers where different X screens are driven by cards from
different vendors that have different VDPAU implementations.  To enable
this support in libvdpau, you must install dri2proto version 2.2 or higher.
Otherwise, it will fall back to the default of nvidia.

- -- Aaron



Aaron Plattner (4):
  $(docdir) apparently requires autoconf 2.60
  Query DRI2 for the driver name.
  Update the COPYING copyright date to include recent changes
  Bump version to 0.4

Stephen Warren (3):
  Documentation enhancements for Uoti from ffmpeg.
  YV12 documentation fix.
  trace: Fix a picture info bracket mismatch.


git://anongit.freedesktop.org/~aplattner/libvdpau
http://cgit.freedesktop.org/~aplattner/libvdpau

git tag: libvdpau-0.4

http://people.freedesktop.org/~aplattner/vdpau/libvdpau-0.4.tar.gz
MD5: 06da6f81ad37708b33a20ed177a44d81  libvdpau-0.4.tar.gz
SHA1: 3e0304bca10fdcd0ca1f9947612c30426db78141  libvdpau-0.4.tar.gz
___
xorg-announce mailing list
xorg-announce@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg-announce


Re: please help - MIT-SHM extension XShmCreatePixmap issue

2010-01-15 Thread Aaron Plattner
On Fri, Jan 15, 2010 at 11:02:13AM -0800, mark wrote:
 Hello all,
 
 I've got a question regarding an error that I'm getting trying to
 communicate with the X server.  It seems my XShmCreatePixmap call is
 returning:
 
 X Error of failed request:  BadImplementation (server does not implement 
 operation)
 
  I'm using a piece of software written to display a JPEG in an X window.  The
  software basically just decodes a JPEG, then puts it into shared memory
  and displays it on the screen every X seconds without ripping/flashing
  (some double buffering done).  The software has been written to use the
  MIT-SHM extension.  The software runs fine compiled and executed on a
  different device running the 1.6.0 version of the Xorg server.  I'm
  running it on version 1.7.3 (I've tried 1.6.0 and 1.6.2 and get the same
  results);  xdpyinfo returns info stating that the server supports pixmaps
 (see below) and that MIT-SHM support is an available extension.  The X
 log file is attached below. My error handling routines are returning the
 below attached info.  I suspect that when syncing after the
 XShmCreatePixmap call was made the server rejects the call.  As a result,
 the pixmap drawable isn't created so when XShmPutImage is called this
 error occurs:
 
 X Error of failed request:  BadDrawable (invalid Pixmap or Window
 parameter)
 
  I've attached a snippet of the code where the error seems to occur. This
  software has worked perfectly fine for years on the 6.8.0 and XFree86 4.3.0
  versions of X.  Not sure what to do or try here or where to continue
 looking.  Any help would be greatly appreciated.  Thank you very much for
 your time so far.

SHM pixmaps are an optional part of the MIT-SHM extension, and are disabled
in most recent drivers / acceleration architectures because they cause
major performance headaches.  You need to query the extension with
XShmQueryVersion and look at the returned 'sharedPixmaps' boolean:

   XShmQueryVersion returns the version numbers of the extension
   implementation. Shared memory  pixmaps  are  supported if the
   pixmaps argument returns true.

You can do a similar query with 'xdpyinfo -ext MIT-SHM'.  Look at the bottom
of the output:

   MIT-SHM version 1.1 opcode: 142, base event: 98, base error: 159
 shared pixmaps: no

If SHM pixmaps are not supported, your application needs to use something
else, such as XShmPutImage.

 Do any of you know where I would get a list of all the Major and Minor
 opcodes for X requests?

For the major opcodes, xdpyinfo -queryExtensions.  For the minor ones, look
at the protocol headers installed in /usr/include/X11/extensions/* by the
*proto packages.

Hope that helps!

Sincerely,
Aaron


[ANNOUNCE] xf86-video-nv 2.1.16

2009-12-15 Thread Aaron Plattner

This release adds IDs for new products.  It also contains a fix from Adam
Jackson to not crash on xserver 1.7, but please note that it still doesn't work
correctly on that release because the server never calls the old-style colormap
setup code.

New products added:

0x05E6 - GeForce GTX 275
0x05EB - GeForce GTX 295
0x0607 - GeForce GTS 240
0x060A - GeForce GTX 280M
0x0618 - GeForce GTX 260M
0x061D - Quadro FX 2800M
0x061F - Quadro FX 3800M
0x0644 - GeForce 9500 GS
0x0652 - GeForce GT 130M
0x065A - Quadro FX 1700M
0x06EC - GeForce G 105M
0x06EF - GeForce G 103M
0x0A20 - GeForce GT 220
0x0A23 - GeForce 210
0x0A2A - GeForce GT 230M
0x0A34 - GeForce GT 240M
0x0A60 - GeForce G210
0x0A62 - GeForce 205
0x0A63 - GeForce 310
0x0A65 - GeForce 210
0x0A66 - GeForce 310
0x0A74 - GeForce G210M
0x0A78 - Quadro FX 380 LP
0x0CA3 - GeForce GT 240
0x0CA8 - GeForce GTS 260M
0x0CA9 - GeForce GTS 250M

- -- Aaron


Aaron Plattner (5):
  New boards
  More new boards
  Remove an unnecessary (and typo'd) gitignore comment
  New board names
  nv 2.1.16

Adam Jackson (1):
  g80: Add a no-op gamma hook so we don't crash on 1.7 servers

Gaetan Nadon (6):
  .gitignore: use common defaults with custom section # 24239
  .gitignore: use common defaults with custom section # 24239
  Several driver modules do not have a ChangeLog target in Makefile.am 
#23814
  INSTALL, NEWS, README or AUTHORS files are missing/incorrect #24206
  INSTALL, NEWS, README or AUTHORS files are missing/incorrect #24206
  Makefile.am: add ChangeLog and INSTALL on MAINTAINERCLEANFILES

git tag: xf86-video-nv-2.1.16

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.16.tar.bz2
MD5: fb02d5506e35054348d1c2b2c924530d  xf86-video-nv-2.1.16.tar.bz2
SHA1: fce8c42268f1b60c8aece2adb35d780f02300fe8  xf86-video-nv-2.1.16.tar.bz2

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.16.tar.gz
MD5: 9a047e20deb26b561f5ffc63c87d8017  xf86-video-nv-2.1.16.tar.gz
SHA1: 81d7b279fd8308dd1f5dc58568bb66cc02bffeb5  xf86-video-nv-2.1.16.tar.gz




Re: PCIe and PCI dual cards?

2009-11-29 Thread Aaron Plattner
On Sat, Nov 28, 2009 at 02:35:19AM -0800, Timothy S. Nelson wrote:
 On Thu, 26 Nov 2009, Jaguar Finch wrote:
 
  I'm trying to get a PCIe and a PCI video card to work together for a 
  multi-monitor setup but
  I'm getting (EE) No devices detected.  Fatal server error:  no screens 
  found.
  
  This is under Ubuntu 9.04 and both are NVIDIA cards. According to the log, 
  both cards are
 
 [snip]
 
  X.Org X Server 1.6.0
  Release Date: 2009-2-25
 
   Here's your problem.  Dual screen card support was broken for quite a 
 while.  Basically, 1.6.x doesn't support them, except by accident.  I'm a 

Actually, for drivers that don't require I/O access or legacy VGA, it does
work if you specify BusIDs.  Jaguar's problem was that he was using the
lspci style of BusID rather than the X style (i.e. bus:dev.func rather
than bus:dev:func).  We really ought to agree on a common BusID
formatting style and then be consistent about it.

 Fedora user, and it completely failed for me for Fedora versions maybe 9-11. 
 Xorg 1.7 (used in Fedora 12) is supposed to support multiple graphics chips. 
 So here's what I see your options to be:
 1.Upgrade to xorg 1.7 and see if it helps you (you may have to upgrade
   your whole OS to get the xorg upgrade)
 2.Downgrade to one of the versions that used to work (may not support
   modern screen cards).
 3.Get a screen card that supports dual monitors on one chip.  Be warned
   that the monitors must be driven by one chip on the card; I have a
   screen card that supports 4 monitors, but on Fedora 11 I can only use
   2, because there are two chips on the card, each supporting two
   monitors, and xorg 1.6 only supports one chip at a time (except under
   special circumstances).
 
   HTH,
 
 
 -
 | Name: Tim Nelson | Because the Creator is,|
 | E-mail: wayl...@wayland.id.au| I am   |
 -
 



[ANNOUNCE] libvdpau 0.3

2009-11-20 Thread Aaron Plattner

This version of libvdpau moves the driver install directory to a configurable
place, which is /usr/lib/vdpau by default.  It also adds versioning to the
drivers as well as libvdpau_trace.  This should address the concerns that some
distributions had about where the libraries are installed.

It also includes some documentation updates and code to build the
documentation with Doxygen.

- -- Aaron



Aaron Plattner (4):
  Build documentation.
  Fix distcheck.
  Move VDPAU drivers into their own module directory.
  Bump version to 0.3

Stephen Warren (2):
  Documentation updates.
  Update VDPAU_VERSION and add VDPAU_INTERFACE_VERSION

git://anongit.freedesktop.org/~aplattner/libvdpau
http://cgit.freedesktop.org/~aplattner/libvdpau

git tag: libvdpau-0.3

http://people.freedesktop.org/~aplattner/vdpau/libvdpau-0.3.tar.gz
MD5: 2ae5b15d6ede1c96f0fa0aefcc573297  libvdpau-0.3.tar.gz
SHA1: e32980329c84dbb90e2954e4a38051618f053ef7  libvdpau-0.3.tar.gz


[ANNOUNCE] libvdpau 0.2

2009-09-17 Thread Aaron Plattner

I'm pleased to announce the first release of libvdpau as a standalone
library.  This package contains the libvdpau wrapper library and the
libvdpau_trace debugging library, along with the header files needed
to build VDPAU applications.  To actually use a VDPAU device, you need
a vendor-specific implementation library.  Currently, this is always
libvdpau_nvidia.  You can override the driver name by setting the
VDPAU_DRIVER environment variable.

These files are also currently shipped as part of the NVIDIA driver
package.  We're going to continue to do that for now, but the
long-term goal is to ship only libvdpau_nvidia in that package.

git://anongit.freedesktop.org/~aplattner/libvdpau
http://cgit.freedesktop.org/~aplattner/libvdpau

git tag: libvdpau-0.2

http://people.freedesktop.org/~aplattner/vdpau/libvdpau-0.2.tar.gz
MD5: e0641a208839eb88fe7c01ee5af83735  libvdpau-0.2.tar.gz
SHA1: 9d290f2baea915beb8d395f96246608716dbdf95  libvdpau-0.2.tar.gz


Re: [ANNOUNCE] libvdpau 0.2

2009-09-17 Thread Aaron Plattner
On Thu, Sep 17, 2009 at 02:48:24PM -0700, Dave Airlie wrote:
 On Fri, Sep 18, 2009 at 4:58 AM, Aaron Plattner aplatt...@nvidia.com wrote:
  I'm pleased to announce the first release of libvdpau as a standalone
  library.  This package contains the libvdpau wrapper library and the
  libvdpau_trace debugging library, along with the header files needed
  to build VDPAU applications.  To actually use a VDPAU device, you need
  a vendor-specific implementation library.  Currently, this is always
  libvdpau_nvidia.  You can override the driver name by setting the
  VDPAU_DRIVER environment variable.
 
 Are you planning some sort of protocol for this, like DRI or XvMC has?
 
 having the server specify the driver rather than the client guessing,

That's the plan.  I also want to add indirect rendering at some point.  I
haven't decided yet whether I want to tackle indirect and driver selection
in the same initial version of the protocol.

 otherwise we just end up with the original XvMC unusable single vendor
 solution.

Yeah, nobody wants that.  At least with libvdpau, you don't get
vendor-specific code statically linked into your applications -- apps that
work with libvdpau_nvidia now will magically work with other vendor
backends in the future.  vdp_device_create_x11 takes a Display* and a
screen index and returns a get_proc_address pointer specific to that
device.

  These files are also currently shipped as part of the NVIDIA driver
  package.  We're going to continue to do that for now, but the
  long-term goal is to ship only libvdpau_nvidia in that package.
 
  git://anongit.freedesktop.org/~aplattner/libvdpau
  http://cgit.freedesktop.org/~aplattner/libvdpau
 
  git tag: libvdpau-0.2
 
  http://people.freedesktop.org/~aplattner/vdpau/libvdpau-0.2.tar.gz
  MD5: e0641a208839eb88fe7c01ee5af83735  libvdpau-0.2.tar.gz
  SHA1: 9d290f2baea915beb8d395f96246608716dbdf95  libvdpau-0.2.tar.gz


Re: latest git xserver fails to startx on x86_64

2009-08-09 Thread Aaron Plattner
On Fri, Aug 07, 2009 at 11:28:57AM -0700, Justin P. Mattock wrote:
 Dave Airlie wrote:
  On Fri, Aug 7, 2009 at 3:52 PM, Justin P.
  Mattockjustinmatt...@gmail.com  wrote:
 
  Dave Airlie wrote:
 
  On Fri, Aug 7, 2009 at 2:37 PM, Justin P.
  Mattockjustinmatt...@gmail.comwrote:
 
  with a quick glance, is this type of error due to something in the
  xserver, or is this something that I did with the build of either the
  xserver or nvidia
 
  dlopen: /usr/lib64/xorg/modules/drivers/nvidia_drv.so: undefined symbol:
  resVgaShared
 
   you seem to have accidentally loaded the binary driver.
 
  Dave.
 
 
  yeah, I was trying to determine if this was caused by a change in the
  xserver, or a problem with the build of nvidia.  (creating a pure64 is a
  bit of a pain)
 
   From what it seems xf86-video-vesa activates the xserver.
 
  I think what I'll do is downgrade everything with the xserver(since nvidia
  seems to have an abi v5 and the xserver is at abi 7) just to be safe.

It's 6, not 7.  You might be looking at the XINPUT ABI version.

  nvidia don't support master X servers usually until they are released and
  shipping in Ubuntu.

I should jump in here and mention that while we don't *officially* support
master X servers until the ABI is declared stable, I do try to keep support
relatively up to date in the shipping drivers.  I bake the git commit ID of the
snapshot the driver was built against into the driver, so if you start X with
-ignoreABI and -logverbose 5, it will (if it loads at all) tell you which
version of the prerelease SDK it was built against.

We wait for whoever the release manager happens to be to declare the ABI stable
before flipping the switch in the driver to make it not require -ignoreABI.
It's not when any particular distro ships that determines it.

Support for Dave's RAC removal change should be in the next beta release.  I
know it would be nice if you could just use the driver with any custom-built X
server, but hopefully this is the next best thing.

-- Aaron


Re: Support for 30bit color?

2009-07-20 Thread Aaron Plattner
On Mon, Jul 20, 2009 at 04:53:42PM -0700, Andrew Theurer wrote:
 Hello,
 
 I am trying to figure out if 30bit color (10R/10G/10B) is possible on X.
 I see a number of video cards claiming support for 30bit color, but I am
 not so sure it's supported in X.  Are there any examples out there of
 users using 30bit color?  Any particular adapters that play nice?  Anyone
 also doing this with HDMI?

The NVIDIA driver supports X screens with depth 30 color on Quadro GPUs.
For display, some GPUs and some DisplayPort monitors support full-precision
depth 30 color.  VGA also works, but involves a lossy digital-to-analog
conversion.  Other display connections (such as DVI and HDMI) will still
work but will be dithered to 24-bit color.

-- Aaron


Re: status with the latest git xserver

2009-07-19 Thread Aaron Plattner
On Sat, Jul 18, 2009 at 12:07:17AM -0700, Justin P. Mattock wrote:
 This is a status from over here:
 
 macbook pro ati chipset works perfectly
 with the latest git xserver, minor issue due to me
 getting a bit confused but nothing out of the ordinary.
 
 As for my other machine the imac nvidia chipset, well
 a bit of an ABI issue:
 
 (II) Loading extension DRI2
 (II) LoadModule: nvidia
 (II) Loading /usr/lib/xorg/modules/drivers/nvidia_drv.so
 (II) Module nvidia: vendor=NVIDIA Corporation
  compiled for 4.0.2, module version = 1.0.0
  Module class: X.Org Video Driver
  WARNING WARNING WARNING WARNING 
 This server has a video driver ABI version of 6.0 that is not
 supported by this NVIDIA driver.  Please check
 http://www.nvidia.com/ for driver updates or downgrade to an X
 server with a supported driver ABI.
 =
 (WW) NVIDIA: The driver will continue to load, but may behave strangely.
 (WW) NVIDIA: This server has an unsupported input driver ABI version 
 (have 7.0, need  5.0).  The driver will continue to load, but may 
 behave strangely.
 (II) LoadModule: kbd
 
 solution was (after googling a bit):
   startx -ignoreABI,

startx has this dumb behavior where it will silently ignore any arguments it
doesn't recognize.  To actually pass that option on to the X server, you have to
do this:

startx -- -ignoreABI

Yes, that's a space, two dashes, another space, and then another dash.

 but for some reason this didn't do anything.
 I had to use in xorg.conf
 
 Section ServerFlags
  Option IgnoreABI true
 EndSection
 
 Hopefully we don't have to wait too long for nvidia...

Does it work?  We have a policy of trying to support an ABI but not actually
marking it supported without -ignoreABI until after it has been declared frozen
by the X.org release manager.

 Anyways nice work with the xserver xorg!!


[ANNOUNCE] xf86-video-nv 2.1.14

2009-07-02 Thread Aaron Plattner
Aaron Plattner (6):
  Fix a dumb typo in the chip name entry for the GeForce 9800 GTX+
  More chip names.
  New chip support.
  Fix modesets on certain GPUs that were broken by the previous commit.
  More new chips
  Bump to 2.1.14

Adam Jackson (1):
  Remove useless loader symbol lists.

Yinan Shen (1):
  G80: Fix incorrect I2C port access for ports above 3.

git tag: xf86-video-nv-2.1.14

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.14.tar.bz2
MD5: 118637515155624f8846c481a637c8c2  xf86-video-nv-2.1.14.tar.bz2
SHA1: eda0e94c7a86c7cbc51a9a5de79e71813f28d045  xf86-video-nv-2.1.14.tar.bz2

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.14.tar.gz
MD5: f6af05380cb3617c34b18753471165ef  xf86-video-nv-2.1.14.tar.gz
SHA1: 5675a9cbd4f95ee6898cdb887a3f0263c7c16ba8  xf86-video-nv-2.1.14.tar.gz



Re: Xorg resolution with VGA Switch

2009-05-27 Thread Aaron Plattner
On Wed, May 27, 2009 at 07:49:18AM -0700, Aurélien PROVIN wrote:
 Hi,
 
 I bought a VGA switch to switch some devices on one monitor.
 But resolution of my X session is limited to 1024x768 instead of 1920x1200.
 If I hotplug the switch when the X session is started in 1920x1200, it works.

It sounds like the switch is preventing the hardware from reading the
EDID from the monitor.  Check /var/log/Xorg.0.log for warnings to that
effect.

If that is the case, then your best bet might be to save the EDID to a
file by clicking the Acquire EDID button in nvidia-settings when the
monitor is connected directly, then feed it back to the driver by
using the CustomEDID option in /etc/X11/xorg.conf.  See the README
for more details on how to do that.
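For reference, the CustomEDID option takes a display name and a file path; a hypothetical Device section might look like the following (the display name "DFP-0" and the file path here are examples, not values from this thread — check the README for the exact syntax for your setup):

    Section "Device"
        Identifier "nvidia"
        Driver     "nvidia"
        Option     "CustomEDID" "DFP-0:/etc/X11/edid.bin"
    EndSection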

Please direct further questions to linux-b...@nvidia.com.

Hope that helps!
-- Aaron


Re: Accessing an openGL viewport's pixels on multiple X servers

2009-05-25 Thread Aaron Plattner
On Mon, May 25, 2009 at 07:15:33AM -0700, Jerome Guilmette wrote:
 Hi everyone,
 
 I hope this is the right list to send my question to, for it is a rather
 specific matter. All my tests were done using an NVIDIA GeForce 8600M, driver
 version 180.29, under Fedora 10.
 
 Here is my issue: I have a program that takes a few screenshots of an
 application (window) and saves it into a file.  I discovered that, while
 taking a snapshot of an application running an OpenGL viewport, e.g.
 glxgears, and switching to another X server, I was not able to save the images
 in their entirety: the OpenGL viewport wasn't displayed. So the raw content
 of an OpenGL viewport isn't available while we are on a different X server from
 the one the program is running on. Would it mean that only one OpenGL
 context per X server is possible?

I'm not sure how you concluded that only one context per X server is possible
from the fact that you can't access a window's pixels when the server is not on
the active virtual terminal.  You can definitely have multiple contexts per X
server.

 In a multiple X sessions context, is there a way to have access to the
 rendered openGL viewport pixels while being currently on a different X server
 using standard X functions? In any case, what exactly are the mechanisms in
 place?

No, you can't do that.  When you VT-switch away from the current X server, all
windows are clipped so that no rendering occurs to them, and OpenGL processing
is suspended.  The non-active X server is not supposed to touch the hardware at
all, to allow the active X server to use it.

 Any inputs would be very much appreciated, might it be a link to more
 documentation or a direct answer.


Re: How to make xorg prefer nvidia over nv driver in a xorg.conf less configuration?

2009-05-14 Thread Aaron Plattner
On Wed, May 13, 2009 at 11:58:44PM -0700, Francesco Pretto wrote:
 Stefan Dirsch sndirsch at suse.de writes:
  See discussion on 
  
http://lists.freedesktop.org/archives/xorg/2008-July/037098.html
  
  and the following commits in git master of xserver:
  
  commit 0dbfe0ebc69c307c0626ba824de15d03de1251d4
 
 
 
 I'm sorry, but this is not a very useful answer. You are basically saying the
 driver loading priority of Xorg is hardcoded and if you want to change it,
 recompile the X server. I've already read those discussions in the past: I DON'T
 want Xorg to ship by default with nvidia set at a higher priority, and I
 perfectly agree with reverting that commit. I just want to learn how to
 configure my system to accomplish what I've asked. There are 2 considerations:
 
 1) You've pointed out that the driver loading priority is hardcoded in Xorg, so
 it can't be changed by normal users. Maybe HAL fdi policy files can be used to
 accommodate my task?
 2) If there's no configurable option to solve this, this would definitely be a
 lacking feature: Xorg can't prefer one driver over another in an xorg.conf-less
 configuration. As I've explained, this would be very useful in my setup, where
 I basically keep swapping video cards depending on whether I'm running
 native or virtualized.

You can sort of do this, but you do need an xorg.conf.  For example,

   Section "ServerLayout"
       Identifier "default layout"
       Screen 0 "nvidia"
       Screen 1 "vesa"
   EndSection

   Section "Device"
       Identifier "nvidia"
       Driver "nvidia"
   EndSection

   Section "Device"
       Identifier "vesa"
       Driver "vesa"
   EndSection

   Section "Screen"
       Identifier "nvidia"
       Device "nvidia"
   EndSection

   Section "Screen"
       Identifier "vesa"
       Device "vesa"
   EndSection

Whichever screen corresponds to the hardware that's not present should
fail to initialize.

With this setup, starting X with the nvidia driver set up does this:

   (II) LoadModule: nvidia
   (II) Loading /usr/lib/xorg/modules/drivers//nvidia_drv.so
   (II) Module nvidia: vendor=NVIDIA Corporation
   compiled for 4.0.2, module version = 1.0.0
   Module class: X.Org Video Driver
   (II) LoadModule: vesa
   (II) Loading /usr/lib/xorg/modules/drivers//vesa_drv.so
   (II) Module vesa: vendor=X.Org Foundation
   compiled for 1.5.99.902, module version = 1.3.0
   Module class: X.Org Video Driver
   ABI class: X.Org Video Driver, version 5.0
   [...]
   (II) NVIDIA(0): NVIDIA GPU GeForce 8600M GT (G84) at PCI:1:0:0 (GPU-0)
   [...]
   (II) Unloading /usr/lib/xorg/modules/drivers//vesa_drv.so

If I sabotage the nvidia driver, it does this instead:

   (EE) NVIDIA: Failed to load the NVIDIA kernel module. Please check your
   (EE) NVIDIA: system's kernel log for additional error messages.
   (II) UnloadModule: nvidia
   (II) Unloading /usr/lib/xorg/modules/drivers//nvidia_drv.so
   (EE) Failed to load module nvidia (module-specific error, 0)
   [...]
   (II) VESA(0): initializing int10
   (II) VESA(0): Primary V_BIOS segment is: 0xc000
   (II) VESA(0): VESA BIOS detected
   (II) VESA(0): VESA VBE Version 3.0
   (II) VESA(0): VESA VBE Total Mem: 14336 kB
   (II) VESA(0): VESA VBE OEM: NVIDIA
   (II) VESA(0): VESA VBE OEM Software Rev: 96.132
   (II) VESA(0): VESA VBE OEM Vendor: NVIDIA Corporation
   (II) VESA(0): VESA VBE OEM Product: GeForce 8600M GT
   (II) VESA(0): VESA VBE OEM Product Rev: Chip Rev

You ought to be able to do something similar with your VM device.  This is
essentially what the server autoconfig would have done for you if that patch
hadn't been reverted, you just have to do it manually now.

Hope that helps!

-- Aaron


Re: Changing Xorg-Configuration on the fly

2009-04-09 Thread Aaron Plattner
On Wed, Apr 08, 2009 at 09:17:52AM -0700, Leif Bergerhoff wrote:
 Remi Cardona wrote:
  Unless you're using the closed nVidia driver, the xrandr tool will do
  _exactly_ what you're looking for.
  Take a look at man xrandr, especially the --pos option.

I don't think this is true.  The --pos option controls the positioning
of a CRTC within a single framebuffer / X screen.  It sounds like
you're using multiple X screens and want to change their logical
layout.  I don't know of any tool that allows that, and I don't know
if it's even possible.

 I have already read the manpage and xrandr sounds good.
 The nVidia driver doesn't support all the xrandr options, right?

 So I'll have a more detailled look on xrandr once again, but I think I need
 another way to access the xserver.

You might need to extend the Xinerama protocol.


Re: XVideo support in xf86-video-nv / G90

2009-02-23 Thread Aaron Plattner
On Sun, Feb 22, 2009 at 09:26:11AM -0800, Henry-Nicolas Tourneur wrote:
 I would like to know if there are any plans to add XVideo support for G90
 cards to the nvidia free 2D driver.

I'm afraid not.  XV on those GPUs requires the 3D engine, and setting that
up is too complicated to be within the scope of that driver.

-- Aaron
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Commit 56f6fb [randr: Avoid re-querying...] breaks Twinview dynamic display configuration

2009-02-04 Thread Aaron Plattner
This commit causes a regression in dynamic display configuration with the
NVIDIA driver, where trying to switch to a new configuration with
nvidia-settings fails.  The problem is that nvidia-settings adds the new
mode to the list, then queries the mode pool with RRGetScreenInfo.  This
now fails to pick up the new mode and nvidia-settings can't find it,
resulting in a failed mode switch.

https://bugs.launchpad.net/ubuntu/+source/nvidia-settings/+bug/325115
http://www.nvnews.net/vbulletin/showthread.php?t=127359

I'm not sure exactly what the right fix is to avoid reintroducing bug
#19037, but the DIX definitely needs to query the DDX to see if the mode
pool changed.

-- Aaron


commit 56f6fb8c8652c85e522e42557f8969987069076b
Author: Eric Anholt e...@anholt.net
Date:   Fri Jan 30 19:06:17 2009 -0800

randr: Avoid re-querying the configuration on everything but 
GetScreenResources.

The new path should only re-query on the other requests when we haven't
gathered the information from the DDX yet (such as with a non-RandR 1.2 
DDX).

Bug #19037.
(cherry picked from commit 317f2b4a9fe4b606975711bc332166a82db5087d)

:100644 100644 b5cebdc... 9c9b7c0... M  randr/randrstr.h
:100644 100644 38314de... 12b9a4a... M  randr/rrinfo.c
:100644 100644 95662c9... da633b2... M  randr/rrscreen.c
:100644 100644 7f9a798... 36135c6... M  randr/rrxinerama.c


Re: [PATCH] dix: die if we can't activate or init the VCP/VCK.

2009-02-03 Thread Aaron Plattner
On Tue, Feb 03, 2009 at 04:30:45PM -0800, Peter Hutterer wrote:
 If we have a busted xkb setup, the XKB initialization on the core devices
 fails and leaves us with dev-key-xkbInfo == NULL. This in turn causes
 segfaults lateron.
 
 Return BadValue when the XKB configuration for a master device failed, and if
 that happens for the VCP/VCK, die semi-gracefully.
 
 Reported by Aaron Plattner.
 
 Signed-off-by: Peter Hutterer peter.hutte...@who-t.net

Looks good to me:

(EE) XKB: Couldn't open rules file /X/share/X11/xkb/rules/evdev
XKB: Failed to compile keymap
Keyboard initialization failed. This could be a missing or incorrect setup 
of xkeyboard-config.

Fatal server error:
Failed to activate core devices.

Signed-off-by: Aaron Plattner aplatt...@nvidia.com

[resending with fixed To: header]


Re: general question: xorg Nvdriver

2009-01-28 Thread Aaron Plattner
On Wed, Jan 28, 2009 at 11:12:23AM -0800, Florian Lier wrote:
 Hey all,
 
 I'm trying to get the current (X.Org X Server 1.6.99.1) X running on 
 several test-systems for like 2 or 3 months now. (for mpx purposes)
 I tested several systems with ATI, INTEL and NVIDIA gcards...
 The most difficult system seems to be the one with NVIDIA cards.
 As far as I can interpret the backtrace there is always a problem with the nv 
 driver.
 
 Backtrace:
 0: /home/fl0/mpxcompiz/bin/Xorg(xorg_backtrace+0x3b) [0x80e829b]
 1: /home/fl0/mpxcompiz/bin/Xorg(xf86SigHandler+0x51) [0x809ccb1]
 2: [0xb7f78400]
 3: /home/fl0/mpxcompiz/bin/Xorg(xf86SetDesiredModes+0x27b) [0x80ab18b]
 4: /home/fl0/mpxcompiz/lib/xorg/modules/drivers//nv_drv.so [0xb7a6412c]
 5: /home/fl0/mpxcompiz/lib/xorg/modules/drivers//nv_drv.so [0xb7a64562]
 6: /home/fl0/mpxcompiz/bin/Xorg(AddScreen+0x19d) [0x80684ad]
 7: /home/fl0/mpxcompiz/bin/Xorg(InitOutput+0x23a) [0x808b12a]
 8: /home/fl0/mpxcompiz/bin/Xorg [0x8068ba1]
 9: /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe5) [0xb7b19685]
 10: /home/fl0/mpxcompiz/bin/Xorg [0x8068231]

The nv driver calls back into the X server to set the first mode by calling
xf86SetDesiredModes, which is crashing somewhere.  This sounds like a
regression in the X server.  I'd recommend getting a debug X server,
catching the crash in GDB, and then getting a backtrace.  It's possible
that the nv driver is doing something funky to the modepool that confuses
xf86SetDesiredModes, but your modes look pretty normal.

 Does any of you know a revision which doesn't get stuck on startup when
 using the nv driver?
 Please correct me if I'm wrong about the nv driver thing.
 
 I also check out the xorg tinderbox from time to time ... seems that the last 
 time the master branch compiled
 successfully was 2009-01-20.
 
 cheers, fl0


Re: [PATCH] Additional pci-id for the nv driver

2009-01-26 Thread Aaron Plattner
On Sat, Jan 24, 2009 at 04:09:47AM -0800, Alberto Milone wrote:
 On Saturday 24 January 2009 01:02:37 you wrote:
  On Fri, 2009-01-23 at 09:16 -0800, Alberto Milone wrote:
   Dear Aaron,
  
  
   I've noticed that the nv driver is not automatically chosen for my
   Geforce 7300 GT unless I don't specify the nv driver in the
   xorg.conf.
 
  Sorry, now that I look at it some more, something else must be wrong.
  The 0x2E2 device ID should have hit the NVGetPCIXpressChip case in
  NVPciProbe:
 
  const CARD32 id = ((dev->device_id & 0xfff0) == 0x00F0 ||
                     (dev->device_id & 0xfff0) == 0x02E0) ?
                    NVGetPCIXpressChip(dev) : dev->vendor_id << 16 |
                    dev->device_id;
 
  and come up with 0x0393. Can you please put a breakpoint there and see
  what went wrong?
 
  -- Aaron
 
 
 
 That case loop works well. I made the driver print a warning line right after 
 that point and this is the result:
 (WW) NV: Detected device 0x10de0393 (GeForce 7300 GT) at 0...@00:00:0
 
 
 The problem is that pci-id of my card (02E2) is not in 
 /usr/share/xserver-xorg/pci/nv.ids

Aha.  Where does this file come from?  It's not part of the driver package.
I assume it's generated somehow?

 Without my patch:
 :~$ grep 02E2 /usr/share/xserver-xorg/pci/*
 
 
 With my patch:
 :~$ grep 02E2 /usr/share/xserver-xorg/pci/*
 /usr/share/xserver-xorg/pci/nv.ids:10DE02E2
 
 
 
 :~$ lspci -n |grep 300
 01:00.0 0300: 10de:02e2 (rev a2)
 
 
 This is why, as you can see in autodetection.log X doesn't use nv for my 
 card when no driver is specified in the xorg.conf. In manualdetection.log you 
 can see what happens when I set the driver to nv manually.
 
 
 The NVGetPCIXpressChip case is used when the driver is being loaded but we 
 need to handle this before so that X knows which driver has to be used.

Why is it using PCI ID match tables in the first place, and why does
NVKnownChipsets matter?  The server already has an autoconfig mechanism
where it can fall back gracefully if the driver reports a given chip as
unsupported or it fails to initialize for whatever reason.  NVKnownChipsets
is just a translation table from PCI ID to product name.  The driver
supports more chipsets than what's in the table, including these bridged
devices where the real device ID has to be extracted from it.

Given that lots of stuff appears to be moving towards PCI ID match tables
even though it's not really a good idea, maybe it would be better to merge
NVPciIdMatchList and NVKnownChipsets, get rid of the mask-based matches in
NVIsSupported, and just flood all of the wildcard ranges with "Unknown GPU"
entries.

-- Aaron


Re: [PATCH] Additional pci-id for the nv driver

2009-01-23 Thread Aaron Plattner
On Fri, 2009-01-23 at 09:16 -0800, Alberto Milone wrote:
 Dear Aaron,
 

 I've noticed that the nv driver is not automatically chosen for my
 Geforce 7300 GT unless I don't specify the nv driver in the
 xorg.conf.

Sorry, now that I look at it some more, something else must be wrong.
The 0x2E2 device ID should have hit the NVGetPCIXpressChip case in
NVPciProbe:

const CARD32 id = ((dev->device_id & 0xfff0) == 0x00F0 ||
                   (dev->device_id & 0xfff0) == 0x02E0) ?
                  NVGetPCIXpressChip(dev) : dev->vendor_id << 16 |
                  dev->device_id;

and come up with 0x0393.  Can you please put a breakpoint there and see
what went wrong?

-- Aaron




Re: Very large resolutions

2009-01-06 Thread Aaron Plattner
On Tue, Jan 06, 2009 at 12:58:49PM -0800, Steve W wrote:
 I seem to remember a thread a while back where someone was talking about the 
 limits of the size of a combined screen that can be produced.
 
 I run a 6 screen setup with 3 video cards (all nvidia) under a OS from 
 redmond, and I've always wanted to upgrade.
 
 Currently the combined footprint is 3840x2048 from 6 identical 1280x1024 
 screens.
 
 Is this still an impossibility?

See chapter 14 in the README [1]:

   o Only the intersection of capabilities across all GPUs will be advertised.

 The maximum OpenGL viewport size depends on the hardware used, and is
 described by the following table. If an OpenGL window is larger than the
 maximum viewport, regions beyond the viewport will be blank.

 OpenGL Viewport Maximums in Xinerama

 GeForce GPUs before GeForce 8:  4096 x 4096 pixels
 GeForce 8 and newer GPUs:       8192 x 8192 pixels
 Quadro:                         as large as the Xinerama desktop

Also chapter 19:

19B. MAXIMUM RESOLUTIONS

The NVIDIA Accelerated Linux Graphics Driver and NVIDIA GPU-based
graphics cards support resolutions up to 8192x8192 pixels for the
GeForce 8 series and above, and up to 4096x4096 pixels for the GeForce
7 series and below, though the maximum resolution your system can
support is also limited by the amount of video memory (see USEFUL
FORMULAS for details) and the maximum supported resolution of your
display device (monitor/flat panel/television). Also note that while
use of a video overlay does not limit the maximum resolution or refresh
rate, video memory bandwidth used by a programmed mode does affect the
overlay quality.

Hope that helps!

-- Aaron

[1]: ftp://download.nvidia.com/XFree86/Linux-x86/180.18/README/index.html


Re: X server exported symbols broken

2008-12-11 Thread Aaron Plattner
On Thu, Dec 11, 2008 at 09:18:30AM -0800, Paulo César Pereira de Andrade wrote:
   I changed the script to check for the dirname of the header
 being processed also, instead of only checking if the path is
 relative.

Thanks, Paulo.  This fixed the xorg_symbols table on my NFS build setup.

-- Aaron


Re: Weird corruption with Xephyr

2008-12-11 Thread Aaron Plattner
On Thu, Dec 11, 2008 at 07:50:41AM -0800, Yan Seiner wrote:
 I'm seeing this rather weird corruption with Xephyr.  The system is 
 running Debian Lenny  with a custom 2.6.27.4 kernel.  I am running 
 Xephyr 1.5.3.  I've captured an image of this here:
 
 http://seiner.com/images/imga0018.jpg
 
 This happens after some time - can be minutes, hours or days.  The 
 screen just flickers and then wraps right a lot and down very slightly.  
 You can see it in the poor quality image above (sorry for the poor 
 quality image.  That was shot with a handheld video camera).  This has 
 happened on 2 of the 3 Xephyr servers so far.

This actually does sound like a bug in the driver to me.  Please run
nvidia-bug-report.sh after reproducing the problem and send the resulting
nvidia-bug-report.log file to linux-b...@nvidia.com.

-- Aaron


Re: [RFC] Xorg symbols that should not be public

2008-12-09 Thread Aaron Plattner
On Tue, Dec 09, 2008 at 02:46:15AM -0200, Paulo César Pereira de Andrade wrote:
   Hi Aaron,
 
   Can you test with a install of the current X Server git master, and
 check what symbols are missing if any? Just use the script attached.

Nifty script!  I took the liberty of making it print out the paths so I
could make sure it was picking up the right binaries.

At first, the build didn't work because I configure my build directory with

   $ /home/aaron/X/modular/xserver/configure --prefix=/X ...

and all of my include paths start with /home/aaron/X/modular/xserver,
resulting in an empty xorg_symbols array.  Then, instead of failing to
build, it produced an X server that did this:

dlopen: /X/lib/xorg/modules/extensions//libextmod.so: undefined symbol: 
XvScreenInitProc
(EE) Failed to load /X/lib/xorg/modules/extensions//libextmod.so
(EE) Failed to load module extmod (loader failed, 7)
(EE) Failed to load module dri (module does not exist, 0)
X: symbol lookup error: /X/lib/xorg/modules//libint10.so: undefined symbol: 
xf86ReadBIOS

It would be nice if sdksyms.sh could cause the build to fail if it finds no
symbols.  Also, the rule to build sdksyms.c needs a dependency on
sdksyms.sh and all of the headers that it includes.  Otherwise, someone
will change the SDK and then wonder why his new symbol doesn't show up
because xorg_symbols is stale.

Once I got the build to generate sdksyms.c correctly, the resulting server
worked fine.

   I remember last year/start of this year, when I checked, the nvidia
 driver required miInitializeCompositeWrapper (I don't remember if any
 other symbols). Currently that function is not in the sdk, and the
 apparently sole user, libxaa, has it as a hidden symbol, which is probably
 wrong.

The NVIDIA driver has an option to enable the composite wrapper, but it's
disabled by default.  If enabled, the driver will attempt to
xf86LoadSubModule libxaa to find miInitializeCompositeWrapper but will
just print a warning and continue if it can't find it.  Most of the no-CW
kinks have been worked out of our acceleration code now, so I won't shed
any tears if miInitializeCompositeWrapper goes away.

(II) Loading sub module xaa
(II) LoadModule: xaa
(II) Loading /X/lib/xorg/modules//libxaa.so
(II) Module xaa: vendor=X.Org Foundation
compiled for 1.6.99.1, module version = 1.2.1
ABI class: X.Org Video Driver, version 5.0
(WW) NVIDIA(0): UseCompositeWrapper was requested but
(WW) NVIDIA(0): miInitializeCompositeWrapper was not found.

   libwfb may also need some update, wfbrename.h maybe should not be
 installed, but I don't remember if the nvidia driver used it.

The NVIDIA driver doesn't use wfbrename.h itself, but it relies on
libwfb.so not defining any [^w]fb symbols that will clash with ones from
libfb.so.  From a quick skim of your nifty script's output, it looks like
all of libwfb.so's exported symbols are properly prefixed with wfb, so I
think it's okay.  There are probably some symbols in wfbrename.h that have
disappeared from the source, though, and can be removed.  (Note that it
needs to rename everything exported when -fvisibility=hidden is disabled,
too).

   xf86Rename.h probably should not be in the sdk, but bundled with
 the drivers that provide fallbacks for older servers.

You're right, this should get pulled into the driver dist tarballs like the
rest of the files in that directory.

  Of these, we need the following:
 
  LoaderGetABIVersion
  LoaderShouldIgnoreABI
  miCreateAlphaPicture
  noRenderExtension
  PictureMatchVisual
  xf86AddGeneralHandler
  xf86DeregisterStateChangeNotificationCallback
  xf86DisableGeneralHandler
  xf86EnableGeneralHandler
  xf86RemoveGeneralHandler
  XineramaVisualsEqualPtr
 
   I think all of these should always be available. And if one
 compiles with --disable-xinerama, the xinerama function should
 not be called.

I'm not sure what will happen if you try to use the driver with a server
compiled with --disable-xinerama.  In theory, it should work, but we don't
support it.

-- Aaron



Re: [RFC] Xorg symbols that should not be public

2008-12-08 Thread Aaron Plattner
On Tue, Dec 09, 2008 at 12:36:01AM -0200, Paulo César Pereira de Andrade wrote:
   Hi,
 
   In my Linux x86 computer, using only git master, and with X Server
 configured with --prefix=/usr --disable-builtin-fonts --enable-dri2
 --enable-dri --enable-xephyr
 
   With all buildable modules also installed, attached is the list
 of symbols that are not used by any module.
 
   There are 2 main kinds of symbols that should be public:
 o Symbols accessed by input/video modules
 o Symbols accessed by extensions or other kind of modules
 
   And of course, there is a large amount of symbols in the list
 that should not be exported. And some should have a stub when
 some feature is disabled.
 
   I think some private headers, for things like libextmod, libxaa,
 libfb, etc, should be created, still with symbols exported at
 first, to guarantee binary compatibility. But some symbols that
 are currently exported should be made private, and not advertised
 in the sdk, as they are only used in the X Server binary.

Hi Paulo,

Of these, we need the following:


LoaderGetABIVersion
LoaderShouldIgnoreABI
miCreateAlphaPicture
noRenderExtension
PictureMatchVisual
xf86AddGeneralHandler
xf86DeregisterStateChangeNotificationCallback
xf86DisableGeneralHandler
xf86EnableGeneralHandler
xf86RemoveGeneralHandler
XineramaVisualsEqualPtr


Thanks,
-- Aaron


Re: xserver: Branch 'master' - 2 commits

2008-10-15 Thread Aaron Plattner
On Wed, Oct 15, 2008 at 04:38:03PM -0300, Tiago Vignatti wrote:
 Hi Aaron,

  Please, I don't want to be rude or anything here, but how can we argue 
  that these functions above are _used_ by the sample server if none of its 
  open drivers use them?  This is questionable.

Hi Tiago,

The fact that the NVIDIA driver uses these functions is pretty much
irrelevant here: the functions are a useful part of the input/general
handler API.  Removing useful API entry points just because no driver
happens to use them *today* is a bad idea because it prevents modules that
would like to use them in the future from working with current servers.

As for the NVIDIA driver, you can use nm -D on it to see which symbols it's
linked against.  Also, I have a list of the strings we pass to LoaderSymbol
at http://people.freedesktop.org/~aplattner/loadersymbol
It was a bit out of date, so I updated it.

-- Aaron

 Aaron Plattner wrote:
  hw/xfree86/common/xf86.h       |    4 +
  hw/xfree86/common/xf86Events.c |   84 +++--
  hw/xfree86/loader/xf86sym.c    |    2
  3 files changed, 62 insertions(+), 28 deletions(-)
 New commits:
 commit 3fc4f40b6c6cb416c9dc4bdb35c91b4f32c03ccc
 Author: Aaron Plattner [EMAIL PROTECTED]
 Date:   Sun Oct 12 16:08:26 2008 -0700
 Restore xf86{Enable, Disable}GeneralHandler.
 These were useful as part of the generic handler ABI, and are used 
 by the NVIDIA
 driver.
 This reverts part of commit 
 50081d2dfb79878cb931a15c265f0d60698dfd39.
 diff --git a/hw/xfree86/common/xf86.h b/hw/xfree86/common/xf86.h
 index 84ea633..fbbfc73 100644
 --- a/hw/xfree86/common/xf86.h
 +++ b/hw/xfree86/common/xf86.h
 @@ -195,6 +195,8 @@ void xf86DisableInputHandler(pointer handler);
  void xf86EnableInputHandler(pointer handler);
  pointer xf86AddGeneralHandler(int fd, InputHandlerProc proc, pointer 
 data);
  int xf86RemoveGeneralHandler(pointer handler);
 +void xf86DisableGeneralHandler(pointer handler);
 +void xf86EnableGeneralHandler(pointer handler);
  void xf86InterceptSignals(int *signo);
  void xf86InterceptSigIll(void (*sigillhandler)(void));
  Bool xf86EnableVTSwitch(Bool new);
 diff --git a/hw/xfree86/common/xf86Events.c 
 b/hw/xfree86/common/xf86Events.c
 index e91b332..babe45b 100644
 --- a/hw/xfree86/common/xf86Events.c
 +++ b/hw/xfree86/common/xf86Events.c
 @@ -743,6 +743,20 @@ xf86DisableInputHandler(pointer handler)
  }
   _X_EXPORT void
 +xf86DisableGeneralHandler(pointer handler)
 +{
 +    IHPtr ih;
 +
 +    if (!handler)
 +        return;
 +
 +    ih = handler;
 +    ih->enabled = FALSE;
 +    if (ih->fd >= 0)
 +        RemoveGeneralSocket(ih->fd);
 +}
 +
 +_X_EXPORT void
  xf86EnableInputHandler(pointer handler)
  {
  IHPtr ih;
 @@ -756,6 +770,20 @@ xf86EnableInputHandler(pointer handler)
   AddEnabledDevice(ih->fd);
  }
  +_X_EXPORT void
 +xf86EnableGeneralHandler(pointer handler)
 +{
 +    IHPtr ih;
 +
 +    if (!handler)
 +        return;
 +
 +    ih = handler;
 +    ih->enabled = TRUE;
 +    if (ih->fd >= 0)
 +        AddGeneralSocket(ih->fd);
 +}
 +
  /*
   * As used currently by the DRI, the return value is ignored.
   */
 commit 2217d22a76cdb2460f9683a6bf74c7248612889d
 Author: Aaron Plattner [EMAIL PROTECTED]
 Date:   Sun Oct 12 16:07:24 2008 -0700
 Revert xfree86: xf86{Enable, Disable}InputHandler can be static.
 These were potentially useful as part of the input handler ABI, 
 even if nobody
 currently uses them.
 This reverts commit 278c11f01fbc6d6bd91c5a7127928c9ef5d29fca.
 diff --git a/hw/xfree86/common/xf86.h b/hw/xfree86/common/xf86.h
 index 0956f9c..84ea633 100644
 --- a/hw/xfree86/common/xf86.h
 +++ b/hw/xfree86/common/xf86.h
 @@ -191,6 +191,8 @@ xf86SetDGAModeProc xf86SetDGAMode;
  void SetTimeSinceLastInputEvent(void);
  pointer xf86AddInputHandler(int fd, InputHandlerProc proc, pointer data);
  int xf86RemoveInputHandler(pointer handler);
 +void xf86DisableInputHandler(pointer handler);
 +void xf86EnableInputHandler(pointer handler);
  pointer xf86AddGeneralHandler(int fd, InputHandlerProc proc, pointer 
 data);
  int xf86RemoveGeneralHandler(pointer handler);
  void xf86InterceptSignals(int *signo);
 diff --git a/hw/xfree86/common/xf86Events.c 
 b/hw/xfree86/common/xf86Events.c
 index a2c206e..e91b332 100644
 --- a/hw/xfree86/common/xf86Events.c
 +++ b/hw/xfree86/common/xf86Events.c
 @@ -462,34 +462,6 @@ xf86ReleaseKeys(DeviceIntPtr pDev)
  }
  }
  -static void
 -xf86EnableInputHandler(pointer handler)
 -{
 -    IHPtr ih;
 -
 -    if (!handler)
 -        return;
 -
 -    ih = handler;
 -    ih->enabled = TRUE;
 -    if (ih->fd >= 0)
 -        AddEnabledDevice(ih->fd);
 -}
 -
 -static void
 -xf86DisableInputHandler(pointer handler)
 -{
 -    IHPtr ih;
 -
 -    if (!handler)
 -        return;
 -
 -    ih = handler;
 -    ih->enabled = FALSE;
 -    if (ih->fd >= 0)
 -        RemoveEnabledDevice(ih->fd);
 -}
 -
  /*
   * xf86VTSwitch --
   *  Handle requests for switching the vt.
 @@ -756,6 +728,34 @@ xf86RemoveGeneralHandler

Re: xserver: Branch 'master'

2008-10-10 Thread Aaron Plattner
On Thu, Oct 09, 2008 at 09:34:49PM +1030, Peter Hutterer wrote:
 Apologies for not spotting this earlier.
 
 On Mon, Sep 08, 2008 at 08:51:56AM -0700, Aaron Plattner wrote:
  commit 079625570d51e41569b73b2fd9237eb8f967f408
  Author: Aaron Plattner [EMAIL PROTECTED]
  Date:   Mon Sep 8 08:50:52 2008 -0700
  
  Bump ABI major versions for the TryClientExceptions change from commit 
  883811c.
 
 
  
   #define ABI_ANSIC_VERSION  SET_ABI_VERSION(0, 4)
  -#define ABI_VIDEODRV_VERSION   SET_ABI_VERSION(4, 1)
  -#define ABI_XINPUT_VERSION SET_ABI_VERSION(3, 1)
  -#define ABI_EXTENSION_VERSION  SET_ABI_VERSION(1, 1)
  +#define ABI_VIDEODRV_VERSION   SET_ABI_VERSION(5, 0)
  +#define ABI_XINPUT_VERSION SET_ABI_VERSION(4, 0)
 
 ABI_XINPUT_VERSION was bumped with the MPX merge, thus 3 is already the
 correct version (server 1.5 has 2, btw.) Should we revert part of this patch?

Sorry about that.  I asked Adam if he wanted me to revert that part of it
after I realized I'd done that, and he said something along the lines of,
"nah, integers are cheap."  I certainly wouldn't object if you reverted it,
but you should definitely get consensus from the release maintainer du
jour.

-- Aaron
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: Building X

2008-09-23 Thread Aaron Plattner
On Tue, Sep 23, 2008 at 11:40:19AM -0400, James Cloos wrote:
> >>>>> "Adam" == Adam Jackson [EMAIL PROTECTED] writes:
> 
> Adam> You say this as though taste wasn't a good reason.
> 
> Not everyone agrees, though, that the traditional pattern and cursor on
> startup is less tasteful than your preference.
> 
> There is a reason that, when the black root was first added to XFree86,
> it was quickly reversed and redone as the -br and -wr options.

Taste aside, I have at least one VGA flat panel that fails to center the
image if I start X with -br (and disable the NVIDIA logo), where starting
with -wr or the root weave works fine.

Of course, I'm not arguing that the root weave is anything close to the
right way to solve that particular problem...

-- Aaron


[ANNOUNCE] xf86-video-nv 2.1.12

2008-08-28 Thread Aaron Plattner
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

The last release had an unfortunate bug that caused
CPUToScreenColorExpandFill to treat transparent pixels as black instead.
Sorry about that!

- -- Aaron


Aaron Plattner (2):
  G80: Fix a CPUToScreenColorExpandFill bug introduced by commit 2e0416c.
  Bump to 2.1.12.

git tag: nv-2.1.12

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.12.tar.bz2
MD5: 42f12a36d7afc26c817e8e8f5c8b7274  xf86-video-nv-2.1.12.tar.bz2
SHA1: d468596e6ffb41582cd3b214e42fc0004cc93418  xf86-video-nv-2.1.12.tar.bz2

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.12.tar.gz
MD5: 8a26dc4a57637d846b2f9d1cd410991e  xf86-video-nv-2.1.12.tar.gz
SHA1: 0278890cd2d113304a6128225c2cb1a16c706ceb  xf86-video-nv-2.1.12.tar.gz
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.9 (GNU/Linux)

iQEcBAEBAgAGBQJItu22AAoJEHYgpP6LHaLQbvQH/2KbQ06iPArEyux/4eoP8rSD
VtXTxhfDNWLP+iwUb6KqRFj6wAZlIWZ2gBONor1SMPJzhmRfazPJvYwMb/MaMFfa
Yopgx7n0FAYlaWSFXEPxQlslKmIA75g+bjfvHjfXJCBC1D609pO1gEwYrK1MZIj8
q1X2W3G0imYwLCc8CxFhIDnXHPHsRwv9xj0rehoty79jJPOTLCvQkKCtBhWhIXL3
8zNXoLvech6RPXlHVXeMaBAKhDsckDh80ELpi8YbFTkPkzP/yulRFU/ocFWZXIRp
d2sihOej23BojJWG7Q/mYvqm/154tDR4o4cDmRKX4bVaiT34Wpf20IdG6tQgcLQ=
=ajol
-----END PGP SIGNATURE-----
___
xorg-announce mailing list
xorg-announce@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg-announce
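The announcements publish MD5 and SHA1 sums for each tarball. A small shell helper for checking a downloaded file against a published digest (the helper name and the commented invocation are illustrative, not part of the release process):

```shell
# Verify a downloaded tarball against a published checksum.
# Usage: check_sum md5|sha1 <expected-hex> <file>
check_sum() {
    alg=$1; expected=$2; file=$3
    # md5sum/sha1sum print "<digest>  <file>"; keep only the digest.
    actual=$("${alg}sum" "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "OK: $file matches published $alg"
    else
        echo "MISMATCH: $file ($alg $actual != $expected)" >&2
        return 1
    fi
}

# Example, after downloading the tarball from the URL above:
# check_sum md5  42f12a36d7afc26c817e8e8f5c8b7274 xf86-video-nv-2.1.12.tar.bz2
# check_sum sha1 d468596e6ffb41582cd3b214e42fc0004cc93418 xf86-video-nv-2.1.12.tar.bz2
```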


[ANNOUNCE] xf86-video-nv 2.1.10

2008-06-30 Thread Aaron Plattner
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

This release adds chip names for the GeForce 9 mobile chips and the GeForce GTX
GPUs.  It also adds code to read DDC-based EDIDs for LVDS panels that have such
a thing.

- -- Aaron


Aaron Plattner (9):
  GeForce 9 mobile chips.
  GeForce GTX 280 and 260 chip names.
  Replace copyright notices with stock MIT X11 boilerplate.
  Add new chips to the man page and fix capitalization of Quadro.
  Add a note that MODE_PANEL really means "larger than BIOS-programmed panel size".
  G80: Handle extended I2C ports and LVDS panels with DDC-based EDIDs.
  Fix build by using CARD32 instead of uint32_t, like we do everywhere else.
  More G8x chips.
  Bump to 2.1.10.

git tag: nv-2.1.10

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.10.tar.bz2
MD5: bbee6df3e66d31a7da1afda59b40777d  xf86-video-nv-2.1.10.tar.bz2
SHA1: 03545be9634a043b68438dae2a3266c41af60e7e  xf86-video-nv-2.1.10.tar.bz2

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.10.tar.gz
MD5: 894fea928c2e2f548c28b9ff413a6cc6  xf86-video-nv-2.1.10.tar.gz
SHA1: 7d412f87a4a2ee3d62719b71465fb62912aba5e1  xf86-video-nv-2.1.10.tar.gz
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.9 (GNU/Linux)

iQEcBAEBAgAGBQJIaW82AAoJEHYgpP6LHaLQgc8H/0T7D37kqoGBpfxt7G2+oP+i
pqBS6GqGhClabnxqfu7HG1BxoagB5stJ70+87M/IHO+JKyczgizQw3/KUkA25TgM
Pv+4DrXOB5KpB/tBqfaXEb2JAZmjAiFPLQdGI39G9XiX5z83oMYxvmozhxFSmtE4
IfEcxSHu/v1W7W1KOdadq1Bz6yoE2CFyNR3n8DVg2rMgu9cIdvwDBBFSsLewehYB
6czQ5065HAHONtfLWV+zPCygzeUClO0iwfQuLOVWArPhW05p2cF0HE1KAs/zQBc/
FRCxu1Kew2gcVRJ+0b74HK7kxzAxO07DpfTxukSqTtC7YG7TAR+vx9OXKCXfeEE=
=eMKb
-----END PGP SIGNATURE-----


[ANNOUNCE] xf86-video-nv 2.1.9

2008-05-09 Thread Aaron Plattner
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

This release adds some new product names to the list, fixes startup hangs on a
couple of GPUs, and adds an option -- AllowDualLinkModes -- to enable validation
of dual-link DVI modes.  Note that not all GPUs are configured at boot to
support dual-link modes, so enable this option at your own risk.

- -- Aaron


Aaron Plattner (8):
  Bug #14885: Add missing static keywords.
  I win at C.
  Sort the IsSupported table.
  Fix a few startup bugs.
  More G80s.
  Add an option to allow validation of dual-link DVI modes.
  Yet more G80s.
  Bump to 2.1.9.

Matthieu Herrb (1):
  Makefile.am: nuke RCS Id

git tag: nv-2.1.9

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.9.tar.bz2
MD5: c6b7e52fa18455c22eb040b8d2575ce5  xf86-video-nv-2.1.9.tar.bz2
SHA1: 3143c09ea0b96421738bdaca4f8638bfa9c90d81  xf86-video-nv-2.1.9.tar.bz2

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.9.tar.gz
MD5: e4956607b4a25298767af3f4e63c541a  xf86-video-nv-2.1.9.tar.gz
SHA1: 24c35b4b7381803b85c9f8f067e097da1b24  xf86-video-nv-2.1.9.tar.gz
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.9 (GNU/Linux)

iQEcBAEBAgAGBQJIJQMXAAoJEHYgpP6LHaLQDOgH/32sCr3d7Jw8RF22yAgZ+Wd/
KGARYX4/1YXUyc0/RoxoJDQbKBWmy3KK8C+ZrxAxkhJh0uid6QuFQPPj+w2IX//8
fnBq/cCK7YMUef4yN/iTR7dF/MAxhcdxwMyJLyt35W6YO4AEF1JOSC9iyylPFW06
Hvwc4ENnjbthcVxXJYfnPdR4enDv916VsYL3nmsfKZQlzCLXHAyTtBFdMJJYc+Ir
Cvd9x/WjraMJPcxeMgVkrEBlAL4WvD3RE8VfxPIA3mllIdL+bMWmsu47THpLzjSW
PhYQqubCQ31NQAoZHv8Yr1ce1SuFOg5E9sT/peYo/28zi0LfCTI00DnSLdmrSNA=
=+vKd
-----END PGP SIGNATURE-----


[ANNOUNCE] xf86-video-nv 2.1.8

2008-03-06 Thread Aaron Plattner
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Aaron Plattner (5):
  G80: Fix connector mapping and TMDS initialization on certain GPUs.
  GeForce 9600 GT.
  More new chip names.
  Add xf86gtf.c to the compat file list.
  Bump to 2.1.8.

Søren Sandmann Pedersen (1):
  Bug #14484: Fix G80SorSetProperty return value.

git tag: nv-2.1.8

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.8.tar.bz2
MD5: c3e8c98287dc98677bebfbe1ba51ab77  xf86-video-nv-2.1.8.tar.bz2
SHA1: 82a0f0bf9c3f528312cc7c497630946b206d81f5  xf86-video-nv-2.1.8.tar.bz2

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.8.tar.gz
MD5: e2e481b2a69bfec2cd9365966a51fe2a  xf86-video-nv-2.1.8.tar.gz
SHA1: b8c32b1fde89354a5b5a46fd298aac4bb7e0a082  xf86-video-nv-2.1.8.tar.gz
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.7 (GNU/Linux)

iQEVAwUBR9DY/XYgpP6LHaLQAQJ0Swf/ULI+suLHSvsVXYehCaCiBdU4q/8mIuxB
QNTBmMuZQORRXr6PhfeGPQd7sgwIAqTpK2Hg32a/IxpZtohyB2V+Tx9XhIJAMSoe
DyghPNpb+6ZC6T1FRGgCq3lA5CX+V8GhEEaYFQ+GoYwqgFTibCw1sdDJVm4uV6zS
WdEgfTtA1+ql000HnQWDydvw2ZU/4PZPDjMXDgNrmhKJ0sVh5ItlpCHmZX/k8mEO
QllzFSaykB8rXiswu0+Akc1MLaLs0MFB0arxNmOgdwcdnVFCvmYxt7YXUdbY8d6t
bdQxLiXY7rukSvSCBMLfq6JMFq57N3YuE/cnwSt5JnUXZko/datH1Q==
=hrg6
-----END PGP SIGNATURE-----


[ANNOUNCE] xf86-video-nv 2.1.2

2007-07-10 Thread Aaron Plattner
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

This release fixes the LVDS display on certain MacBooks and laptops with
certain native flat panel modes (typically 1440x900).  It also adds a
dither RandR 1.2 output property and a corresponding FPDither xorg.conf
option for 6-bit flat panels.

- -- Aaron


Aaron Plattner (6):
  Put the GPU into "don't corrupt the framebuffer" mode to work around MacBook wackiness.
  Work around more MacBook wackiness.
  GeForce 8600M GT.
  G80: Add a dithering property and corresponding config file option.
  More GeForce 8 series mobile chips.
  Bump to 2.1.2.

git tag: nv-2.1.2

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.2.tar.bz2
MD5: 8b6a56ef0e1ec29e798059e5a546335a  xf86-video-nv-2.1.2.tar.bz2
SHA1: 70c4898de93af15916804622520df712360a7b7e  xf86-video-nv-2.1.2.tar.bz2

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.2.tar.gz
MD5: cb491403ac18ae4c765807af07b6d9eb  xf86-video-nv-2.1.2.tar.gz
SHA1: 934d73c66fbc0ff86a267cddd3e0d00e864be631  xf86-video-nv-2.1.2.tar.gz
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.7 (GNU/Linux)

iD8DBQFGk8Ff9WrMjwm6ljURAjRxAKDTcevcZCf0GKV69D9JcN3IKdPgXQCggS+U
5Itv7nUrQPxOKm5Yn+uaT1Q=
=L6Ai
-----END PGP SIGNATURE-----


[ANNOUNCE] xf86-video-nv 2.1.1

2007-07-02 Thread Aaron Plattner
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Aaron Plattner (3):
  Support configs with BAR1 < total RAM < 256 MB.
  GeForce 8400M G.
  Bump to 2.1.1.

git tag: nv-2.1.1

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.1.tar.bz2
MD5: 47c31c1d15b441fddcb7d665ca48beef  xf86-video-nv-2.1.1.tar.bz2
SHA1: d6e7cea320b6f75cba64fc4f478d372b5199faf1  xf86-video-nv-2.1.1.tar.bz2

http://xorg.freedesktop.org/archive/individual/driver/xf86-video-nv-2.1.1.tar.gz
MD5: 579c7b47a1b94460aefcb0468a5d8075  xf86-video-nv-2.1.1.tar.gz
SHA1: f0a0faff93cee39386455c00ff3c880f2b6aa35e  xf86-video-nv-2.1.1.tar.gz
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.7 (GNU/Linux)

iD8DBQFGiVmt9WrMjwm6ljURAhdBAKCSTKp0p6O7RPHnpZplBt7cs2+JBACfW1CH
rD+E4imOkRFgPV6vhwv6M70=
=1r38
-----END PGP SIGNATURE-----