Re: [Xpert]Nvidia and Suspend
On Tue, Oct 22, 2002 at 10:37:07PM -0400, David Dawes wrote: On Tue, Oct 22, 2002 at 09:52:26AM +0200, Ducrot Bruno wrote: On Mon, Oct 21, 2002 at 05:14:28PM -0400, David Dawes wrote: On Fri, Oct 18, 2002 at 08:38:22AM +0100, Philip Blacker wrote: I'm wondering if it's feasible for the kernel to route ACPI power [...] Does it make a difference if you switch to a text VT before initiating a suspend? That's how XFree86 handles APM suspend/resume. With a correctly implemented VTEnter(), the driver should then reinitialise the hardware correctly when switching back to the X server after resuming. Some drivers don't do enough hardware state initialisation in their VTEnter() function to handle returning from suspend. All of this assumes that the BIOS (and OS?) put the video hardware back into a sane text console state -- ideally the same state as after a normal system boot. If that doesn't happen, then there are likely to be problems. No difference. It is already a common trick for swsusp people. I don't see anything in the code that could explain why a particular XAA extension is no longer functional for this card. To me, VTEnter() from the nv driver seems to be correct. [...] Another way to solve the problem is to start another X server on another display and then kill it. This probably does the reset that should come through APM. If VTEnter() is doing everything needed, I'm wondering why starting another X server and then killing it solves the problem. Me too. And the bad news is that I really do not understand what happens here. If swsusp can use some equivalent of apmd (or acpid), it should be possible to have that daemon force a VT switch on suspend, and switch back on resume (using chvt(1) as I suggested above). I think adding ACPI support to XFree86 is the way to go rather than a user space daemon.
Kernel 2.4.20 is going to have an (almost) fully working ACPI implementation. The swsusp patch will still be needed for S4 (suspend to disk); S1 (suspend to RAM) will work on some machines. What's the current state of acpid? Nothing is done by swsusp to send power management notifications to user space before suspension. The same applies to ACPI. What's the use of acpid if it doesn't get any notifications before suspension? Is everything that needs to be done expected to be handled in the kernel? Yes. All processes are 'frozen', then a suspend event is sent to all devices. If swsusp wanted to notify user processes of the suspension, it would require a major redesign at this stage. The same also applies to ACPI (under Linux; I do not speak for *BSD) because it calls swsusp (for 2.5 kernels only) to implement S4. -- Ducrot Bruno http://www.poupinou.org (professional page) http://toto.tu-me-saoules.com (home page) ___ Xpert mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/xpert
Re: [Xpert]Xterm raw input
Xavier Bestel wrote: On Tue 22/10/2002 at 18:12, Russell wrote: Hi all, I'm testing out some terminal escape sequences as in: http://cns.georgetown.edu/~ric/howto/Xterm-Title/ctlseqs.txt Just try typing: echo -ne '\033[2t' into an xterm ;) Anyway, how can I type control sequences into an xterm without the cursor moving? When I press ESC, it gets intercepted by the shell. ESC-[ doesn't work either. Type Ctrl-V ESC. More generally, Ctrl-V 'escapes' the next character (ha!) so the shell doesn't intercept it. Is there a way to echo a string from one xterm into another xterm? Yes. Type 'tty' in the first xterm. It will tell you the name of the controlling terminal (something like /dev/pts/2), then you can 'echo teletransmitter works > /dev/pts/2' in the second xterm. Thanks, that works well. Is xterm its 'own' terminal, or is it always an emulator for VT102/220? When I type Ctrl-V then keypad 7/Home, or the dedicated Home key on a PC102 keyboard, I get ^[[H or ESC-[H (CSI-H). This code is not in the xterm control sequences spec: http://cns.georgetown.edu/~ric/howto/Xterm-Title/ctlseqs.txt HOME is the dedicated Home key, and KP7 is the keypad 7/Home key: /etc/X11/xkb/keycodes/xfree86: <HOME> = 97 /* scan codes */ <KP7> = 79 These symbolic names are converted to logical names: /etc/X11/xkb/symbols/us: <HOME> [ Home ] <KP7> [ KP_Home KP_7 ] ^GRP1 ^GRP2 (numlock) What happens now? Are these logical names all that is sent to the xterm? Does the xterm translate them to show ^[[H on the screen? I thought terminals were supposed to take control sequences and interpret them, but xterm seems to take X-specific logical key names directly. Is there some other stage that converts the logical key names to terminal control sequences and feeds them to xterm?
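The `tty` trick above amounts to opening the other terminal's device node and writing to it, since each xterm's pty appears as a file under /dev/pts. A minimal C sketch of the same idea (the function name is mine; the device path is whatever `tty` reported in the target xterm):

```c
#include <stdio.h>

/* Write a line of text to another terminal's device node
 * (e.g. "/dev/pts/2" as reported by `tty` in the target xterm).
 * Returns 0 on success, -1 if the device cannot be opened. */
int echo_to_terminal(const char *tty_path, const char *msg)
{
    FILE *fp = fopen(tty_path, "w");
    if (!fp)
        return -1;
    fprintf(fp, "%s\n", msg);
    fclose(fp);
    return 0;
}
```

Note that this needs write permission on the target pty, which you normally only have for your own terminals.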
Re: [Xpert]Xterm raw input
On Wed 23/10/2002 at 13:09, Russell wrote: When I type Ctrl-V then keypad 7/Home, or the dedicated Home key on a PC102 keyboard, I get ^[[H or ESC-[H (CSI-H). This code is not in the xterm control sequences spec: http://cns.georgetown.edu/~ric/howto/Xterm-Title/ctlseqs.txt No, this ESC sequence should have no effect on xterm. The terminal should just report it as-is to the application, which will interpret it (e.g. a text editor will move the cursor to beginning-of-line). Xav
[Xpert]Error:LTHashTableForEachItem NULL hash table specified when calling of XtDestroyApplicationContext...
Dear X toolkit experts, Can anyone tell me what can cause "Error: LTHashTableForEachItem NULL hash table specified" when calling XtDestroyApplicationContext, even though the destroyed context exists? Thank you for any recommendation. Yours faithfully, Peter Fodrek
RE: [Xpert]Re: SIGFPE in Radeon 7500 DRI support
From: David Hampton [EMAIL PROTECTED] On Thu, 2002-10-17 at 05:06, Michel Dänzer wrote: Program received signal SIGFPE, Arithmetic exception. 0x40771fbb in gl_test_os_katmai_exception_support () from /usr/X11R6/lib/modules/dri/radeon_dri.so (gdb) bt #0 0x40771fbb in gl_test_os_katmai_exception_support () ^^^ This is such a wonderfully expressive function name, why doesn't anybody read it? :/ Yes, it is a nice expressive name, but my expertise is in network protocols, not in video drivers. I don't know what Katmai is, why I need it, why it's not in my Red Hat kernel, where to find it, or why the lack of it is crashing every OpenGL application on my computer. If this function is expected to create a SIGFPE, why isn't it trapped and handled? It's actually a poorly named function all around, IMHO. A better name would be s/katmai/sse/. SSE is the SIMD instruction set of the Intel architecture, first introduced with the Pentium III, whose processor core was codenamed Katmai. Okay, now you know. Don't let me go into the depths of codenaming now. ;-) -Alex.
Re: [Xpert]Hooking onto Xserver
Thanks for that information. Actually, my requirement is to get the coordinates of the rectangular portion of the desktop that has changed. Comparing the screens would be a heavy task, so I am looking for an approach that will keep giving me the coordinates of the changed rectangles. Hooking onto the X server might affect performance. Can I go one level below the X server and sit somewhere in between the X server and the display driver? Actually I am not clear about how the X server actually renders the data on the screen. Can I get some information on that? thanks, Anurag --- Scott Long [EMAIL PROTECTED] wrote: I think the easiest way to do what you are suggesting would be to run a proxy (your hook) that listens on the usual X socket (either TCP or UNIX). Then, run the X server on a non-standard port, and forward all the packets in both directions through your proxy. This assumes, first of all, that it's possible to run X on a non-standard port. I don't know how you would do this. Also, it would be inefficient if you only wanted to intercept X *requests*, since the X responses would have to pass back through your proxy on their way to the client. Could the Record extension somehow be used to do this? On Tue, 22 Oct 2002 01:20:05 -0700 (PDT) Anurag Palsule [EMAIL PROTECTED] wrote: Hello, Is it possible for me to write a wrapper or a hook for the X server, so that all the X calls, instead of going to the X server, will land in my application, and my application will in turn pass them to the X server for further processing? If yes, how? thanks, Anurag Scott Long SwiftView, Inc. http://www.swiftview.com
Re: [Xpert]Matrox G450, two cards, two heads
kevin, The MGASDRAM option fixed the problem. I now have a system with 5 working screens (2 G450 PCI cards with 4 heads and 1 SiS 630 AGP with 1 head). Thank you for your help, and tell your co-worker thanks. Gregg On Tue, 2002-10-22 at 17:35, Kevin Oberman wrote: From: Gregg Lebovitz [EMAIL PROTECTED] Sender: [EMAIL PROTECTED] Date: 22 Oct 2002 16:39:09 -0400 I am trying to put together a Linux system that will drive 4 to 6 heads for use in a transportation operations center. I am using Red Hat 7.3, XFree86 4.2.0, and the XFree86 drivers from the Matrox site. I can get one card with 2 heads to work if the card is configured to be the primary video controller; however, the secondary cards display junk (it looks like memory is not being mapped properly). Has anyone gotten multiple G450 cards to work, and if so, can you share your configuration with me? Has anyone gotten a single G450 card to work as a secondary controller? A co-worker found the solution on the Matrox mailing list. It appears to be a problem with a dual controller system where one card is a Matrox. The magic for him was to use the XF86Config line: Option "MGASDRAM" in the Device section. Just remove the '#'. This may not fix it, but it worked for him. (He has a 3-headed display with a G450 and a Radeon 8500.) Good luck! R. Kevin Oberman, Network Engineer Energy Sciences Network (ESnet) Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab) E-mail: [EMAIL PROTECTED] Phone: +1 510 486-8634
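For reference, the stanza in question would look something like this in XF86Config (the Identifier and BusID below are illustrative, not taken from the actual configs discussed above):

```
Section "Device"
    Identifier "Matrox G450"
    Driver     "mga"
    Option     "MGASDRAM"    # tell the driver the card has SDRAM, not SGRAM
    BusID      "PCI:1:0:0"   # adjust to your card's bus location
EndSection
```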
Re: [Xpert]Patch for the Xlib i18n code
Olivier Chapuis writes: On Mon, Oct 21, 2002 at 03:14:47PM +0200, Egbert Eich wrote: I have some more patches for omGeneric.c, so I will take care of your fixes. Thanks. Thanks for applying my patch! Just one note of no real importance: my name is Chapuis and not Chapius :o) OK. I have altered your patch a little bit: 1. I've restructured your patch of lcFile.c a little bit. 2. In omGeneric.c we don't need to free xlfd_name because it doesn't get allocated by parse_fontdata() when we pass NULL as the font_data_return argument. In all cases but C_PRIMARY we don't care about this information. Otherwise your fixes were OK. Thanks! Regards, Egbert.
Re: [Xpert]Colormap Allocation problems under Linux 7.3
On Tue, Oct 22, 2002 at 10:26:57PM -0400, David Dawes wrote: I'd be interested to see the patch. Attached to this message is the patch (-no-render/NoRender/-render and -render-colors/RenderColors). It is not a definitive version: it should get some docs, and I've not yet changed the -render-colors logic (you give an integer N, and a CxCxC color cube + 2^M greys is used so that C^3 + 2^M = N and C is maximal for C^3 <= N). Regards, Olivier PS: patch gzipped and done in xc/programs/Xserver (render_patch.txt.gz)
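The budget split described above (the largest cube side C with C^3 <= N, with greys taking what remains) can be sketched as follows. The function name is mine, and this is only an illustration of the rule, not the actual patch code (in particular it lets C^3 + 2^M fall short of N rather than requiring exact equality):

```c
/* Given a colour budget n, pick the largest cube side c with
 * c*c*c <= n, then the largest power-of-two grey ramp that fits
 * in the remaining entries. */
void split_budget(int n, int *cube_side, int *grey_levels)
{
    int c = 1;
    while ((c + 1) * (c + 1) * (c + 1) <= n)
        c++;                        /* largest c with c^3 <= n */

    int remaining = n - c * c * c;
    int g = 1;
    while (g * 2 <= remaining)
        g *= 2;                     /* largest 2^m <= remaining */

    *cube_side = c;
    *grey_levels = (remaining > 0) ? g : 0;
}
```

For example, a budget of 256 gives a 6x6x6 cube (216 entries) plus 32 greys.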
Re: [Xpert]Colormap Allocation problems under Linux 7.3
Around 9 o'clock on Oct 23, Olivier Chapuis wrote: An application should query whether the X server has Render support and if not take appropriate decisions, no? E.g., in FVWM we have a function which simulates XRenderComposite, and a font spec can have two fonts: one for Xft and another one for core text rendering. Client-side text is a powerful mechanism which is not easily replaced with core fonts. Xft uses the core protocol GetImage/PutImage when Render is not available, but performance suffers greatly. Applications like OpenOffice (and FrameMaker) are unable to provide reasonable text output with the core font primitives. Client-side text is useful independently of anti-aliasing. In some cases it is not the number of colors which counts but how the apps share the colours. Our working assumption is that 8-bit PseudoColor displays are useful only for legacy applications; those don't generally share well with others as they need custom or writable color values. Any significant number of preallocated color cells generally causes these applications to fail. Moreover, if you use standard dithering methods, dithering is really really better in modes #5 and #6 than in the default mode. Render doesn't (currently) support dithering, and dithering to a random set of colors is rather expensive. I like having both a color cube and a gray ramp as I believe that provides more accurate anti-aliased text; I'm not sure how dithering fits into this model. Yes, using StaticColor is more or less equivalent to mode #6 (but with a slightly different colour proportion (3/3/2) by default). The new fb frame buffer code fits the largest color cube and fills the remaining entries with a gray ramp, precisely as you've described for mode #6. PseudoColor with a completely allocated colormap is a poor substitute -- with StaticColor, the server will automatically fit new color requests into the existing color entries, while PseudoColor will fail the allocation request.
I think a full mode does not hurt and assures backward compatibility. But I do not care; I am more interested in mode #5. Any PseudoColor mode which doesn't permit the bulk of legacy applications to run without problems isn't interesting in this context; it may be that the 'default' mode can allocate a few more colors than it does in current CVS, but I don't think it should use a 5x5x5 cube. Another subject: do you think it is better to always use pow(2,k) grey colours (e.g., use 16 greys for the default in place of the 21 greys) so that the greys are aligned with the 32x32x32 colour cube which is used to approximate the Render colors? That's a good idea; we could actually eliminate the entries in the gray ramp duplicated within the color cube. Keith Packard, XFree86 Core Team, HP Cambridge Research Lab ___ Xpert mailing list [EMAIL PROTECTED]
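Keith's suggestion of eliminating the ramp greys already present in the cube can be sketched like this. It is a toy model with hypothetical names, not the fb server code; it assumes the cube side and ramp length are both at least 2, and that channel values are scaled to 0..255 by integer division, as generated colormaps typically are:

```c
/* Count the palette entries needed for a c*c*c colour cube plus an
 * n-level grey ramp, dropping any ramp grey whose value already
 * appears as a grey (r == g == b) cube entry. */
int palette_size(int c, int n)
{
    int count = c * c * c;              /* every cube entry */
    for (int j = 0; j < n; j++) {
        int grey = j * 255 / (n - 1);   /* ramp level scaled to 0..255 */
        int duplicate = 0;
        for (int i = 0; i < c; i++) {
            if (i * 255 / (c - 1) == grey) {  /* matches a cube grey */
                duplicate = 1;
                break;
            }
        }
        if (!duplicate)
            count++;
    }
    return count;
}
```

For a 4x4x4 cube plus a 16-level ramp, the greys 0, 85, 170 and 255 appear in both, so only 76 entries are needed instead of 80.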
[Xpert]Re: 8-bit pseudocolur emulation on direct color hardware
Around 10 o'clock on Oct 23, Dr Andrew C Aitchison wrote: I'd like to do this. Whereabouts would it go? I guess that this would be another directory alongside programs/Xserver/hw/xfree86/xf24_32bpp and programs/Xserver/hw/xfree86/xf8_32bpp? As this module should be independent of the XFree86 driver interfaces, it should live in Xserver/miext instead, alongside the shadow frame buffer code. Where should I look to generate the list of functions which would need to be implemented? The usual suspects can be found in any rendering wrapping module; software cursors, backing store and shadow frame buffer are three such examples. Are we just going for 8-bit PseudoColor on 24-bit DirectColor, or is it worth trying 5-bit pseudo on 15/16-bit DirectColor too? I'm not sure 5-bit is interesting; many legacy apps assume 256 writable colormap entries, or they'd work with a 5x5x5 cube preallocated by Render. Can anyone point me at instructions for testing an X server, once I think I have it working? I know about ... I've just updated the Makefile for building the X test suite documentation; that's found in the test module (test/xsuite/xtest/doc). Perhaps those would be of interest? In particular, the 'pt -i' command runs a single test from the current directory (one of xsuite/xtest/tset/*/*). Keith Packard, XFree86 Core Team, HP Cambridge Research Lab ___ Xpert mailing list [EMAIL PROTECTED]
[Xpert]Only 16 bpp mode via DVI on Radeon VE, when 24 bpp is shown
Although 24bpp mode is shown as the only visual in XFree86.0.log and via xdpyinfo, the image is obviously shown with only 16bpp on the TFT screen. The 64 grayscale colors are easily visible. I'm using XFree86 Version 4.1.0.1 on x86 Linux, connected via DVI (digital mode) to the screen. The graphics card is an ATI Radeon VE (module radeon) which works well in true 24bpp mode on Windows 98. Is there any configuration detail I overlooked? Is anyone else using the ATI Radeon VE in digital mode? As I read on http://www.xfree86.org/pipermail/xpert/2002-October/021803.html Jeff Brubaker has the same problem. As you can see in the logs, 24bpp really is selected, and nowhere in the whole log can 16bpp be seen. From XFree86.log:
(**) RADEON(0): Depth 24, (--) framebuffer bpp 32
(II) RADEON(0): Pixel depth = 24 bits stored in 4 bytes (32 bpp pixmaps)
(==) RADEON(0): Default visual is TrueColor
(==) RADEON(0): RGB weight 888
(II) RADEON(0): Using 8 bits per RGB (8 bit DAC)
From xdpyinfo:
depths (7): 24, 1, 4, 8, 15, 16, 32
depth of root window: 24 planes
number of visuals: 8
default visual id: 0x23
visual:
  visual id: 0x23
  class: TrueColor
  depth: 24 planes
  available colormap entries: 256 per subfield
  red, green, blue masks: 0xff0000, 0xff00, 0xff
  significant bits in color specification: 8 bits
-- Holger Isenberg [EMAIL PROTECTED] http://mars-news.de
[Xpert]Core fonts issues [was: Problems with Type1 big fonts]
DD Unless the old Type1 backend can unreservedly be replaced by the new DD FreeType2 backend, then it should be disabled, and maybe even a fake DD type1 font module created for the modular build so that existing DD configurations don't break. If there are still reasons for wanting the DD old backend, then it needs to be configurable, at least at build-time. DD If we want to provide more flexibility in allowing the user to control DD what font suffixes are handled by what backend, there would need to be DD some type of run-time configurability. I was looking into that when other things came up; I may very well be able to come back to this. Anyway, here's the plan I had. The idea would be to have a new interface, Bool FontFileRegisterRendererPriority(FontRendererPtr, int priority), where the existing FontFileRegisterRenderer interface in renderer.c is an alias for FFRRP with priority set to 0. Priority is an integer (positive or negative), and renderers with higher priority override renderers of lower priority. The Type 1 renderer would register with negative priority for both PFA and CID; in the absence of another CID renderer, it would render CID fonts, but PFA fonts would be handled by FreeType. FreeType would register with default priority for both PFA and TTF. X-TT would register with positive priority for TTF. In a configuration in which all renderers are linked in, X-TT would handle TTF, FreeType would handle PFA, and Type 1 would handle CID. In a default configuration (no X-TT), both PFA and TTF are handled by FreeType. The advantage of that is that there are no new configuration mechanisms -- we simply leverage the existing module loader. It's also easily extensible -- I expect to implement bitmap support in the FreeType backend after 4.3.0, and then you'll want the existing bitmap renderers to override FreeType if they're linked in.
The downside is that it's not completely flexible, not allowing, for example, TTF support using FreeType while using Type 1 for PFA. I don't think anyone cares. DD Also, I'd really like to see some resolution to the separate FreeType DD and X-TT backends for TrueType fonts. As it is now, if someone chooses DD X-TT, they will still need the old Type1 backend for Type1 fonts DD regardless of other considerations. Is it still not possible to resolve DD the issues that led to two TrueType backends in the first place? Here's my personal perception. X-TT contains support for embedded bitmaps, which FreeType 1 didn't have. The new FreeType backend fully supports embedded bitmaps. X-TT also contains a number of features such as fake bolding and automagic slanting, collectively known as TTCap. These should be handled at the toolkit level in my opinion, and at any rate implementing new features in the core fonts system at this point is pretty much pointless. Still, users of X-TT have become accustomed to these features being available at the server level, and would probably not accept them being taken away. I shall not implement the said features in FreeType, which I want to remain a small and clean piece of code. I shall also not integrate myself the (existing) patches that implement TTCap in the FreeType backend. If there is a person interested in doing that, I'll be glad to help -- but only if said person commits to maintaining the code for the indefinite future. Juliusz
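The priority scheme Juliusz describes can be modelled in a few lines. This is a toy sketch with simplified stand-ins for FontRendererPtr and FontFileRegisterRendererPriority, not the actual XFree86 loader code:

```c
#include <string.h>

/* Per-suffix renderer registration: for each font file suffix, the
 * renderer registered with the highest priority wins. */
#define MAX_RENDERERS 16

struct renderer_entry {
    const char *suffix;     /* e.g. ".pfa", ".ttf" */
    const char *renderer;   /* e.g. "type1", "freetype", "xtt" */
    int priority;
};

static struct renderer_entry table[MAX_RENDERERS];
static int n_entries;

int register_renderer(const char *suffix, const char *name, int priority)
{
    if (n_entries >= MAX_RENDERERS)
        return 0;
    table[n_entries].suffix = suffix;
    table[n_entries].renderer = name;
    table[n_entries].priority = priority;
    n_entries++;
    return 1;
}

/* Return the highest-priority renderer registered for a suffix,
 * or NULL if none is registered. */
const char *renderer_for(const char *suffix)
{
    const char *best = 0;
    int best_pri = 0;
    for (int i = 0; i < n_entries; i++) {
        if (strcmp(table[i].suffix, suffix) == 0 &&
            (!best || table[i].priority > best_pri)) {
            best = table[i].renderer;
            best_pri = table[i].priority;
        }
    }
    return best;
}
```

With Type 1 registered at -10 for .pfa/.cid, FreeType at 0 for .pfa/.ttf, and X-TT at +10 for .ttf, lookups resolve exactly as the message describes: X-TT gets TTF, FreeType gets PFA, Type 1 gets CID.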
Re: [Xpert]Problems with Type1 big fonts
AH There's a problem with this at the moment. If you build a static AH server you get two font renderers registered to deal with .pfa/.pfb AH fonts. Solution, Juliusz - is it just to disable Type1 for static builds AH because it's too buggy? For now, all I can tell for sure is that I do disable Type 1 in my personal build. This has the side effect of disabling support for CIDFonts. Details about my medium-term ideas on the subject are in a different message. I am sorry for not having done that yet, and I apologise in advance for not being able to do it right now. Juliusz
[Xpert]nvidia hardware overlay
I'm trying to figure out how to use an overlay plane. I'm running: Red Hat 7.3 with Linux kernel 2.4.18-10, an NVidia Quadro2 video card, XFree86 4.2.0, GLX 4.0.2-1.0.3123. I'm new to OpenGL so what I'm trying to do may be flawed. Here is the sequence: dpy = XOpenDisplay(); glXQueryVersion() - reports major=1, minor=2; main_visual = glXChooseVisual(); main_context = glXCreateContext(dpy, main_visual, 0, GL_TRUE); overlay_visual = glXChooseVisual() with GLX_LEVEL = 1; overlay_context = glXCreateContext(dpy, overlay_visual, 0, GL_TRUE); win = XCreateWindow() using the main visual's screen, depth, and visual; glXMakeCurrent(dpy, win, main_context) to access the main plane - drawing commands for the main plane go here; glXMakeCurrent(dpy, win, overlay_context) to access the overlay plane - at this point I get a BadMatch error, which I think is because the overlay context was created with overlay_visual and win was created with main_visual. What am I doing wrong? One thing that is interesting: xprop -root displays the SERVER_OVERLAY_VISUALS, but glxinfo doesn't display anything about overlays. Eric Hammond, L-3 Communications' Link Simulation and Training
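The guess about the BadMatch is almost certainly right: glXMakeCurrent requires the drawable's visual to match the context's. The usual fix is to create a second (child) window with the overlay visual, including a colormap for that visual, and make the overlay context current on that window. A sketch reusing the variable names from the message above (not runnable standalone; it assumes a display exporting SERVER_OVERLAY_VISUALS, plus the dpy, win, width, height and context variables from the sequence):

```c
/* Needs <X11/Xlib.h> and <GL/glx.h>.
 * overlay_visual came from glXChooseVisual() with GLX_LEVEL = 1. */
XSetWindowAttributes attr;
attr.colormap = XCreateColormap(dpy,
                                RootWindow(dpy, overlay_visual->screen),
                                overlay_visual->visual, AllocNone);
attr.border_pixel = 0;   /* required when the depth differs from the parent's */

Window ov_win = XCreateWindow(dpy, win,     /* child of the main-plane window */
                              0, 0, width, height, 0,
                              overlay_visual->depth, InputOutput,
                              overlay_visual->visual,
                              CWColormap | CWBorderPixel, &attr);
XMapWindow(dpy, ov_win);

glXMakeCurrent(dpy, win, main_context);        /* draw the main plane */
glXMakeCurrent(dpy, ov_win, overlay_context);  /* draw the overlay plane */
```

Forgetting CWBorderPixel (or the matching colormap) is another classic source of BadMatch when the child window's depth differs from its parent's.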
[Xpert]Problem with 16bit color display under XFree86 4.1.0
Hi, I currently have XFree86 4.1.0 on a Red Hat 7.2 machine. I am trying to program for the 16-bit color display, but I just cannot get the color right. I already have the correct display for 8-bit and 24-bit on this machine. Some help is needed and will be appreciated. Since the display is 16-bit, according to the result of xdpyinfo (see below), the masks for r, g, b are 0xf800, 0x7e0, 0x1f. Does it mean the following formula should generate the correct entry for a color with r, g, b values red, green and blue? red 0xf800 + green 0x7e0 + blue 0x1f Of course, I could not get the correct result with the above formula. I used a similar thing for 24-bit and it worked fine. Thanks a lot for your help. Best regards, Kevin The following is the output of xdpyinfo under 16-bit:
name of display: :0.0
version number: 11.0
vendor string: The XFree86 Project, Inc
vendor release number: 4010
XFree86 version: 4.1.0
maximum request size: 4194300 bytes
motion buffer size: 256
bitmap unit, bit order, padding: 32, LSBFirst, 32
image byte order: LSBFirst
number of supported pixmap formats: 7
supported pixmap formats:
  depth 1, bits_per_pixel 1, scanline_pad 32
  depth 4, bits_per_pixel 8, scanline_pad 32
  depth 8, bits_per_pixel 8, scanline_pad 32
  depth 15, bits_per_pixel 16, scanline_pad 32
  depth 16, bits_per_pixel 16, scanline_pad 32
  depth 24, bits_per_pixel 32, scanline_pad 32
  depth 32, bits_per_pixel 32, scanline_pad 32
keycode range: minimum 8, maximum 255
focus: window 0x6e, revert to PointerRoot
number of extensions: 26
BIG-REQUESTS DOUBLE-BUFFER DPMS Extended-Visual-Information FontCache GLX LBX MIT-SCREEN-SAVER MIT-SHM MIT-SUNDRY-NONSTANDARD RENDER SECURITY SGI-GLX SHAPE SYNC TOG-CUP XC-APPGROUP XC-MISC XFree86-Bigfont XFree86-DGA XFree86-Misc XFree86-VidModeExtension XInputExtension XKEYBOARD XTEST XVideo
default screen number: 0
number of screens: 1
screen #0:
  dimensions: 1024x768 pixels (333x241 millimeters)
  resolution: 78x81 dots per inch
  depths (7): 16, 1, 4, 8, 15, 24, 32
  root window id: 0x31
  depth of root window: 16 planes
  number of colormaps: minimum 1, maximum 1
  default colormap: 0x20
  default number of colormap cells: 64
  preallocated pixels: black 0, white 65535
  options: backing-store NO, save-unders NO
  largest cursor: 32x32
  current input event mask: 0xd0001d
    KeyPressMask ButtonPressMask ButtonReleaseMask EnterWindowMask SubstructureRedirectMask PropertyChangeMask ColormapChangeMask
  number of visuals: 4
  default visual id: 0x23
  visual:
    visual id: 0x23
    class: TrueColor
    depth: 16 planes
    available colormap entries: 64 per subfield
    red, green, blue masks: 0xf800, 0x7e0, 0x1f
    significant bits in color specification: 6 bits
  visual:
    visual id: 0x24
    class: TrueColor
    depth: 16 planes
    available colormap entries: 64 per subfield
    red, green, blue masks: 0xf800, 0x7e0, 0x1f
    significant bits in color specification: 6 bits
  visual:
    visual id: 0x25
    class: DirectColor
    depth: 16 planes
    available colormap entries: 64 per subfield
    red, green, blue masks: 0xf800, 0x7e0, 0x1f
    significant bits in color specification: 6 bits
  visual:
    visual id: 0x26
    class: DirectColor
    depth: 16 planes
    available colormap entries: 64 per subfield
    red, green, blue masks: 0xf800, 0x7e0, 0x1f
    significant bits in color specification: 6 bits
___ Xpert mailing list [EMAIL PROTECTED]
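Those masks (0xf800 / 0x7e0 / 0x1f) describe 5-6-5 bit fields, so each 8-bit channel value has to be reduced to the field's width and then shifted into the field's position; simply combining the raw channel values with the mask constants will not work. A sketch, hardcoded for 5-6-5 (in general the shifts should be derived from the visual's masks at run time, since depth 15 uses 5-5-5):

```c
/* Pack 8-bit r, g, b channels into a 16-bit 5-6-5 pixel, matching
 * masks 0xf800 (red), 0x07e0 (green), 0x001f (blue). */
unsigned short pack_rgb565(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned short)(((r >> 3) << 11) |   /* top 5 bits of red   */
                            ((g >> 2) << 5)  |   /* top 6 bits of green */
                            (b >> 3));           /* top 5 bits of blue  */
}
```

For example, pure red (255, 0, 0) packs to 0xf800 and white (255, 255, 255) to 0xffff. For 24-bit TrueColor the analogous formula is (r << 16) | (g << 8) | b, which is why the "similar thing" worked there without any narrowing.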
Re: [Xpert]Colormap Allocation problems under Linux 7.3
On Wed, Oct 23, 2002 at 08:31:37AM -0700, Keith Packard wrote: Around 9 o'clock on Oct 23, Olivier Chapuis wrote: An application should query whether the X server has Render support and if not take appropriate decisions, no? E.g., in FVWM we have a function which simulates XRenderComposite, and a font spec can have two fonts: one for Xft and another one for core text rendering. Client-side text is a powerful mechanism which is not easily replaced with core fonts. Xft uses the core protocol GetImage/PutImage when Render is not available, but performance suffers greatly. Applications like OpenOffice (and FrameMaker) are unable to provide reasonable text output with the core font primitives. Client-side text is useful independently of anti-aliasing. In some cases it is not the number of colors which counts but how the apps share the colours. Our working assumption is that 8-bit PseudoColor displays are useful only for legacy applications; those don't generally share well with others as they need custom or writable color values. Any significant number of preallocated color cells generally causes these applications to fail. I agree with the working assumption. But if we take as an example the mail which started this thread, what is needed is only 2 writable colours and 16 normal ones. So, I will add that we _may_ have only one such legacy application, which needs only a few colours. Moreover, if you use standard dithering methods, dithering is really really better in modes #5 and #6 than in the default mode. Render doesn't (currently) support dithering, and dithering to a random set of colors is rather expensive. I like having both a color cube and a gray ramp as I believe that provides more accurate anti-aliased text; I'm not sure how dithering fits into this model. I do not know either. I made various dithering experiments with a color cube + a ramp, but have not yet found something which _never_ gives a worse result than dithering only in the cube.
I also think, as you do, that a colour cube + a ramp is good. I do not think that all these gamma-corrected (maybe spherical) palettes and visual colour distances can _really_ give an improvement. My only success is a better colour distance than the Euclidean distance when we have a big ramp compared to the size of the colour cube: |r1-r2| + |g1-g2| + |b1-b2| + f*(|r1-g1| + |g1-b1| + |b1-r1| - (|r2-g2| + |g2-b2| + |b2-r2|)) This distance forbids colours from becoming grey and greys from becoming coloured. Yes, using StaticColor is more or less equivalent to mode #6 (but with a slightly different colour proportion (3/3/2) by default). The new fb frame buffer code fits the largest color cube and fills the remaining entries with a gray ramp, precisely as you've described for mode #6. PseudoColor with a completely allocated colormap is a poor substitute -- with StaticColor, the server will automatically fit new color requests into the existing color entries, while PseudoColor will fail the allocation request. I think a full mode does not hurt and assures backward compatibility. But I do not care; I am more interested in mode #5. Any PseudoColor mode which doesn't permit the bulk of legacy applications to run without problems isn't interesting in this context; it may be that the 'default' mode can allocate a few more colors than it does in current CVS, but I don't think it should use a 5x5x5 cube. 256 - (5*5*5 + 32 or 16) + 2 = 101 / 117. I think that maybe the user who started this thread will be happy with this (until they use a modern app which does not care about, or cannot be configured to detect, the 5x5x5 + 32 or 16 allocated cube+ramp). Other solutions: 256 - (5*5*4 + 32 or 16) + 2 = 126 / 142; 256 - (4*5*4 + 32 or 16) + 2 = 146 / 162. The default (for comparison): 256 - (4*4*4 + 16) + 4 = 180. If I were an X expert I would allow all the above modes, but I am only a poor FVWM worker :o). So, if the principle of the patch I posted in this thread is accepted, I will finish it following your suggestions ...
Another point: maybe XRender should have a function which describes the colormap it uses (for depth = 8 PseudoColor). Regards, Olivier
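The colour distance Olivier proposes above translates directly into code; f is a tuning weight, and the value used in the example below is only an illustration, not one taken from his post:

```c
#include <stdlib.h>   /* abs */

/* Olivier's proposed distance: Manhattan distance between the colours
 * plus a "saturation" term (weighted by f) that penalises mapping a
 * coloured pixel onto a grey one.  For a grey, |r-g|+|g-b|+|b-r| is 0,
 * so the term measures how much colourfulness is lost or gained. */
double colour_distance(int r1, int g1, int b1,
                       int r2, int g2, int b2, double f)
{
    int sat1 = abs(r1 - g1) + abs(g1 - b1) + abs(b1 - r1);
    int sat2 = abs(r2 - g2) + abs(g2 - b2) + abs(b2 - r2);
    return abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)
           + f * (sat1 - sat2);
}
```

For instance, with f = 0.25, mapping the grey (100, 100, 100) to the reddish (200, 100, 100) costs 100 - 0.25*200 = 50, less than the plain Manhattan distance of 100, reflecting the asymmetry Olivier describes.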
Re: [Xpert]Patches for xfree86 cyrix (NatSemi GX1) driver
I've picked up your patches for this, Alan, and will be applying them soon. But I have disabled the 5530 support for now, to encourage geode driver testing. Alan.
[Xpert]8 over 24 app
Hi, Where can I find a simple app that uses Overlay 8+24? I can't seem to find any. What do you use to test overlays? Luugi
[Xpert]Trident Cyber 9397 issues
After lurking on the debian-x mailing list to no avail, I did some 'research' into the palette issue with the current 4.2.1 (might even be all of 4.2.x). I seem to get between 16 and 256 colours repeated round and round. Anything above 8-bit colour depth is unusable; 8-bit colour depth is usable, but it's not nice. I thought at first it could be a Debian-only issue, as the problem occurred during an apt-get upgrade session; however, my 'research' showed many others have the same problem, not only Debian users but also people building from source. Once or twice (it seems to depend on the phase of the moon) the X server fires up properly with the right set of colours in 16-bit (and other colour depths), however this only *ever* happens after a cold reboot, if at all. Also, when it does work, the gamma is probably around 1.8-ish and cannot be changed. I also notice that 4.2.x introduced a gamma fix for Trident cards. To me this is a big coincidence. What do you think? I have of course attached my XF86Config-4 file and XFree86.0.log file; do send me tests etc. I really want this fixed, as I am condemned to using remote X sessions off a Windows box to get a proper-looking desktop :-/ I have tried the latest binary from ftp://people.redhat.com/mharris/test-driver/trident_drv.o to no avail, even with a cold reboot. Many thanks in advance for a reply. Alex
Section "ServerLayout"
        Identifier "XFree86 Configured"
        Screen 0 "Screen0" 0 0
        InputDevice "Synaptics Touchpad" "CorePointer"
        InputDevice "Laptop Keyboard" "CoreKeyboard"
EndSection

Section "Files"
        RgbPath "/usr/X11R6/lib/X11/rgb"
        ModulePath "/usr/X11R6/lib/modules"
        FontPath "/usr/X11R6/lib/X11/fonts/TrueType/"
        FontPath "/usr/X11R6/lib/X11/fonts/Type1/"
        FontPath "/usr/X11R6/lib/X11/fonts/Speedo/"
        FontPath "/usr/X11R6/lib/X11/fonts/CID/"
        FontPath "/usr/X11R6/lib/X11/fonts/misc/:unscaled"
        FontPath "/usr/X11R6/lib/X11/fonts/75dpi/:unscaled"
        #FontPath "/usr/X11R6/lib/X11/fonts/100dpi/:unscaled"
        FontPath "/usr/X11R6/lib/X11/fonts/misc/"
        FontPath "/usr/X11R6/lib/X11/fonts/75dpi/"
        #FontPath "/usr/X11R6/lib/X11/fonts/100dpi/"
EndSection

Section "Module"
        Load "dbe"
        Load "dri"
        Load "extmod"
        Load "glx"
        Load "record"
        Load "xtrap"
        Load "speedo"
        Load "type1"
        Load "freetype"
EndSection

Section "InputDevice"
        Identifier "Laptop Keyboard"
        Driver "keyboard"
        Option "XkbRules" "xfree86"
        Option "XkbModel" "pc104"
        Option "XkbLayout" "gb"
        Option "XkbOptions" "altwin:left_meta_win,compose:ralt"
EndSection

Section "InputDevice"
        Identifier "Synaptics Touchpad"
        Driver "mouse"
        Option "Protocol" "PS/2"
        Option "Device" "/dev/psaux"
        Option "Emulate3Buttons"
        Option "Emulate3Timeout" "50"
EndSection

Section "Monitor"
        Identifier "Laptop LCD 800x600"
        VendorName "Monitor Vendor"
        ModelName "Monitor Model"
        HorizSync 31.5 - 37.9
        VertRefresh 50.0 - 90.0
        #ModeLine "800x600" 400.0 800 840 968 1056 600 601 605 628 +hsync +vsync
        ModeLine "640x480" 25.2 640 656 752 800 480 490 492 525 -hsync -vsync
EndSection

Section "Device"
        ### Available Driver options are:-
        ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
        ### <string>: "String", <freq>: "<f> Hz/kHz/MHz"
        ### [arg]: arg optional
        Option "SWcursor" "off"     # [<bool>]
        Option "PciRetry" "off"     # [<bool>]
        Option "NoAccel" "off"      # [<bool>]
        #Option "SetMClk"           # <freq>
        #Option "MUXThreshold"      # <i>
        Option "ShadowFB" "off"     # [<bool>]
        #Option "Rotate"            # [<str>]
        #Option "VideoKey"          # <i>
        Option "NoMMIO" "off"       # [<bool>]
        Option "NoPciBurst" "off"   # [<bool>]
        #Option "MMIOonly"          # [<bool>]
        Option "CyberShadow" "off"  # [<bool>]
        Option "CyberStretch" "off" # [<bool>]
        #Option "XvHsync"           # <i>
        #Option "XvVsync"           # <i>
        #Option "XvBskew"           # <i>
        #Option "XvRskew"           # <i>
        Identifier "Trident Cyber 9397"
        Driver "trident"
        VendorName "Trident"
        BoardName "Cyber 9397"
        ChipSet "cyber9397"
        BusID "PCI:0:2:0"
EndSection

Section "Screen"
        Identifier
[Xpert]X restarts with NeoMagic NM2160 (128XD)
I have a very strange problem. I just installed Linux on a Fujitsu Stylistic 2300 pen tablet. Everything seemed to just work; I got the pen working and everything. GDM started up and everything seemed hunky-dory until I tried to log in. First I tried logging into a GNOME session. It'd get about half way through the GNOME initialization sequence and X restarted. So I tried KDE. Same thing. So I tried failsafe. That worked: I got an xterm and ran twm. That worked. So I killed twm and tried gnome-session. Same thing, X restarts. Upon investigation, several things are happening that I can see. One, GNOME and KDE don't have permission to connect to the DISPLAY. Another thing, an .Xauthority file isn't being created, so I added my own with proper entries like localhost.localdomain:0 localhost.localdomain/unix:0 hostname:0 hostname/unix:0 and that didn't help. Here is a sample from the .xsession_errors files (excuse mistypes as I'm having to retype the errors by hand):

X connection to :0.0 broken (explicit kill or server shutdown).
X connection to :0.0 broken (explicit kill or server shutdown).
Gdk-ERROR **: X connection to :0.0 broken (explicit kill or server shutdown).
Gdk-ERROR **: Fatal IO error 104 (Connection reset by peer) on X server :0.0.
Gdk-ERROR **: X connection to :0.0 broken (explicit kill or server shutdown).
Gdk-ERROR **: Fatal IO error 104 (Connection reset by peer) on X server :0.0
Gdk-ERROR **: Fatal IO error 104 (Connection reset by peer) on X server :0.0

And here's what's reported in /var/log/messages:

Oct 23 18:01:41 localhost gdm(pam_unix)[5470]: session closed for user root
Oct 23 18:01:41 localhost gdm[5470]: gdm_slave_xioerror_handler: Fatal X error - Restarting :0

I'm enabling the options that the neomagic man page says need to be enabled to prevent lockups with this chipset (can't remember them off the top of my head). Any ideas on what's going on here? Please let me know if any more information is needed.
-- Ti Leggett [EMAIL PROTECTED]
Re: [Xpert]Trident Cyber 9397 issues
On Wed, Oct 23, 2002 at 10:25:08PM +0100, Alexander Clouter wrote: [...] I have tried the latest binary from ftp://people.redhat.com/mharris/test-driver/trident_drv.o and to no avail, even with a cold reboot. Try the updated driver from http://www.xfree86.org/~alanh Make sure you powerdown again. Alan.
Re: [Xpert]Patches for xfree86 cyrix (NatSemi GX1) driver
On Wed, Oct 23, 2002 at 04:50:02PM -0400, Alan Cox wrote: But I have disabled the 5530 support for now, to encourage geode driver testing. Sure. I'm going to do some testing myself. I'll also see if I can get your driver to do 5520. The 5520 VSA1 (if you can find working 5520 VSA1!) is much like the 5530 but without Xv. If I can get 5520 VSA1 working in geode then I can make the cyrix driver just a biosfree driver for the older/weirder end of the universe. Well, it'd be even nicer to deprecate the cyrix driver in favour of the geode driver if you can bring across the 5510/5520 changes. The durango code may even help you as this is NS's own library to access the hardware. Alan.
Re: [Xpert]Patches for xfree86 cyrix (NatSemi GX1) driver
Yes, sorry Alan. I've not even looked at your stuff yet, but as the current Cyrix driver in CVS purely deals with the (ISA based) integrated There isn't really an ISA version. All of them appear on PCI (5510, 5520 and 5530/5530+). Neither the 5520 nor the 5530 actually works with the cyrix driver from my own testing. version and the new geode driver deals with the newer PCI stuff is there any point in merging your stuff? The cyrix code in 4.2 (I don't know about CVS) doesn't work on any hardware I tried. It's unfinished code (e.g. it doesn't set up part of the frame buffer line length, it turns on the hw cursor but never sets it up...). By inspection alone it's incomplete. I don't want to confuse people about which driver to use if I integrate the code, and most certainly NS will be contributing patches to the new geode driver. The geode driver looks a far better thing. I need to grab a current version some day and check it handles the BIOS bugs, rotate and the other bits I needed fully, but I see no reason to keep the code I have. If it doesn't, it's easier to fix your version, knowing that the vga left/right panning is scaled differently to the direct registers, and also that you cannot use the vga emulation for palette control for 8-bit palette loading (it doesn't work on some firmware versions). Alan
Re: [Xpert]Patches for xfree86 cyrix (NatSemi GX1) driver
I'm not familiar with the original MediaGX chips, but the 4.2 driver has a FindIsaDevice function that probes ports directly rather than using any PCI vendor codes. Is the MediaGX the 5510? MediaGX is a brand name for the CPU side of the CPU+5510/5520/5530. The history goes something like this. Cyrix introduce the MediaGX processor and chipset, basically an embedded 486 with all the glue on a few chips. The CPU acquires MMX (GXm), the chipset goes very integrated (5520), then a bit less integrated (5530), then adds new stuff like video overlay in the 5530+. Nat Semi buys that chunk of Cyrix. The processor gets renamed Geode, the chipset eventually turns into the 1200 and so on. NS then integrate the entire thing onto one BGA chip (the 2200, I believe). None of it was ISA, and none of it works on 4.2. I assume in 4.0 the probe code for PCI devices that answered to ISA probes worked differently, as 4.0 found the chips but didn't work; 4.2 doesn't work at all. 3.3.x worked variably depending on the patches used (the base one was very wobbly until 3.3.6). If your code fixes the original MediaGX then we should apply your fixes. The geode driver only deals with the 5530 (not the 5520 or 5510). I'm still doing some debugging on the older 5520 (a problem with mode switch bugs in the older firmware). I don't have a 5510 (and it's very rare). bits I needed fully but I see no reason to keep the code I have. If it doesn't, it's easier to fix your version, knowing that the vga left/right panning is scaled differently to the direct registers, and also that you cannot use the vga emulation for palette control for 8-bit palette loading (it doesn't work on some firmware versions). Please do get a current copy, and send in any patches you make. Once I get time I will do - would you expect the CVS driver to backport (well, more just compile) on 4.2? That makes it a lot easier to test, plus it's something that has to get done as real Red Hat XFree86 work, so it's easier to schedule.
Alan
[Xpert]Problems with Thinkpad 760XL, (Trident TGUI 96xx)
I am only getting garbage, as in having an erroneous refresh rate(?), on the screen when trying to get XFree86 4.2.1 going on a Thinkpad 760XL. It's supposed to be doing 800x600 in 16bpp. From lspci -v -v:

00:03.0 VGA compatible controller: Trident Microsystems TGUI 9660/968x/968x (rev d3) (prog-if 00 [VGA])
        Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
        Status: Cap- 66Mhz- UDF- FastB2B+ ParErr- DEVSEL=medium TAbort- TAbort- MAbort- SERR- PERR-
        Interrupt: pin A routed to IRQ 11
        Region 0: Memory at 0800 (32-bit, non-prefetchable)
        Region 1: Memory at 0840 (32-bit, non-prefetchable)
        Region 2: Memory at 0880 (32-bit, non-prefetchable)
        Expansion ROM at 000c

Which seems to be detected nicely by XFree86 -configure:

Section "Device"
        Identifier "Card0"
        Driver "trident"
        VendorName "Trident"
        BoardName "TGUI 96xx"
        BusID "PCI:0:3:0"
EndSection

Any ideas on this? Should I set specific refresh rates (I've already tried with some numbers found on the net, same result) or does the problem lie elsewhere? I haven't had any luck with the Linux kernel framebuffer drivers for trident either, if that matters. --- John Bäckstrand
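If the problem does turn out to be timings, one hedged starting point is the standard VESA 800x600 @ 60 Hz mode; the Identifier and sync ranges below are illustrative placeholders, not values probed from this panel:

```
Section "Monitor"
        Identifier  "LCD"
        HorizSync   31.5 - 48.5
        VertRefresh 50.0 - 70.0
        # Standard VESA 800x600 @ 60 Hz: 40.0 MHz dot clock, ~37.9 kHz hsync
        ModeLine "800x600" 40.0 800 840 968 1056 600 601 605 628 +hsync +vsync
EndSection
```

If the panel still shows garbage with a known-good VESA mode, the fault more likely lies in the driver's programming of the TGUI chip than in the refresh rates.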
[Xpert]Re: [Dri-devel] Re: r200 and libxaa
On Monday, 21 October 2002 at 14:56, Kevin E Martin wrote: In general, this is one of those "the DRI CVS tree and XFree86 CVS tree have greatly diverged" problems, which will hopefully be fixed soon when the two trees are resynced. This resync is currently being planned and will hopefully happen sometime soon. Hear ye! That's exactly what I ask for in the "RandR Support on XFree86 4.3" thread. I think we need a resync more often. Maybe in an extra branch? Something like an xfree-cvs-sync-branch? Michel, thanks for the patch! I will apply these to the XFree86 tree. As we discussed in our private e-mails, when you have time to get to the other changes to the Radeon driver in the DRI tree, please send patches either to me or to the XFree86 patch list. That's the problem with all the new subsystems (DRI, Gatos). Regards, Dieter -- Dieter Nützel Graduate Student, Computer Science University of Hamburg Department of Computer Science @home: Dieter.Nuetzel at hamburg.de (replace at with @)
[Xpert]DVI output with 'nv' driver
Hello, I've been having trouble getting the digital output of my NVidia GeForce 2 to work with the 'nv' driver provided with XFree86 4.2.0. When the DVI cable is connected, all I get is fuzzy blue scrolling down the screen. At the moment, I'm using the 'nv' driver in conjunction with VGA output to the LCD panel, but this causes a great drop in image quality -- no good. Incidentally, digital DVI output works fine with the 'nvidia' driver, but this doesn't work with the 2.5 Linux kernel (and I'd rather not taint the kernel). Is there any documentation on getting the 'nv' driver to make use of the GeForce 2's DVI digital output? I'm using a Dell 1702FP flat panel LCD display. Thanks very much!
[Xpert]cirrus on powerpc
Hi, I have a Cirrus GD5446 with 1 MB memory installed in a Motorola Powerstack 4400 (PCI PPC) running Linux (kernel 2.4.17pre1, SuSE 7.3). I think the cirrus driver is not endian-safe, because it crashes my system hard. I can't even get the framebuffer driver to work. The only thing which is working is XFree86 3.3.6 with the framebuffer. Is this a known problem, and if yes, is there a way I can help porting it? Greetings Marc
Re: [Xpert]Patches for xfree86 cyrix (NatSemi GX1) driver
I'm surprised that, if the original 5510/5520 chips are PCI, we aren't using the PCI information rather than port probing. Did nobody ever finish the driver? They are all PCI -- I checked.
Re: [Xpert]CrtlAltKP_+/KP_- and cvs xfree
David Dawes [EMAIL PROTECTED] writes: I'm using the CVS version of XFree86 from today and Ctrl+Alt+KP_+/KP_- no longer works. Previously I was using a CVS version of XFree86 from a few weeks ago (before the RandR merge). Config is exactly the same. [...] I committed a fix for this yesterday. Let me know if you still see problems with it. Works fine now. Thanks. David -- Arkadiusz Mikiewicz, CS at FoE, Wroclaw University of Technology [EMAIL PROTECTED] AM2-6BONE, 1024/3DB19BBD, arekm(at)ircnet, PLD/Linux
[Xpert]XML system wide configuration
Hello, I've been reading the discussion about the XML configuration, not only of XFree86 but system-wide. I also think it's a good idea, and I want to discuss it with the people interested in it. I don't think this list is the best place, so please, those interested, email me so I get your addresses. I think that at least Mikael Olenfalk and Michael Michael are interested, but I don't have Michael Michael's address, so please email me. From here on, I think we may need to create a mailing list somewhere (don't know where though). For now I just want to begin to share some ideas. Another important thing about this: I already have code that reads the hardware and the disk layouts and writes an XML file describing them. I want to send it privately, as I suppose most people on this list are not really interested. If anyone is still reading, just a curiosity about X: when is 4.3 supposed to be released? I've been seeing 4.2.99 for ages. cheers Jordi
Re: [Xpert]XFree86 CVS and 1600x1024 (DVI) / Radeon
Wonderful. The reason I never saw the post was that my delivery was disabled for the list. I was wondering why my XFree86 mail folder was rather light lately. :-/ This must automatically happen when my mail server starts bouncing messages when the quota gets exceeded. Anyway, here are answers to the replies I found on the web archives: Keith Packard wrote: Hmm. The Radeon driver mode selection code was restructured during the RandR integration; it's quite possible that some mode lines aren't getting properly added. But your log file seems to list 1600x1024 as a valid mode, so I'm a bit confused. What does 'xrandr' list as valid sizes? The log you enclosed showed the screen running at 16bpp; I assume setting the depth to 24 switches that to 32bpp? Yes, lately I've been running at 16bpp due to the lack of benefit in the higher bit plane modes. I've attached a 24-bit log file. During this session, I verified that pixmaps do have quite a bit of color banding, more so than other 24-bit displays. It did push the fbbpp up to 32bpp. Note that this is DVI-DFP. It could be hardware limited, but I can't verify this (there's nothing but Linux on this machine). I've also attached output of some xrandr commands. This required rebuilding xrandr without any opt flags. The executable built with standard opt flags (none specified by myself, I only specified ProjectRoot) refused to interpret any arguments. This is likely due to using gcc 3.2 (RH8). Also note that it lists 1600x1024 as the active mode. The root window is indeed 1600x1024, but the image my SGI is getting is definitely 1280x1024. Dr Andrew C Aitchison wrote: When I was looking for a DVI card for my SGI 1600SW, I thought I read that the Radeon didn't officially support that resolution; maybe the driver now supports the resolution limit on the card :-( It seems to work quite well here. As distributed with RH8, XFree86 came up at 1600x1024 without needing a modeline.
Previously, I used the following:

Modeline "sgi1600x1024" 106.9 1600 1632 1656 1672 1024 1027 1030 1067

That log file says that the config file requested 16bpp :-) Yes, I've been running 16bpp due to the fact that it looks exactly the same as 24bpp. :-) 24bit log file attached. It also says that it is using the DAC in 6bit mode, which would not help. Hrm.. here's the 24bit log. It's listed as 8bit here, but there's still considerable color banding. Jeff

This is a pre-release version of XFree86, and is not supported in any way. Bugs may be reported to [EMAIL PROTECTED] and patches submitted to [EMAIL PROTECTED] Before reporting bugs in pre-release versions, please check the latest version in the XFree86 CVS repository (http://www.XFree86.Org/cvs)

XFree86 Version 4.2.99.1 / X Window System
(protocol Version 11, revision 0, vendor release 6600)
Release Date: 15 October 2002
If the server is older than 6-12 months, or if your card is newer than the above date, look for a newer version before reporting problems. (See http://www.XFree86.Org/)
Build Operating System: Linux 2.4.19-xfs i686 [ELF]
Module Loader present
Markers: (--) probed, (**) from config file, (==) default setting,
         (++) from command line, (!!) notice, (II) informational,
         (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/XFree86.0.log", Time: Mon Oct 21 22:23:40 2002
(==) Using config file: "/etc/X11/XF86Config"
(==) ServerLayout "Anaconda Configured"
(**) |-->Screen "Screen0" (0)
(**) |   |-->Monitor "Monitor0"
(**) |   |-->Device "ATI Radeon VE"
(**) |-->Input Device "Mouse0"
(**) |-->Input Device "Keyboard0"
(**) Option "XkbRules" "xfree86"
(**) XKB: rules: "xfree86"
(**) Option "XkbModel" "pc105"
(**) XKB: model: "pc105"
(**) Option "XkbLayout" "us"
(**) XKB: layout: "us"
(==) Keyboard: CustomKeycode disabled
(**) FontPath set to "unix/:7100"
(**) RgbPath set to "/usr/X11R6/lib/X11/rgb"
(==) ModulePath set to "/usr/X11R6-CVS/lib/modules"
(--) using VT number 7
(II) Open APM successful
(II) Module ABI versions:
        XFree86 ANSI C Emulation: 0.1
        XFree86 Video Driver: 0.6
        XFree86 XInput driver : 0.3
        XFree86 Server Extension : 0.1
        XFree86 Font Renderer : 0.3
(II) Loader running on linux
(II) LoadModule: "bitmap"
(II) Loading /usr/X11R6-CVS/lib/modules/fonts/libbitmap.a
(II) Module bitmap: vendor="The XFree86 Project"
        compiled for 4.2.99.1, module version = 1.0.0
        Module class: XFree86 Font Renderer
        ABI class: XFree86 Font Renderer, version 0.3
(II) Loading font Bitmap
(II) LoadModule: "pcidata"
(II) Loading /usr/X11R6-CVS/lib/modules/libpcidata.a
(II) Module pcidata: vendor="The XFree86 Project"
        compiled for 4.2.99.1, module version = 1.0.0
        ABI class: XFree86 Video Driver, version 0.6
(II) PCI: Probing config type using method 1
(II) PCI: Config type is 1
(II) PCI: stages = 0x03, oldVal1 = 0x, mode1Res1 = 0x8000
(II) PCI: PCI scan (all values are in hex)
(II) PCI:
[Xpert]X starting with GNOME Desktop Error
Was wondering if someone knows why I'm getting: AUDIT: Tue Oct 22 07:50:51 2002: 353 X: client 125 rejected from local host in my XFree86 log file when trying to start X with GNOME as the desktop. I can set metacity as the window manager and start GNOME from an xterm window and it works. Thanks, Danny L. Morgan XFree86 v4.2.1 GNOME v2.0.2 on Gentoo Linux x86 v1.4_rc1
[Xpert]Savage4 Video Driver
Where is the "updated" Savage4 video driver kept? I'm looking for the video card driver for Windows 98. Thanks! Sheila
[Xpert]Trident Blade 3D 9880 locks up redhat's XFree86-4.2.0-8
I have a Trident Blade 3D AGP card that does not interact well at all with XFree86-4.2.0-8 from Red Hat. It seems that closing windows in gnome/sawfish is an operation with a high likelihood of crashing the whole machine. The card is listed as supported... is this a known problem? thanks, stig
[Xpert]colour mouse cursor
Are there any plans to make the mouse cursor multi-colour and animated? wbw Roman.
[Xpert]Only 16 bpp mode via DVI, when 24 bpp is shown
Although 24bpp mode is shown as the only visual in XFree86.0.log and via xdpyinfo, the image is obviously shown with only 16bpp on the TFT screen! The 64 grayscale colors are easily visible. I'm using XFree86 Version 4.1.0.1 on x86 Linux connected via DVI (digital mode) to the screen. The graphics card is an ATI Radeon VE (module radeon) which works well in true 24bpp mode on Windows 98. Any configuration detail I overlooked? Is anyone else using the ATI Radeon VE in digital mode?

From XFree86.0.log:

(**) RADEON(0): Depth 24, (--) framebuffer bpp 32
(II) RADEON(0): Pixel depth = 24 bits stored in 4 bytes (32 bpp pixmaps)
(==) RADEON(0): Default visual is TrueColor
(==) RADEON(0): RGB weight 888
(II) RADEON(0): Using 8 bits per RGB (8 bit DAC)

From xdpyinfo:

depths (7): 24, 1, 4, 8, 15, 16, 32
depth of root window: 24 planes
number of visuals: 8
default visual id: 0x23
visual:
  visual id: 0x23
  class: TrueColor
  depth: 24 planes
  available colormap entries: 256 per subfield
  red, green, blue masks: 0xff, 0xff00, 0xff
  significant bits in color specification: 8 bits

-- Holger Isenberg [EMAIL PROTECTED] http://mars-news.de
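The arithmetic behind that observation can be sketched quickly (plain Python; it simply counts levels per channel and is not tied to the Radeon at all):

```python
# Distinct intensity levels for a channel with a given bit width.
def levels(bits_per_channel: int) -> int:
    return 2 ** bits_per_channel

# 16 bpp is usually RGB565: 5 bits red, 6 bits green, 5 bits blue.
print(levels(5), levels(6))  # 32 64
# 24 bpp is RGB888: 8 bits per channel.
print(levels(8))             # 256
```

Seeing exactly 64 gray steps is what a 6-bit-wide channel produces, so the panel path is behaving like 16 bpp (or a 6-bit DAC/link) even though the X visual reports 24-bit TrueColor.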
[Xpert]Re: VBlank interrupt?
Svgalib's kernel module has an ioctl that waits for retrace, using the card's interrupt. This functionality of the module is independent of svgalib, and can be used also in X11, text mode or fbdev. The interrupt is supported on Matrox, SiS, Rendition and some ATI and nVidia cards. -- Matan Ziv-Av. [EMAIL PROTECTED]
Re: [Xpert]Patches for xfree86 cyrix (NatSemi GX1) driver
- The cyrix driver as is in the XFree86 CVS does not work on National's current Centaurus II development board (GX1+5530). (Presumably it works on older chipsets; I don't know enough to speak to that.) It doesn't work on anything as far as I can tell. Incomplete code. - Alan's version of the cyrix driver, without the minor patches I made, does not work on the 5530. With the patches it is usable. Works on some - seems to be VSA/BIOS dependent 8(. Your fixes are in the new release I put up today (the patches address this). It is not perfect; I cannot cleanly reset back to text mode (the system remains up, but the video becomes garbled). Curious - do you know what BIOS you have and if it's VSA1/VSA2? Although the geode driver is the wave of the future on the 5530, presumably the cyrix driver should be maintained, if this is low cost, so as not to lock up if used? Also, the cyrix driver may be a good deal smaller. I plan to maintain it, improve the 5520 support and also teach it to do BIOS-free setup. On older boxes the VSA is so fragile that being able to bash the chips directly instead of via software-faked vga extensions is a must. I'll keep the 5530 working but won't now have to worry about adding all the stuff like Xv to it. PS: there are some cool kernel bits heading into the tree soon, I hope, that give many things 15-30% speedups. Alan
Re: [Xpert]Patches for xfree86 cyrix (NatSemi GX1) driver
But I have disabled the 5530 support for now, to encourage geode driver testing. Sure. I'm going to do some testing myself. I'll also see if I can get your driver to do 5520. The 5520 VSA1 (if you can find working 5520 VSA1!) is much like the 5530 but without Xv. If I can get 5520 VSA1 working in geode then I can make the cyrix driver just a biosfree driver for the older/weirder end of the universe. I've now got the native mode code for the 5510/5520 setting modes standalone, just figuring out how X mode data works so I can turn it into the right format.
Re: [Xpert]Patches for xfree86 cyrix (NatSemi GX1) driver
is much like the 5530 but without Xv. If I can get 5520 VSA1 working in geode then I can make the cyrix driver just a biosfree driver for the older/weirder end of the universe. Well, it'd be even nicer to deprecate the cyrix driver in favour of the geode driver if you can bring across the 5510/5520 changes. The geode driver depends on SoftVGA; on the 5510 that's a really bad idea, and on the 5520 it's dubious at best. I'll have a play once I can get it not to die on the spot. The durango code may even help you as this is NS's own library to access the hardware. Tested the geode driver as of current CVS - it crashes my hardware dead with a blank screen, and so hard it needs a power cycle to get it back. Looks like it explodes during the mode switch.
Re: [Xpert]Problems with Thinkpad 760XL, (Trident TGUI 96xx)
On Mon, Oct 21, 2002 at 11:23:03AM +0200, John Bäckstrand wrote: [...] Get the newer driver from http://www.xfree86.org/~alanh and make sure you powerdown your machine before trying it. Alan.
Re: [Xpert]gzipped fonts with xft
XB> Is there a way to make xft2 read gzipped fonts? Currently it doesn't, XB> and that makes all bitmap fonts unusable. http://www.xfree86.org/pipermail/fonts/2002-October/002203.html Juliusz
Re: [Xpert]DVI output with 'nv' driver
On Mon, 21 Oct 2002, Richard Weber wrote: [...] When the DVI cable is connected, all I get is fuzzy blue scrolling down the screen. DVI doesn't work in 4.2.0. You have to use XFree86 CVS. Mark.
Re: [Xpert]Xfree 4.2 fails to initialize LCD screen
On Wed, 23 Oct 2002, Charles Moon wrote: I have a Sony Vaio Laptop PCG-FXA59 with a 1400 x 1050 native display powered by an ATI Rage Mobility M1 AGP adapter. When I attempted to install RedHat 8.0, the anaconda graphical setup correctly probed the ATI chip, but returned unknown for the monitor. At that point a small white square appeared in the upper left corner of the screen and intermittent flashes of color appeared over the screen. The only way out was to shut down. After using the text-based upgrade over a fully functioning installation of RH 7.2, when I tried to start the X server [startx] the exact same screen aberration happened. Upon further testing I have discovered I can get the LCD display to work IF when I start the Try 'Option NoCompositeSync' in the XF86Config's Device section. Marc. -- Marc Aurele La France, Computing and Network Services, 352 General Services Building, University of Alberta, Edmonton, Alberta, T6G 2H1, CANADA. work: 1-780-492-9310, fax: 1-780-492-1729, email: [EMAIL PROTECTED]. Standard disclaimers apply. XFree86 Core Team member. ATI driver and X server internals.
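For anyone finding this in the archives, the suggested option goes in the Device section of XF86Config; a minimal sketch, with the Identifier string here being just a placeholder:

```
Section "Device"
        Identifier "Rage Mobility M1"
        Driver     "ati"
        # Suggested workaround for the LCD initialization problem above
        Option     "NoCompositeSync"
EndSection
```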
Re: [Xpert]XFree86 Bugzilla
On Mon, 21 Oct 2002, David Dawes wrote: This comes up from time to time. The bottom line is that having an XFree86 bug tracking system is of limited use unless the XFree86 developers use it. Since that's the group that it would impact the most, that's where the motivation for it should come from. BTW, is there an official Linux kernel bug tracking system? Yes. Its name is RedHat. ;) While the main development team may not be tracking bugs, the corporations which have significant Linux efforts (RedHat, IBM, etc.) are. The main development effort benefits from that infrastructure without acknowledging it. Besides, what Linux does is not necessarily the right answer. Many people complain that Linux development is not scaling because the kernel complexity is exceeding the ability of one person to grasp it. And I hope that no one is suggesting that XFree86 should use BitKeeper ... The *BSD development teams provide examples of running and maintaining a project over long periods of time--even longer than Linux. These projects *do* have bug tracking systems. However, this is a political problem, not a technical one. Bug tracking will appear when the lack of it annoys the development team. So, your best bet to get a bug tracking system implemented is simply to file lots of bug reports on the mailing lists until it annoys the developers. ;) -a
[Xpert]misaligned fullscreen mode in VMware
I'm trying VMware 3.2 on Red Hat Linux 7.3, with a custom-compiled kernel. The graphics chip is an integrated Intel 845G. I am running the very latest (as of one day ago) XFree86 from CVS, main trunk. My XF86Config-4 file is below. Performance with the Win2k guest OS is almost acceptable in windowed mode, but when switching to full-screen mode, after I get the warning that DGA acceleration isn't supported by my drivers, I then find that performance is essentially okay, with one big BUT: the screen image is shifted up and to the left, quite a bit. It generally wraps at the right edge. I've tried a variety of depth, fbbpp and resolution settings on both the guest OS and the Linux machine itself. Nothing seems to work. When I have depth 16, I do find that DRI is working (it doesn't work in 24), but this doesn't seem to have anything to do with DGA acceleration (I'm new to all this). Any tips? I have read in past posts that there is an open issue about whether the fault here lies with VMware or with driver maintainers who don't feel that DGA acceleration is part of the driver. Question is: what to do in the meantime? Are there any other options to explore? I'm nowhere near smart enough to hack at the driver code myself.
Re: [Xpert]Xfree 4.2 fails to initialize LCD screen
Thanks for the tip, but it didn't work.

- Original Message - From: Marc Aurele La France [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Wednesday, October 23, 2002 7:11 PM Subject: Re: [Xpert]Xfree 4.2 fails to initialize LCD screen

On Wed, 23 Oct 2002, Charles Moon wrote: I have a Sony Vaio Laptop PCG-FXA59 with a 1400 x 1050 native display powered by an ATI Rage Mobility M1 AGP adapter. When I attempted to install RedHat 8.0, the anaconda graphical setup correctly probed the ATI chip, but returned unknown for the monitor. At that point a small white square appeared in the upper left corner of the screen and intermittent flashes of color appeared over the screen. The only way out was to shut down. After using the text-based upgrade over a fully functioning installation of RH 7.2, when I tried to start the X server [startx] the exact same screen aberration happened. Upon further testing I have discovered I can get the LCD display to work IF when I start the

Try 'Option NoCompositeSync' in the XF86Config's device section.

Marc.

Marc Aurele La France | work: 1-780-492-9310 | fax: 1-780-492-1729
Computing and Network Services, 352 General Services Building
University of Alberta, Edmonton, Alberta, T6G 2H1, CANADA
email: [EMAIL PROTECTED] | Standard disclaimers apply
XFree86 Core Team member. ATI driver and X server internals.
Re: [Xpert]Xterm raw input
Xavier Bestel wrote: On Wed 23/10/2002 at 13:09, Russell wrote: When I type ctrl-v then keypad 7/Home, or the dedicated Home key on a PC102 keyboard, I get ^[[H or ESC-[H (CSI-H). This code is not in the xterm control sequences spec: http://cns.georgetown.edu/~ric/howto/Xterm-Title/ctlseqs.txt

No, this ESC sequence should have no effect on xterm. The terminal should just report it as-is to the application, which will interpret it (e.g. a text editor will move the cursor to the beginning of the line).

If xterm doesn't generate it, where does the ^[[H code come from?
Re: [Xpert]Xterm raw input
Russell wrote: Xavier Bestel wrote: On Wed 23/10/2002 at 13:09, Russell wrote: When I type ctrl-v then keypad 7/Home, or the dedicated Home key on a PC102 keyboard, I get ^[[H or ESC-[H (CSI-H). This code is not in the xterm control sequences spec: http://cns.georgetown.edu/~ric/howto/Xterm-Title/ctlseqs.txt

No, this ESC sequence should have no effect on xterm. The terminal should just report it as-is to the application, which will interpret it (e.g. a text editor will move the cursor to the beginning of the line).

If xterm doesn't generate it, where does the ^[[H code come from?

I found it:

    CSI Ps ; Ps H    Cursor Position [row;column] (default = [1,1]) (CUP)

Now the only question is: does X send logical key names to control xterm, or does it send terminal control codes?
Re: [Xpert]misaligned fullscreen mode in VMware
Sorry, forgot config file:

Section "ServerLayout"
    Identifier  "XFree86 Configured"
    Screen 0    "Screen0" 0 0
    Screen 1    "Screen1" 0 0
    InputDevice "Mouse0" "CorePointer"
    InputDevice "Keyboard0" "CoreKeyboard"
EndSection

# Module loading section
Section "Module"
    Load "dbe"      # Double-buffering
    Load "GLcore"   # OpenGL support
    Load "dri"      # Direct rendering infrastructure
    Load "glx"      # OpenGL X protocol interface
    Load "extmod"   # Misc. required extensions
EndSection

Section "InputDevice"
    Identifier "Keyboard0"
    Driver "keyboard"
    Option "XkbGeometry" "pc"
    Option "XkbKeycodes" "xfree86"
    Option "XkbTypes" "default"
    Option "XkbCompat" "default"
    Option "XkbRules" "xfree86"
    Option "XkbModel" "pc105"
    Option "XkbLayout" "ru(basic)"
    Option "XkbOptions" "grp:caps_toggle"
EndSection

Section "InputDevice"
    Identifier "Mouse0"
    Driver "mouse"
    Option "Device" "/dev/mouse"
    Option "Protocol" "IMPS/2"
    Option "Emulate3Buttons" "no"
    Option "ZAxisMapping" "4 5"
EndSection

Section "Monitor"
    Identifier "MX75"
    VendorName "HP"
    ModelName "HP Pavilion MX75"
    HorizSync 30 - 70
    VertRefresh 50 - 120
    Option "dpms"
EndSection

Section "Device"
    Identifier "Intel 845G"
    Option "DRI" "on"
    Option "AGPMode" "1"
    Driver "i810"
    VendorName "Intel"
    BoardName "Intel 845G"
    BusID "PCI:0:2:0"
    Screen 0
    VideoRam 16384
EndSection

Section "Device"
    Identifier "VMware SVGA"
    Driver "vmware"
    BusID "PCI:0:15:0"
    Screen 1
EndSection

Section "Device"
    Identifier "Linux Frame Buffer"
    Driver "fbdev"
    BoardName "Linux Frame Buffer"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Intel 845G"
    Monitor "MX75"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1280x1024" "1024x768" "800x600" "640x480"
        Viewport 0 0
    EndSubSection
    SubSection "Display"
        Depth 16
        Modes "1280x1024" "1024x768" "800x600" "640x480"
        Modes "1024x768"
        Viewport 0 0
    EndSubSection
    SubSection "Display"
        Depth 8
        Modes "1280x1024" "1024x768" "800x600" "640x480"
        Viewport 0 0
    EndSubSection
    SubSection "Display"
        Depth 4
        Modes "1280x1024" "1024x768" "800x600" "640x480"
        Viewport 0 0
    EndSubSection
    SubSection "Display"
        Depth 1
        Modes "1280x1024" "1024x768" "800x600" "640x480"
        Viewport 0 0
    EndSubSection
EndSection

Section "Screen"
    Identifier "Screen1"
    Device "VMware SVGA"
    Monitor "MX75"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1280x1024" "1024x768" "800x600" "640x480"
        Viewport 0 0
    EndSubSection
EndSection

Section "DRI"
    Mode 0666
EndSection

On Wed, 2002-10-23 at 19:42, Noel Bush wrote: I'm trying VMware 3.2 on Red Hat Linux 7.3, custom-compiled kernel. Graphics chip is an integrated Intel 845G. I am running the very latest (as of one day ago) XFree86 from CVS, main trunk. My XF86Config-4 file is below. Performance with the Win2k guest OS is almost acceptable in the windowed mode, but when switching to full-screen mode, after I get the warning that DGA acceleration isn't supported by my drivers, I then find that performance is essentially okay, with one big BUT: the screen image is shifted up and to the left, quite a bit. It generally wraps at the right edge. I've tried a variety of depth, fbbpp and resolution settings on both the guest os and the linux machine itself. Nothing seems to work. When I have depth 16, I do find that DRI is working (doesn't work in 24), but this doesn't seem to have anything to do with DGA acceleration (I'm new to all this). Any tips? I have read in past posts that there is an open issue about whether the fault here lies with VMware or with driver maintainers who don't feel that DGA acceleration is the part of the driver. Question is: what to do in the meantime? Are there any other options to explore? I'm nowhere smart enough to hack at the driver code myself.
Re: [Xpert]X starting with GNOME Desktop Error
Hi, I was wondering if someone knows why I'm getting:

AUDIT: Tue Oct 22 07:50:51 2002: 353 X: client 125 rejected from local host

in my XFree86 log file when trying to start X with GNOME as the desktop. I can set metacity as the window manager and start GNOME from an xterm window and it works. Should I add 'xhost +' somewhere like xinitrc? Lau
Re: [Xpert]Patches for xfree86 cyrix (NatSemi GX1) driver
Re: I cannot cleanly reset back to text mode (the

Curious - do you know what BIOS you have and if it's VSA1/VSA2?

The BIOS says XpressROM V3.1.0 (National's), built 03/08/2001, with chip 5530A Rev B1. I'm running a fairly recent cvsup of FreeBSD 4.6-STABLE with one minor mod (to not use the TSC as a clock) on a Centaurus II (GX1). I imagine/assume this is running VSA2 (it says VSA 0190). The motherboard version says Centaurus, 1.D. Both your version of the cyrix driver (as patched) and the XFree86 geode driver work, although the cyrix driver is clearly not switching back to text mode correctly...

- bruce
Re: [Xpert]Core fonts issues [was: Problems with Type1 big fonts]
On Wed, Oct 23, 2002 at 08:13:10PM +0200, Juliusz Chroboczek wrote: DD Unless the old Type1 backend can unreservedly be replaced by the new DD FreeType2 backend, then it should be disabled, and maybe even a fake DD type1 font module created for the modular build so that existing DD configurations don't break. If there are still reasons for wanting the DD old backend, then it needs to be configurable, at least at build-time. DD If we want to provide more flexibility in allowing the user to control DD what font suffixes are handled by what backend, there would need to be DD some type of run-time configurability.

I was looking into that when other things came up; I may very well be able to come back to this. Anyway, here's the plan I had. The idea would be to have a new interface,

    Bool FontFileRegisterRendererPriority(FontRendererPtr, int priority)

where the existing FontFileRegisterRenderer interface in renderer.c is an alias for FFRRP with priority set to 0. Priority is an integer (positive or negative), and renderers with higher priority override renderers of lower priority.

The Type 1 renderer would register with negative priority for both PFA and CID; in the absence of another CID renderer, it would render CID fonts, but PFA fonts would be handled by FreeType. FreeType would register with default priority for both PFA and TTF. X-TT would register with positive priority for TTF. In a configuration in which all renderers are linked in, X-TT would handle TTF, FreeType would handle PFA, and Type 1 would handle CID. In a default configuration (no X-TT), both PFA and TTF are handled by FreeType.

The advantage of that is that there are no new configuration mechanisms -- we simply leverage off the existing module loader. It's also easily extensible -- I expect to implement bitmap support in the FreeType backend after 4.3.0, and then you'll want the existing bitmap renderers to override FreeType if they're linked in.
This would at least address the immediate issue, and it does need to be addressed before 4.3.0. The downside is that it's not completely flexible, not allowing, for example, TTF support using FreeType while using Type 1 for PFA.

I don't think anyone cares. If someone does, then I guess they'll implement the more flexible solution. If anyone is interested in that, please let me know.

DD Also, I'd really like to see some resolution to the separate FreeType DD and X-TT backends for TrueType fonts. As it is now, if someone chooses DD X-TT, they will still need the old Type1 backend for Type1 fonts DD regardless of other considerations. Is it still not possible to resolve DD the issues that led to two TrueType backends in the first place?

Here's my personal perception. X-TT contains support for embedded bitmaps, which FreeType 1 didn't have. The new FreeType backend fully supports embedded bitmaps. X-TT also contains a number of features such as fake bolding and automagic slanting, collectively known as TTCap. These should be handled at the toolkit level in my opinion, and at any rate implementing new features in the core fonts system at this point is pretty much pointless. Still, users of X-TT have become accustomed to these features being available at the server level, and would probably not accept having them taken away. I shall not implement the said features in FreeType, which I want to remain a small and clean piece of code. I shall also not integrate myself the (existing) patches that implement TTCap in the FreeType backend. If there is a person interested in doing that, I'll be glad to help -- but only if said person commits to maintaining the code for the indefinite future.

The priority scheme should at least help a bit for now, but this issue still needs to be solved. There's always the drastic solution of just dropping one of them.
Before anyone gets upset, that won't happen at least in the 4.3.0 timeframe, but I won't make any guarantees beyond that. At a minimum I'd like to see a clear summary of the issues from the point of view of an X-TT advocate.

David
Re: [Xpert]Problems with Type1 big fonts
On Wed, Oct 23, 2002 at 08:19:31PM +0200, Juliusz Chroboczek wrote: AA Warning: font renderer for .pcf registered more than once

For some reason that I don't understand, all renderers get registered twice in the modular server. FreeType is not at fault.

I haven't seen any evidence of all renderers being registered twice. The bitmap module is always loaded by the core server, so also specifying it in the Modules section of the config file may lead to it being registered twice.

I've added this warning to fontfile/renderer.c in the hope that somebody competent will end up looking into that.

I modified that a little to only print the warnings for the first server generation -- otherwise you see them every time the X server recycles. I also modified the registration code to clear the list of renderers at the start of each new server generation.

David
Re: [Xpert]XFree86 Bugzilla
On Wed, Oct 23, 2002 at 04:26:59PM -0700, Andrew P. Lentvorski wrote: On Mon, 21 Oct 2002, David Dawes wrote: This comes up from time to time. The bottom line is that having an XFree86 bug tracking system is of limited use unless the XFree86 developers use it. Since that's the group that it would impact the most, that's where the motivation for it should come from. BTW, is there an official Linux kernel bug tracking system? Yes. It's name is RedHat. ;)

It seems IBM is setting something up too, but like Red Hat and other vendors, they're motivated by their business needs (which is fine).

While the main development team may not be tracking bugs, the corporations which have significant Linux efforts (RedHat, IBM, etc.) are. The main development effort benefits from that infrastructure without acknowledging it.

Red Hat, Debian, and others track XFree86 bugs too. The results of that are a useful contribution. It works because someone at each of those organisations does the filtering and followup, and passes on the relevant reports, information, and/or fixes to the XFree86 developers.

Besides, what Linux does is not necessarily the right answer. Many people complain that Linux development is not scaling because the kernel complexity is exceeding the ability of one person to grasp it. And I hope that no one is suggesting that XFree86 should use BitKeeper ...

No, nobody is suggesting that XFree86 should use BitKeeper (at least I hope they're not :-) It's quite understandable that the Linux kernel does though, since that's apparently what motivated it in the first place (but I don't want to turn this thread into a BK pro/con argument).

The *BSD development teams provide examples of running and maintaining a project over long periods of time--even longer than Linux. These projects *do* have bug tracking systems. However, this is a political problem, not technical. Bug tracking will appear when lack of it annoys the development team.
So, your best bet to get a bug tracking system implemented is simply to file lots of bug reports in the mailing lists until it annoys the developers. ;)

A bug tracking system will appear when the developers feel that it would make their lives easier. I don't know too many of us who have the time to go back and look over lists of bugs. Most of us find it easier to deal with them as they come in. If too many come in, more don't get dealt with.

The only way I see a bug tracking system working right now is if someone makes the commitment to administer it. That means cleaning it regularly to keep it up to date (filtering/categorising reports, removing duplicates and out-of-date reports, tracking XFree86 commits and closing reports when they've been fixed, etc). That would allow developers to look through it when they wanted to, without forcing the overhead of keeping it up to date onto them. If the developers are asked to do all of this, they won't, and the result will be a nice bug tracking system full of bugs marked unassigned. I don't see that as very useful. We could have taken that approach, but then everyone would be asking why their bugs haven't been looked at instead of why we don't have a bug tracking system. I'd prefer not to create the illusion.

David
[Xpert]problem with notebook compaq presario
Hello, I have a problem configuring XFree86 on my Compaq Presario 12XL521 notebook. I have installed Mandrake 9.0 with XFree86 4.2.1. Linux works fine in text mode, but when I try to start XFree86 the system crashes. The video card is a Trident CyberBlade A1. I think the problem is with the monitor. I'm waiting for your help. Sorry for my English, I speak Spanish. Thank you, Fabricio Lattaro
Re: [Xpert]Matrox G450, two cards, two heads
Has anyone gotten multiple G450 cards to work, and if so, can you share your configuration with me? Has anyone gotten a single G450 card to work as a secondary controller? Gregg

Sure do! I use 3 G450 cards. http://www.pvv.org/~kim/Monitors6.html The trick, as I learned here, is to disable acceleration. Do not use the mga driver from Matrox. It is very nice for 2 screens with one card, but not for more. Use the mga driver from XFree86! If you have changes to recommend to my XF86Config file, or webpage, please email me. At the moment, I am trying to get OpenGL working. So far without success. [EMAIL PROTECTED]
[Xpert]questions about cvs and small bug in xv
Hello, I've tested the CVS from October 16. I've seen that the new cursor seems to use real transparency. Is it accelerated? How can it be used in another app? I think it was working with Xrender, but the sample code at http://www.eax.com/render/ didn't work any better than the mouse cursor (slow and buggy), and I don't know where to look for that information.

When I use gmplayer with xv output, if the cursor is on the rendering surface, it sometimes creates a blue square where the cursor is. With x11 output, it works well. I've got a Radeon 32 MB SDR, Slackware 8.1 with a custom 2.4.19 Linux kernel, and X was compiled with gcc 2.95.3. Cedric
Re: [Xpert]Colormap Allocation problems under Linux 7.3
On Tue, Oct 22, 2002 at 08:08:24PM -0700, Keith Packard wrote: Around 22 o'clock on Oct 22, David Dawes wrote:

-no-render-extension / NoRenderExtension
-render-extension (for cancelling a NoRenderExtension option in XF86Config)

Might shorten these to '-norender' and '-render'. However, I'd argue that Render should be considered a core extension and not be made optional at all. Applications like OpenOffice and Mozilla will not function reasonably without it, and (see below) its impact can be mitigated or even eliminated, although some apps will probably produce unexpected results without any render colors in the default colormap aside from black and white.

An application should query whether the X server has Render support and, if not, take appropriate decisions, no? E.g., in FVWM we have a function which simulates XRenderComposite, and a font spec can have two fonts, one for Xft and another for core text rendering.

I note that we don't have a '-noshape' option available.

    cl   GreyScale    PseudoColor
    0    default      default
    1    8 grey       8 grey
    2    16 grey      2x2x2 cc + 4 grey (or 8?)
    3    32 grey      3x3x3 cc + 8 grey (or 16?)
    4    64 grey      4x4x4 cc + 16 grey (or 23?)
    5    128 grey     5x5x5 cc + 16 grey (or 32?)
    6    256 grey     6x6x6 cc + 32 grey (or 30)

-render-color-limit (int)cl / RenderColorLimit (int)cl

This seems more like a mode to me, and it seems like fewer choices would be better. Certainly mode #6 is not useful as it's identical to a static colormap, except that the server won't do any nearest-color matching. I suggest three modes would be sufficient:

-render-colors none    - render uses only BlackPixel and WhitePixel
-render-colors few     - render gets 16(?) levels of gray
-render-colors default - render gets a modest number as in current CVS

In some cases it is not the number of colors which counts, but how the apps share the colours.
Run XFree-4.2 in mode #6 (TrueColor) and you will have no problems with GTK and QT apps, XEmacs, FVWM, and probably other apps (I did not test Mozilla or OpenOffice), as a lot of apps use a 6x6x6 cc (by default, or because the app detects such an allocated cube). So even with XFree-4.2, in some environments, you have 12 free colors until you start an old application. Whereas with XFree CVS, if you start one KDE app you have no free colours left. Moreover, if you use standard dithering methods, dithering is really, really better in modes #5 and #6 than in the default mode. So I think that

-render-colors lotof - mode #5
-render-colors full  - mode #6

can be useful. Mode #5 is interesting: a lot of apps can use a 5x5x5 cc and standard ordered dithering gives good results (it is the minimal default cc used by GTK), and on the other hand you have a reasonable number of free colours. Yes, using StaticColor is more or less equivalent to mode #6 (but with slightly different colour proportions (3/3/2) by default). I think a full mode does not hurt and assures backward compatibility. But I do not care much; I am more interested in mode #5.

Another subject: do you think it is better to always use pow(2,k) grey colours (e.g., use 16 greys in place of the default 21 greys) so that the greys are aligned with the 32x32x32 colour cube which is used to approximate the Render colors? I made some tests and this seems ok.

Regards, Olivier
Re: [Xpert]Colormap Allocation problems under Linux 7.3
On Tue, Oct 22, 2002 at 10:26:57PM -0400, David Dawes wrote: On Tue, Oct 22, 2002 at 01:25:40AM +0200, Olivier Chapuis wrote: On Mon, Oct 21, 2002 at 04:54:26PM -0400, David Dawes wrote: [snip]

I wrote a patch which does this today. I also think it is a good idea to have an option which allows control over the colours that Render preallocates. [snip] It is the first time I have really read the code under Xserver, so I am not sure I am doing the right things. Here is what I did: I added two new members to the ScreenRec structure: disableRender and renderColorLimit. In common/xf86Helper I added a new function xf86SetRenderOptions(ScreenPtr) which sets up these two members according to the command line or config file option. Then, the driver should call xf86SetRenderOptions before it calls fbPictureInit.

Is this implemented in a way that allows Render to be enabled/disabled on a per-screen basis? If so, it's probably OK to put that stuff into the ScreenRec.

Render is disabled in fbPictureInit: if disableRender is set, then fbPictureInit does nothing and returns TRUE. It seems to me that this is ok: the driver should work as if it has no Render support (?). At least this works with the vesa and neomagic drivers.

Your description sounds reasonable to me.

One thing I am not really happy with is that we need to add one line per driver. Maybe the two new members should be set in common/xf86Config.c (xf86HandleConfigFile). On the other hand, maybe some drivers will not like being compiled with RENDER but run with Render disabled?

The way you're doing it is the same as the way we currently handle enabling/disabling backing store on a per-screen basis.

Yes, I've made NoRender work on a per-screen basis. As with BackingStore, the option can be set at the top of the Screen section, in the Display subsection of the Screen section, and in the Device section.

I'd be interested to see the patch.

I will send a patch soon.
Regards, Olivier
[Xpert]Optimal colormap
On Tue, Oct 22, 2002 at 01:25:40AM +0200, Olivier Chapuis wrote:

    cl   GreyScale    PseudoColor
    0    default      default
    1    8 grey       8 grey
    2    16 grey      2x2x2 cc + 4 grey (or 8?)
    3    32 grey      3x3x3 cc + 8 grey (or 16?)
    4    64 grey      4x4x4 cc + 16 grey (or 23?)
    5    128 grey     5x5x5 cc + 16 grey (or 32?)
    6    256 grey     6x6x6 cc + 32 grey (or 30)

Why not use an optimal colourmap? http://www.pvv.org/~kim/Palette256.html [EMAIL PROTECTED]
[Xpert]XIM / XOM
Hello All, I am trying to write a basic/simple XIM for one of the Indian languages (Tamil). The problem is that I can't find any basic info about XIM server programming. I need help from people who are working, or have worked, on XIM server programming. Please help me write a basic XIM which will return two characters for every single English character, or something like this. I have already spent nearly 3 months on this. I downloaded XCIN to find out what is happening inside the XIM, but the code is very difficult for me to understand. :( I downloaded IMdkit and found a sampleIM program in the doc folder; I am presently going through the code. I still can't find the technique behind how the event from the app is transferred to the XIM, what the ENV variables are used for, etc. Can anyone give a clear idea about XIM server programming?

Thanks,
-- Bharathi S, IndLinuX Team, (__) DONLab, TeNeT Group, oo / IIT-Madras, Chennai-INDIA. (_/\