Re: libs/wine/debug: Avoid over-allocating memory in default_dbgstr_wn.
Michael Karcher wrote: On Tuesday, 2008-07-15 at 15:55 -0700, Dan Hipschman wrote:

    if (n < 0) n = 0;
    size = 12 + min( 300, n * 5 );
   -dst = res = funcs.get_temp_buffer( n * 5 + 7 );
   +dst = res = funcs.get_temp_buffer( size );

This looks like fixing an under-allocation, not over-allocation. The patched code allocates at least 5 bytes more, and never less than 312.

Never _more_ than 312, because min(300, x) <= 300.

tom
Re: Wine compiler options benchmarks
Ben Hodgetts (Enverex) wrote: Just in case anyone was ever curious about how well Wine performs with different C/CXXFLAGS, I did a test today with RC5 to see how much of a difference it makes with 3DMark 2001 SE. Nothing major, but if someone can think of a better benchmark to try, please let me know (I had hoped to try Oblivion or some such but it has no benchmark feature).

Core2Quad Q9450 @ 3.4GHz || 4GB PC8500 5-5-5-15 || GCC 4.3.1 || Linux 2.6.25
3DMark 2001 SE B300. Wine 1.0-rc5.

    28576  -march=native -O3 -pipe -fomit-frame-pointer
    28522  -march=native -O3 -pipe -fomit-frame-pointer -mfpmath=sse,387
    28511  -march=native -O2 -pipe -fomit-frame-pointer
    28427  -march=native -O3 -fomit-frame-pointer -ffast-math -funroll-loops -Wall -pipe
    28426  -march=native -O2 -pipe -mfpmath=sse,387
    28311  -march=native -O2 -pipe
    28270  -march=native -Os -pipe -fomit-frame-pointer
    28126  -O3 -pipe -fomit-frame-pointer
    28072  Wine Default (-g -O2)
    27984  -march=native -O3 -pipe
    27646  -march=native -Os -pipe

-pipe only speeds up compilation, not the resulting machine code. -Wall makes gcc show more warnings about the code and, again, does not speed up the code.

tom
Re: Hardy Heron -- Pulseaudio interferes with non-gnome audio
Susan Cragin wrote: The new default pulseaudio in Hardy screws up every program that isn't gnome, delivering terrible sound. https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/198453

Applications that use ALSA should work fine with the alsa pulse plugin - should. Unfortunately the plugin has a few bugs, and the developers (of both pulseaudio and alsa) have been unresponsive in this matter. See:
- http://www.pulseaudio.org/ticket/198
- https://bugtrack.alsa-project.org/alsa-bug/view.php?id=2601

Question -- what does WINE do with pulseaudio?

Some suggested creating a native PulseAudio wine driver. That would yield the best results as far as performance and reliability go. Unfortunately we'll be stuck with the alsa driver for some time to come, so I tried my best to make the alsa pulse plugin work with wine. I had to patch both the wine alsa driver and the alsa pulse plugin, and the result worked well enough for my taste. I submitted the patch to wine-patches: http://www.winehq.org/pipermail/wine-patches/2008-February/050561.html

I don't know if the relevant alsa pulse plugin patch is publicly available; I haven't looked at this in a long time. Some of the needed patches are probably still in my local repositories. It makes no sense for me to work on this anymore until the developers respond to my questions.

http://forum.skype.com/index.php?showtopic=112021

I heard that the skype developers are very interested in getting skype working with pulseaudio, and even helped testing some patches. I have gotten skype to work with pulseaudio; I think if you apply the second patch attached to alsa bug #2601, skype will work. I haven't looked at the code since, because apparently nobody from the alsa team is interested in fixing the bug.

tom
Re: WineHQ should discourage the use of cracks
Vincent Povirk wrote: On Tue, Mar 4, 2008 at 12:53 PM, Alexander Nicolaysen Sørnes [EMAIL PROTECTED] wrote: I'm not sure if we should remove the option for 'fully functional, requires hacks'. A lot of people come to the AppDB to find out how they can make their apps work, and are more interested in the end result as opposed to how to get there.

In practice, is there really enough difference between fully functional and mostly functional that we need another rating? People who only care about the end result would know that anything Silver or above will just about work. I've seen Gold applied to software that is really mostly functional, requires hacks, i.e. there are some other minor problems that can't be worked around (and Platinum for software with minor problems as well). Then again, if we add a few more variables, we can express the ratings with radar charts. ;) http://img231.imageshack.us/img231/9466/screenshot5ec1.png

Yeah, use a multi-dimensional rating system. Have different criteria and not just one. Rate each with zero to four stars. The overall rating (platinum, gold, garbage) is then a function of all the criteria ratings.

Rating:
- 0: Does not work
- 1: Works but ...
- 2: Works but requires dlls (download from internet)
- 3: Works but requires local changes in winecfg (sound settings etc)
- 4: Works with vanilla wine

Criteria:
- Installation
- Functionality
- Usability
- ???

The radar chart is not a bad idea. smartvote.ch, a site that helps you find out who to vote for (in Switzerland), creates nice charts: http://dbservice.com/ftpdir/tom/smartspider.png

tom
Re: Wine and PulseAudio
Stefan Dösinger wrote: On Friday, 22 February 2008 01:50:09, Tomas Carnecky wrote: I switched my desktop to PA yesterday, got most apps working, and to my surprise even flash (netscape 32bit plugin in a 64bit browser). All apps that I need use PA natively, only Wine doesn't have a PA sound driver.

The point was and is that we don't want yet another half working sound backend in Wine. We will have to maintain the Alsa one because there are things PA won't be able to give us by design, like HW mixing and lowest-latency direct access which is needed for gaming.

HW mixing is overrated :) Current CPUs can do the mixing just as well, if not with even better quality. I don't say it's useless, as I'm sure professionals have a good reason to use high-end cards with high-quality mixers (but they should use wineasio or winejack anyway). Also, PA ideally only adds less than 1ms of latency; that's hardly an issue, even in gaming. Mind you, I don't say winealsa should go away, I'm just saying PA isn't as bad as it looks.

Of course, as you said, if someone has patches and intends to maintain it we're happy to accept it. But I think none of the current Wine sound developers (aka Maarten) has any intention of spending time on a PA backend.

Wineasio is maintained as an external component, so I guess winepulse could be too. There are enough free git repo providers around.

tom
Re: Wine and PulseAudio
Jan Zerebecki wrote: On Fri, Feb 22, 2008 at 01:50:09AM +0100, Tomas Carnecky wrote: nifty features, like per-app volume, transparent sink switching etc, some of which are impossible to emulate through the alsa pulse plugin.

Which features can't be used through the alsa-pulse plugin? And is there any technical reason for that, or just missing functionality in that plugin?

All wine apps are identified as 'ALSA plug-in [wine-preloader]' in the PA daemon, so you can't set per-app volume and sinks since all wine apps show up under the same name. That is a technical limitation of the alsa plugin and can't be fixed (well, unless the plugin does some /proc/self voodoo to extract the true name). That does not make the plugin unusable, but per-app settings are a major feature of PA, so it's kind of bad if that doesn't work.

I have gotten winealsa to work with a patched alsa pulse plugin, but whether my patch will be accepted remains uncertain. There are some fundamental differences between ALSA and PA, for example ALSA uses frames and PA uses microseconds, and I suspect there are rounding issues involved that make winealsa hang in certain situations. I'll need some help from the PA developers to fix that, but so far it's been quite difficult to get hold of them.

tom
Re: Wine and PulseAudio
Jan Zerebecki wrote: On Sun, Feb 24, 2008 at 12:05:46PM +0100, Tomas Carnecky wrote: All wine apps are identified as 'ALSA plug-in [wine-preloader]' in the PA daemon, so you can't set per-app volume and sinks since all wine apps show up under the same name. That is a technical limitation of the alsa plugin and can't be fixed (well, unless the plugin does some /proc/self voodoo to extract the true name). However that does not make the plugin unusable, but per-app settings are a major feature of PA, so it's kind of bad if that doesn't work.

I'm not sure if /proc/self is the right way to implement this, but in some way it should be made possible with the alsa-PA plugin. (Why isn't the changed process name used, the one that is also shown by ps? Perhaps it should be implemented via an environment variable and/or an argument to the alsa-PA plugin, which would also allow the following feature?)

http://www.pulseaudio.org/browser/trunk/src/pulse/util.c#L160 - it basically uses readlink("/proc/self/exe") and, if that fails, falls back to prctl(PR_GET_NAME).

Does PA (when used directly) also always use the original process name? I imagine I'm not the only one who would want a bit more configurability there (some processes with the same executable should probably not share the same settings, and the other way around).

When a PA client connects to the daemon it can specify the context name (a name identifying the application), and each stream also has a name (the alsa pulse plugin sets that to PCM Playback for playback streams). Each client can choose its own name; the alsa pulse plugin extracts the name as described above, while applications that use PA natively usually choose a fixed name: mplayer uses MPlayer, netscape flash uses Adobe Flash etc.

I have gotten winealsa to work with a patched alsa pulse plugin, but whether my patch will be accepted remains uncertain.
There are some fundamental differences between ALSA and PA, for example ALSA uses frames and PA uses microseconds, and I suspect there are rounding issues involved that make winealsa hang in certain situations. I'll need some help from the PA developers to fix that, but so far it's been quite difficult to get hold of them.

We should really get those things that are wrong in either winealsa or PA fixed. (Assuming the PA developers are interested...) AFAIK bug #10942 and at least the part of bug #10910 about "Could not find 'PCM Playback Volume' element" are things that need fixing in winealsa.

See http://www.pulseaudio.org/ticket/198 for what the pulse devs think of that issue.

tom
Wine and PulseAudio
This subject was discussed a few months ago (around 10/2007), but it was rather a discussion about whether to write a PA sound driver or not. I personally would love to see that happen, if only because PA has some very nifty features, like per-app volume, transparent sink switching etc, some of which are impossible to emulate through the alsa pulse plugin. Also, the next Ubuntu and Fedora releases will have PA enabled by default, so if Wine doesn't work well with it there will be complaints.

I switched my desktop to PA yesterday, got most apps working, and to my surprise even flash (netscape 32bit plugin in a 64bit browser). All apps that I need use PA natively, only Wine doesn't have a PA sound driver. Since winealsa.drv will stay the default driver for Linux for the foreseeable future, I started digging and hacking to see why it doesn't work with the alsa pulse plugin and what can be fixed. There are a few bug reports that track the winealsa.drv/PA issue, such as [1].

There is very little required to make it work. Only two tiny changes to the alsa pulse plugin (one can be described as a quirk, until I figure out how exactly the alsa API can be emulated using PA; it seems to be a very specific issue in how Wine uses the alsa API, since other apps work fine) and sound works perfectly (tested with foobar2000 and WoW, both playback and recording). alsa-1.0.16 was just released, so I hope the needed changes make it into the next version.

In the PA volume control, all wine apps show up under the same name, ALSA plug-in [wine-preloader], and thus share the same volume and sink preferences. The alsa plugin could try to do some /proc/self voodoo to extract the true exe name. But given the simplicity of the PA API (no hwparams negotiation, just request a format and you'll get it) and the current problems with the alsa pulse plugin, I think a true wine sound driver would be a viable alternative.
I'm still waiting to hear from those people who have said that they have a wine PA driver working :) tom [1] http://bugs.winehq.org/show_bug.cgi?id=10910.
Re: Error while compiling the wine-0.9.49 in RedHat Linux
James McKenzie wrote: Please also be aware that Wine will NOT build properly with any version of gcc 4.0 Can you elaborate? I have been building wine with gcc-4.* since forever and never had any problems with that. Currently I'm using gcc 4.2.2 (Gentoo 4.2.2 p1.0). tom
Re: Wine and LD_PRELOAD
Stefan Dösinger wrote: On Sunday, 18 November 2007 17:13:20, Lionel Tricon wrote: Hi wine list, I am currently working on the next generation of the klik project (1 application = 1 file) and we actually face some troubles with picasa, which runs under linux thanks to wine. It seems that wine deals badly with the LD_PRELOAD feature, which is one of the main components we use to virtualize and execute the application (with fuse for the underlying filesystem). Is there a way to force wine to go through LD_PRELOAD (to overload some system calls)? Or is this limitation driven by the architecture?

I think wine dlopen()s many libraries instead of linking dynamically to them. This makes the build environment more independent of the runtime environment, and we can provide binary builds with all features enabled, and they will still run if the user's system does not have all the libs. We had an issue like that with libGL.so a long time back, and it was fixed by changing some parameter to dlopen. Does anyone remember what that was, and if this can be done for the other libraries too?

IIRC the problem was that wine did dlopen("libGL.so") and then dlsym() directly on the returned handle. That made preloading libGL using LD_PRELOAD impossible. The change that was made was to use RTLD_DEFAULT instead of the library handle in dlsym(). Now however wine should use RPATH, which can be overridden by LD_LIBRARY_PATH, so preloading libraries should be easier. It's now possible to use LD_LIBRARY_PATH to load a different libGL; I don't know how it works with other libraries though.

I don't think LD_PRELOAD will ever work as long as wine uses dlsym() on the handle that dlopen() returns. You could try LD_LIBRARY_PATH though, or change every wine_dlsym() to use RTLD_DEFAULT and see if that solves your problems. The details are all fuzzy in my head since it's been so long since I fought with LD_PRELOAD/LD_LIBRARY_PATH, but that is the best I can remember.
btw, wine again uses the actual handle in dlsym(). I don't know when that was reintroduced, but don't bother changing it, as wine now uses RPATH on platforms that support it. From dlls/winex11.drv/opengl.c:

    ... = wine_dlsym(opengl_handle, "glXGetProcAddressARB", NULL, 0);

tom
Re: Would also like to contribute to Wine
Reece Dunn wrote: I am interested in the Wine project, and my manager at work has accepted my request to work on it. I work for a company that develops Windows software, so as I have access to Visual Studio, and thus the sources for ATL, MFC and the msvc runtime (provided with the full version of Visual Studio), I state that I will not participate in the development of these components.

IIRC Alexandre Julliard (the chief here) said that people who've seen any sources from Microsoft can't contribute to wine.

tom
Re: [5/5] WineD3D: texkill ignores the .w only in ps 1.x
From the patch:

+
+    hr = IDirect3DDevice9_Clear(device, 0, NULL, D3DCLEAR_TARGET, 0xff00ff00, 0.0, 0);
+    ok(hr == D3D_OK, "IDirect3DDevice9_CreatePixelShader returned %s\n", DXGetErrorString9(hr));

Is that worth changing? (The ok() message says IDirect3DDevice9_CreatePixelShader, but the call it checks is IDirect3DDevice9_Clear.)

tom
Re: [PATCH 3/6] winealsa: Implement IDsCaptureDriverImpl_Open
Maarten Lankhorst wrote: Sanity check to see if device can be opened or not

Is alloca() not allowed in wine? (Asking because hw_params could be allocated on the stack: it would be freed automatically when the function exits, and it's only used inside this one function.)

tom
Re: [PATCH 3/6] winealsa: Implement IDsCaptureDriverImpl_Open
Maarten Lankhorst wrote: Last time I tried using alloca in alsa volume control it was rejected, so I'm just walking the safe path.

Well, the existing code uses alloca() ... I just saw this while compiling wine (winealsa.drv):

    waveout.c: In function 'wodOpen':
    waveout.c:582: warning: the address of 'sw_params' will always evaluate as 'true'

which I know very well; it comes from the snd_*_alloca() macros, snd_pcm_sw_params_alloca() in this case.

tom
Re: Problem with dlls/winex11.drv/opengl.c, revision 1.104
Chris Rankin wrote: With the update of dlls/winex11.drv/opengl.c from revision 1.103 to 1.104, World of Warcraft no longer starts in OpenGL mode. WoW complains that my graphics card does not have dual-TMU support. (I have a Radeon R200 with Mesa 7.1) http://bugs.winehq.org/show_bug.cgi?id=9307
Can we please have at least a minimal coding style ?
This type of construct seems popular in the wine source:

    while (isspace(*GL_Extensions)) GL_Extensions++;
    Start = GL_Extensions;

Or even worse (I've seen this in winex11.drv, and it took me quite a long time until I understood it - it was part of a larger block with a lot of these constructs):

    if (cond) do_sth();
        do_sth_else();

This indentation is so utterly stupid, it makes my brain hurt (I'm sure I'm not the only one)!! I don't know what AJ's stance is on code readability, and I know that there are no official coding style guidelines, but can we please avoid this? Can we agree on a set of fundamental style rules? To make peer-reviewing patches easier? And to make it easier for people not familiar with the code to understand it? Please?

tom
Re: WGL: GetPixelFormat fix for offscreen formats
from the patch:

     return physDev->current_pf;
+    TRACE("(%p): returns %d\n", physDev, physDev->current_pf);
 }

What's the purpose of that TRACE()? It's placed after the return statement, so it can never be reached.

tom
Re: [PATCH] x11drv: Allow to disable focus to desktop window after last wine window looses focus.
Vitaliy Margolen wrote: Tomas Carnecky wrote: This patch introduces a new registry key, FocusDesktopWindow, that can be used to specify whether wine should set the foreground window to the desktop window after the last wine app loses focus. The default is enabled (the same behavior as now).

This is a bad hack. The reason I *fixed* the focus loss so Wine can grab

Ok, so there's no easy way to make the app believe it has focus when it doesn't? Linux usually has virtual desktops; I think it wouldn't be bad if we could make an app believe it owns the focus while the user switches to a different virtual desktop and does something else there. For example, World of Warcraft stops sound if it finds out it lost focus, and there's currently a bug in wine that makes WoW unaware that it regained focus, which effectively disables sound.

Then you should fix this bug and not create hacks.

I have zero knowledge about gdi32/user32; I can't fix that bug myself, not without spending weeks digging through the code. If you have an idea what could cause the bug, I'll test patches etc., and if you think it's something that'll take longer to fix, I'll open a bug. But there's not much more I can do at this moment. I already thought about asking Blizzard how they check whether WoW has focus or not, but there's no point since they'd ignore my emails again.

tom
InputHint and WM_TAKE_FOCUS
Commit d836a5062141dd42293ed044debbaf25f914f383 broke sound in World of Warcraft. Yes, it sounds strange, but after I analyzed the issue it makes perfect sense: a commit to dlls/winex11.drv/event.c broke sound.

According to the wine source and the ICCCM spec, wine uses either the 'Passive' or 'Locally active' input model. InputHint is always set, and WM_TAKE_FOCUS only if UseTakeFocus is set in the registry. InputHint indicates that the window will have focus set by the WM, and WM_TAKE_FOCUS indicates that the app may set focus to its own subwindows. Both metacity and compiz _don't_ send WM_TAKE_FOCUS to windows that have InputHint set!

The problem that arises from [1] is the following: if the WM calls XSetInputFocus(), then the xserver sends a FocusIn event to the wine window, but wine ignores that event and waits for a WM_TAKE_FOCUS message. That message never arrives, because the WM never sends it.

The above mentioned commit sets the foreground window to the desktop if the WoW window loses focus. This makes WoW stop sound output. But the WoW window never regains focus, so WoW won't resume sound output. I assume this bug has been in wine for a long time, and even though it's a bug, I enjoyed being able to switch away from WoW and still hear the sound (fishing/grinding while reading slashdot, hacking on wine etc).

The solution? Remove [1] (to process the FocusIn event) or remove InputHint from WM_HINTS. That would work around the missing sound. Anyhow, 'sound while WoW lost focus' will be lost forever, and I imagine there's no way to make that possible again except with a custom patch.

tom

[1]: http://source.winehq.org/source/dlls/winex11.drv/event.c#L509
Re: InputHint and WM_TAKE_FOCUS
Tomas Carnecky wrote: The solution? Remove [1] (to process the FocusIn event) or remove InputHint from WM_HINTS. That would work around the missing sound. Anyhow, 'sound while WoW lost focus' will be lost forever and I imagine there's no way to make that possible again except with a custom patch.

I spoke too soon; this does not solve the missing sound. So someone with more insight into winex11.drv, please take a look at this. If you need me to test/try patches, I'll do whatever I can.

tom
Re: [PATCH] winex11.drv: Initialize OpenGL and the pixelformat list only when the driver is loaded.
Please don't apply, it's wrong. In the case when Wine was compiled with OpenGL support but OpenGL isn't available at runtime these checks are still needed in every function. tom
Re: [RFC] winex11.drv: Prepare ChoosePixelFormat() for code rewrite.
Vitaliy Margolen wrote: Few notes: You constructed an array of some attributes which you never use. You don't check if those attributes match anything - the part you removed. When an app asks for something that can't be supported, your code will return it to the app - which is plain wrong.

We simply have nothing to choose from, wine supports only a single pixelformat. And as described in the comment, once the infrastructure is in place the helper function will do the real job. MSDN: "If the function succeeds, the return value is a pixel format index (one-based) that is the closest match to the given pixel format descriptor. You must ensure that the pixel format matched by the ChoosePixelFormat function satisfies your requirements. For example, if you request a pixel format with a 24-bit RGB color buffer but the device context offers only 8-bit RGB color buffers, the function returns a pixel format with an 8-bit RGB color buffer."

There's also a _lot_ of code that goes like this:

    GLXFBConfig *pCfgs = glXGetFBConfigs(...);
    ConvertPixelFormatWGLtoGLX(.., index, ..);
    glXGetFBConfigAttrib(.., pCfgs[index], ..);
    XFree(pCfgs);

This malloc()/free() can be removed by keeping a copy of the GLXFBConfig in the pixelformat registry (instead of the index into the GLXFBConfig list).

tom
Re: FPS tool for wine
[EMAIL PROTECTED] wrote: The largest gaming site in Norway recently did an extensive review of gaming on Linux, but Wine was left out of the benchmark because no FPS tool exists for Wine. Surely this can't be true?

I guess I could modify yukon [1] to not capture the actual frames but only save info about swapbuffer timings. Would you be interested in that?

tom

[1] http://www.neopsis.com/projects/yukon
Re: My plans for SOC
Maarten Lankhorst wrote: - Remove the queuing thread and use Lock() and Unlock() instead.

There has to be a thread somewhere, at least for buffers that are played with DSBPLAY_LOOPING.

tom
Re: Forum proposal
Scott Ritchie wrote: On Thu, 2007-03-01 at 20:57 +0100, Tomas Carnecky wrote: Luis C. Busquets Pérez wrote: I understand that for some people the mailing list is a far better thing. Maybe for some others, a forum is better. Why not try both systems?

This question has already been answered.. anyway: if you have both a ML and a forum, you effectively split the community into two parts! Nobody will be on both the forum and the ML, it will be harder for the users because they'll have two places to go to, and it will be harder for the developers because bug reports will be posted to two different places. tom

This just isn't true. I read the mailing list and check the various forums where Wine is discussed regularly. It's not that difficult to do both, since I check my email anyway.

Maybe you, but there are other people who just don't have the time to monitor both a forum and the ML. And my point still stands: you split the community up!

tom
Re: Forum proposal
Luis C. Busquets Pérez wrote: I understand that for some people the mailing list is a far better thing. Maybe for some others, a forum is better. Why not try both systems?

This question has already been answered.. anyway: if you have both a ML and a forum, you effectively split the community into two parts! Nobody will be on both the forum and the ML, it will be harder for the users because they'll have two places to go to, and it will be harder for the developers because bug reports will be posted to two different places.

tom
Re: [Bug 6689] Created links do not reflect WINEPREFIX used
Lei Zhang wrote: Setting the Exec= line in a .desktop file to: WINEPREFIX=foo wine bar worked in KDE. In Gnome, it works if "run from the terminal" is set.

Unfortunately the Freedesktop spec for desktop entries does not specify a way to add environment variables. So yes, it looks like we'll need a wrapper script, or perhaps add a --prefix option to Wine? Or change the exec line to:

    /bin/sh -i -c "WINEPREFIX=foo wine"

tom
Re: add support for slists
Dmitry Timoshkov wrote: And one more... Damjan Jovanovic [EMAIL PROTECTED] wrote:

+__int64 interlocked_cmpxchg64( __int64 **dest, __int64 *xchg, __int64 *compare )
+{
+    _lwp_mutex_lock( interlocked_mutex );
+    if (memcmp(*dest, compare, 8) == 0)
+        memcpy(*dest, xchg, 8);
+    else
+        memcpy(compare, *dest, 8);
+    _lwp_mutex_unlock( interlocked_mutex );
+    return compare;
+}

Is there any particular reason that you use memcmp/memcpy instead of directly manipulating 64-bit values? Also, compare is a pointer to __int64 but the return type is __int64. Doesn't gcc complain about that?

tom
Re: Howdy! Newbie with keyboard patches.
Peter Seebach wrote: In message [EMAIL PROTECTED], Dmitry Timoshkov writes: Usually xmodmap is used to redefine keys behaviour in X11, no need for 3rd party apps. If xmodmap could do this, I'd agree. Xmodmap can't map F13 to Control-F1, though, and many Windows apps can't hack F13. Xmodmap can only map keys to single keys; it can't change their modifiers. I'm sure I could hack you a small app that takes input events from /dev/input/eventX and uses XTest to fake X key events. That way you could 'map' F13 to Control-F1. I've written a few 'drivers' for input devices (MS Strategic Commander etc) that way.. and it works great. tom (aka wereHamster :P )
Re: World of Warcraft Progress
darckness wrote: First off, the D3D (DX9) runs MUCH better (faster and more smoothly) than the OGL mode. Big kudos to you guys; last time I played, the D3D mode was completely unusable because there was no D3D support in wine. Very impressive.

The GLX_ARB_vertex_buffer_object issue is described on the AppDB page for World of Warcraft and has been known for quite some time now. If you apply the hack, you'll see that running in OGL and D3D mode won't make a difference in the framerate.

I also noticed the modeswitch crash, but I didn't investigate it. I always edit the Config.wtf file if I need to switch modes (or graphics settings). But I'll look into it if nobody else will ;)

tom
Re: wined3d: state_pointsprite should apply to all texture units
Chris Robinson wrote:

+    for (i = 0; i < GL_LIMITS(texture_stages); i++) {
+        /* Note the WINED3DRS value applies to all textures, but GL has one
+         * per texture, so apply it now ready to be used!
+         */
+        if (GL_SUPPORT(ARB_MULTITEXTURE)) {
+            GL_EXTCALL(glActiveTextureARB(GL_TEXTURE0_ARB + i));
+            checkGLcall("glActiveTextureARB");
+        } else if (i > 0) {
+            FIXME("Program using multiple concurrent textures which this opengl implementation doesn't support\n");
+        }
+    }

I'd do that '} else if (i == 1) {', otherwise you'll print a whole lot of FIXMEs, and maybe add a 'break', since it doesn't make sense to call glTexEnvi() over and over for the same texture.

tom
Re: [Winmm/winealsa] Don't use asynchronous callbacks in dsound any more
Maarten Lankhorst wrote: Instead of using asynchronous callbacks that use signals, use a separate thread that can be cancelled; this prevents deadlock issues. Basically we use snd_pcm_wait() that tells us when enough room is free to commit another buffer, then we commit the previous buffer and make the next buffer ready. Since snd_pcm_wait() uses poll(), we don't have signals in winealsa any more.

Correct me if I'm wrong, but doesn't your code create and destroy the thread in ::Play() and ::Stop()? It would be better to destroy it together with the DirectSoundBuffer object. Whether you create the thread when the object is created or the first time ::Play() is called is not so important, but I've been told the CreateThread() overhead is big, so we should try to minimize its use. Still, I would suggest creating the thread when the DSB object is created, as a CreateThread() in ::Play() may have side-effects (delay in the sound etc).

tom
Re: audio glitch patch by mike hearn
[EMAIL PROTECTED] wrote: Can anyone help me get this baby compiled as I want to play my games without the nasty audio glitches and don't care if I have to be root/sudo/whatever.

I can give you my dsound/alsa patches if you want. I've changed the dsound code to use 'native' alsa buffers/mixer instead of the ones implemented in dsound, and I don't have any problems at all with bad sound anymore.

tom
Re: audio glitch patch by mike hearn
Stefan Dösinger wrote: On Thursday, 7 December 2006 22:20, Tomas Carnecky wrote: I can give you my dsound/alsa patches if you want. I've changed the dsound code to use 'native' alsa buffers/mixer instead of the one implemented in dsound and I don't have any problems at all with bad sound anymore.

Are you going to work on getting this committed into wine git?

Unfortunately that would require too much work - well, more or less a rewrite of the current dsound core and some drivers. And I don't have the time or patience for that. But of course, if someone decides to pick my patches up, I'll give them every support I can.

tom
Re: audio glitch patch by mike hearn
For your convenience.. http://dbservice.com/ftpdir/tom/dsound-alsa.patch

You have to enable 'ALSA' and full hardware acceleration, no emulation. Don't know if it works with applications other than World of Warcraft though... Note that this patch is _full_ of unimportant changes that could be removed; I just never got around to doing it, as I always just 'git-fetch' and 'git-rebase origin'..

tom
Re: dlls: add wpcap
James Hawkins wrote: +@ stdcall pcap_setbuff(ptr long) wine_pcap_setbuff As I said before, you don't need to forward these, just name the APIs like they are in the spec file and it should work. You can see that wine_pcap_close() calls pcap_close() which is a function implemented in the pcap library (see www.tcpdump.org), so naming the functions pcap_?? is not possible. tom
Re: Looking for programmator to complete Direct3D 9.0c with GLSL in the Wine
Andreas Mohr wrote: Would be nice to have many more people (e.g. those who don't feel like programming things but still want to contribute somehow) offering such dedication towards getting their personal goals in various OSS projects reached.

GNOME has a 'bounties' webpage, http://www.gnome.org/bounties/ - if we start offering such 'bounties', wouldn't it be nice to have such a page too?

tom
Re: winex11.drv: Bug#6501. XIM problem with XInitThreads().
Byeong-Sik Jeon wrote: I think xorg people knows this problem already. https://bugs.freedesktop.org/show_bug.cgi?id=1182 Won't this be fixed in the next Xlib version (1.1)? XLib-1.1 will use XCB which is said to be truly MT safe. If we just wait until everybody has updated XLib to 1.1 then we don't need to use the workaround. tom
Re: [SM-Spell-Submit] Wine release 0.9.24
David Kowis wrote: Alexandre Julliard wrote: What's new in this release: - Support for multiple monitors using Xinerama. W00t! Now if I could get it to build in amd64 :( Now if you posted the error message or tried to describe what exactly fails :( tom
Re: stuck ctrl/alt keys
Tomas Carnecky wrote: So.. now that the cause is known, what would be the right solution? Do you still think it's the WM's fault? Or should wine be changed? The patch in the attachment removes update_key_state() from mouse.c - the decision whether this is the right approach or not is left to someone else. If you think it's worth a try to send this patch to wine-patches, please do so; I'll have to continue using a custom patchset anyway since the app I want to run doesn't work on vanilla wine, so I don't really have a strong interest in getting this patch accepted. Getting patches accepted was quite hard for me, and that was because of lack of feedback. It leads nowhere if I submit the very same patch more than twice, and without feedback I can't improve it. So, my two patches (one for alsa/dsound and this one for the stuck keys) will remain public domain and free for everyone to use, and if you feel like it, to improve and send back to wine for inclusion. I'll (more or less) retire from writing patches. But of course if I come up with a fix for something that I think is worth sharing, I'll let you guys know ;) tom e4a76b49921e3af8095baadace976143af05160d diff --git a/dlls/winex11.drv/mouse.c b/dlls/winex11.drv/mouse.c index 5ac714c..4d4a83a 100644 --- a/dlls/winex11.drv/mouse.c +++ b/dlls/winex11.drv/mouse.c @@ -103,18 +103,6 @@ static inline void update_button_state( /*** - * update_key_state - * - * Update the key state with what X provides us - */ -static inline void update_key_state( unsigned int state ) -{ -key_state_table[VK_SHIFT] = (state & ShiftMask ? 0x80 : 0); -key_state_table[VK_CONTROL] = (state & ControlMask ? 0x80 : 0); -} - - -/*** * update_mouse_state * * Update the various window states on a mouse event. 
@@ -124,7 +112,6 @@ static void update_mouse_state( HWND hwn struct x11drv_thread_data *data = x11drv_thread_data(); get_coords( hwnd, x, y, pt ); -update_key_state( state ); /* update the cursor */ @@ -712,7 +699,6 @@ BOOL X11DRV_GetCursorPos(LPPOINT pos) if (XQueryPointer( display, root_window, &root, &child, &rootX, &rootY, &winX, &winY, &xstate )) { -update_key_state( xstate ); update_button_state( xstate ); TRACE( "pointer at (%d,%d)\n", winX, winY ); cursor_pos.x = winX;
stuck ctrl/alt keys
That is _really_ annoying, but nobody seems to know what causes it. So, I once again started to investigate it. The culprit is the 'update_key_state' function in mouse.c, which modifies the global 'key_state_table' without telling the application that the keystate has changed. A patch that was submitted long ago to the wine-patches mailing list changed the code in the above mentioned function to call 'KEYBOARD_ChangeOneState', which takes care of informing the application of the keystate change (but only if the keystate has actually changed). AJ rejected the patch; I don't know the exact reason but it was more or less because it didn't seem the proper fix to him. I watched how/when 'update_key_state' is called and one place is in 'X11DRV_EnterNotify' (which for some reason is also in mouse.c). The problem is that KeymapNotify is sent after EnterNotify and because 'update_key_state' modifies the global keystate table (at EnterNotify), 'KEYBOARD_ChangeOneState' (called by 'X11DRV_KeymapNotify') doesn't inform the application about the changed keystate. The question now is: is the WM allowed to dispatch EnterNotify before KeymapNotify? I couldn't find any definitive answer, but I've found one book where this order is explicitly mentioned and one PDF where it's less clear, but from my understanding also leads to this conclusion. Webpage: http://www.sbin.org/doc/Xlib/chapt_08.html "This event type (KeymapNotify), if it is selected, always follows immediately after an EnterNotify or FocusIn event." PDF: www.hpl.hp.com/techreports/Compaq-DEC/CRL-90-8.pdf page 18: KeymapNotify - Keyboard state at EnterNotify, FocusIn events (less clear, but if this is true it would be quite useless if KeymapNotify was dispatched after EnterNotify) So.. now that the cause is known, what would be the right solution? Do you still think it's the WM's fault? Or should wine be changed? tom
Re: my dsound/winealsa hacks
James Courtier-Dutton wrote: As Direct Sound does not know anything about periods, I don't really know how you will be able to get it to work well with ALSA. I expect that some sort of double buffer will be required. Does Direct Sound have a concept of position of the ADC, and also a concept of where in the buffer it is sensible to fill with new samples? When the application creates a buffer, it passes a structure to CreateSoundBuffer() that describes what kind of sound the buffer will contain, and the data include: - format (PCM/ALAW/ULAW etc) - number of channels - bits per sample - rate (Hz) and - size of the buffer it wants, in bytes The application can use Lock() to request a pointer to a buffer where it can write X bytes of sound data to; once it has written the data, it calls Unlock() and then DirectSound passes the data to the soundcard. World of Warcraft calls Lock() to get a pointer where it can fill 4096 bytes (regardless of how big the period is, because DirectSound doesn't know about periods), writes the sound to the buffer and calls Unlock() (I'm using the async handler that invokes a function that reads the data from the intermediary buffer and passes it to alsa). And yes, the app can call GetCurrentPosition() to find out the 'Play' and 'Write' positions in the buffer: play is where the soundcard is at the moment (i.e. the position where I will read the data from and pass it to alsa), write is where the app can write the data to; currently I'm using 'playpos+period'. I haven't implemented DirectSoundCapture, but I guess that it will work the same: the app calls Lock() and when the call returns it will receive a pointer to a buffer where X bytes of the captured data is, so if the app wants 4096 bytes, I'll have to wait until alsa has returned Y periods (where frames_to_bytes(Y) >= X) and then return to the app. I'm sorry, I didn't explain myself clearly in the previous mail :-/ tom
Re: my dsound/winealsa hacks
.. another small update: it now tries to create the buffer size as close as possible to what the app requested. The whole patch is available at the same URL. I also created a patch of only ./dlls/winmm/winealsa/audio.c to make it easier to read; that patch is here: http://dbservice.com/ftpdir/tom/alsa-audio.patch tom
my dsound/winealsa hacks
My ultimate goal was to solve the dsound underruns which were so horrible that I had to disable sound in World of Warcraft. While I managed to get the sound working flawlessly (really... I never heard such clear sound under wine) in WoW, it required WoW-specific hacks so my patch will never make it (in its current form) into the official git tree, but maybe someone can use some of my ideas to improve dsound/winealsa. The basic idea is to let alsa mix the sound instead of the infamous dsound mixer. The advantage is that if the hardware supports mixing, there is very little overhead; if not, there's still dmix which can do that, but dsound doesn't need to care how it's done, it's entirely up to alsa. Now to the actual implementation, it's pretty straightforward: I 'forced' CreateSoundBuffer to create a hardware buffer (right now it creates a hw buffer only for the primary buffer) and changed the winealsa driver to support that. Because the winealsa driver opens only one 'connection' (snd_pcm_t) to the soundcard (allowing only one buffer per device), I simply opened a new 'connection' for each buffer and configured the hardware for 44100Hz/U16LE/stereo (that's what WoW expects). Up to this point it was easy. One of the big problems was that WoW requests a 16kb buffer, but alsa is unable to allocate a buffer with this exact size, and it caused problems if I passed the alsa buffer to WoW (so WoW could write directly to it). I had to allocate a separate buffer, keep track of the play/write positions in both buffers and copy the data from one to the other. I'm fairly sure this all could be done in a WoW-independent way, e.g. configure the hardware as the application requests it (by passing LPCDSBUFFERDESC to the low-level driver etc), and keeping track of the read/write positions could also be done in a better way. There are a few questions that should be answered before someone can go on and make a 'proper' implementation. 
I simply bypassed the primary buffer; it doesn't exist since the data from the secondary buffers are written directly to the soundcard. If it should be possible to read the data from the primary buffer then we should forget this all and look for another solution. I believe that it's not possible (it's not possible to write to the primary buffer and the API doesn't differentiate between reads and writes), but someone should test that under windows. (this is only one question but I'm sure you'll come up with more ;) ) Here is the complete patch. I also changed the IDirectSound interface and a lot of unrelated code, so the patch is a bit big :-/ There also are a few memory leaks; I didn't bother freeing the memory etc. http://dbservice.com/ftpdir/tom/dsound-alsa.patch tom
Re: my dsound/winealsa hacks
Tomas Carnecky wrote: I'm fairly sure this all could be done in a WoW-independent way, e.g. configure the hardware as the application requests it (by passing LPCDSBUFFERDESC to the low-level driver etc) and keeping track of the read/write positions could also be done in a better way. A small update on that: I managed to indeed make it WoW-independent. I tested it with foobar2000 and playback works fine; there is an issue with the sound volume, though: it keeps changing if I seek :-/ Right now I'm documenting the code, so my next patch will be a bit more readable and easier to understand :) tom
Re: my dsound/winealsa hacks
James Courtier-Dutton wrote: I have placed some documentation on the ALSA wiki site: https://bugtrack.alsa-project.org/wiki/wikka.php?wakka=ALSAresampler It tries to explain the constraints that the current ALSA resampler works under. You might like to read it as I think it will have an impact on your plans. Thanks. One question though: if the app is in blocking mode and requests the said two periods, will alsa wait until the hardware has processed three 48000Hz periods and then copy the two 44100Hz periods to the application (because 3 periods at 48000Hz > 2*1024 frames at 44100Hz)? DirectSound doesn't know anything about periods; the windows application operates on bytes rather than frames or periods. So whether I'd have to wait for two or three periods wouldn't matter. The important thing is that I get X bytes in the right format to pass back to the application. tom
Re: ALSA implementation
Jan Zerebecki wrote: As explained in the mail refrenced above the main problem is that in wine the alsa callback signal (that we currently use) won't work properly without special care, but the fd based method (for example with select) should work as expected. Why won't it work without special care? Is it because of the SIGIO signal? Wouldn't the fd-method require a separate thread? tom
Re: ALSA implementation
Jan Zerebecki wrote: To fix bug #4093 we need to replace the currently used signal callback method (very complex to make signals work properly [in Wine], thus we should avoid it) with I guess a fd based method for example with select. The alsa-api documentation about this looks pretty usable. Would that fix the DSOUND_MixOne underrun problem, too? Or is that a different bug? tom
Re: X11DRV: fix fbconfig regression
Roderick Colenbrander wrote: There's this check: if ((!WineGLInfo.glxDirect && !strcmp("1.2", WineGLInfo.glxServerVersion)) || (WineGLInfo.glxDirect && !strcmp("1.2", WineGLInfo.glxClientVersion))) This is not the correct way of loading opengl functions or deciding whether they are available or not. According to the GLX_ARB_get_proc_address spec, we need to check _only_ glXQueryExtensionsString() and glXQueryVersion() and it's not correct to make assumptions based on server/client versions or extension strings. This patch removes all the client/server code and replaces it with the correct checks. The patch fixes the code in has_opengl() (and also replaces one wine_dlsym() with pglXGetProcAddressARB()) as well as the helper function glxRequireExtension() and fixes the code that decides whether pbuffers are available. Pbuffers are part of GLX 1.3, so if the GLX version is 1.3 we _have_ pbuffers, no need to check for the GLX_SGIX_pbuffers extension anymore ( - ||) Whether this patch works with ATI drivers I don't know, but it's certainly the correct way of querying functions. tom diff --git a/dlls/winex11.drv/opengl.c b/dlls/winex11.drv/opengl.c index bceefb5..fc48c7e 100644 --- a/dlls/winex11.drv/opengl.c +++ b/dlls/winex11.drv/opengl.c @@ -307,7 +307,6 @@ static BOOL has_opengl(void) { static int init_done; static void *opengl_handle; -const char *glx_extensions; int error_base, event_base; @@ -375,6 +374,13 @@ #undef LOAD_FUNCPTR opengl_handle = NULL; } +/* + * We only check the GLX version and extension string as reported by glXQueryVersion() and + * glXQueryExtensionsString(), which is the correct way of doing it. + * + * Use the following comment block with caution! + */ + /* In case of GLX you have direct and indirect rendering. Most of the time direct rendering is used * as in general only that is hardware accelerated. In some cases like in case of remote X indirect * rendering is used. 
@@ -395,43 +401,26 @@ #undef LOAD_FUNCPTR * The versioning checks below try to take into account the comments from above. */ -/* Depending on the use of direct or indirect rendering we need either the list of extensions - * exported by the client or by the server. - */ -if(WineGLInfo.glxDirect) -glx_extensions = WineGLInfo.glxClientExtensions; -else -glx_extensions = WineGLInfo.glxServerExtensions; - -/* Based on the default opengl context we decide whether direct or indirect rendering is used. - * In case of indirect rendering we check if the GLX version of the server is 1.2 and else - * the client version is checked. - */ -if ((!WineGLInfo.glxDirect && !strcmp("1.2", WineGLInfo.glxServerVersion)) || -(WineGLInfo.glxDirect && !strcmp("1.2", WineGLInfo.glxClientVersion))) +/* glXChooseFBConfig and friends are part of GLX 1.3 and implemented by the GLX_SGIX_fbconfig extension */ +if (3 <= WineGLInfo.glxVersion[1]) { pglXChooseFBConfig = (void*)pglXGetProcAddressARB((const GLubyte *) "glXChooseFBConfig"); pglXGetFBConfigAttrib = (void*)pglXGetProcAddressARB((const GLubyte *) "glXGetFBConfigAttrib"); pglXGetVisualFromFBConfig = (void*)pglXGetProcAddressARB((const GLubyte *) "glXGetVisualFromFBConfig"); +} else if (NULL != strstr(WineGLInfo.glxExtensions, "GLX_SGIX_fbconfig")) { +pglXChooseFBConfig = (void*)pglXGetProcAddressARB((const GLubyte *) "glXChooseFBConfigSGIX"); +pglXGetFBConfigAttrib = (void*)pglXGetProcAddressARB((const GLubyte *) "glXGetFBConfigAttribSGIX"); +pglXGetVisualFromFBConfig = (void*)pglXGetProcAddressARB((const GLubyte *) "glXGetVisualFromFBConfigSGIX"); } else { -if (NULL != strstr(glx_extensions, "GLX_SGIX_fbconfig")) { -pglXChooseFBConfig = (void*)pglXGetProcAddressARB((const GLubyte *) "glXChooseFBConfigSGIX"); -pglXGetFBConfigAttrib = (void*)pglXGetProcAddressARB((const GLubyte *) "glXGetFBConfigAttribSGIX"); -pglXGetVisualFromFBConfig = (void*)pglXGetProcAddressARB((const GLubyte *) "glXGetVisualFromFBConfigSGIX"); -} else { -ERR( "glx_version as %s and GLX_SGIX_fbconfig extension is unsupported. Expect problems.\n", WineGLInfo.glxClientVersion); -} +ERR( "GLX Version (%d.%d) too low and GLX_SGIX_fbconfig unsupported. Expect problems.\n", + WineGLInfo.glxVersion[0], WineGLInfo.glxVersion[1]); } -/* The mesa libGL client library seems to forward glXQueryDrawable to the Xserver, so only - * enable this function when the Xserver understand GLX 1.3 or newer - */ -if (!strcmp("1.2", WineGLInfo.glxServerVersion)) -pglXQueryDrawable = NULL; -else -pglXQueryDrawable = wine_dlsym(RTLD_DEFAULT, "glXQueryDrawable", NULL, 0); +/* glXQueryDrawable is part of GLX 1.3 */ +if (3 <= WineGLInfo.glxVersion[1]) +pglXQueryDrawable =
Re: X11DRV: fix fbconfig regression
Tomas Carnecky wrote: This is not the correct way of loading opengl functions or deciding whether they are available or not. According to the GLX_ARB_get_proc_address spec, we need to check _only_ glXQueryExtensionsString() and glXQueryVersion() The spec: http://www.opengl.org/registry/specs/ARB/get_proc_address.txt Section: 3.3.12 Obtaining Extension Function Pointers - A non-NULL return value for glXGetProcAddressARB does not guarantee that an extension function is actually supported at runtime. The client must also query glGetString(GL_EXTENSIONS) or glXQueryExtensionsString to determine if an extension is supported by a particular context. [snip] glXGetProcAddressARB may be queried for all of the following functions: - All GL and GLX extension functions supported by the implementation (whether those extensions are supported by the current context or not). - All core (non-extension) functions in GL and GLX from version 1.0 up to and including the versions of those specifications supported by the implementation, as determined by glGetString(GL_VERSION) and glXQueryVersion queries. - If the function is part of GLX version 'X' and glXQueryVersion returns 'X' or higher OR if the function is part of an extension that is included in glXQueryExtensionsString THEN glXGetProcAddressARB returns a valid function. I hope that clears things up :) tom
Re: Fwd: RFC: OpenGL x11drv rewrite (WoW fix)
Roderick Colenbrander wrote: If we could set a pbuffer flag in there and retrieve it in wglMakeCurrent it would work. I fear that this can only be done in a clean way if the code would be in x11drv :( I did that, I created a new field in the PDEVICE structure and used two new ExtEscape codes (SET_FLAGS/GET_FLAGS), but Alexandre doesn't want to add new ExtEscape codes.. That's why I hacked even more on wine and moved the wgl implementation to x11drv... and there they are, my old patches. I never bothered updating them though. tom
Re: X11Drv/OpenGL pixelformat rewrite
Testing with World of Warcraft, the game runs fine, but I see this in the console: err:wgl:wglGetPixelFormatAttribivARB Unable to convert iPixelFormat 0 to a GLX one, expect problems! tom
Re: World of Warcraft (WoW) patch/more address space layout stuff
Mike Hearn wrote: It's a bug in WoW itself, it relies upon the exact way NT maps memory which is different to how Linux does it. I guess they are storing information in the high bits of a pointer somewhere or some similar trick. Since WoW also runs on MacOSX, how does the memory layout on MacOSX differ from NT and Linux? Maybe that's the reason why they won't do a Linux port: because they rely on a certain memory layout and the code can't be changed that easily. tom
Re: World of Warcraft (WoW) patch/more address space layout stuff
[EMAIL PROTECTED] wrote: No, they have added these regressions after a little patch-set. So they can fix it. And as we can't download a playable demo ... Interesting.. if MacOSX has a similar memory layout to linux, maybe we could get Blizzard to include a workaround that is only active when it sees that it's running under wine/cedega. tom
Re: user32: Make all the recently added ShowWindow tests pass in Wine
Dmitry Timoshkov wrote: Tomas Carnecky [EMAIL PROTECTED] wrote: The problem seems to be the SWP_NOACTIVATE that is now added to 'swp' in case of SW_MINIMIZE. Before your patch, SW_MINIMIZE had its own 'case' section; now it falls through and in 'case SW_SHOWMINNOACTIVE' SWP_NOACTIVATE is added. Many thanks for the investigation. Does the attached patch fix the regression? Yes, this patch fixes it :) tom
Re: user32: Make all the recently added ShowWindow tests pass in Wine
Dmitry Timoshkov wrote: Could you try to identify which piece of the patch caused the regression? It wasn't that hard after all.. ;) The problem seems to be the SWP_NOACTIVATE that is now added to 'swp' in case of SW_MINIMIZE. Before your patch, SW_MINIMIZE had its own 'case' section; now it falls through and in 'case SW_SHOWMINNOACTIVE' SWP_NOACTIVATE is added. Maybe change it to: case SW_SHOWMINNOACTIVE: swp |= SWP_NOACTIVATE | SWP_NOZORDER; case SW_MINIMIZE: swp &= ~SWP_NOACTIVATE; but I'm not sure if this is the right solution. tom
Re: WineGL, take two
Kuba Ober wrote: Patches to configure are redundant. configure is autogenerated. Change the source file, namely configure.ac. Thanks, modifying configure.ac was easier than I thought ;) If someone wants to pick up the development from here, feel free to take the patches. The last patch for configure.ac is not on my webserver, if someone decides to work on my patches, just tell me and I'll send it to you. tom
Re: user32: Make all the recently added ShowWindow tests pass in Wine
Dmitry Timoshkov wrote: Hello, this patch makes all the recently added ShowWindow tests pass in Wine, and is aimed to fix bug #4960. Although this patch creates a regression for popup windows created with WS_MAXIMIZED style set and calling ShowWindow( SW_SHOWMAXIMIZE ) right after that (Windows still calls SetWindowPos in that case, but Wine doesn't anymore - the tests show that SetWindowPos should not be called for child windows if their state already matches the ShowWindow command) but top level windows already behave slightly differently in Wine since they are WM driven, and adding window state change (min/max/restore) through a WM will make them behave even more differently. Changelog: Make all the recently added ShowWindow tests pass in Wine. This introduced a regression with WoW: The game starts fine, but as soon as I switch workspace the window disappears (isn't visible in the workspace switcher anymore), and when I switch back the game runs at 3-5fps.. dead slow. When I press ALT-TAB the game minimizes correctly (and the underlying gnome-terminal receives focus), but pressing ALT-TAB again doesn't make the game full-screen; the game appears as a 32x32 pixel window in the top left corner of the screen. And from then on ALT-TAB toggles between these two modes, i.e. invisible and 32x32 pixel window. To bring the game to the foreground, I need to 'maximize' it using ALT-TAB, switch to another workspace and then back again. This however doesn't make the game run at full speed; it still runs slowly. I'm using E17 as the window manager, but I'll test it under metacity, too. tom
Re: user32: Make all the recently added ShowWindow tests pass in Wine
Dmitry Timoshkov wrote: If you could minimize the failure to a small test case that would help a lot. Unfortunately, I have neither Windows nor the required knowledge to write windows applications :( I did two logs when switching workspace (winpos-*) and two when minimizing (minimize-*), always one with your patch and one without. Workspace switching: I started the game, changed the workspace and then the log starts: change workspace to the one where the game is running and back. Minimizing: I started the game, minimized it and then the log starts: maximize it and minimize it. Note that the output differs significantly, for example, when min/maximizing with your patch, I didn't see any X11DRV_Expose messages in the log, without, there were heaps of these messages. Can that be the cause for the slow game? And in the winpos logs, without your patch, I see X11DRV_Expose when I switched to the workspace where the game was running, with your patch, the messages came when I was switching back. The four logs are attached. tom trace:x11drv:X11DRV_Expose win 0x10024 (101) 0,0 32x32 fixme:system:SystemParametersInfoW Unimplemented action: 113 (SPI_SETMOUSESPEED) fixme:win:EnumDisplayDevicesW ((null),0,0x7fbcee48,0x), stub! 
trace:x11drv:X11DRV_SetWindowPos hwnd 0x10024, after (nil), swp 0,0 1280x1024 flags 2108 trace:x11drv:SWP_DoWinPosChanging hwnd 0x10024, after (nil), swp 0,0 1280x1024 flags 3908 trace:x11drv:SWP_DoWinPosChanging current (0,0)-(32,32) style b600 new (0,0)-(1280,1024) trace:x11drv:SWP_DoNCCalcSize hwnd 0x10024 old win (0,0)-(32,32) old client (0,0)-(32,32) new win (0,0)-(1280,1024) new client (0,0)-(1280,1024) trace:x11drv:X11DRV_set_window_pos win 0x10024 window (0,0)-(1280,1024) client (0,0)-(1280,1024) style b600 trace:x11drv:X11DRV_sync_window_position setting win 101 pos 0,0,1280x1024 after 0 changes=4c trace:x11drv:X11DRV_SetWindowPosstatus flags = 1006 trace:x11drv:X11DRV_ShowWindow hwnd=0x10024, cmd=3, wasVisible 1 trace:x11drv:WINPOS_MinMaximize 0x10024 3 trace:x11drv:X11DRV_SetWindowPos hwnd 0x10024, after (nil), swp 0,0 1280x1024 flags 0160 trace:x11drv:SWP_DoWinPosChanging hwnd 0x10024, after (nil), swp 0,0 1280x1024 flags 1960 trace:x11drv:SWP_DoWinPosChanging current (0,0)-(1280,1024) style 9700 new (0,0)-(1280,1024) trace:x11drv:SWP_DoNCCalcSize hwnd 0x10024 old win (0,0)-(1280,1024) old client (0,0)-(1280,1024) new win (0,0)-(1280,1024) new client (0,0)-(1280,1024) trace:x11drv:X11DRV_set_window_pos win 0x10024 window (0,0)-(1280,1024) client (0,0)-(1280,1024) style 9700 trace:x11drv:X11DRV_sync_window_position setting win 101 pos 0,0,1280x1024 after 0 changes=40 trace:x11drv:X11DRV_set_window_pos mapping non zero size or off-screen win 0x10024 trace:x11drv:X11DRV_SetWindowPosstatus flags = 1827 trace:x11drv:X11DRV_ConfigureNotify win 0x10024 new X rect 0,0,1280x1024 (event 0,0,1280x1024) trace:x11drv:X11DRV_ConfigureNotify win 0x10024 new X rect 0,0,1280x1024 (event 0,0,1280x1024) trace:x11drv:X11DRV_Expose win 0x10024 (101) 32,0 1248x32 trace:x11drv:X11DRV_Expose win 0x10024 (101) 0,32 1280x427 trace:x11drv:X11DRV_Expose win 0x10024 (101) 0,459 509x1 trace:x11drv:X11DRV_Expose win 0x10024 (101) 771,459 509x1 trace:x11drv:X11DRV_Expose win 0x10024 
(101) 0,460 507x1 trace:x11drv:X11DRV_Expose win 0x10024 (101) 773,460 507x1 trace:x11drv:X11DRV_Expose win 0x10024 (101) 0,461 506x1 trace:x11drv:X11DRV_Expose win 0x10024 (101) 774,461 506x1 trace:x11drv:X11DRV_Expose win 0x10024 (101) 0,462 505x2 trace:x11drv:X11DRV_Expose win 0x10024 (101) 775,462 505x2 trace:x11drv:X11DRV_Expose win 0x10024 (101) 0,464 504x97 trace:x11drv:X11DRV_Expose win 0x10024 (101) 776,464 504x97 trace:x11drv:X11DRV_Expose win 0x10024 (101) 0,561 505x2 trace:x11drv:X11DRV_Expose win 0x10024 (101) 775,561 505x2 trace:x11drv:X11DRV_Expose win 0x10024 (101) 0,563 506x1 trace:x11drv:X11DRV_Expose win 0x10024 (101) 774,563 506x1 trace:x11drv:X11DRV_Expose win 0x10024 (101) 0,564 508x1 trace:x11drv:X11DRV_Expose win 0x10024 (101) 772,564 508x1 trace:x11drv:X11DRV_Expose win 0x10024 (101) 0,565 1280x459 trace:x11drv:X11DRV_Expose win 0x10024 (101) 509,459 262x1 trace:x11drv:X11DRV_Expose win 0x10024 (101) 507,460 266x1 trace:x11drv:X11DRV_Expose win 0x10024 (101) 506,461 268x1 trace:x11drv:X11DRV_Expose win 0x10024 (101) 505,462 270x2 trace:x11drv:X11DRV_Expose win 0x10024 (101) 504,464 272x97 trace:x11drv:X11DRV_Expose win 0x10024 (101) 505,561 270x2 trace:x11drv:X11DRV_Expose win 0x10024 (101) 506,563 268x1 trace:x11drv:X11DRV_Expose win 0x10024 (101) 508,564 264x1 fixme:system:SystemParametersInfoW Unimplemented action: 113 (SPI_SETMOUSESPEED) trace:x11drv:X11DRV_ShowWindow hwnd=0x10024, cmd=6, wasVisible 1 trace:x11drv:WINPOS_MinMaximize 0x10024 6 trace:x11drv:X11DRV_SetWindowPos hwnd 0x10024, after (nil), swp 0,0 32x32 flags
Re: user32: Make all the recently added ShowWindow tests pass in Wine
Tomas Carnecky wrote: Minimizing: I started the game, minimized it and then the log starts: maximize it and minimize it. I'm sorry, this is the wrong description. ALT-TAB switches the active window, like under Windows; it does not minimize! Even without your patch I can see the slowdown when I 'Iconify' the window (ALT + right mouse button brings up a context menu where I can select such actions). However, I can make the game run normally again by pressing ALT-TAB a few times. tom
WineGL, take two
After submitting the patches last night, I got some feedback on IRC. It seems that adding new exports to gdi32.dll is bad (it apparently tends to break applications, those using safedisc2 seem to be good candidates), so I had to look for another solution. Everything that follows here is under the assumption that we want to move all access to the native opengl library to x11drv. But that's the impression I got from Alexandre in the other mail regarding new escape codes (http://www.winehq.org/pipermail/wine-devel/2006-May/047539.html). If the above assumption is true then opengl32.dll needs access to the low-level driver (eg. x11drv) where all the WGL functions are implemented. And if adding new gdi32.dll exports is not an option then ExtEscape() or some other Escape function is the only way to go. Someone told me that even in Windows Vista opengl uses an escape function to talk to the low-level driver. So my new approach is this: - one new gdi driver function: WineGL_getDispatchTable = returns the dispatch table for the basic WGL functions. - two new ExtEscape() codes: = WINEGL_GET_DISPATCH: returns dispatch table as returned by the new driver function = WINEGL_GET_PRIVATE: returns physDev for the given HDC. - opengl32.dll gets the dispatch table using the new escape code when it is loaded. - in x11drv, opengl.c is replaced by various winegl_??? files. Why is WINEGL_GET_PRIVATE needed? The question really is: where do we want to have the WGL extension entry points? I'd recommend to have them in x11drv, why? If they were in opengl32.dll, we'd have to update the dispatch table whenever we would add a new extension. And because the application calls the functions which are entirely implemented in x11drv and not called through wrappers (which don't really exist in my patchset anyway) and the functions still need access to the physDev structure... well, hence the second escape code. 
I've removed all opengl related #ifdefs. I'd like to have configure set WINEGLFILES to either 'winegl_noop.c', which implements noop functions when opengl isn't supported, or a list with all the required winegl files when all required headers are present. And there is no need to check for libGL.so either (at least not for opengl32 and x11drv) because all functions are loaded using wine_dlsym() or glXGetProcAddress(). What is needed at compilation time are the three opengl header files gl.h, glx.h and glu.h. I'm not a configure expert and this big script scares me. That's why the configure patch is kind of a hack, but it should work when someone uses --without-opengl. The WGL related code is mostly copied over from opengl32/wgl.c and opengl32/wgl_ext.c; I've formatted it a bit to look nicer and changed what was needed to embed it into x11drv. For example 'get_display()' is gone as we now have direct access to the display through 'gdi_display'. The only thing that I've disabled is the WGL_ARB_render_texture extension (nvidia drivers don't support it and there is a good chance that no application relies explicitly on this extension; however, the extension can be enabled by uncommenting one line in x11drv/winegl_extensions.c). I've put all the patches, eleven to be precise, on my server: http://dbservice.com/ftpdir/tom/wine/ Up until patch number 10, it should not introduce any regression as it only makes the necessary changes to gdi to support the new GDI driver function (but doesn't use it yet) and adds the winegl files to x11drv. Patch number 10 is a bit large: it changes configure to compile the winegl files, removes x11drv/opengl.c and updates gdi32.DescribePixelFormat and friends to use the functions from the dispatch table. Patch number 11 seems big but it basically only replaces functions with wrapper functions that call the function pointer from the dispatch table. It also removes the now unnecessary wgl_ext.[c|h] functions and updates the Makefile. tom
Re: gdi: Add WineGL driver interface and wrapper functions
... and here is the patch:

diff --git a/dlls/gdi/driver.c b/dlls/gdi/driver.c
index 9f40a8d..012d882 100644
--- a/dlls/gdi/driver.c
+++ b/dlls/gdi/driver.c
@@ -194,6 +194,33 @@ #define GET_FUNC(name) driver->funcs.p##
         GET_FUNC(StrokePath);
         GET_FUNC(SwapBuffers);
         GET_FUNC(WidenPath);
+
+        /* Wine OpenGL implementation */
+        GET_FUNC(WineGL_wglChoosePixelFormat);
+        GET_FUNC(WineGL_wglCopyContext);
+        GET_FUNC(WineGL_wglCreateContext);
+        GET_FUNC(WineGL_wglCreateLayerContext);
+        GET_FUNC(WineGL_wglDeleteContext);
+        GET_FUNC(WineGL_wglDescribeLayerPlane);
+        GET_FUNC(WineGL_wglDescribePixelFormat);
+        GET_FUNC(WineGL_wglGetCurrentContext);
+        GET_FUNC(WineGL_wglGetCurrentDC);
+        GET_FUNC(WineGL_wglGetDefaultProcAddress);
+        GET_FUNC(WineGL_wglGetLayerPaletteEntries);
+        GET_FUNC(WineGL_wglGetPixelFormat);
+        GET_FUNC(WineGL_wglGetProcAddress);
+        GET_FUNC(WineGL_wglMakeCurrent);
+        GET_FUNC(WineGL_wglRealizeLayerPalette);
+        GET_FUNC(WineGL_wglSetLayerPaletteEntries);
+        GET_FUNC(WineGL_wglSetPixelFormat);
+        GET_FUNC(WineGL_wglShareLists);
+        GET_FUNC(WineGL_wglSwapBuffers);
+        GET_FUNC(WineGL_wglSwapLayerBuffers);
+        GET_FUNC(WineGL_wglUseFontBitmaps);
+        GET_FUNC(WineGL_wglUseFontOutlines);
+
+        GET_FUNC(WineGL_glGetIntegerv);
+
 #undef GET_FUNC
     }
     else memset( &driver->funcs, 0, sizeof(driver->funcs) );
@@ -705,3 +732,147 @@ INT WINAPI DrawEscape(HDC hdc, INT nEsca
     FIXME("DrawEscape, stub\n");
     return 0;
 }
+
+
+/***
+ * OpenGL wrapper functions
+ */
+
+#define WINEGL_WRAPPER_CREATE(retType, funcName, argPrototype, args) \
+retType WINAPI WineGLWrapper_##funcName argPrototype \
+{ \
+    return display_driver->funcs.pWineGL_##funcName args ;\
+} \
+
+WINEGL_WRAPPER_CREATE(
+    INT, wglChoosePixelFormat, (HDC hdc, const PIXELFORMATDESCRIPTOR *pfd),
+    (hdc, pfd)
+)
+
+WINEGL_WRAPPER_CREATE(
+    BOOL, wglCopyContext, (HGLRC hglrc1, HGLRC hglrc2, UINT uint),
+    (hglrc1, hglrc2, uint)
+)
+
+WINEGL_WRAPPER_CREATE(
+    HGLRC, wglCreateContext, (HDC hdc),
+    (hdc)
+)
+
+WINEGL_WRAPPER_CREATE(
+    HGLRC, wglCreateLayerContext, (HDC hdc, INT i),
+    (hdc, i)
+)
+
+WINEGL_WRAPPER_CREATE(
+    BOOL, wglDeleteContext, (HGLRC hglrc),
+    (hglrc)
+)
+
+WINEGL_WRAPPER_CREATE(
+    BOOL, wglDescribeLayerPlane, (HDC hdc, INT i1, INT i2, UINT uint, LPLAYERPLANEDESCRIPTOR lpLPD),
+    (hdc, i1, i2, uint, lpLPD)
+)
+
+WINEGL_WRAPPER_CREATE(
+    INT, wglDescribePixelFormat, (HDC hdc, INT i, UINT uint, LPPIXELFORMATDESCRIPTOR lpPFD),
+    (hdc, i, uint, lpPFD)
+)
+
+WINEGL_WRAPPER_CREATE(
+    HGLRC, wglGetCurrentContext, (void),
+    ()
+)
+
+WINEGL_WRAPPER_CREATE(
+    HDC, wglGetCurrentDC, (void),
+    ()
+)
+
+WINEGL_WRAPPER_CREATE(
+    PROC, wglGetDefaultProcAddress, (LPCSTR lpcstr),
+    (lpcstr)
+)
+
+WINEGL_WRAPPER_CREATE(
+    INT, wglGetLayerPaletteEntries, (HDC hdc, INT i1, INT i2, INT i3, const COLORREF *cref),
+    (hdc, i1, i2, i3, cref)
+)
+
+WINEGL_WRAPPER_CREATE(
+    INT, wglGetPixelFormat, (HDC hdc),
+    (hdc)
+)
+
+WINEGL_WRAPPER_CREATE(
+    PROC, wglGetProcAddress, (LPCSTR lpcstr),
+    (lpcstr)
+)
+
+WINEGL_WRAPPER_CREATE(
+    BOOL, wglMakeCurrent, (HDC hdc, HGLRC hglrc),
+    (hdc, hglrc)
+)
+
+WINEGL_WRAPPER_CREATE(
+    BOOL, wglRealizeLayerPalette, (HDC hdc, INT i, BOOL b),
+    (hdc, i, b)
+)
+
+WINEGL_WRAPPER_CREATE(
+    INT, wglSetLayerPaletteEntries, (HDC hdc, INT i1, INT i2, INT i3, const COLORREF *cref),
+    (hdc, i1, i2, i3, cref)
+)
+
+WINEGL_WRAPPER_CREATE(
+    BOOL, wglSetPixelFormat, (HDC hdc, INT i, const PIXELFORMATDESCRIPTOR *pfd),
+    (hdc, i, pfd)
+)
+
+WINEGL_WRAPPER_CREATE(
+    BOOL, wglShareLists, (HGLRC hglrc1, HGLRC hglrc2),
+    (hglrc1, hglrc2)
+)
+
+WINEGL_WRAPPER_CREATE(
+    BOOL, wglSwapBuffers, (HDC hdc),
+    (hdc)
+)
+
+WINEGL_WRAPPER_CREATE(
+    BOOL, wglSwapLayerBuffers, (HDC hdc, UINT uint),
+    (hdc, uint)
+)
+
+WINEGL_WRAPPER_CREATE(
+    BOOL, wglUseFontBitmaps, (HDC hdc, DWORD dw1, DWORD dw2, DWORD dw3),
+    (hdc, dw1, dw2, dw3)
+)
+
+WINEGL_WRAPPER_CREATE(
+    BOOL, wglUseFontOutlines, (HDC hdc, DWORD dw1, DWORD dw2, DWORD dw3, FLOAT f1, FLOAT f2, INT i, LPGLYPHMETRICSFLOAT lpGMF),
+    (hdc, dw1, dw2, dw3, f1, f2, i, lpGMF)
+)
+
+WINEGL_WRAPPER_CREATE(
+    VOID, glGetIntegerv, (DWORD pname, INT *params),
+    (pname, params)
+)
+
+
+/***
+ * low-level driver functions
+ */
+
+void * WINAPI wineGetDCPrivate(HDC hdc)
+{
+    void * ret = NULL;
+    DC * dc = DC_GetDCPtr(hdc);
+
+    if (dc) {
+        ret = (void *) dc->physDev;
+        GDI_ReleaseObj(hdc);
+    }
+
+    return ret;
+}
diff --git a/dlls/gdi/gdi32.spec b/dlls/gdi/gdi32.spec
index
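The WINEGL_WRAPPER_CREATE macro in the patch above is just a wrapper generator over a dispatch table. A standalone re-creation of the pattern, with made-up stub functions and plain int in place of the GDI handle types:

```c
/* Dispatch table of driver entry points; the names mimic the
 * pWineGL_ convention from the patch but everything is a stub. */
struct winegl_funcs {
    int (*pWineGL_wglGetPixelFormat)(int hdc);
    int (*pWineGL_wglSwapBuffers)(int hdc);
};

static int stub_wglGetPixelFormat(int hdc) { return hdc + 1; }
static int stub_wglSwapBuffers(int hdc)    { return hdc * 2; }

static struct winegl_funcs driver_funcs = {
    stub_wglGetPixelFormat,
    stub_wglSwapBuffers,
};

/* One macro stamps out a forwarding wrapper per entry point. */
#define WINEGL_WRAPPER_CREATE(retType, funcName, argPrototype, args) \
retType WineGLWrapper_##funcName argPrototype                        \
{                                                                    \
    return driver_funcs.pWineGL_##funcName args;                     \
}

WINEGL_WRAPPER_CREATE(int, wglGetPixelFormat, (int hdc), (hdc))
WINEGL_WRAPPER_CREATE(int, wglSwapBuffers,    (int hdc), (hdc))
```

Adding a new basic entry point is then one table field plus one macro invocation, which is why only extensions (which bypass the table) need the separate physDev lookup.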
Re: gdi: create driver interface for opengl implementation and add wrapper functions.
Tomas Carnecky wrote: This patch adds a WineGL interface to the GDI driver and creates wrapper functions for it. If the wgl function takes an HDC, the wrapper extracts the PHYSDEV and prepends it to the driver-function argument list. This way we have direct access to the PHYSDEV in x11drv and don't need to use ExtEscape(). I've run into some problems with this. opengl32.dll exports only the basic wgl functions; if an application wants to use functions which are provided by various extensions, it has to use wglGetProcAddress(). I moved all extensions to x11drv, and wglGetProcAddress() returns the pointer to those functions. The problem is that those functions are called directly by the application and not through the wrapper, and thus don't have access to the PHYSDEV like the basic wgl functions do. We could make wrapper functions for each extension, but that would require changing the driver function table every time we add a new extension. I took another path and added a new GDI function 'wineGetDCPrivate(HDC)' that returns the PHYSDEV for the given DC. This way I can have the same function prototypes for wgl?? in opengl32, WineGLWrapper_wgl?? in GDI and the WineGL_wgl?? implementation in x11drv. In the x11drv implementation I get the PHYSDEV when required using the new GDI function and have direct access to it; no need to use ExtEscape(). In addition to the basic wgl?? functions, the GDI driver has to expose two gl?? functions: glGetString() (for GL_EXTENSIONS, to be able to report all supported WGL extensions) and glGetIntegerv() (because GL_DEPTH_BITS and GL_ALPHA_BITS don't have the same meaning on Windows and X11). Also, once the x11drv side is stable, opengl32.spec can be changed like this: @ stdcall wglGetCurrentDC() gdi32.WineGLWrapper_wglGetCurrentDC And all but three functions can be deleted from wgl.c (and wgl_ext.c can be removed completely).
Those three functions are:
o wglGetProcAddress(): first scan the OpenGL functions; if not found, fall back to WineGLWrapper_wglGetProcAddress() to let x11drv return the pointer to the wgl function.
o glGetString(): use glGetString() and WineGLWrapper_glGetString() to build the extension string.
o glGetIntegerv(): let x11drv return the proper values if the app is requesting GL_DEPTH_BITS or GL_ALPHA_BITS.
tom
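The two-stage lookup described for wglGetProcAddress() can be sketched like this. The tables and names are hypothetical; the point is only the order: scan the locally exported OpenGL functions first, then fall back to the driver:

```c
#include <stddef.h>
#include <string.h>

typedef void (*proc_t)(void);

static void local_glBegin(void) {}            /* stands in for an opengl32 export */
static void driver_wglBindTexImageARB(void) {} /* stands in for an x11drv extension */

/* stage 1: functions exported by opengl32 itself */
static proc_t local_lookup(const char *name)
{
    if (strcmp(name, "glBegin") == 0) return local_glBegin;
    return NULL;
}

/* stage 2: hypothetical driver fallback (WineGLWrapper_wglGetProcAddress) */
static proc_t driver_lookup(const char *name)
{
    if (strcmp(name, "wglBindTexImageARB") == 0) return driver_wglBindTexImageARB;
    return NULL;
}

proc_t my_wglGetProcAddress(const char *name)
{
    proc_t p = local_lookup(name);   /* scan the OpenGL functions first */
    if (!p) p = driver_lookup(name); /* then let the driver (x11drv) answer */
    return p;
}
```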
Re: PBuffer and wglMakeCurrent()
Alexandre Julliard wrote: Tomas Carnecky [EMAIL PROTECTED] writes: As far as I understand, the only way to access x11drv is through GDI (which implements ExtEscape); would your proposed change require new functions for opengl32 -> x11drv communication through GDI? Or is it somehow possible to bypass GDI and access x11drv directly from opengl32? The DC access will have to be done through GDI, yes, but that can be a simple wrapper that calls the corresponding driver entry point. For functions that don't need to access the DC, opengl could call x11drv directly, though of course if everything goes through GDI it will make it easier to support opengl with a different driver. Based on your ideas, I did the following: - added a new gdi driver function: 'wglMakeCurrent' - moved wglMakeCurrent to x11drv - new function wrapper_wglMakeCurrent in gdi/driver.c - wglMakeCurrent in opengl32 just calls wrapper_wglMakeCurrent And it works, though I had to add -lGL to the x11drv Makefile; I don't know how you see a libGL.so requirement for x11drv, maybe load libGL.so and the required entry points at runtime like in opengl32/wgl.c. I had to simplify wglMakeCurrent, I didn't spend much time on this hack, and I had to copy the Wine_GLContext structure to x11drv/opengl.c, quite ugly, I know. I also had to comment out calls to 'NtCurrentTeb()' because it wouldn't compile otherwise; I don't know what that function does, but World of Warcraft works without it (and as a side note, I think it runs much faster). I didn't know how much logic to put into opengl32/gdi or x11drv; gdi extracts the PHYSDEV from the HDC and passes it as the first argument to the x11drv function implementation, along with the original arguments (is it ok to pass the HDC to x11drv?).
And I don't know if the function naming is ok; x11drv exports a function named wglMakeCurrent, maybe that should be changed to wine_wglMakeCurrent, and I don't know if the gdi wrapper and exported function names (both wrapper_wglMakeCurrent) are ok. If we can sort out the naming, I could start adding the driver entry points to both GDI and x11drv, without migrating the actual function implementations, and then start migrating function by function to x11drv. The attached patch is what came out of my hacking, just so you have an idea what I've done ;) tom

diff --git a/dlls/gdi/driver.c b/dlls/gdi/driver.c
index 9f40a8d..8903752 100644
--- a/dlls/gdi/driver.c
+++ b/dlls/gdi/driver.c
@@ -194,6 +194,9 @@ #define GET_FUNC(name) driver->funcs.p##
         GET_FUNC(StrokePath);
         GET_FUNC(SwapBuffers);
         GET_FUNC(WidenPath);
+
+        /* OpenGL wrapper functions */
+        GET_FUNC(wglMakeCurrent);
 #undef GET_FUNC
     }
     else memset( &driver->funcs, 0, sizeof(driver->funcs) );
@@ -705,3 +708,20 @@ INT WINAPI DrawEscape(HDC hdc, INT nEsca
     FIXME("DrawEscape, stub\n");
     return 0;
 }
+
+
+
+/* OpenGL wrapper functions */
+
+BOOL WINAPI wrapper_wglMakeCurrent(HDC hdc, HGLRC hglrc)
+{
+    BOOL ret = FALSE;
+    DC * dc = DC_GetDCPtr( hdc );
+    if (dc)
+    {
+        if (dc->funcs->pwglMakeCurrent)
+            ret = dc->funcs->pwglMakeCurrent( dc->physDev, hdc, hglrc );
+        GDI_ReleaseObj( hdc );
+    }
+    return ret;
+}
diff --git a/dlls/gdi/gdi32.spec b/dlls/gdi/gdi32.spec
index 1c245d8..bb9701d 100644
--- a/dlls/gdi/gdi32.spec
+++ b/dlls/gdi/gdi32.spec
@@ -522,3 +522,6 @@ #
 @ cdecl DIB_CreateDIBSection(long ptr long ptr long long long)
 @ cdecl GDI_GetObjPtr(long long)
 @ cdecl GDI_ReleaseObj(long)
+
+# OpenGL wrapper function
+@ stdcall wrapper_wglMakeCurrent(long long)
diff --git a/dlls/gdi/gdi_private.h b/dlls/gdi/gdi_private.h
index dfdb008..fd02a57 100644
--- a/dlls/gdi/gdi_private.h
+++ b/dlls/gdi/gdi_private.h
@@ -182,6 +182,9 @@ typedef struct tagDC_FUNCS
     BOOL (*pStrokePath)(PHYSDEV);
     BOOL (*pSwapBuffers)(PHYSDEV);
     BOOL (*pWidenPath)(PHYSDEV);
+
+    /* OpenGL wrapper functions */
+    BOOL (*pwglMakeCurrent)(PHYSDEV,HDC,HGLRC);
 } DC_FUNCTIONS;

 /* It should not be necessary to access the contents of the GdiPath
@@ -449,3 +452,4 @@ #define DIB_PAL_MONO 2
 #endif /* __WINE_GDI_PRIVATE_H */
 BOOL WINAPI FontIsLinked(HDC);
+
diff --git a/dlls/opengl32/wgl.c b/dlls/opengl32/wgl.c
index 2e3cac7..b450959 100644
--- a/dlls/opengl32/wgl.c
+++ b/dlls/opengl32/wgl.c
@@ -539,56 +539,9 @@ static int describeDrawable(Wine_GLConte
 /***
  * wglMakeCurrent (OPENGL32.@)
  */
-BOOL WINAPI wglMakeCurrent(HDC hdc,
-                           HGLRC hglrc) {
-  BOOL ret;
-  DWORD type = GetObjectType(hdc);
-
-  TRACE("(%p,%p)\n", hdc, hglrc);
-
-  ENTER_GL();
-  if (hglrc == NULL) {
-    ret = glXMakeCurrent(default_display, None, NULL);
-    NtCurrentTeb()->glContext = NULL;
-  } else {
-    Wine_GLContext *ctx = (Wine_GLContext *) hglrc;
-    Drawable drawable = get_drawable( hdc );
-    if (ctx->ctx == NULL) {
-      int draw_vis_id, ctx_vis_id;
-      VisualID
Re: PBuffer and wglMakeCurrent()
Tomas Carnecky wrote: comments? Why do I have the impression that when it comes to x11drv/opengl nobody wants to take responsibility? I won't submit a patch until someone says 'tom, your approach looks good, improve this and then submit a patch to wine-patches' or 'tom, no, this won't work because ... try to change this ... move the code there ... don't do that, it will break this' etc. I don't want to know whether the implementation details are ok, I just want to know whether the general idea (the flags field) is acceptable. I did the changes I've described and it does work, but I won't work on it any further (e.g. make it independent of my previous patches) unless I get a green light. In the attachment is the latest patch I've applied to my local tree; you'll see that it requires the X11DRV_[S|G]ET_FLAGS escape codes which I added in one of my previous patches. Maybe that gives you a better idea of what I'm trying to do. tom

117000e2604e102f41632dcbe3a5e454f9d218b9
diff --git a/dlls/opengl32/wgl.c b/dlls/opengl32/wgl.c
index ac0f401..7490015 100644
--- a/dlls/opengl32/wgl.c
+++ b/dlls/opengl32/wgl.c
@@ -173,6 +173,16 @@ inline static Font get_font( HDC hdc )
     return font;
 }

+inline static BOOL is_pbuffer( HDC hdc )
+{
+    long flags;
+    enum x11drv_escape_codes escape = X11DRV_GET_FLAGS;
+
+    if (!ExtEscape( hdc, X11DRV_ESCAPE, sizeof(escape), (LPCSTR)&escape,
+                    sizeof(flags), (LPSTR)&flags )) return False;
+    return ((flags & X11DRV_FLAG_PBUFFER) == X11DRV_FLAG_PBUFFER);
+}
+

 /***
  * wglCreateContext (OPENGL32.@)
@@ -571,8 +581,9 @@ BOOL WINAPI wglMakeCurrent(HDC hdc,
       TRACE(" make current for dis %p, drawable %p, ctx %p\n", ctx->display, (void*) drawable, ctx->ctx);
       ret = glXMakeCurrent(ctx->display, drawable, ctx->ctx);
       NtCurrentTeb()->glContext = ctx;
-      if(ret && type == OBJ_MEMDC) {
+      if(ret && type == OBJ_MEMDC && !is_pbuffer(hdc)) {
         ctx->do_escape = TRUE;
+        glDrawBuffer(GL_FRONT);
       }
     }
   LEAVE_GL();
diff --git a/dlls/opengl32/wgl_ext.c b/dlls/opengl32/wgl_ext.c
index 6c708f9..2304ccb 100644
--- a/dlls/opengl32/wgl_ext.c
+++ b/dlls/opengl32/wgl_ext.c
@@ -75,6 +75,22 @@ inline static BOOL is_damaged( HDC hdc )
     return ((flags & X11DRV_FLAG_DAMAGED) == X11DRV_FLAG_DAMAGED);
 }

+inline static BOOL set_pbuffer_flag( HDC hdc )
+{
+    long flags;
+    enum x11drv_escape_codes escape = X11DRV_GET_FLAGS;
+
+    if (!ExtEscape( hdc, X11DRV_ESCAPE, sizeof(escape), (LPCSTR)&escape,
+                    sizeof(flags), (LPSTR)&flags )) return False;
+
+    escape = X11DRV_SET_FLAGS;
+    flags |= X11DRV_FLAG_PBUFFER;
+    if (!ExtEscape( hdc, X11DRV_ESCAPE, sizeof(escape), (LPCSTR)&escape,
+                    sizeof(flags), (LPSTR)&flags )) return False;
+
+    return True;
+}
+
 /* Some WGL extensions... */
 static const char *WGL_extensions_base = "WGL_ARB_extensions_string WGL_EXT_extensions_string";
 static char *WGL_extensions = NULL;
@@ -1088,6 +1104,7 @@ HDC WINAPI wglGetPbufferDCARB(HPBUFFERAR
     hDC = CreateCompatibleDC(object->hdc);
     SetPixelFormat(hDC, object->pixelFormat, NULL);
     set_drawable(hDC, object->drawable); /* works ?? */
+    set_pbuffer_flag(hDC);
     TRACE("(%p)->(%p)\n", hPbuffer, hDC);
     return hDC;
 }
diff --git a/include/wine/x11drv_escape.h b/include/wine/x11drv_escape.h
index af69ce0..ed7940d 100644
--- a/include/wine/x11drv_escape.h
+++ b/include/wine/x11drv_escape.h
@@ -22,6 +22,7 @@ #ifndef __WINE_X11DRV_ESCAPE_H
 #define __WINE_X11DRV_ESCAPE_H

 #define X11DRV_FLAG_DAMAGED ( 1 << 0 )
+#define X11DRV_FLAG_PBUFFER ( 1 << 1 )

 #define X11DRV_ESCAPE 6789
 enum x11drv_escape_codes
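The X11DRV_GET_FLAGS/X11DRV_SET_FLAGS pair used in the pbuffer patch amounts to a per-device bit field with a read-modify-write update. A toy model without the ExtEscape plumbing, with a made-up phys_dev struct standing in for X11DRV_PDEVICE:

```c
#define X11DRV_FLAG_DAMAGED (1 << 0)
#define X11DRV_FLAG_PBUFFER (1 << 1)

struct phys_dev { long flags; };   /* stand-in for X11DRV_PDEVICE */

/* the GET/SET escape pair, reduced to plain accessors */
static int get_flags(const struct phys_dev *dev, long *out)
{
    *out = dev->flags;
    return 1;
}

static int set_flags(struct phys_dev *dev, long flags)
{
    dev->flags = flags;
    return 1;
}

/* mirrors set_pbuffer_flag(): read the flags, set one bit, write back */
int mark_as_pbuffer(struct phys_dev *dev)
{
    long flags;
    if (!get_flags(dev, &flags)) return 0;
    return set_flags(dev, flags | X11DRV_FLAG_PBUFFER);
}

/* mirrors is_pbuffer(): test the bit without disturbing the others */
int is_pbuffer(const struct phys_dev *dev)
{
    long flags;
    if (!get_flags(dev, &flags)) return 0;
    return (flags & X11DRV_FLAG_PBUFFER) == X11DRV_FLAG_PBUFFER;
}
```

The read-modify-write step is what keeps an existing bit such as X11DRV_FLAG_DAMAGED intact when the pbuffer bit is set.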
PBuffer and wglMakeCurrent()
In wglMakeCurrent(), when the HDC type is OBJ_MEMDC you activate the frontbuffer for drawing. PBuffers' type is also OBJ_MEMDC, but changing the draw buffer in that case is wrong. Is there a way to find out if the HDC is a PBuffer? I have some patches in my local tree, but I took the liberty of putting the X11DRV escape codes into one common header, which Alexandre doesn't quite like. In my local tree I have a 'flags' field in X11DRV_PDEVICE which I use to store whether the PBuffer is damaged or not. I don't know if it's the right place, but I could add an IS_PBUFFER flag, set it in wglGetPbufferDCARB() and check for it in wglMakeCurrent(). comments? tom
Re: World of Warcraft (WoW) patch/more address space layout stuff
Mike Hearn wrote: (yeah i'm bored :/) Seems the WoW appdb page (apart from being a great example of what an appdb entry should be like!) recommends users patch their Wine to run WoW properly. The patch is this one, which I am SURE we discussed before but I can't find the thread! So my questions are: * Is this working around a bug in WoW? (my guess - almost certainly yes) Speaking from my own experience with WoW bugs and Blizzard: unless the bug affects Windows or MacOS users and you can prove and show the exact location of the bug (which functions are called, which arguments are passed etc.), Blizzard has no interest in fixing the bug whatsoever. Proving the 'targeting circle' bug was easy: WoW passed a negative value to an OpenGL function, and the bug affected the game on all platforms when running in OpenGL mode (and thus all MacOS users). And yet Blizzard postponed the fix to the next major patch release (with at least two minor releases in between!), even though the fix was probably a one-liner. I think Blizzard should test WoW under wine or cedega, because I believe there are many bugs that are hard to track down and it would be easier to find them when running under another memory layout. I don't think they'll ever do that. Hell, the guy at Blizzard didn't even want to give credit to the open-source community for finding that bug. And frankly, playing without the targeting circles was a major pain, and I think many people were happy to see that bug fixed. tom
Re: What version of freetype are we requiring these days
Bill Medland wrote: I have just noticed that configure is telling me Warning: Freetype or Fontforge is missing Why does it think that? (freetype-devel version 2.1.9 is installed) What about fontforge? Is that installed?
WoW crashes in 'wine_cp_mbstowcs' under certain circumstances.
These circumstances being if it tries to load an invalid lua file, more specifically a lua file which contains the invalid lua string (not character!) \342\200\260. I don't know what this function does and there isn't a TRACE() in it, so could someone please review whether the function is implemented correctly (wrt the above char sequence), or give me more information about the arguments so I can add a TRACE() to see what WoW passes to it? thanks tom
Re: WoW crashes in 'wine_cp_mbstowcs' under certain circumstances.
Jesse Allen wrote: On 4/17/06, Tomas Carnecky [EMAIL PROTECTED] wrote: Wine doesn't crash in this function, sorry, it's a bug in pf_vsnprintf() which causes snprintf() to write beyond the end of the buffer. I've attached a patch that fixes it for me, but it's probably better not to create such large buffers on the stack. Anyone with a better fix? I think the patch breaks printing fields larger than 400. I think the existing code should have been able to handle very large fields by allocating the memory to do it. I think more investigation is needed. I thought that, too, but 'flags.FieldLength' was always zero, so the function always used the 40-character buffer. tom
Re: WoW crashes in 'wine_cp_mbstowcs' under certain circumstances.
Jesse Allen wrote: On 4/17/06, Tomas Carnecky [EMAIL PROTECTED] wrote: Jesse Allen wrote: On 4/17/06, Tomas Carnecky [EMAIL PROTECTED] wrote: Wine doesn't crash in this function, sorry, it's a bug in pf_vsnprintf() which causes snprintf() to write beyond the end of the buffer. I've attached a patch that fixes it for me, but it's probably better not to create such large buffers on the stack. Anyone with a better fix? I think the patch breaks printing fields larger than 400. I think the existing code should have been able to handle very large fields by allocating the memory to do it. I think more investigation is needed. I thought that, too, but 'flags.FieldLength' was always zero, so the function always used the 40-character buffer. In the case that it is specified greater than 400, it will break. What makes you think so? Sure, the string buffer in the msvcrt test application isn't big enough to hold a 500-character string, but when I increase it, it works fine. tom
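One way to avoid a fixed on-stack field buffer entirely is to measure the formatted field first and then allocate exactly that much. This is only a sketch of that alternative (not the actual pf_vsnprintf() fix discussed above); it relies on C99 snprintf returning the required length when given a NULL buffer:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Format one field of arbitrary width into a heap buffer.
 * format_field is a hypothetical helper, not an msvcrt function. */
char *format_field(int width, double value)
{
    /* C99: snprintf(NULL, 0, ...) returns the length needed */
    int len = snprintf(NULL, 0, "%*f", width, value);
    if (len < 0) return NULL;

    char *buf = malloc((size_t)len + 1);
    if (!buf) return NULL;

    snprintf(buf, (size_t)len + 1, "%*f", width, value);
    return buf;
}
```

With this pattern a field width of 500 (or 5000) simply allocates a bigger buffer instead of overflowing a 40-character array on the stack.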
Re: abort if WINEDEBUG requests functionality that was disabled at configure time
Mike Hearn wrote: Why would anybody build without tracing anyway? I would be *very* surprised if it led to any noticable/real world performance improvement. For the exact same reason NVidia doesn't want to add yet another dispatch table to linux's OpenGL libraries (as proposed by someone on the xorg ML to add support for a new fancy feature): for functions that are used heavily and are very small (like glBegin()), it decreases performance by up to 30% for certain applications (I think it was an opengl test suite). I know the overhead of a dispatch table is bigger than a simple 'if (trace) {}', but these few additional instructions (a load and a compare, so two?) are still noticeable in certain applications. tom
Re: abort if WINEDEBUG requests functionality that was disabled at configure time
Segin wrote: Mike Hearn wrote: On Sat, 15 Apr 2006 23:37:17 -0400, Mike Frysinger wrote: if wine is built with --disable-trace or --disable-debug, then using WINEDEBUG will accept the respective options without complaining and without actually showing any useful information Why would anybody build without tracing anyway? I would be *very* surprised if it led to any noticable/real world performance improvement. Rather than do this why not just restore your packages configure switches to the upstream defaults? It's because some people are Gentoo zealots whose CFLAGS are -O9 --omfg-optimize -march=8088 If we can, why not? If there is an option to disable the trace, it's got to be there for a reason; why else would it be there, for fun? Nobody ever told me that disabling the trace would improve performance, but I assumed it, since less code generally means faster code. But it doesn't improve performance; I tested it with WoW, and there was nothing noticeable. So I'm all for removing this configure option. tom
Re: Missing fonts
Alexander Nicolaysen Sørnes wrote: Vitaliy Margolen skrev: Sunday, April 9, 2006, 4:15:22 PM, Alexander N. Sørnes wrote: I'm sorry, but are you reading this mailing list at all? To sum it up: 1. You need fontforge. 2. You need _working_ fontforge. 3. You need Wine's fonts made with working fontforge. Same thing for freetype. Vitaliy First of all, I am very sorry if I sent a message in ignorance. But wasn't the FontForge stuff introduced before 0.9.11? Compiling and running 0.9.11 works fine, but doing the same for CVS does not. I had the same problem. You need a newer fontforge, mine was from 2005 (from the official gentoo portage tree) and updating to 200604?? solved the problem. tom
Re: Missing fonts
Vitaliy Margolen wrote: Sunday, April 9, 2006, 4:15:22 PM, Alexander N. Sørnes wrote: Hello, I have found some font problems recently with Wine. First of all, the text of most richedit objects started displaying as rectangles since around Wine 0.9.9, and now, with current CVS, no text is displayed in applications witht the standard Windows interface (like winecfg, wordpad, message boxes etc.) on some systems. [skip] I'm sorry, but are you reading this mailing list at all? To sum it up: 1. You need fontforge. 2. You need _working_ fontforge. maybe the configure script should check the fontforge version, to be at least from 2006. My fontforge: Executable based on sources from 20:17 6-Apr-2006-ML.. Checking for Executable based on sources from (.+) (.+)-(.+)-2006 in `fontforge --version` should be enough. tom
Re: Missing fonts
Mike McCormack wrote: Tomas Carnecky wrote: maybe the configure script should check the fontforge version, to be at least from 2006. My fontforge: Executable based on sources from 20:17 6-Apr-2006-ML.. Checking for Executable based on sources from (.+) (.+)-(.+)-2006 in `fontforge --version` should be enough. The configure tests should test the capability of FontForge to build fonts as Wine requires them, not for a specific version. I think a better solution is sfd2ttd, but it might be a while before somebody gets around to implementing that. At least now this information is in the README file, I saw the patch just after I've sent that email :) tom
Re: Missing fonts
Kai Blin wrote: * Tomas Carnecky [EMAIL PROTECTED] [10/04/06, 15:01:00]: At least now this information is in the README file, I saw the patch just after I've sent that email :) It's not. Alexandre convinced me that the README file is the wrong place for it. Configure warns you about a missing fontforge, that will be enough. fontforge isn't a hard dependency. It's not the missing fontforge I'm worried about, it's an incompatible fontforge that creates unusable fonts. In that case I'd rather see wine skip the fonts/ directory and not create/install any fonts at all. A quick font test before 'make' installs the fonts would help here (given that it would build the fonts itself before installing them). tom
Re: Coverity doing scans of Wine codebase!
Mike Hearn wrote: * Another (missing NULL ptr check in LoadTypeLibEx) is right, but, I don't think we want to add lots of missing NULL arg checks in the public API implementations. An application will never pass NULL to this function directly as otherwise it'd not work at all, so, a crash with a NULL arg here probably is revealing some other bug. I'd rather it crashed cleanly in a debuggable way than silently return error code and continue, in other words ... Is there a way to tell the code checker to skip the NULL check? Maybe there are flags like '__user' in the kernel source -- '__notnull'. tom
Re: Coverity doing scans of Wine codebase!
Mike McCormack wrote: Tomas Carnecky wrote: * Another (missing NULL ptr check in LoadTypeLibEx) is right, but, I don't think we want to add lots of missing NULL arg checks in the public API implementations. An application will never pass NULL to this function directly as otherwise it'd not work at all, so, a crash with a NULL arg here probably is revealing some other bug. I'd rather it crashed cleanly in a debuggable way than silently return error code and continue, in other words ... Is there a way to tell the code checker to skip the NULL check? Maybe there are flags like '__user' in the kernel source -- '__notnull'. It's complaining because there is already a NULL check further down in the code, but we use the pointer without checking for NULL first. If the function never checks that parameter for NULL, the checker won't complain about it. Ah.. I didn't know that. tom
Re: [opengl] Another possible fix for the BadMatch error
Leon Freitag wrote: BadMatch in X_GLXCreateGLXPixmap is a known problem, I've submitted a patch but it was rejected. Well, try to resubmit it :) Or post it here, so that others can test it. Perhaps you could merge this one-liner into it and then resubmit it. Yep, here it is. But note that it's only supported on GLX 1.3 and higher. tom

diff --git a/dlls/opengl32/wgl.c b/dlls/opengl32/wgl.c
index c7d3147..a9d0b27 100644
--- a/dlls/opengl32/wgl.c
+++ b/dlls/opengl32/wgl.c
@@ -509,22 +509,35 @@ static int describeContext(Wine_GLContex
 static int describeDrawable(Wine_GLContext* ctx, Drawable drawable) {
   int tmp;
-  int draw_vis_id;
+  int nElements;
+  int attribList[3] = { GLX_FBCONFIG_ID, 0, None };
+  GLXFBConfig *fbCfgs;
+
   if (3 > wine_glx.version || NULL == wine_glx.p_glXQueryDrawable) {
     /** glXQueryDrawable not available so returns not supported */
     return -1;
   }
+
   TRACE(" Drawable %p have :\n", (void*) drawable);
-  wine_glx.p_glXQueryDrawable(ctx->display, drawable, GLX_FBCONFIG_ID, (unsigned int*) &tmp);
-  TRACE(" - FBCONFIG_ID as 0x%x\n", tmp);
-  wine_glx.p_glXQueryDrawable(ctx->display, drawable, GLX_VISUAL_ID, (unsigned int*) &tmp);
-  TRACE(" - VISUAL_ID as 0x%x\n", tmp);
-  draw_vis_id = tmp;
   wine_glx.p_glXQueryDrawable(ctx->display, drawable, GLX_WIDTH, (unsigned int*) &tmp);
   TRACE(" - WIDTH as %d\n", tmp);
   wine_glx.p_glXQueryDrawable(ctx->display, drawable, GLX_HEIGHT, (unsigned int*) &tmp);
   TRACE(" - HEIGHT as %d\n", tmp);
-  return draw_vis_id;
+  wine_glx.p_glXQueryDrawable(ctx->display, drawable, GLX_FBCONFIG_ID, (unsigned int*) &tmp);
+  TRACE(" - FBCONFIG_ID as 0x%x\n", tmp);
+
+  attribList[1] = tmp;
+  fbCfgs = wine_glx.p_glXChooseFBConfig(ctx->display, DefaultScreen(ctx->display), attribList, &nElements);
+  if (fbCfgs == NULL) {
+    return -1;
+  }
+
+  wine_glx.p_glXGetFBConfigAttrib(ctx->display, fbCfgs[0], GLX_VISUAL_ID, &tmp);
+  TRACE(" - VISUAL_ID as 0x%x\n", tmp);
+
+  XFree(fbCfgs);
+
+  return tmp;
 }

 /***
Re: [opengl] Another possible fix for the BadMatch error
Leon Freitag wrote: Am Dienstag, 4. April 2006 14:49 schrieb Tomas Carnecky: Leon Freitag wrote: BadMatch in X_GLXCreateGLXPixmap is a known problem, I've submitted a patch but it was rejected. Well, try to resubmit it :) Or post it here, so that others can test it. Perhaps you could merge this one-liner into it and then resubmit it. Yep, here it is. But note that it's only supported on GLX 1.3 and higher. tom + fbCfgs = wine_glx.p_glXChooseFBConfig(ctx->display, DefaultScreen(ctx->display), attribList, &nElements); Hm, as I see it, this patch tries to fix the spec violation. I already tried something like this over the weekend, but glXChooseFBConfig returned NULL and the problem still existed. Right, that is entirely possible, because glXQueryDrawable() just above returns an incorrect FBCONFIG_ID as long as the drawable hasn't been touched by a GLX function (glXMakeCurrent(), for example). Try calling describeDrawable() before and after glXMakeCurrent() and see if the output changes. I'll try this patch however, perhaps I made some mistake this weekend. Should it really fix X_GLXCreateGLXPixmap? No, this is only a cosmetic change, but it makes debugging easier because you then see the correct IDs. The X_GLXCreateGLXPixmap fix is much harder :( Although I wrote a couple of patches (because this bug can be fixed in several places), all were rejected. Search the wine-patches mailing list; you'll find the patches there. tom
Re: Implement THREAD_PRIORITY_TIME_CRITICAL
Con Kolivas wrote: Ok. This is not a shot in the dark by the way, because you mentioned pipes and I had a quick look at the wine sound code. I committed some changes to the cpu scheduler in 2.6.17-rc1 that change the way it views sleeping on pipes... Works _much_ better with 2.6.17-rc1(-g6246b612). Though I still sometimes hear 'spikes' or 'bursts' (a short, high-frequency pulse); I don't know what produces them, but it really hurts my ears. They usually appear when the CPU is under heavy load (when WoW loads world data from the harddrive, or when I switch from/to the workspace where WoW runs, though here I suspect wine's window handling code of blocking somewhere for too long), but much less now; in fact, it appeared only once or twice in 10 minutes of playing, which is really great. You are my god, thanks for fixing this! tom
Re: [Bug 4979] New: wine .9.11 make fails on AMD64
Robert Shearman wrote: This is a bug with the artsc-config utility, not with Wine. All of these types of utilities are fundamentally broken on 64-bit distributions when trying to compile as 32-bit. And what about first fixing wine instead of complaining about third-party utilities?

$ ./configure --prefix=/usr --disable-win16 CFLAGS="-march=k8 -O2 -pipe" LDFLAGS="-L/emul/linux/x86/usr/lib"
$ make
...
gcc -m32 -march=k8 -O2 -pipe -o sfnt2fnt sfnt2fnt.o -L../libs/unicode -lwine_unicode -L../libs/port -lwine_port -lfreetype -lz
/usr/lib/gcc/x86_64-pc-linux-gnu/4.1.0/../../../../x86_64-pc-linux-gnu/bin/ld: skipping incompatible /usr/lib/libfreetype.so when searching for -lfreetype
/usr/lib/gcc/x86_64-pc-linux-gnu/4.1.0/../../../../x86_64-pc-linux-gnu/bin/ld: skipping incompatible /usr/lib/libfreetype.a when searching for -lfreetype
/usr/lib/gcc/x86_64-pc-linux-gnu/4.1.0/../../../../x86_64-pc-linux-gnu/bin/ld: cannot find -lfreetype
collect2: ld returned 1 exit status
make[1]: *** [sfnt2fnt] Error 1

Here wine doesn't use LDFLAGS to link this executable, which breaks compilation on my gentoo box. That's why I have to add -L/emul/linux/x86/usr/lib to CFLAGS. And this maybe isn't the only place, but I don't know enough about makefiles to fix it myself. tom
Re: [Bug 4979] New: wine .9.11 make fails on AMD64
Robert Shearman wrote: You might want to find out why you have to add anything to LDFLAGS. I don't know about other distributions, but on 64-bit Ubuntu passing the -m32 flag to gcc will make it automatically add the standard 32-bit lib directories to the lib path (i.e. /lib32, /usr/lib32, etc.). On gentoo, the (precompiled/binary) compatibility libraries are in /emul/linux/, 32bit libraries from packages which gentoo compiles on its own (glibc and all other multilib aware packages) are in /lib32, /usr/lib32 etc. But my point was.. wine links an executable without using LDFLAGS. tom
Re: [opengl] Another possible fix for the BadMatch error
Willie Sippel wrote: On Sunday, 2 April 2006 at 16:44, Leon Freitag wrote: Someone mentioned earlier that the code in wgl.c in DescribeDrawable() violates the GLX specification and therefore causes bogus return values for the visualid. However, this code was already present before the regression, but somehow worked. So the regression can't be caused by this violation. I tried to correct this violation by calling glXChooseFBConfig and then glXGetFBConfigAttrib, as suggested in the spec, and glXChooseFBConfig returns NULL for the appropriate FBCONFIG_ID. To describe what I've seen: because the window isn't created using glXCreateWindow() it has no GLXFBConfig associated (at first), and thus no valid FBCONFIG_ID (and because of that you can't find out the VISUAL_ID!). That seems reasonable, because no GLX function has touched the drawable before. But that's only up to the first 'activation' of the drawable, e.g. a call to glXMakeCurrent(): in that function, my driver (nvidia closed source) adds a valid GLXFBConfig to the drawable, and after that, describeDrawable() prints the correct IDs. The variable 'draw_vis_id' should not be used in any checks, because it may contain an incorrect VISUAL_ID. Doesn't seem to work, at least not for the applications I tried. Just tested Iconoclast by ASD [1] and Don't Stop by Portal Process [2], same results (BadMatch, X_GLXCreateGLXPixmap). The BadMatch in X_GLXCreateGLXPixmap is another problem.. here you should get a BadMatch in X_GLXMakeCurrent. BadMatch in X_GLXCreateGLXPixmap is a known problem; I've submitted a patch but it was rejected. Despite the recent x11drv rewrite (or was it only the windowing code), the OpenGL handling seems very buggy :( Maybe I should resend the describeDrawable() patch. Don't know why it was rejected. tom
Re: [RESEND][x11drv] cleanup: Move x11drv escape codes to one common header file.
Alexandre Julliard wrote: Tomas Carnecky [EMAIL PROTECTED] writes: This patch doesn't change any logic or C source code, it only moves the X11DRV escape code definition to one common header file. Currently the definitions are spread over several C source files, and even there the definitions differ from file to file. I don't see anything wrong with this patch unless you *want* to have it difficult to maintain the code. Actually, that's more or less what we want. That sort of inter-dll dependency should be strongly discouraged, and we don't want to add more of it than strictly necessary; as far as possible we also don't want dlls to depend on private headers, so copying a few things across is the preferred approach. If it becomes a problem it means something is wrong with our implementation. It's not that I've added a new inter-dll dependency, at least that's how I see it. The dependency already *is* there: all these dlls depend on the ExtEscape() function. What about putting the x11drv escape codes into the header where the ExtEscape() prototype is? These escape codes belong to the ExtEscape() function, that's what that function understands/supports. tom
Re: What is the reason to use GL_FRONT_LEFT in wglMakeCurrent()
Huw D M Davies wrote: On Sat, Mar 25, 2006 at 02:23:01AM +0100, Tomas Carnecky wrote: GL_FRONT and GL_FRONT_LEFT mean the same unless the drawable is stereo. And I don't think PBuffers or Pixmaps can be stereo, so is there any special reason to use GL_FRONT_LEFT? Or is it because the spec says so? See man glXCreateGLXPixmap. The X pixmap is used as the front left buffer of the resulting rendering area, so we make sure that we're rendering into the X pixmap, not some dummy buffer created by glXCreateGLXPixmap. Ok, but see what happens if an application has one rendering context, one window (doublebuffered) and one pbuffer (doublebuffered or not): it activates the window (by default the backbuffer is selected for rendering), then activates the pbuffer (together with the same context); now wglMakeCurrent() calls glDrawBuffer(GL_FRONT_LEFT), so the frontbuffer is selected for rendering. When the application now activates the window again, it suddenly renders into the frontbuffer instead of the backbuffer as it expects. wglMakeCurrent() isn't supposed to change the rendering context state (someone prove me wrong, I couldn't find any good description of this function, but neither does glXMakeCurrent) and applications can make assumptions based on that. I don't have any wgl-pbuffer demos here; those from the nvidia sdk don't run (and won't for a looong time, sadly). And one application to test it with apparently isn't enough. I have written a test for windows (to test whether wglMakeCurrent changes the drawbuffer or not), but none of my friends has a graphics card that supports pbuffers. tom
Re: What is the reason to use GL_FRONT_LEFT in wglMakeCurrent()
Huw D M Davies wrote: On Mon, Mar 27, 2006 at 05:05:23PM +0100, Huw D M Davies wrote: Ah right, glDrawBuffer alters the rendering state, that's bad. We should presumably be calling glXSwapBuffers here (but only in the GLXPixmap case). Which of course won't work either. We need to find out what happens to the render state under Windows when we call wglMakeCurrent on a bitmap (I know that even with the rendering state set to GL_BACK, the bitmap still gets drawn on) and whether we restore the rendering context when we switch back to using some other dc afterwards. We probably shouldn't do anything in the pbuffer case (which is what's causing your problem, I guess). The big issue is that pbuffers and bitmap rendering are getting confused all over the place. Feel free to take http://dbservice.com/tom/LinuxTest.cpp, change it and test it.. it's a win32 application, despite its name :) tom
WGL_PBUFFER_LOST_ARB implementation
This patch implements 'WGL_PBUFFER_LOST_ARB'. It creates a new field in 'X11DRV_PDEVICE' which is used to store flags (currently only one: X11DRV_FLAG_DAMAGED). This flag indicates that the drawable (PBuffer) is damaged and no longer valid. As soon as the x11 driver receives the GLX_DAMAGED event it calls X11DRV_GLX_Event() which then sets the flag. wglQueryPbufferARB() uses ExtEscape() to retrieve this flag and can decide whether the PBuffer is lost or not. This patch depends on one I've sent earlier to the wine-patches mailing list: '[x11drv] Move x11drv ExtEscape() codes to one common header file.'. Please look through this patch and tell me if I've done something horribly wrong or missed an important aspect. I'm very new to wine and I don't know what I am allowed to touch and what not.

tom

diff --git a/dlls/opengl32/wgl_ext.c b/dlls/opengl32/wgl_ext.c
index ecefb7e..d11f4db 100644
--- a/dlls/opengl32/wgl_ext.c
+++ b/dlls/opengl32/wgl_ext.c
@@ -65,6 +65,16 @@ inline static void set_drawable( HDC hdc
     ExtEscape( hdc, X11DRV_ESCAPE, sizeof(escape), (LPCSTR)&escape, 0, NULL );
 }
 
+inline static BOOL is_damaged( HDC hdc )
+{
+    long flags;
+    enum x11drv_escape_codes escape = X11DRV_GET_FLAGS;
+
+    if (!ExtEscape( hdc, X11DRV_ESCAPE, sizeof(escape), (LPCSTR)&escape,
+                    sizeof(flags), (LPSTR)&flags )) return False;
+    return ((flags & X11DRV_FLAG_DAMAGED) == X11DRV_FLAG_DAMAGED);
+}
+
 /* Some WGL extensions... */
 static const char *WGL_extensions_base = "WGL_ARB_extensions_string WGL_EXT_extensions_string";
 static char *WGL_extensions = NULL;
@@ -1124,7 +1133,9 @@ GLboolean WINAPI wglQueryPbufferARB(HPBU
         break;
 
     case WGL_PBUFFER_LOST_ARB:
-        FIXME("unsupported WGL_PBUFFER_LOST_ARB (need glXSelectEvent/GLX_DAMAGED work)\n");
+        /* FIXME("unsupported WGL_PBUFFER_LOST_ARB (need glXSelectEvent/GLX_DAMAGED work)\n"); */
+        *piValue = is_damaged(object->hdc);
+        TRACE("PBuffer is damaged: %d ?\n", *piValue);
         break;
 
     case WGL_TEXTURE_FORMAT_ARB:
diff --git a/dlls/x11drv/event.c b/dlls/x11drv/event.c
index 53c1a6a..e642fad 100644
--- a/dlls/x11drv/event.c
+++ b/dlls/x11drv/event.c
@@ -67,6 +67,15 @@ extern BOOL ximInComposeMode;
 #define DndURL          128   /* KDE drag&drop */
 
+/* GLX/PBuffer events */
+/* since we can't always include GL/glx.h and we need this in all cases
+ * we simply define 'GLX_DAMAGED' here.
+ * Why can't we use 'X11DRV_register_event_handler'? Because we need to make
+ * sure that this event passes 'filter_event' and we can't register event handlers
+ * that are always executed, no matter which events the application wants to receive */
+#define GLX_DAMAGED 0x8020
+extern void X11DRV_GLX_Event( HWND hwnd, XEvent *xev );
+
 /* Event handlers */
 static void EVENT_FocusIn( HWND hwnd, XEvent *event );
 static void EVENT_FocusOut( HWND hwnd, XEvent *event );
@@ -117,6 +126,7 @@ static struct event_handler handlers[MAX
     /* ColormapNotify */
     { ClientMessage,    EVENT_ClientMessage },
     { MappingNotify,    X11DRV_MappingNotify },
+    { GLX_DAMAGED,      X11DRV_GLX_Event },
 };
 
 static int nb_event_handlers = 18;  /* change this if you add handlers above */
@@ -224,6 +234,8 @@ static Bool filter_event( Display *displ
         return (mask & QS_PAINT) != 0;
     case ClientMessage:
         return (mask & QS_POSTMESSAGE) != 0;
+    case GLX_DAMAGED:
+        return 1;
     default:
         return (mask & QS_SENDMESSAGE) != 0;
     }
diff --git a/dlls/x11drv/init.c b/dlls/x11drv/init.c
index 9701266..f7d2b68 100644
--- a/dlls/x11drv/init.c
+++ b/dlls/x11drv/init.c
@@ -322,6 +322,20 @@ INT X11DRV_ExtEscape( X11DRV_PDEVICE *ph
             return TRUE;
         }
         break;
+    case X11DRV_SET_FLAGS:
+        if (in_count >= sizeof(long))
+        {
+            physDev->flags = *(long *)in_data;
+            return TRUE;
+        }
+        break;
+    case X11DRV_GET_FLAGS:
+        if (out_count >= sizeof(long))
+        {
+            *(long *)out_data = physDev->flags;
+            return TRUE;
+        }
+        break;
     case X11DRV_SET_DRAWABLE:
         if (in_count >= sizeof(struct x11drv_escape_set_drawable))
         {
diff --git a/dlls/x11drv/opengl.c b/dlls/x11drv/opengl.c
index dd2017b..38d694b 100644
--- a/dlls/x11drv/opengl.c
+++ b/dlls/x11drv/opengl.c
@@ -28,6 +28,8 @@
 #include "wine/library.h"
 #include "wine/debug.h"
 
+#include "wine/x11drv_escape.h"
+
 WINE_DEFAULT_DEBUG_CHANNEL(wgl);
 WINE_DECLARE_DEBUG_CHANNEL(opengl);
@@ -56,6 +58,34 @@ WINE_DECLARE_DEBUG_CHANNEL(opengl);
 #define WINAPI __stdcall
 #define APIENTRY WINAPI
 
+/* retrieve the GLX drawable to use on a given DC */
+inline static Drawable get_drawable( HDC hdc )
+{
+    GLXDrawable drawable;
+    enum x11drv_escape_codes escape = X11DRV_GET_GLX_DRAWABLE;
+
+    if
Re: [opengl] Catch BadMatch errors before they can occur.
Lionel Ulmer wrote: Oh, and this patch creates a new debug channel 'swapbuffers' and puts both opengl swapbuffers functions into it. What is the reason for this change? When debugging opengl applications, I'm usually not interested in the *Swap*Buffers() functions, but in context creation, pbuffer stuff etc. It just writes unnecessary messages into the console. When debugging an application that crashes after 10 minutes, it makes the log very big. If someone is interested in the *Swap*Buffers() functions, he can still enable them, but for the majority of cases it's useless. That was basically the idea. I'll rewrite the patch and split it up; the GLX spec violation thing could be committed without breaking applications, other changes have to be tested first :) tom
Re: [opengl] Catch BadMatch errors before they can occur.
Lionel Ulmer wrote: On Fri, Mar 24, 2006 at 08:20:08PM +0100, Tomas Carnecky wrote: When debugging opengl applications, I'm usually not interested in the *Swap*Buffers() functions, but in context creation, pbuffer stuff etc. It just writes unnecessary messages into the console. When debugging an application that crashes after 10 minutes, it makes the log very big. If someone is interested in the *Swap*Buffers() functions, he can still enable them, but for the majority of cases it's useless. Strange, as there are 'opengl' TRACEs for ALL OpenGL functions and extensions, so I would guess that those would vastly dwarf the SwapBuffer traces (except if you have an application that does not draw anything on screen and just swaps its buffers :-) ). That's what I also thought.. now it comes to my mind! I may have compiled with debug disabled, and now with debug enabled make didn't recompile the file with all the 'real' OpenGL functions. So.. what about splitting 'opengl' up into 'opengl' (all the opengl functions) and 'oglsetup' with the setup/wgl/pbuffer etc. functions? tom
What is the reason to use GL_FRONT_LEFT in wglMakeCurrent()
GL_FRONT and GL_FRONT_LEFT mean the same unless the drawable is stereo. And I don't think PBuffers or Pixmaps can be stereo, so is there any special reason to use GL_FRONT_LEFT? Or is it because the spec says so? tom
Re: [opengl] Catch BadMatch errors before they can occur.
Lionel Ulmer wrote: On Fri, Mar 24, 2006 at 08:48:16PM +0100, Tomas Carnecky wrote: so.. what about splitting 'opengl' up into 'opengl' (all the opengl functions) and 'oglsetup' with the setup/wgl/pbuffer etc. functions? Why not 'wgl'? So we would have 'opengl' for 'core' functions and 'wgl' for the rest. ok.. here is a patch, but I can't make it work. Maybe someone else sees the bug. tom

diff --git a/dlls/opengl32/wgl.c b/dlls/opengl32/wgl.c
index 71f7511..0a682b2 100644
--- a/dlls/opengl32/wgl.c
+++ b/dlls/opengl32/wgl.c
@@ -44,7 +44,8 @@
 #include "wine/library.h"
 #include "wine/debug.h"
 
-WINE_DEFAULT_DEBUG_CHANNEL(opengl);
+WINE_DEFAULT_DEBUG_CHANNEL(wgl);
+WINE_DECLARE_DEBUG_CHANNEL(opengl);
 
 /** global glx object */
 wine_glx_t wine_glx;
@@ -698,7 +720,7 @@ BOOL WINAPI wglShareLists(HGLRC hglrc1,
  */
 BOOL WINAPI wglSwapLayerBuffers(HDC hdc, UINT fuPlanes)
 {
-  TRACE("(%p, %08x)\n", hdc, fuPlanes);
+  TRACE_(opengl)("(%p, %08x)\n", hdc, fuPlanes);
 
   if (fuPlanes & WGL_SWAP_MAIN_PLANE) {
     if (!SwapBuffers(hdc)) return FALSE;
diff --git a/dlls/opengl32/wgl_ext.c b/dlls/opengl32/wgl_ext.c
index 24e990d..624a82c 100644
--- a/dlls/opengl32/wgl_ext.c
+++ b/dlls/opengl32/wgl_ext.c
@@ -37,7 +37,7 @@
 #include "wine/library.h"
 #include "wine/debug.h"
 
-WINE_DEFAULT_DEBUG_CHANNEL(opengl);
+WINE_DEFAULT_DEBUG_CHANNEL(wgl);
 
 /* x11drv GDI escapes */
diff --git a/dlls/x11drv/opengl.c b/dlls/x11drv/opengl.c
index 3fc0231..68733b4 100644
--- a/dlls/x11drv/opengl.c
+++ b/dlls/x11drv/opengl.c
@@ -28,7 +28,8 @@
 #include "wine/library.h"
 #include "wine/debug.h"
 
-WINE_DEFAULT_DEBUG_CHANNEL(opengl);
+WINE_DEFAULT_DEBUG_CHANNEL(wgl);
+WINE_DECLARE_DEBUG_CHANNEL(opengl);
 
 #if defined(HAVE_GL_GL_H) && defined(HAVE_GL_GLX_H)
@@ -533,7 +534,7 @@ BOOL X11DRV_SwapBuffers(X11DRV_PDEVICE *
     return 0;
   }
 
-  TRACE("(%p)\n", physDev);
+  TRACE_(opengl)("(%p)\n", physDev);
 
   wine_tsx11_lock();
   pglXSwapBuffers(gdi_display, physDev->drawable);
Re: [opengl] Catch BadMatch errors before they can occur.
Tony Lambregts wrote: Tomas Carnecky wrote: ok.. here is a patch, but I can't make it work. Maybe someone else sees the bug. What is the problem? This is from a log of Google using your patch and WINEDEBUG=+wgl,+opengl. As you can maybe see in the patch, I've put both SwapBuffers functions into the opengl debug channel (rather than wgl), but they don't show up in my trace. So something is wrong with the TRACE_(opengl)(...) or with the way I've set up the channels. tom
glXCreateGLXPixmap() BadMatch
Tomas Carnecky wrote: Tomas Carnecky wrote: Mike Hearn wrote: Mike Hearn [EMAIL PROTECTED] Optimize thunks by storing GL context in the thread environment block good job. this fixed the BadMatch error in World of Warcraft and also increased performance, from ~20fps to ~25fps. Maybe that was too soon.. I can login with some characters, with others I still get the BadMatch error. This bug may be related to the minimap bug that caused opengl to crash when you entered a building, because I can login with my characters that stand outside, but not with those that stand in an inn (building) or in Ironforge. Does anyone remember how the minimap bug was fixed and whether this BadMatch bug may be related to it? BadMatch is generated if the depth of pixmap does not match the GLX_BUFFER_SIZE value of vis, or if pixmap was not created with respect to the same screen as vis. Further investigation revealed this: when create_glxpixmap() is called, physDev->depth and physDev->bitmap->pixmap_depth are 1, but GLX_BUFFER_SIZE from the visual returns 32. I don't know how to get the depth directly from the pixmap (in case physDev->bitmap->pixmap_depth is not up-to-date), so any clarification of how the bitmap/pixmap code works would be great. Also, when looking at where the depth is changed, I came across X11DRV_SelectBitmap(), where you change the depth if physDev->depth and physBitmap->pixmap_depth don't match.
In the trace I see that WoW creates lots of bitmaps, and the last time a bitmap is selected the depth is changed to 1:

trace:x11drv:X11DRV_CreateBitmap (0x340) 32x32 1 bpp
trace:x11drv:X11DRV_CreateBitmap physBitmap->pixmap_depth: 1
trace:x11drv:X11DRV_SetBitmapBits (bmp=0x340, bits=0x7fd64e50, count=0x80)
trace:x11drv:X11DRV_SetBitmapBits physBitmap->pixmap_depth=1
trace:x11drv:X11DRV_CreateBitmap (0x33c) 32x32 24 bpp
trace:x11drv:X11DRV_CreateBitmap physBitmap->pixmap_depth: 24
trace:x11drv:X11DRV_SetBitmapBits (bmp=0x33c, bits=0x7fd888b8, count=0xc00)
trace:x11drv:X11DRV_SetBitmapBits physBitmap->pixmap_depth=24
trace:x11drv:X11DRV_SelectBitmap changing depth of physDev(0x7fd64ed8) to 24
trace:x11drv:X11DRV_SelectBitmap changing depth of physDev(0x7fd64ed8) to 1

And also, I don't see anywhere in the trace that the bitmap depth would be 32 (to match GLX_BUFFER_SIZE). tom
Re: [opengl] check drawable and context Visual IDs in wglMakeCurrent()
Stefan Dösinger wrote: Hi, So.. in this attachment you'll find a patch that does what I've just described. I can't test it on anything other than WoW, so if someone would please review it and test it with other opengl/d3d applications, that would be great. No effects noticed with Half-Life 1 (GL), Warcraft III (GL and D3D) and Jedi Academy (GL). Maybe it was because of the earlier opengl patch 'Store GL context in TEB'. But I didn't notice such an increase then.. only from ~20 to ~30fps. That patch gave me a hint for a possible reason for the WineD3D slowness with a few games :) good, at least something :) Something else.. now I know why there is this VisualID mismatch. Someone didn't read the GLX spec: in opengl32/wgl.c:describeDrawable() you call glXQueryDrawable() with GLX_VISUAL_ID, but that's not allowed! Only GLX_FBCONFIG_ID and three others, according to the GLX 1.3/1.4 spec [1]. According to the spec, you can get the GLXFBConfig from a GLXDrawable using glXQueryDrawable() and then the Visual from the GLXFBConfig using glXGetFBConfigAttrib(). .. I'll send a patch. tom [1] http://www.opengl.org/documentation/specs/
Re: Store GL context in the TEB
Mike Hearn wrote: Mike Hearn [EMAIL PROTECTED] Optimize thunks by storing GL context in the thread environment block good job. this fixed the BadMatch error in World of Warcraft and also increased performance, from ~20fps to ~25fps. tom
Re: ddraw design flaw
Stefan Dösinger wrote: Hi, I have found a design flaw in ddraw and described it in my 'World of Warcraft 0.10 Public Test Realm (PTR)' thread. Why has nobody addressed this issue? In my eyes it is a severe design flaw if an application can't unload ddraw.dll without side effects (especially if this side effect is that all opengl/directx applications are unusable after that). The ddraw implementation has some unloading bugs at the moment. Not restoring the glx context (that's the problem you reported, right?) is only one of them. Another is that the screen resolution isn't restored when the app doesn't care to release its object. I have this problem in mind, and I'll check WineD3D for it, as it most likely affects all D3D libs. Once it's fixed in WineD3D and ddraw uses WineD3D for rendering, this should be sorted out. Sorry for not fixing this at once, but time is limited ;). If you want a solution now, you're free to fix ddraw. But I registered that issue and it's on my lengthy todo list. Thanks for the info. It's good to see that it's a known issue and that it's on someone's TODO list. I'm trying to find out whether Survey.dll will be part of the final 1.10 release or if it's only part of the PTR client. I haven't had any luck so far. I don't know why Blizzard doesn't want to give me this information; we'll find out sooner or later (e.g. the day they release 1.10), it just would be nice to know whether the WoW players need a patch or not. I'm not very familiar with wine so I probably won't be able to write a better patch. tom
Re: x11drv: One more fix for stuck ctrl, shift alt.
Vitaliy Margolen wrote: Anything wrong with this patch? It does fix the stuck alt problem for games I use alt+tab on. I can confirm that this fixes the bug. This patch is very important for me, as I no longer have to switch back and forth between the desktop and WoW to un-stick the alt key. This patch works fine and I don't see any side effects. tom
ddraw design flaw
I have found a design flaw in ddraw and described it in my 'World of Warcraft 0.10 Public Test Realm (PTR)' thread. Why has nobody addressed this issue? In my eyes it is a severe design flaw if an application can't unload ddraw.dll without side effects (especially if this side effect is that all opengl/directx applications are unusable after that). So please, will someone explain to me why nobody wants to fix this? It's nothing bad now, but if WoW still loads Survey.dll (and thus ddraw.dll) in the final public release, many people will demand a fix. Very few people play on the PTR, but there are quite a lot who play WoW under wine.. and they will not be happy if they can't play anymore. tom