Re: [winealsa] add support for mixer devices
Dmitry Timoshkov wrote:
> Maarten Lankhorst [EMAIL PROTECTED] wrote:
> > +/**
> > + * mxdMessage (WINEALSA.3)
> > + */
> > +DWORD WINAPI ALSA_mxdMessage(UINT wDevID, UINT wMsg, DWORD dwUser,
> > +                             DWORD dwParam1, DWORD dwParam2)
>
> It would be nice when adding new code to make it 64-bit safe from the start, in this case by using a proper DRVCALLBACK prototype for the driver callback, and using proper data types for passing in/out parameters around.

You're right, and not right. The best thing to do would be adding all the ??dMessage's to mmddk.h, then fixing what breaks, and making the whole of winmm 64-bit safe; only THEN would it be useful to make the mixer 64-bit safe. I changed it to be future compatible though; wine doesn't build for 64 bits now anyway, so there's no harm in making this code 64-bit safe in advance. I'll send a try 3.
Removal of unused audio drivers
There are 5 different audio drivers for linux, which I think is a bit overkill, so I propose to remove the esd and nas drivers. I don't think anyone uses esd, especially since alsa can now be used for that task thanks to the dmix addon. I'm not sure what nas is for, but it seems to be the 'network audio system'; I haven't seen any use for it, except that it causes a 30-second slowdown when showing the 'audio' tab in winecfg. I don't think anyone uses it. For esd I think it's best to remove it; for nas I'm also for removal, but I'll settle for removing it from the winecfg list the same way winearts was disabled for a while. What are your thoughts about this?

Maarten
Re: Wine vs. Cedega in Benchmarks
On 12/04/07, Tom Wickline [EMAIL PROTECTED] wrote:
> Those guys ran 5 game tests and Wine's performance is clearly superior to that of Cedega on the benchmarks where Wine was run. They give no details of the Wine configuration, so I can only presume it's a default setup. And since they're *trying* to paint the best picture possible for Cedega, they don't point out that Wine is superior!
>
> "It is also important to note that there were minimal performance differences between WINE 0.9.32 and Cedega 6.0. Granted there are only five benchmarks in this Cedega 6.0 performance preview, but the level of performance for Cedega does look extremely promising and we will continue to look at Cedega 6.0 and report back in future articles."
>
> Should read: "Cedega's performance is currently lagging that of Wine 0.9.32, and with each Wine release Wine's performance and feature set is continuously improving!"
>
> I'm open for thoughts and suggestions.

Tbh, I don't think an OpenGL performance comparison is particularly interesting in the first place.
Re: Wine vs. Cedega in Benchmarks
> Tbh, I don't think an OpenGL performance comparison is particularly interesting in the first place.

The interesting thing is that I did my own native Linux vs native MacOS vs Wine benchmarks with glExcess a few days ago, and I got pretty much the opposite result. Granted, my benchmarking code was very primitive, so read the results as +/- 10 fps, and this was on fglrx. But I got remarkable differences between Wine and native, with native being up to 2 times faster. Out of interest I did a quick check on nvidia. The difference is smaller, but it is there too (990 vs 1100 fps in the first glExcess scene at 640x480); still a ~10% difference. I also tested winelib vs a PE .exe (built with msvc6) and found no difference (990 fps for winelib vs 980 for the PE build). The small differences could be because of my shitty benchmark code or because of compiler differences. But I agree with Henri that a Direct3D performance comparison will be much more interesting.

scene  macos low  macos high  linux low  linux high
  1       407        135         207         91
  2       265        108         264        182
  3       468        212         285        125
  4
  5       429        166         309         97
  6       525        201         409         92
  7       169        118         238         72
  8       242        148         156         96
  9       260         91         192         78
 10       211         76         178         66
 11       248        102          96         84

low: 640x480, high: 1380x850
2: wavy face
3: mountain face
4: OpenGL logo explosion
5: first tunnel
6: another tunnel with futuristic flying objects
7: glass cubes
8: water, moon and sun
9: waterfall
10: lasers
11: final scene with credits

Runs with winelib:

scene  macos low  macos high  linux low  linux high
  1       390        134         161        122
  2       115        115         138        138
  3       313        167         237        119
  4
  5       311        162         239         85
  6       417        200         375         95
  7        93         93         235         94
  8       119        118         155        142
  9       259         92         196         77
 10       206         73         172         65
 11       147        103          79
Re: locales, unicode and ansi with msvcrt (bug 8022)
Dmitry Timoshkov wrote:
> Jason Edmeades [EMAIL PROTECTED] wrote:
> > Bug 8022 (http://bugs.winehq.org/show_bug.cgi?id=8022) has highlighted something interesting which has me puzzled... Basically, let's take xcopy as an example command-line application. It issues messages to the screen using MSVCRT's wprintf(L"Unicode string") type functions (wprintf -> vsnwprintf) and builds a Unicode string to output. This ends up calling fwrite -> _write -> WriteFile -> WriteConsoleA.
>
> Apparently you need to use appropriate console output APIs directly (that take into account the console input/output code page) instead of using MSVCRT APIs.

Unfortunately just using the wide console function will only help the output to the screen; but as my test program shows, there is the same discrepancy when the output is to a file handle...

Jason
Re: Patches / a proposal for the mystic DIB engine
Felix Nawothnig [EMAIL PROTECTED] writes:
> 2. Export LockDIBSection/Unlock to gdi32. Adding more exports is not nice but there really is no way around that, right?

No, LockDIBSection is a driver-internal detail, gdi32 has no business knowing about this.

> 3. Move dc->funcs to dc->physDev->funcs. Many changes but mostly mechanical work. Rationale: This really belongs there. And I need it. :)

No it doesn't, physDev is an opaque structure as far as gdi32 is concerned. Data needed by gdi32 belongs in the DC structure.

> 4. Now we write dibdrv.c, for now just containing DIBDRV_Install and DIBDRV_Remove. That function will go through the physDev->funcs list and overwrite each function pointer which is actually implemented with DIBDRV_PutPixel(), whatever. DIBDRV_Install/DIBDRV_Remove will be called from BITMAP_SelectObject() when we switch from non-DIB to DIB or vice versa. Note that we can't use DRIVER_load_driver here because of the wanted forward to the original driver when not implemented. For this we will need to extend the "for DIB objects" part in BITMAPOBJ by
>     const DC_FUNCTIONS *orig_funcs;
>     DC_FUNCTIONS local_funcs;
> where orig_funcs points to the old physDev->funcs and the new physDev->funcs points to bmp->local_funcs.

You certainly don't want to store the full function table in the BITMAPOBJ, it will be the same for all bitmaps. All you need is one function table for the DIB driver and one for the normal graphics driver. Forwarding to the graphics driver can be done privately in the DIB driver, gdi32 doesn't need to know about it. And you probably want a separate physDev pointer for it, you'll need to maintain state for DIBs too.

-- Alexandre Julliard [EMAIL PROTECTED]
Re: How come individual applications can't be put on a virtual desktop in winecfg?
Scott Ritchie [EMAIL PROTECTED] writes: When Wine detects that it's about to launch an app with special configurations set in winecfg, why can't it launch a new desktop as though we were calling an entirely new Wine instance? Because usually when an app launches another one it expects to communicate with it, and that won't work if they are in separate desktops. -- Alexandre Julliard [EMAIL PROTECTED]
Re: rpcrt4: Implement RpcMgmtWaitServerListen
Dan Hipschman wrote:
> @@ -94,7 +95,8 @@ struct connection_ops {
>    RpcConnection *(*alloc)(void);
>    RPC_STATUS (*open_connection_client)(RpcConnection *conn);
>    RPC_STATUS (*handoff)(RpcConnection *old_conn, RpcConnection *new_conn);
> -  int (*read)(RpcConnection *conn, void *buffer, unsigned int len);
> +  int (*read)(RpcConnection *conn, void *buffer, unsigned int len, BOOL check_stop_event);
> +  int (*signal_to_stop)(RpcConnection *conn);
>    int (*write)(RpcConnection *conn, const void *buffer, unsigned int len);
>    int (*close)(RpcConnection *conn);
>    size_t (*get_top_of_tower)(unsigned char *tower_data, const char *networkaddr, const char *endpoint);

Hmm, I'm not sure it needs to be this complicated.

>      HeapFree(GetProcessHeap(), 0, msg);
>  }
>
> -static DWORD CALLBACK RPCRT4_worker_thread(LPVOID the_arg)
> -{
> -  RpcPacket *pkt = the_arg;
> -  RPCRT4_process_packet(pkt->conn, pkt->hdr, pkt->msg);
> -  HeapFree(GetProcessHeap(), 0, pkt);
> -  return 0;
> -}
> -
>  static DWORD CALLBACK RPCRT4_io_thread(LPVOID the_arg)
>  {
>    RpcConnection* conn = (RpcConnection*)the_arg;
> @@ -319,10 +322,14 @@ static DWORD CALLBACK RPCRT4_io_thread(LPVOID the_arg)
>    RpcBinding *pbind;
>    RPC_MESSAGE *msg;
>    RPC_STATUS status;
> -  RpcPacket *packet;
>
>    TRACE("(%p)\n", conn);
>
> +  EnterCriticalSection(&client_connections_cs);
> +  list_add_head(&client_connections, &conn->client_entry);
> +  ResetEvent(clients_completed_event);
> +  LeaveCriticalSection(&client_connections_cs);
> +
>    for (;;) {
>      msg = HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, sizeof(RPC_MESSAGE));
> @@ -338,17 +345,17 @@ static DWORD CALLBACK RPCRT4_io_thread(LPVOID the_arg)
>        break;
>      }
>
> -#if 0
>      RPCRT4_process_packet(conn, hdr, msg);
> -#else
> -    packet = HeapAlloc(GetProcessHeap(), 0, sizeof(RpcPacket));
> -    packet->conn = conn;
> -    packet->hdr = hdr;
> -    packet->msg = msg;
> -    QueueUserWorkItem(RPCRT4_worker_thread, packet, WT_EXECUTELONGFUNCTION);
> -#endif
> -    msg = NULL;
>    }
> +
> +  EnterCriticalSection(&client_connections_cs);
> +  list_remove(&conn->client_entry);
> +  if (list_empty(&client_connections)) {
> +    TRACE("last in the list to complete (%p)\n", conn);
> +    SetEvent(clients_completed_event);
> +  }
> +  LeaveCriticalSection(&client_connections_cs);
> +
>    RPCRT4_DestroyConnection(conn);
>    return 0;
>  }

I'm not sure of your reasoning for doing this. If I'm not mistaken, this change makes it so that only one RPC call at a time is processed.

-- Rob Shearman
Re: Removal of unused audio drivers
Maarten Lankhorst wrote:
> There are 5 different audio drivers for linux, I think this is a bit overkill, so I propose to remove the esd and nas drivers [...] What are your thoughts about this?

Hi Maarten,
I'm using esd actively. There are some audio card drivers that OSS provides and ALSA doesn't. I haven't used NAS at all, and the winecfg delay annoys me too.

Regards
Vit
Re: Removal of unused audio drivers
On Thu, 12 Apr 2007, Maarten Lankhorst wrote: [...] I'm not sure what nas is for, but it seems to be 'network audio system', I haven't seen any use for it, except that it causes a 30 seconds slowdown at showing 'audio' tab in winecfg. I don't think anyone uses it. For esd I think it's best to remove, for nas I'm also for remove, but I'll settle for removing it from winecfg list same way as winearts was disabled for a while. NAS is used to get sound on X terminals. It would be interesting to get input from the LTSP and thin-client crowd before concluding it can be removed. -- Francois Gouget [EMAIL PROTECTED] http://fgouget.free.fr/ Before you criticize someone, walk a mile in his shoes. That way, if he gets angry, he'll be a mile away - and barefoot.
Re: Removal of unused audio drivers
On Thu, Apr 12, 2007 at 01:01:43PM +0200, Vit Hrachovy wrote:
> Maarten Lankhorst wrote:
> > There are 5 different audio drivers for linux [...] What are your thoughts about this?
>
> Hi Maarten, I'm using esd actively. There are some audiocard drivers OSS provide and ALSA don't. I haven't used NAS at all and the winecfg delay annoys me too.

What has esound (esd) to do with OSS?

Ciao, Marcus
Re: Removal of unused audio drivers
On Thu, Apr 12, 2007 at 10:02:31AM +0200, Maarten Lankhorst wrote:
> There are 5 different audio drivers for linux, I think this is a bit overkill, so I propose to remove the esd and nas drivers [...] What are your thoughts about this?

Why do you compile the drivers in, or why do you have the files around for them? My audio dialog just shows oss and alsa; the others get dropped at compile time. Also there might be other platforms that have to use those audio drivers - why drop something that works?

-- cu
Re: Removal of unused audio drivers
On Thu, Apr 12, 2007 at 01:54:17PM +0200, Marcus Meissner wrote:
> On Thu, Apr 12, 2007 at 01:01:43PM +0200, Vit Hrachovy wrote:
> > I'm using esd actively. There are some audiocard drivers OSS provide and ALSA don't. I haven't used NAS at all and the winecfg delay annoys me too.
>
> What has esound (esd) to do with OSS?

AFAIK OSS doesn't provide SW mixing; ESD does. Given a situation where I have an OSS-only capable audio card and I want to use SW mixing, ESD over OSS is an option for me to hear mixed audio streams from several applications (an mp3 in the background and sounds from a game, for example). Correct me if I'm wrong :)

Regards
Vit
Re: Removal of unused audio drivers
On 4/12/07, Vit Hrachovy [EMAIL PROTECTED] wrote:
> Maarten Lankhorst wrote:
> > There are 5 different audio drivers for linux [...] What are your thoughts about this?

Keep them all and fix the real problems.

> Hi Maarten, I'm using esd actively. There are some audiocard drivers OSS provide and ALSA don't. I haven't used NAS at all and the winecfg delay annoys me too.

The slowdown can be reduced by probing for sound simultaneously in multiple threads, and eliminated entirely by populating the list asynchronously from other threads. Want a patch?

Regards
Damjan
Re: Removal of unused audio drivers
Christoph Frick wrote:
> why do you compile the drivers in - or why do you have the files around for them? my audio dialog just shows oss and alsa; the others get dropped at compile time. also there might be other platforms that have to use those audio drivers - why drop something that works?

I was just proposing to remove them to see how many people would really need them. If they're used they won't be removed; if nobody objects, it means it's just one choice too many. From what I can tell there are still some people using esound, so I guess it's safe to keep it. Nas, however, doesn't seem to be really used, so I'm thinking of either removing it or disabling it in winecfg. Most distributions build wine with all drivers, and I personally think it's not wise to have a 30-second delay in winecfg.

Maarten
Re: locales, unicode and ansi with msvcrt (bug 8022)
Jason Edmeades [EMAIL PROTECTED] wrote:
> > Apparently you need to use appropriate console output APIs directly (that take into account the console input/output code page) instead of using MSVCRT APIs.
>
> Unfortunately just using the wide console function will only help the output to the screen, but as my test program shows there is the same discrepancy when the output is to a file handle...

What is your test app doing? It probably needs a test under Windows to see in which encoding (ANSI/OEM) a non-Unicode app should receive input via a pipe.

-- Dmitry.
Re: Patches / a proposal for the mystic DIB engine
Alexandre Julliard wrote:
> > 2. Export LockDIBSection/Unlock to gdi32. Adding more exports is not nice but there really is no way around that, right?
> No, LockDIBSection is a driver internal detail, gdi32 has no business knowing about this.

In my code the call to LockDIBSection serves two purposes: to lock the DIB section (duh) and to query whether we are in AppMod or GdiMod. The former could be done by triggering an exception and letting the driver catch it, but I'd think that's rather expensive. The latter can't be done at all if this is a driver-internal detail, so we'd have to always use the DIBDRV implementation when it is there. This would most likely mean some pretty serious performance regressions until we implemented all used functions (in which case we could say the export is only temporary). Since we'll at some point probably have all functions implemented, the DIB code in the driver will be dead code and it would have to be removed anyway. So I don't see the point of trying to keep a hack clean.

And another issue: there are cases where we really want the GDI operation to be done server-side, regardless of whether we have a client-side implementation or not (http://www.winehq.com/pipermail/wine-devel/2007-February/053993.html - I'd agree with that). So I really think we need that export.

> > 3. Move dc->funcs to dc->physDev->funcs. Many changes but mostly mechanical work. Rationale: This really belongs there. And I need it. :)
> No it doesn't, physDev is an opaque structure as far as gdi32 is concerned. Data needed by gdi32 belongs in the DC structure.

Well, yes, it's an opaque structure. With the function pointers added to physDev it would no longer be an opaque structure. So? We'd add a pointer to physDev pointing to an opaque structure of course. My point is that the addresses of the driver functions are not data needed by gdi32 in the strictest sense. Those addresses are defined by the driver and the functions live in the driver, so I think it makes sense if the driver would initialize the table, not gdi32. Doing this via GetProcAddress() in gdi32/driver.c is not a very clean design. Really.

> You certainly don't want to store the full function table in the BITMAPOBJ, it will be the same for all bitmaps. All you need is one

Will it be the same? I'm not sure if there are situations where an application has multiple drivers providing memory DCs loaded.

> function table for the DIB driver and one for the normal graphics driver. Forwarding to the graphics driver can be done privately in the DIB driver, gdi32 doesn't need to know about it. And you probably

Yes, it's possible. But we'd probably have to write a wrapper for every GDI operation. It wouldn't be hard - but it's a lot of code which can be easily avoided by just 10 lines outside of dibdrv (in gdi32/bitmap.c). I don't really see the point.

> want a separate physDev pointer for it, you'll need to maintain state for DIBs too.

Well, we have a couple of DIB-dependent members in BITMAPOBJ - I'd put a pointer to this state there. I admit this isn't really clean though.

Felix
Re: Removal of unused audio drivers
> > why do you compile the drivers in - or why do you have the files around for them? my audio dialog just shows oss and alsa; the others get dropped at compile time. also there might be other platforms that have to use those audio drivers - why drop something that works?
>
> I was just proposing to remove it to see how many people would really need it. If it's used it won't be removed, if nobody objects it means it's just a choice too many.

Dear thin X Window clients,
NAS driver removal plans were discussed by wine developers on Alpha Centauri for 50 years.
Yours, Prostetnic Vogon Jeltz

:) You will like decisions based on statistics only when you are part of the majority. Debian and Fedora have nas and esd support in separate packages; libwine-nas is not required for a wine installation on Debian Etch.

-- Tomas
Re: Add Windows Vista option to winecfg
Kovács András wrote:
> This patch is needed, because DirectX 10 is a Windows Vista-only feature.

Nothing wrong with adding Vista to the list, but how is this needed? You don't want to disallow usage of dx10 unless Vista is selected, do you?

Felix
Re: Add Windows Vista option to winecfg
On 12.04.2007 15:25, Felix Nawothnig wrote:
> Kovács András wrote:
> > This patch is needed, because DirectX 10 is a Windows Vista-only feature.
> Nothing wrong with adding Vista to the list, but how is this needed? You don't want to disallow usage of dx10 unless Vista is selected, do you?

Apps might do something like

    if (WindowsVersion >= WindowsVista)
        UseDX10();
    else
        UseDX9();

so a Vista version would make those apps take the DX10 path.

-f.r.
RE: Windows dll as Linux Shared Object
> What is the problem with running the whole app as a winelib app? You don't save any resources by having the application outside, since you still have wine running, and you have the IPC overhead.

I want to provide a library other developers can use without having them worry about the intricacies of exec'ing a winelib application and the IPC. I will take care of all that in my wrapper shared object.

Phil
RE: Patches / a proposal for the mystic DIB engine
Felix Nawothnig [mailto:[EMAIL PROTECTED]] wrote:
> Okay, I've spent the last days looking into this matter and I'd like to suggest a way to get it started. So. This is the plan:
>
> 1. In winex11.drv:
> -INT X11DRV_LockDIBSection(X11DRV_PDEVICE *physDev, INT req, BOOL lossy)
> +HBITMAP X11DRV_LockDIBSection(X11DRV_PDEVICE *physDev, INT req, BOOL force)
>
> Lossy isn't used anywhere for LockDIBSection anyway. Force means that the function will only lock if the DIB is already in AppMod. The returned bitmap is of course physDev->bitmap->hbitmap. Rationale: If you lock the DIB to write on it, it makes sense that the function actually provides you with that DIB.

I actually had a bit of a different idea but didn't get around to trying it yet. I was thinking about adding an extra value to the DIB section sync state, such as DIB_Status_Conf. With this, X11DRV_DIB_Coerce would decide based on a configuration setting whether it should do DIB_Status_GdiMod (DIBDRV_NEVER in point 5), DIB_Status_AppMod (DIBDRV_ALWAYS), or DIB_Status_AppMod only if the DIB is already in DIB_Status_AppMod (DIBDRV_MIXED). The return value would somehow indicate to that driver function whether it should simply return with a failure, leaving the rest to GDI32 to deal with, or do the actual work for the time being as it does now. GDI32 would, on driver failure (or a non-existing driver function), invoke the corresponding DIBDRV function for non-meta DCs. A non-existing driver function would still work thanks to the exception handling for DIBs being accessed in application mode, although that is not the preferred way to deal with this until the DIB engine is fully functional (at which point the DIB handling could consequently be removed from x11drv entirely). This would reduce the modifications to x11drv to a minimum, and GDI32 wouldn't need many changes at all either. Adding a new DIBDRV function would be a change in GDI32 to add the DIBDRV call on failure of the driver function, and then switching the DIB_Status value in that driver function to DIB_Status_Conf instead of DIB_Status_GdiMod.

> 2. Export LockDIBSection/Unlock to gdi32. Adding more exports is not nice but there really is no way around that, right?

That and a lot of the other points wouldn't be necessary with the above approach. But there might be another complication with this I haven't seen yet.

Rolf Kalbermatter
Re: Windows dll as Linux Shared Object
On Thursday, 12 April 2007 15:57, Phil Lodwick wrote:
> > What is the problem with running the whole app as a winelib app? You don't save any resources by having the application outside since you still have wine running, and you have the IPC overhead.
>
> I want to provide a library other developers can use without having them worry about the intricacies of exec'ing a winelib application and the IPC. I will take care of all that in my wrapper shared object.

Fair enough :-)
Re: Wine vs. Cedega in Benchmarks
On 4/12/07, Stefan Dösinger [EMAIL PROTECTED] wrote:
> But I agree with Henri that a Direct3D performance comparison will be much more interesting.

Well, we're all three in agreement, so I believe a well-rounded benchmark review is in order ;)

Some test software:

Disk I/O / Memory:
  Performance Mark 5.0
  PCMark 04

D3D:
  Aquamark 3
  3DMark 2000
  3DMark 2001SE
  3DMark 2003
  3DMark 2005
  3DMark 2006

OpenGL:
  Dronezmark
  GLExcess

Suggestions, anyone?

-- Tom Wickline
Respectable computing - Linux/FOSS
Re: Wine vs. Cedega in Benchmarks
Tom Wickline wrote:
> On 4/12/07, Stefan Dösinger [EMAIL PROTECTED] wrote:
> > But I agree with Henri that a Direct3D performance comparison will be much more interesting.
>
> Well, we're all three in agreement, so I believe a well-rounded benchmark review is in order ;)
>
> Some test software:
> Disk I/O / Memory: Performance Mark 5.0, PCMark 04
> D3D: Aquamark 3, 3DMark 2000, 3DMark 2001SE, 3DMark 2003, 3DMark 2005, 3DMark 2006

Nvidia SDK D3D Demos?

> OpenGL: Dronezmark, GLExcess

Nvidia SDK OpenGL Demos?

> Suggestions, anyone?

I have Cedega 6 installed, but I can't run 3DMark 2001, 2003 or 2006, so I can't compare performance. Cedega 6 only supports Pixel and Vertex Shaders 2.0!

Mirek
Re: Wine vs. Cedega in Benchmarks
On Thursday, 12 April 2007 16:22, Tom Wickline wrote:
> On 4/12/07, Stefan Dösinger [EMAIL PROTECTED] wrote:
> > But I agree with Henri that a Direct3D performance comparison will be much more interesting.
>
> Well, we're all three in agreement, so I believe a well-rounded benchmark review is in order ;)
>
> Some test software:
> Disk I/O / Memory: Performance Mark 5.0, PCMark 04
> D3D: Aquamark 3, 3DMark 2000, 3DMark 2001SE, 3DMark 2003, 3DMark 2005, 3DMark 2006
> OpenGL: Dronezmark, GLExcess
>
> Suggestions, anyone?

I guess some hl2 timedemos may be good too.
Re: Wine vs. Cedega in Benchmarks
> Nvidia SDK D3D Demos?
> Nvidia SDK OpenGL Demos?

I don't think SDK demos are good benchmarks for overall performance. They can find bottlenecks, but not predict how good something is for games. They could show which features work in Wine / Cedega, but I think that would be unfair to Cedega, because I assume that Transgaming is putting its efforts into real games, not SDK demos. And in the end, the average user plays games instead of running SDK demos all day. So if Cedega fails in all the SDK demos, that doesn't make it any worse for users.
Re: Wine vs. Cedega in Benchmarks
On 4/12/07, Stefan Dösinger [EMAIL PROTECTED] wrote:
> I don't think SDK demos are good benchmarks for overall performance. They can find bottlenecks, but not predict how good something is for games. They could show which features work in Wine / Cedega, but I think that would be unfair to Cedega

Unfair? Have you ever read their propaganda newsletter? Of all the features they support.. of just how green the grass is over there ;)

I'll download and install everything that I can from here:
http://http.download.nvidia.com/developer/SDK/Individual_Samples/samples.html
and put the results in a table as to whether each sample works or not.

-- Tom Wickline
Respectable computing - Linux/FOSS
Re: [5/5] D3D9: Add a test for the converted vertex decl
Looks good, but this comment is misleading - s/second/first:

+    /* The contents should correspond to the second conversion */
+    VDECL_CHECK(compare_elements(result_decl1, test_elements1));

Also, I thought when Henri was testing this that the object kept changing between sequential Get() calls? Did you mention something about a driver bug causing this?

Ivan
Re: Wine vs. Cedega in Benchmarks
On 4/12/07, Stefan Dösinger [EMAIL PROTECTED] wrote: I guess some hl2 timedemos may be good too Was this what you had in mind? : http://www.hocbench.com/hl2.html -- Tom Wickline Respectable computing - Linux/FOSS
Re: Questions about using native vs implementing our own
Well, the patch definitely works, so if it hasn't been submitted, please do so. There are still crashes in random spots, but that is documented on Paul's wiki page.

Tom

On 4/11/07, Tom Spear [EMAIL PROTECTED] wrote:
> I just tried the patch and it fails, but I copied and pasted the code into the correct files and am recompiling wine now. We will see just how far we can get with the stubs..
>
> Tom
>
> P.S. did this get submitted to wine-patches today as well?
>
> On 4/11/07, Eric Pouech [EMAIL PROTECTED] wrote:
> > Tom Spear wrote:
> > > Hi again all, before I go and file another needless bug, I thought I would ask for opinions. I decided to try to run Process Explorer today with wine. When I first ran it, I got a dialog about a missing function. So I looked back through the traces, and it was because we were missing acledit.dll. So I imported that from my Windows XP install, and got the dialog again. It turned out I was also missing netui0.dll, netui1.dll, and netui2.dll, and those in turn needed netrap.dll and samlib.dll. Once I got all of those imported from XP, Process Explorer now runs beautifully. I looked at the version information, and here is the description of each dll:
> > >   acledit is an access control list editor
> > >   netui0 is NT LM UI Common Code - GUI Classes
> > >   netui1 is NT LM UI Common Code - Networking classes
> > >   netui2 is NT LM UI Common Code - GUI Classes
> > >   netrap is Net Remote Admin Protocol DLL
> > >   samlib is SAM Library DLL
> > > I assume SAM is the Security Accounts Manager service, so that last dll would go for that and most likely would never be implemented in wine. But how about the others? Is doing one of these something possibly feasible for a SoC project? I'm sure that there are other projects that use these dlls as well, but I don't know of them. My biggest question is: when is it appropriate for us to build our own DLLs vs just saying to use native? I would personally like to at least see the NTLM stuff get built, since I know one of the developers is working on NTLM right now. Also, should I file a bug for Process Explorer needing native dlls, or should I maybe file a bug to build our own versions of these dlls, OR should I just leave it alone altogether? I am creating an AppDB page for the program now. Does anyone object to me putting notes about which native dlls are needed on that page?
> >
> > don't remember why I didn't send it earlier :-/ it seems it might be useful
> > A+
> > --
> > Eric Pouech
> > The problem with designing something completely foolproof is to underestimate the ingenuity of a complete idiot. (Douglas Adams)
> >
> > [AclEdit]: stubbed out acledit DLL (new in XP)
> > From: Eric Pouech [EMAIL PROTECTED]
> > - needed by Sysinternals' process explorer
> > ---
> >  Makefile.in               |    2 ++
> >  configure                 |    3 +++
> >  configure.ac              |    1 +
> >  dlls/Makefile.in          |    5
> >  dlls/acledit/Makefile.in  |   14
> >  dlls/acledit/acledit.spec |    8 +++
> >  dlls/acledit/main.c       |   52 +
> >  7 files changed, 85 insertions(+), 0 deletions(-)
> >
> > diff --git a/Makefile.in b/Makefile.in
> > index 7fa8ef0..541ff0d 100644
> > --- a/Makefile.in
> > +++ b/Makefile.in
> > @@ -157,6 +157,7 @@ ALL_MAKEFILES = \
> >   dlls/Maketest.rules \
> >   programs/Makeprog.rules \
> >   dlls/Makefile \
> > + dlls/acledit/Makefile \
> >   dlls/activeds/Makefile \
> >   dlls/advapi32/Makefile \
> >   dlls/advapi32/tests/Makefile \
> > @@ -498,6 +499,7 @@
> >  programs/Makeprog.rules: programs/Makeprog.rules.in Make.rules
> >  Makefile: Makefile.in Make.rules
> >  dlls/Makefile: dlls/Makefile.in Make.rules
> > +dlls/acledit/Makefile: dlls/acledit/Makefile.in dlls/Makedll.rules
> >  dlls/activeds/Makefile: dlls/activeds/Makefile.in dlls/Makedll.rules
> >  dlls/advapi32/Makefile: dlls/advapi32/Makefile.in dlls/Makedll.rules
> >  dlls/advapi32/tests/Makefile: dlls/advapi32/tests/Makefile.in dlls/Maketest.rules
> > diff --git a/configure b/configure
> > index e240093..4fd1069 100755
> > --- a/configure
> > +++ b/configure
> > @@ -20202,6 +20202,8 @@
> >  ac_config_files="$ac_config_files Makefile"
> >  ac_config_files="$ac_config_files dlls/Makefile"
> > +ac_config_files="$ac_config_files dlls/acledit/Makefile"
> > +
> >  ac_config_files="$ac_config_files dlls/activeds/Makefile"
> >  ac_config_files="$ac_config_files dlls/advapi32/Makefile"
> > @@ -21421,6 +21423,7 @@ do
> >  programs/Makeprog.rules) CONFIG_FILES="$CONFIG_FILES programs/Makeprog.rules" ;;
> >  Makefile) CONFIG_FILES="$CONFIG_FILES Makefile" ;;
> >  dlls/Makefile) CONFIG_FILES="$CONFIG_FILES dlls/Makefile" ;;
> > +dlls/acledit/Makefile) CONFIG_FILES="$CONFIG_FILES dlls/acledit/Makefile" ;;
> >  dlls/activeds/Makefile) CONFIG_FILES="$CONFIG_FILES dlls/activeds/Makefile" ;;
> >  dlls/advapi32/Makefile) CONFIG_FILES="$CONFIG_FILES dlls/advapi32/Makefile" ;;
> >  dlls/advapi32/tests/Makefile) CONFIG_FILES=$CONFIG_FILES
Re: Removal of unused audio drivers
Marcus Meissner wrote:
Hi Maarten, I'm using esd actively. There are some audio-card drivers that OSS provides and ALSA doesn't. I haven't used NAS at all, and the winecfg delay annoys me too.
Regards, Vit

What has esound (esd) to do with OSS?

If you have a soundcard that only OSS supports, you can't use ALSA's network sound, so you use esd instead. AFAIK.

// Jakob
Re: Wine vs. Cedega in Benchmarks
On 4/12/07, Tom Wickline [EMAIL PROTECTED] wrote: Performance Mark 5.0

Let's kick in 6.1 as well:
http://wiki.winehq.org/BenchMark-0.9.6?action=AttachFile&do=get&target=PerformanceTest6.1.png

--
Tom Wickline
Respectable computing - Linux/FOSS
Re: Add Windows Vista option to winecfg
On 12/04/07, H. Verbeet [EMAIL PROTECTED] wrote: On 12/04/07, Kovács András [EMAIL PROTECTED] wrote: This patch is needed because DirectX 10 is a Windows Vista-only feature.

You'll need to change more than that; e.g. see my attachment to bug 7558. Note that I only guessed the values used in that patch, to make the DX SDK installer happy. They would still have to be verified to be correct on an actual Vista install.

Nevermind... I just saw the other patch.
Re: wined3d: Mark vertex shader 3.0 as foggy shaders if they write out the fog coord
Fabian Bieler wrote:
Vertex shaders are marked as 'foggy shaders' in wined3d if they write out the fog coord. Previously this was not done for 3.0 vertex shaders. This patch corrects this problem.

Please don't do that - the design is flawed enough as it is (GLSL being invoked from vertexshader and such..). The reg_maps was meant to be:
- computed in shader pass 1 [ register tracking ]
- used in shader pass 2 [ code generation ]

Here you're doing this:
- computing reg_maps at the very end of shader pass 2 [ unpack stage ]
- using this right afterwards in the calling function to set yet another flag (This->usesFog)

I'm not sure what needs to be done instead, since it all looks so broken...

- I would say this type of analysis about shader usage needs to go into pass 1 (baseshader), since it is backend independent. It's kind of vertex-shader specific, but we're trying to merge vertex and pixel shaders, and remove code from these two files, not add more of it.

- The usesFog flag looks like it's trying to persist information about register usage after the shader has been compiled, for optimization purposes... except that this information is already persisted - if you look in baseshader, you will see the entire reg_maps structure is kept in there, exactly for that purpose [ was done for software shaders actually, but that never happened ].

- Also, other code outside the shaders should not be accessing these flags directly - there should be a function which looks inside the reg_maps, and code from other files should call that function (we are emulating OOP programming in C, so we should try to use encapsulation).

Ivan
Re: wined3d: Mark vertex shader 3.0 as foggy shaders if they write out the fog coord
Here is how this was meant to work at some point [ since I see all kinds of incorrect things being done ]:

Entry point                        Code generation
===========                        ===============
Pixelshader  --\                  /-- ARB backend
                -- Baseshader ----
Vertexshader --/                  \-- GLSL backend

Code should be moving towards the center from both directions.

- Vertex and pixel shader: should contain the minimum amount of things.
- Baseshader: should contain anything backend- [ and frontend- ] independent. I think it's acceptable to put some things that are frontend-dependent in baseshader, as long as they can be written in a generic way, without tons of if (pshader) do_x else if (vshader) do_y statements.

- Pass 0: Tracing: 100% done in baseshader.
- Pass 1: Register tracking: 100% done in baseshader.
- Pass 2: Backend-independent code generation: in baseshader [ this includes the opcode loop, and all parsing of the shader asm ].
- Pass 2: Backend-dependent code generation: in the ARB or GLSL files [ ideally should work with pre-processed shader asm and broken-out tokens - a sort of an intermediate representation, if you prefer ].
Re: wined3d: Mark vertex shader 3.0 as foggy shaders if they write out the fog coord
On 12/04/07, Ivan Gyurdiev [EMAIL PROTECTED] wrote: I'm not sure what needs to be done instead, since it all looks so broken... - I would say this type of analysis about shader usage needs to go into pass 1 (baseshader), since it is backend independent. It's kind of vertex-shader specific, but we're trying to merge vertex and pixel shaders, and remove code from these two files, not add more of it. It should probably go into shader_get_registers_used() in baseshader.c, where we fill semantics_out, around line 255.
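The pass-1 placement suggested above - record fog-coordinate writes while tracking registers, so code generation never has to rediscover it - can be sketched in plain C. All names here (track_register, REG_RASTOUT_FOG, struct reg_maps) are illustrative stand-ins, not Wine's actual identifiers:

```c
#include <assert.h>

/* Illustrative stand-ins for the register types a shader parser might see. */
enum reg_type { REG_TEMP, REG_RASTOUT_FOG, REG_OTHER };

/* Stand-in for wined3d's reg_maps: the pass-1 record of register usage. */
struct reg_maps {
    int fog; /* set if the shader writes the fog coordinate */
};

/* Called once per destination register during pass 1 (register tracking).
 * Pass 2 (code generation) then only reads the finished reg_maps. */
static void track_register(struct reg_maps *maps, enum reg_type dst)
{
    if (dst == REG_RASTOUT_FOG)
        maps->fog = 1; /* shader writes the fog coord: it is a 'foggy' shader */
}
```

The point of the sketch is only the ordering: the flag is computed before any backend-specific work starts, instead of at the end of the unpack stage.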
Re: rpcrt4: Implement RpcMgmtWaitServerListen
On Thu, Apr 12, 2007 at 11:47:05AM +0100, Robert Shearman wrote:
Dan Hipschman wrote:

@@ -94,7 +95,8 @@ struct connection_ops {
   RpcConnection *(*alloc)(void);
   RPC_STATUS (*open_connection_client)(RpcConnection *conn);
   RPC_STATUS (*handoff)(RpcConnection *old_conn, RpcConnection *new_conn);
-  int (*read)(RpcConnection *conn, void *buffer, unsigned int len);
+  int (*read)(RpcConnection *conn, void *buffer, unsigned int len, BOOL check_stop_event);
+  int (*signal_to_stop)(RpcConnection *conn);
   int (*write)(RpcConnection *conn, const void *buffer, unsigned int len);
   int (*close)(RpcConnection *conn);
   size_t (*get_top_of_tower)(unsigned char *tower_data, const char *networkaddr, const char *endpoint);

Hmm, I'm not sure it needs to be this complicated.

I'm not sure it does, either, but it's the simplest thing I can think of at the moment. Each connection basically just needs to wait for one of two events: (1) an incoming RPC, or (2) a shutdown request. Since the existing code blocks on a read waiting for an incoming packet, and hence can't poll some global state variable or something, the simplest thing I could think of was to just create an event for shutdowns and multiplex them. If this were pure POSIX, I might consider using signals. I could also try making the read time out and check a state variable every once in a while, but that just seemed sloppy. The patch I went with doesn't add all that much code, so it seemed reasonably simple to me, but maybe I'm missing something better.

One thing I don't particularly like about the solution is that it creates an event / pipe for each connection, when it could probably create a single event / pipe for each protocol, but that would add synchronization complexity, so I opted for the simpler, less efficient code. I can go either way, though.
   HeapFree(GetProcessHeap(), 0, msg);
 }

-static DWORD CALLBACK RPCRT4_worker_thread(LPVOID the_arg)
-{
-  RpcPacket *pkt = the_arg;
-  RPCRT4_process_packet(pkt->conn, pkt->hdr, pkt->msg);
-  HeapFree(GetProcessHeap(), 0, pkt);
-  return 0;
-}
-
 static DWORD CALLBACK RPCRT4_io_thread(LPVOID the_arg)
 {
   RpcConnection* conn = (RpcConnection*)the_arg;
@@ -319,10 +322,14 @@ static DWORD CALLBACK RPCRT4_io_thread(LPVOID the_arg)
   RpcBinding *pbind;
   RPC_MESSAGE *msg;
   RPC_STATUS status;
-  RpcPacket *packet;

   TRACE("(%p)\n", conn);

+  EnterCriticalSection(&client_connections_cs);
+  list_add_head(&client_connections, &conn->client_entry);
+  ResetEvent(clients_completed_event);
+  LeaveCriticalSection(&client_connections_cs);
+
   for (;;) {
     msg = HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, sizeof(RPC_MESSAGE));
@@ -338,17 +345,17 @@ static DWORD CALLBACK RPCRT4_io_thread(LPVOID the_arg)
       break;
     }

-#if 0
     RPCRT4_process_packet(conn, hdr, msg);
-#else
-    packet = HeapAlloc(GetProcessHeap(), 0, sizeof(RpcPacket));
-    packet->conn = conn;
-    packet->hdr = hdr;
-    packet->msg = msg;
-    QueueUserWorkItem(RPCRT4_worker_thread, packet, WT_EXECUTELONGFUNCTION);
-#endif
-    msg = NULL;
   }
+
+  EnterCriticalSection(&client_connections_cs);
+  list_remove(&conn->client_entry);
+  if (list_empty(&client_connections)) {
+    TRACE("last in the list to complete (%p)\n", conn);
+    SetEvent(clients_completed_event);
+  }
+  LeaveCriticalSection(&client_connections_cs);
+
   RPCRT4_DestroyConnection(conn);
   return 0;
 }

I'm not sure about your reasoning for doing this. If I'm not mistaken, this change makes it so that only one RPC call at a time is processed.

I took out the thread pool stuff because it just makes it harder to wait for all the RPCs to complete. As far as I can tell, the control flow goes like this: (1) a server thread (one per protocol / port) accepts connections from clients; (2) when the server thread gets a connection, it creates an I/O thread and goes back to listening; (3) the I/O thread waits for RPC packets and processes them.
It should still allow concurrent processing of RPCs from different client connections, although it would only allow one RPC per client connection to be processed at a time. If the client had multiple threads each making RPCs to the same server over the same connection, then, yeah, this could degrade performance. It might not be hard to put the thread pool back in; I'll see what I can do.

I'll try resubmitting this again in a bit, as I've tweaked the original a little and added more tests, but if you or Alexandre still have some fundamental beef with it, I can try doing things totally differently. Thanks for the criticism.

Dan
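As an aside, the pure-POSIX shape of "block until either data arrives or a shutdown is requested" that Dan describes is usually a select() over the connection descriptor and a stop pipe. A minimal sketch - the names are invented, and this is deliberately not the Win32 event-based code from the patch:

```c
#include <assert.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

/* Wait for either payload data on conn_fd or a shutdown signal on stop_fd.
 * Returns bytes read, 0 on EOF, or -1 if the stop pipe fired (or on error). */
static ssize_t read_or_stop(int conn_fd, int stop_fd, void *buf, size_t len)
{
    fd_set fds;
    int maxfd = conn_fd > stop_fd ? conn_fd : stop_fd;

    FD_ZERO(&fds);
    FD_SET(conn_fd, &fds);
    FD_SET(stop_fd, &fds);

    if (select(maxfd + 1, &fds, NULL, NULL, NULL) < 0)
        return -1;
    if (FD_ISSET(stop_fd, &fds)) /* shutdown requested: abandon the read */
        return -1;
    return read(conn_fd, buf, len);
}
```

With a single stop pipe shared per protocol (as Dan suggests would be possible), every blocked reader wakes at once when the server writes one byte to it - at the cost of the extra synchronization he mentions.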
Re: Wine vs. Cedega in Benchmarks
On Thursday, 12 April 2007 at 16:57, Tom Wickline wrote: On 4/12/07, Stefan Dösinger [EMAIL PROTECTED] wrote: Nvidia SDK D3D demos? Nvidia SDK OpenGL demos?

I don't think SDK demos are good benchmarks for overall performance. They can find bottlenecks, but not predict how good something is for games. They could show which features work in Wine / Cedega.

Unfair! Have you ever read their propaganda newsletter? Of all the features they support.. of just how green the grass is over there ;)

Does that mean we have to do the same? They claim the grass is green as far as games are concerned. They do not sell Cedega as a tool to run SDK demos. Otherwise we might as well say Wine is better for gaming because it runs Microsoft Office 2003 and thus VBA games will work. We should only compare the functionality Transgaming advertises, in my opinion.
RE: locales, unicode and ansi with msvcrt (bug 8022)
Apparently you need to use the appropriate console output APIs directly (which take into account the console input/output code page) instead of using MSVCRT APIs.

Unfortunately, just using the wide console functions will only help the output to the screen; as my test program shows, there is the same discrepancy when the output is to a file handle...

What is your test app doing? It probably needs a test under Windows to see in which encoding (ANSI/OEM) a non-Unicode app should receive input via a pipe.

From the original mail:

// File i/o has same problem? (Windows - narrow, Wine - includes nulls)
WCHAR buffer[] = L"Hello Jason\n";
f = _wfopen(L"test", L"w+t");
fwprintf(f, buffer);
fclose(f);

So basically inside msvcrt we know we have a Unicode string to output from wprintf (and friends), but what conversion occurs before physically outputting it - is it just a straight conversion to the console codepage, perhaps?

Jason
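What a "straight conversion to the console codepage" would amount to can be sketched portably with wcstombs, which here plays the role WideCharToMultiByte would on Windows. To be clear, this is only a stand-in for the conversion under discussion - which conversion msvcrt actually performs is exactly the open question:

```c
#include <assert.h>
#include <locale.h>
#include <stdlib.h>
#include <string.h>
#include <wchar.h>

/* Narrow a wide string using the current locale's codepage - a portable
 * stand-in for the ANSI/OEM conversion being discussed (on Windows this
 * would be WideCharToMultiByte with the console CP). */
static size_t narrow(const wchar_t *src, char *dst, size_t dstlen)
{
    return wcstombs(dst, src, dstlen);
}
```

If msvcrt's text-mode wide output were a plain per-string narrowing like this, a file written via fwprintf would contain single-byte characters, not the embedded NULs the test observed under Wine.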
Re: mshtml #1: Change TRACE to FIXME in stubs.
Jacek Caban wrote:

 dlls/mshtml/htmlbody.c | 70
 1 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/dlls/mshtml/htmlbody.c b/dlls/mshtml/htmlbody.c
index f8be550..542c989 100644
--- a/dlls/mshtml/htmlbody.c
+++ b/dlls/mshtml/htmlbody.c
@@ -138,245 +138,245 @@ static HRESULT WINAPI HTMLBodyElement_Invoke(IHTMLBodyElement *iface, DISPID dis
 static HRESULT WINAPI HTMLBodyElement_put_background(IHTMLBodyElement *iface, BSTR v)
 {
     HTMLBodyElement *This = HTMLBODY_THIS(iface);
-    TRACE("(%p)->(%s)\n", This, debugstr_w(v));
+    FIXME("(%p)->(%s)\n", This, debugstr_w(v));

It should then include "Stub" in the FIXME message so people know why they get those.

bye
michael
--
Michael Stefaniuc        Tel.: +49-711-96437-199
Sr. Network Engineer     Fax.: +49-711-96437-111
Red Hat GmbH             Email: [EMAIL PROTECTED]
Hauptstaetterstr. 58     http://www.redhat.de/
D-70178 Stuttgart
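Michael's suggestion in practice: a stub's FIXME should say it is a stub. With Wine's FIXME macro mocked up as a plain snprintf for illustration (the real macro lives behind wine/debug.h and writes to the debug channel), a self-describing stub looks like this:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Mock of Wine's FIXME macro: just captures the message so the example is
 * self-contained. The real one prints "fixme:channel:func ..." to stderr. */
static char fixme_buf[256];
#define FIXME(...) snprintf(fixme_buf, sizeof(fixme_buf), __VA_ARGS__)

/* Hypothetical stub, modeled on the put_background method in the patch. */
static long put_background_stub(void *iface, const char *v)
{
    FIXME("(%p)->(%s): stub\n", iface, v); /* ": stub" tells users why they see this */
    return 0;
}
```

A user grepping the terminal output then immediately knows the message comes from an unimplemented method rather than a real failure.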
Re: [5/5] D3D9: Add a test for the converted vertex decl
On Thursday, 12 April 2007 at 16:59, Ivan Gyurdiev wrote:
Looks good, but this comment is misleading - s/second/first:

+/* The contents should correspond to the second conversion */
+VDECL_CHECK(compare_elements(result_decl1, test_elements1));

Also, I thought when Henri was testing this that the object kept changing between sequential Get() calls? Did you mention something about a driver bug causing this?

Indeed, the comment is broken copy-pasting; I'll fix that. I did not see the declaration change between the GetVertexDeclaration() / SetFVF() calls, but I'll re-add my temporary check for that (if it is not in already).
Re: ole32: Void functions should not return a value
Andrew Talbot wrote:

diff -urN a/dlls/ole32/rpc.c b/dlls/ole32/rpc.c
--- a/dlls/ole32/rpc.c	2007-03-28 12:43:32.0 +0100
+++ b/dlls/ole32/rpc.c	2007-04-12 20:20:48.0 +0100
@@ -1352,7 +1352,7 @@
     TRACE("ipid = %s, iMethod = %d\n", debugstr_guid(ipid), msg->ProcNum);

     params = HeapAlloc(GetProcessHeap(), 0, sizeof(*params));
-    if (!params) return RpcRaiseException(E_OUTOFMEMORY);
+    if (!params) RpcRaiseException(E_OUTOFMEMORY);

     hr = ipid_get_dispatch_params(ipid, apt, &params->stub, &params->chan, &params->iid, &params->iface);
@@ -1360,7 +1360,7 @@
     {
         ERR("no apartment found for ipid %s\n", debugstr_guid(ipid));
         HeapFree(GetProcessHeap(), 0, params);
-        return RpcRaiseException(hr);
+        RpcRaiseException(hr);
     }

     params->msg = (RPCOLEMESSAGE *)msg;

You've changed the code paths here.

--
Rob Shearman
Re: ole32: Void functions should not return a value
Robert Shearman wrote:
Andrew Talbot wrote:
[patch snipped]
You've changed the code paths here.

Hi Rob,

I'm not quite sure what you mean. Are you implying that I need return statements after the RpcRaiseException() calls? Can one not just rely on the fact that RpcRaiseException() does not return to the caller?

--
Andy.
RE: locales, unicode and ansi with msvcrt (bug 8022)
What your test app is doing? It probably needs a test under Windows to see in which encoding (ANSI/OEM) a not unicode app should receive input via a pipe. Sorry, just realized I had not addressed your last comment - Can you expand on this sample test please and I'll do some experimenting. How do you mean a pipe (just stdin?) and what would this be showing beyond the straight file i/o case (My original test does straight writes to the console, and to a file). Also, for example, any suggestions on how to tell OEM from ANSI, and on telling the codepage used for the conversions. Thanks Jason
locales, unicode and ansi with msvcrt (bug 8022)
Just some thoughts in the hope they would help: there is a special utility called 'mode' in Windows. It can select the output locale:

mode con cp prepare ...
mode con cp select ...

I think it would solve many problems if a similar tool were implemented in Wine. [bug #8022 comment #7]

--
Kirill
Re: ole32: Void functions should not return a value
Andrew Talbot wrote:
Robert Shearman wrote:
[patch snipped]
You've changed the code paths here.

I'm not quite sure what you mean. Are you implying that I need return statements after the RpcRaiseException() calls? Can one not just rely on the fact that RpcRaiseException() does not return to the caller?

--
Andy.

You completely removed the return from the function at those two points, allowing it to fall through.

--
Brian Gerst
Re: ole32: Void functions should not return a value
On 4/12/07, Brian Gerst [EMAIL PROTECTED] wrote:
Andrew Talbot wrote:
[patch snipped]
Can one not just rely on the fact that RpcRaiseException() does not return to the caller?

You completely removed the return from the function at those two points, allowing it to fall through.

Read Andrew's last sentence. Technically no return is needed, but it can be added for aesthetics.

--
James Hawkins
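The point James makes can be demonstrated with a stand-in for RpcRaiseException: once the function genuinely never returns, the code after the call is unreachable whether or not a `return` precedes it. Here the exception is faked with longjmp, and all names are illustrative, not the rpcrt4 implementation:

```c
#include <assert.h>
#include <setjmp.h>

static jmp_buf exc_ctx;
static int fell_through = 0;

/* Stand-in for RpcRaiseException: transfers control and never comes back. */
static _Noreturn void raise_exception(int code)
{
    longjmp(exc_ctx, code);
}

/* Modeled on the dispatch function in the patch under discussion. */
static void dispatch(void *params)
{
    if (!params)
        raise_exception(1); /* no 'return' needed: control never reaches past here */
    fell_through = 1;       /* only runs when params != NULL */
}
```

So removing `return` before the call does not actually let execution fall through; the remaining question from the thread is purely stylistic (whether an explicit `return` documents the intent better).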
Re: [2/2] wined3d: Remove usesFog flag from IWineD3DVertexShaderImpl
Stefan Dösinger wrote:
Honestly, I do not really agree with getter methods like this inside WineD3D. Yes, they hide the implementation details, namely how the flag is stored. Yes, they encapsulate data, like the object-oriented programming model says. But honestly, how much use is it to do a function call just to read a value inside wined3d?

Well, your use of a redundant top-level flag does kind of remove the need for a getter - I only brought it up since I've moved the reg_maps structure around one too many times, and that's where the first flag is stored.

I don't argue that we should make wined3d internals visible outside wined3d. But inside wined3d, my personal preference is to just access the implementation structure directly.

The more complex and interconnected the codebase, the more it makes sense to encapsulate things. I think abstraction also benefits the less experienced developer. You make a good point that this level of detail should be internal to the shader to begin with, regardless of how it's accessed.

How much abstraction does such a function give us at the cost of performance?

You can always make the function inline (although that's also rather ugly).
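The compromise mentioned at the end - a getter that still compiles down to a direct load - is just a static inline accessor. A sketch with hypothetical names (these are not wined3d's real structures):

```c
#include <assert.h>

/* Hypothetical shader object; reg_maps is the pass-1 register-usage record. */
struct shader_reg_maps {
    unsigned fog : 1; /* shader writes the fog coordinate */
};

struct base_shader {
    struct shader_reg_maps reg_maps;
};

/* Encapsulated access: code outside the shader files never touches
 * reg_maps directly, yet after inlining the call costs nothing. */
static inline int shader_uses_fog(const struct base_shader *shader)
{
    return shader->reg_maps.fog;
}
```

This keeps Ivan's encapsulation argument intact while answering Stefan's performance objection: callers get one memory load either way, but the storage of the flag stays private to the shader code.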
Re: locales, unicode and ansi with msvcrt (bug 8022)
Jason Edmeades [EMAIL PROTECTED] wrote:
What is your test app doing? It probably needs a test under Windows to see in which encoding (ANSI/OEM) a non-Unicode app should receive input via a pipe.

Sorry, just realized I had not addressed your last comment - can you expand on this sample test please, and I'll do some experimenting. How do you mean a pipe (just stdin?), and what would this be showing beyond the straight file i/o case (my original test does straight writes to the console, and to a file)?

I meant things like 'dir > lst.txt' and 'dir | sort > lst.txt'. 'dir' and 'sort' could be replaced by some external .exes that get input and produce output.

Also, for example, any suggestions on how to tell OEM from ANSI, and on telling the codepage used for the conversions?

GetConsoleOutputCP

--
Dmitry.