http://bugs.winehq.org/show_bug.cgi?id=21515
Summary: VENDOR_WINE vs VENDOR_ATI with xf86-video-ati Product: Wine Version: unspecified Platform: x86-64 OS/Version: Linux Status: UNCONFIRMED Severity: normal Priority: P2 Component: directx-d3d AssignedTo: wine-bugs@winehq.org ReportedBy: cruiseoveride@gmail.com
I seem to have discovered a strange "problem" with wine and the open source radeon driver (xf86-video-ati) on my HD4870.
I installed 3dmark2001SE in a clean wineprefix and ran the benchmark.
When the application launched, I saw the following message on the console:

fixme:d3d_caps:wined3d_guess_vendor Received unrecognized GL_VENDOR "Advanced Micro Devices, Inc.". Returning VENDOR_WINE.

During the lobby demo, the people were invisible. You could see their guns, bullets, sunglasses, etc., but the actual people were not there.
I went into dlls/wined3d/directx.c and hard-coded VENDOR_MESA in wined3d_guess_vendor(), and ran 3dmark2001se again, but got the same result.
I went back, hard-coded VENDOR_ATI, and ran the benchmark again; now it was perfect. All the models in the lobby demo were visible.
I'm just a n00b here, so please don't get aggravated if I'm mistaken. I see two problems here:
1. Wine does not recognise the GL_VENDOR string of ATi cards when using the "radeon" (xf86-video-ati) open source driver.
2. Some of the capability rules pulled in with VENDOR_ATI should potentially also be applied under VENDOR_WINE and/or VENDOR_MESA.
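The detection problem in point 1 comes down to substring matching on the GL_VENDOR string. Below is a minimal sketch of the kind of matching wined3d_guess_vendor() performs (the enum values and matching rules are illustrative, not Wine's exact code), with the missing AMD string handled:

```c
#include <string.h>

enum wined3d_vendor { VENDOR_ATI, VENDOR_NVIDIA, VENDOR_INTEL, VENDOR_MESA, VENDOR_WINE };

/* Illustrative substring matching on GL_VENDOR. The "Advanced Micro
 * Devices, Inc." case is the one missing at the time of this report,
 * which made Wine fall through to VENDOR_WINE. */
static enum wined3d_vendor guess_vendor(const char *gl_vendor)
{
    if (strstr(gl_vendor, "ATI")
            || strstr(gl_vendor, "Advanced Micro Devices, Inc."))
        return VENDOR_ATI;
    if (strstr(gl_vendor, "NVIDIA"))
        return VENDOR_NVIDIA;
    if (strstr(gl_vendor, "Intel"))
        return VENDOR_INTEL;
    if (strstr(gl_vendor, "Mesa") || strstr(gl_vendor, "DRI R300 Project"))
        return VENDOR_MESA;
    return VENDOR_WINE;  /* unrecognized, as in the initial report */
}
```

Whether the AMD string should map to VENDOR_ATI or VENDOR_MESA is exactly the question debated in the comments; this sketch follows the reporter's working hack.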
I am using linux, drm, mesa and xf86-video-ati from git, with my HD4870 on Ubuntu-9.10.
Here is some GL info:
GL_VERSION: 2.0 Mesa 7.8-devel
GL_VENDOR: Advanced Micro Devices, Inc.
GL_RENDERER: Mesa DRI R600 (RV770 9440) 20090101 TCL DRI2
My WINE Version = wine-1.1.37-124-gd3bd40d
Dmitry Timoshkov dmitry@codeweavers.com changed:
What            |Removed      |Added
----------------------------------------------------------------------------
Version         |unspecified  |1.1.37
--- Comment #1 from Luca Bennati lucak3@gmail.com 2010-01-29 16:59:33 --- Created an attachment (id=25948) --> (http://bugs.winehq.org/attachment.cgi?id=25948) amd open driver recognition patch
So, does this trivial patch help? Please test with a clean git tree (revert your changes).
Correct me if I'm wrong, since I don't use ATI cards, but the radeon open source driver does depend on Mesa, right?
--- Comment #2 from cruiseoveride cruiseoveride@gmail.com 2010-01-29 18:29:12 --- I don't understand what that is supposed to achieve in this context. I already mentioned in my first comment that VENDOR_MESA has no effect.
The way I see it, recognition is only a small part of the problem. What Wine does with that information is more important, i.e. the D3D/GL capabilities associated with each recognised card.
ppanon@shaw.ca changed:
What            |Removed      |Added
----------------------------------------------------------------------------
CC              |             |ppanon@shaw.ca
--- Comment #3 from ppanon@shaw.ca 2010-01-30 03:19:43 --- I'm also running a similar environment: a recent git open source ATI/radeon driver with an HD3850, and recent git Mesa (7.8 alpha). Currently both are from the Ubuntu Xorg-edgers PPA (2010/01/27 snapshot for ATI, 2010/01/28 for Mesa).
I'm getting similar behaviour where VENDOR_WINE is being selected based on the GL_VENDOR of "Advanced Micro Devices, Inc." I haven't been able to get as far as cruiseoveride because the apps I'm trying are failing completely with the VENDOR_WINE default.
That said, I think what's happening for cruiseoveride is that Mesa 7.7+ has beta support for OpenGL 2.0 APIs with ATI R600 and above. http://wiki.x.org/wiki/RadeonFeature
So what's probably happening is that the Wine code assumes VENDOR_MESA has a lower level of OpenGL support, whereas it uses more recent APIs when VENDOR_ATI is detected, expecting GL 2+ from the proprietary fglrx driver. By hardcoding VENDOR_ATI, Wine uses more advanced APIs that better support translation of the application's graphics calls.
It looks like I wouldn't be able to use cruiseoveride's VENDOR_ATI hack, though, because the HD3850 isn't in Wine's list of ATI devices (which jumps from HD3200 to HD4350). From lspci -v:
01:00.0 VGA compatible controller: ATI Technologies Inc RV670PRO [Radeon HD 3850]
        Subsystem: ATI Technologies Inc Device 2542
        Flags: bus master, fast devsel, latency 0, IRQ 18
        Memory at d0000000 (64-bit, prefetchable) [size=256M]
        Memory at fe9e0000 (64-bit, non-prefetchable) [size=64K]
        I/O ports at c000 [size=256]
        Expansion ROM at fe9c0000 [disabled] [size=128K]
        Capabilities: <access denied>
        Kernel driver in use: radeon
        Kernel modules: radeon
cruiseoveride's VENDOR_ATI hack has issues, however, because it may result in Wine attempting to use GL APIs that are supported by the fglrx driver but not by the Mesa 7.7+ R600(+) driver. Seeing as GL 2.0, and eventually GL 3.0+, support is planned through Mesa/Gallium3D, what probably needs to be done is to rewrite the Mesa-based GL translation paths to also select based on the OpenGL level supported.
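The "select based on the OpenGL level supported" idea could be sketched as a rough heuristic. The thresholds below are illustrative assumptions, not Wine's actual logic, which inspects individual GL extensions as well:

```c
/* Rough illustrative mapping from the reported GL version to a plausible
 * D3D feature level for a Mesa driver. The thresholds are assumptions;
 * real detection must also check extensions such as GLSL shader support. */
static int d3d_level_from_gl(int gl_major, int gl_minor)
{
    if (gl_major >= 2)
        return 9;  /* GL 2.0+ brings GLSL, roughly DX9-class */
    if (gl_major == 1 && gl_minor >= 4)
        return 8;  /* GL 1.4-1.5, e.g. the R300 driver, roughly DX8-class */
    return 7;      /* older fixed-function GL, DX7-class */
}
```

Under this sketch the R600 driver's GL 2.0 would land in the DX9-class bucket, while the R300 driver's GL 1.5 would land in the DX8-class one, independent of the vendor string.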
--- Comment #4 from ppanon@shaw.ca 2010-01-30 03:30:51 --- Ah, found in glxinfo where the GL_RENDERER info for my HD3850 is:

OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: Mesa DRI R600 (RV670 9505) 20090101 TCL DRI2
OpenGL version string: 2.0 Mesa 7.8-devel
So I guess I could set up a new card entry in wined3d_private.h with CARD_ATI_RADEON_HD3850 = 0x9505. Since I wouldn't know what to make dependent on it, that alone isn't sufficient to solve my problems. However, that missing entry may explain why I was having so many problems with Wine even with the proprietary fglrx driver, before switching to the open source driver. Perhaps there should also be an entry for HD3870 cards, though I have no idea what the device code is.
--- Comment #5 from Luca Bennati lucak3@gmail.com 2010-01-30 03:58:37 --- It's true that recognition is just a small part of the problem, but that does not mean it isn't correct: the radeon open driver does depend on Mesa, so it must be in the Mesa section. As already said in comment #3, the closed-source fglrx driver is (probably) different in its implementation, so Wine cannot use the VENDOR_ATI rules that come from fglrx! If I remember correctly, radeon open only has support for 2D.
What should be done now is beyond my competence, because:
- I cannot simply add a value in wined3d_private.h, as it seems that list should be kept as minimal as possible (please correct me if I'm wrong)
- the changes to be made in the APIs are most probably the competence of Christian Costa, Stefan Doesinger and Henri Verbeet
If you want real insight into, and a solution of, the problem you should CC them on this bug.
Cùran debian@carbon-project.org changed:
What            |Removed      |Added
----------------------------------------------------------------------------
CC              |             |debian@carbon-project.org
--- Comment #6 from Cùran debian@carbon-project.org 2010-01-30 10:32:47 --- (In reply to comment #5) The open radeon driver has 3D support for all chip models (R600 & R700 not entirely complete) except the very latest (Evergreen). You can see that at http://www.x.org/wiki/RadeonFeature (was already linked in comment #3). I'm running several 3D applications with hardware acceleration on my systems. So, no, the radeon driver does not only support 2D.
Now something different: for older models of the Radeon chips, the GL_VENDOR seems to be different and looks like:

OpenGL vendor string: DRI R300 Project
OpenGL renderer string: Mesa DRI R300 (R300 4E45) 20090101 AGP 8x x86/MMX+/3DNow!+/SSE TCL
OpenGL version string: 1.5 Mesa 7.6.1

Please make sure that any patch recognizes them too.
--- Comment #7 from Luca Bennati lucak3@gmail.com 2010-01-30 11:15:49 --- (In reply to comment #6)
The open radeon driver has 3D support for all chip models (R600 & R700 not entirely complete) except the very latest (Evergreen). You can see that at http://www.x.org/wiki/RadeonFeature (was already linked in comment #3). I'm running several 3D applications with hardware acceleration on my systems. So, no, the radeon driver does not only support 2D.
Now something different, for older models of the Radeon chips, the GL_VENDOR seems to be different and looks like: OpenGL vendor string: DRI R300 Project OpenGL renderer string: Mesa DRI R300 (R300 4E45) 20090101 AGP 8x x86/MMX+/3DNow!+/SSE TCL OpenGL version string: 1.5 Mesa 7.6.1 Please make sure, that any patch recognizes them too.
I'm sorry, I did not actually read the info page; I stand corrected. Good for ATI/AMD card users!
My patch just added recognition of the AMD vendor string that comes from the radeon open driver, i.e. "Advanced Micro Devices, Inc.". It seemed to me that "DRI R300 Project" was already there.
--- Comment #8 from Cùran debian@carbon-project.org 2010-01-30 12:06:56 --- (In reply to comment #7)
It seemed to me that the "DRI R300 Project" was already there.
Ah, yes, „DRI R300 Project“ seems to be in the Mesa list (http://source.winehq.org/source/dlls/wined3d/directx.c#L1126). That would leave only the second question of the initial report open. Thanks for your reply!
cruiseoveride cruiseoveride@gmail.com changed:
What            |Removed      |Added
----------------------------------------------------------------------------
CC              |             |titan.costa@wanadoo.fr
cruiseoveride cruiseoveride@gmail.com changed:
What            |Removed      |Added
----------------------------------------------------------------------------
CC              |             |stefan@codeweavers.com
--- Comment #9 from cruiseoveride cruiseoveride@gmail.com 2010-01-30 13:04:02 --- At the moment, ATi cards are among the best supported video cards on Linux using open source drivers.
The Nvidia blob is likely more complete, but the WINE devs should at least meet the open source graphics community half-way.
--- Comment #10 from ppanon@shaw.ca 2010-01-30 15:18:16 --- Intel has also done a lot of work on supporting 3D with Mesa open source hardware drivers.
See http://wiki.x.org/wiki/IntelGraphicsDriver
All the more reason for Wine to query the OpenGL version conformance supported by the rendering driver, and use different D3D translation paths depending on the results. More than just ATI will benefit. With Mesa 7.7 released and likely included in the next set of distributions (particularly the next Ubuntu LTS), this should probably be a Wine 1.2 must-fix bug.
--- Comment #11 from P.Panon ppanon@shaw.ca 2010-01-31 00:14:37 --- I'm looking at the comment at the head of wined3d_guess_card(), and it looks like the card info is passed back to the Windows application so that it can use the card id to make some guesses about capabilities and what functions to use (i.e. amount of memory, number of shaders, etc.). I had only looked quickly through a few procedures in directx.c when I wrote comment #3 and I didn't realize that. For VENDOR_MESA, that function returns one of three NVidia card models depending on whether the OpenGL driver supports enough functions to emulate DirectX 7, 8, or 9, which is probably pretty far off from the real capabilities of the ATI card. That means that, at the least, wined3d_guess_card() probably needs to be changed to also return an appropriate ATI card device ID to the Windows application, even when VENDOR_MESA is identified, so that the Windows app can use the appropriate capabilities of the card.
Stefan Dösinger stefandoesinger@gmx.at changed:
What            |Removed      |Added
----------------------------------------------------------------------------
CC              |             |stefandoesinger@gmx.at
--- Comment #12 from Stefan Dösinger stefandoesinger@gmx.at 2010-01-31 04:50:37 --- Wined3d is not using the guessed card ID to draw conclusions about the card's features. We detect capabilities of the card based on the GL version and advertised extensions.
The IS_D3Dx_CAPABLE macro is not used for cutting out any features either. It is used to guess a sane card ID. Essentially, if the card claims to be a Radeon HD 2200 in the GL strings, but the driver doesn't support shaders we can't report a HD2200 to the app, since we only have d3d7-class features.
In an ideal world the PCI IDs would be ignored. We use the GL extensions to find the features, and report the features as device-independent D3D capabilities. Unfortunately, as noted in this bug report, some applications indeed make guesses from the PCI ID, which is why we have quite an amount of code to guess them.
So what we need to do is implement parsing of the strings Mesa advertises to detect the card as ATI, rather than falling back to the generic VENDOR_WINE stuff that could confuse broken games (broken in the sense that they depend on the PCI ID for correct functionality). Unfortunately I don't have a Radeon HD card; the newest I have is an X1600. So someone else will have to send patches.
--- Comment #13 from P.Panon ppanon@shaw.ca 2010-01-31 12:56:15 --- Thanks Stefan. As you can see from comments #4 and #6, the GL renderer strings for ATI drivers available in wined3d_guess_card() are:
OpenGL renderer string: Mesa DRI R300 (R300 4E45) 20090101 AGP 8x...
OpenGL renderer string: Mesa DRI R600 (RV670 9505) 20090101 TCL DRI2
So the first part ("Mesa DRI R[36]00") can be used to identify the ATI open source Mesa drivers. It looks like (at least for the R600 driver) the part in brackets could be used to identify the chip revision, for better estimation of chip capabilities. A list of ATI chipset codes is at
http://en.wikipedia.org/wiki/Comparison_of_ATI_graphics_processing_units#Rad...
So I don't know whether the R300 driver (currently GL 1.5 conformance) offers enough capabilities for DX9 or not; the R600 should. If you want to put together a framework patch for testing the GL renderer string, I'm willing to apply it and test it on my end.
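For reference, pulling the chip name and PCI device id out of the bracketed part of such a renderer string is straightforward. A hypothetical helper (the function name and interface are mine, not Wine's):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical parser for Mesa renderer strings such as
 * "Mesa DRI R600 (RV670 9505) 20090101 TCL DRI2": extracts the chip name
 * ("RV670") and the hex PCI device id (0x9505) from the bracketed part.
 * Returns 1 on success, 0 if the string doesn't have that shape. */
static int parse_mesa_renderer(const char *renderer, char chip[16],
                               unsigned int *device_id)
{
    const char *open = strchr(renderer, '(');
    if (!open || sscanf(open, "(%15s %x", chip, device_id) != 2)
        return 0;
    return 1;
}
```

The extracted device id could then be compared against the CARD_ATI_... values in wined3d_private.h instead of guessing an NVidia model.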
I was going to take a crack at doing this myself, but I haven't quite figured out how to see the TRACE() output. I was trying to use set + win from http://www.winehq.org/docs/winedev-guide/dbg-commands, but when I tried it in winedbg, it complained that it couldn't find debug symbols:

Wine-dbg>set + win
No symbols found for debug_options
--- Comment #14 from P.Panon ppanon@shaw.ca 2010-01-31 18:57:07 --- Found a little more time to read http://wiki.jswindle.com/index.php/Debugging_Wine while my two-year-old's taking a nap :-). I've now got the dev and dbg packages loaded as well, but I still can't seem to get any trace info. For a while I was getting errors that it couldn't find the channels; now I'm not getting an error, but no trace info with either wine or winedbg. Is there a build config option that might be set by the standard Ubuntu packaging that would no-op all the TRACE statements? I've applied Luca's patch, so it presumably is using VENDOR_MESA now. However, I need to see the TRACE results to see what other settings are being reported (DirectX 7/8/9 capability, why it's showing DRI as disabled when glxinfo says otherwise, etc.).
ppanon@whiteygu:/var/wine/games$ export WINEDEBUG=trace+wined3d
ppanon@whiteygu:/var/wine/games$ winedbg drive_c/Program\ Files/Firaxis\ Games/Sid\ Meier's\ Alpha\ Centauri/terran.exe
WineDbg starting on pid 001f
fixme:mixer:ALSA_MixerInit No master control found on USB Device 0x46d:0x8c2, disabling mixer
fixme:mixer:ALSA_MixerInit No master control found on HDA ATI HDMI, disabling mixer
start_process () at /var/wine/src/wine1.2-1.1.37/dlls/kernel32/process.c:1037
0x7b858baf start_process+0x4f [/var/wine/src/wine1.2-1.1.37/dlls/kernel32/process.c:1037] in kernel32: movl %esi,0x0(%esp)
1037 return entry( peb );
Wine-dbg>set +wined3d
fixme:dbghelp_dwarf:compute_location Unhandled attr op: 1e
Wine-dbg>set +wined3d
Wine-dbg>set +all
Wine-dbg>set trace+all
Wine-dbg>continue
syntax error
Wine-dbg>cont
err:winediag:X11DRV_WineGL_InitOpenglInfo Direct rendering is disabled, most likely your OpenGL drivers haven't been installed correctly
fixme:d3d:check_fbo_compat Format WINED3DFMT_B8G8R8_UNORM with rendertarget flag is not supported as FBO color attachment, and no fallback specified.
fixme:d3d:check_fbo_compat Format WINED3DFMT_B8G8R8A8_UNORM with rendertarget flag is not supported as FBO color attachment, and no fallback specified.
fixme:d3d:check_fbo_compat Format WINED3DFMT_B8G8R8X8_UNORM with rendertarget flag is not supported as FBO color attachment, and no fallback specified.
fixme:d3d:check_fbo_compat Format WINED3DFMT_B5G6R5_UNORM rtInternal format is not supported as FBO color attachment.
fixme:d3d:check_fbo_compat Format WINED3DFMT_R16G16_UNORM rtInternal format is not supported as FBO color attachment.
fixme:d3d:check_fbo_compat Format WINED3DFMT_R16G16B16A16_UNORM with rendertarget flag is not supported as FBO color attachment, and no fallback specified.
fixme:win:EnumDisplayDevicesW ((null),0,0x33f028,0x00000000), stub!
Process of pid=001f has terminated
--- Comment #15 from P.Panon ppanon@shaw.ca 2010-02-01 03:37:24 --- Created an attachment (id=25995) --> (http://bugs.winehq.org/attachment.cgi?id=25995) running wine initialization with +wgl,+winediag
--- Comment #16 from P.Panon ppanon@shaw.ca 2010-02-01 03:42:48 --- Created an attachment (id=25996) --> (http://bugs.winehq.org/attachment.cgi?id=25996) wine init with +d3d,+d3dcaps trace
Finally managed to get tracing happening. The open source driver GL version appears to only be sufficient for DirectX8 so far. However there are some interesting results:
Here's part of the trace from winex11.drv/opengl.c
trace:wgl:wglGetProcAddress func: 'glAccum'
trace:wgl:X11DRV_WineGL_InitOpenglInfo GL version : 1.4 (2.0 Mesa 7.8-devel).
trace:wgl:X11DRV_WineGL_InitOpenglInfo GL renderer : Mesa DRI R600 (RV670 9505) 20090101 TCL DRI2.
trace:wgl:X11DRV_WineGL_InitOpenglInfo GLX version : 1.2.
trace:wgl:X11DRV_WineGL_InitOpenglInfo Server GLX version : 1.2.
trace:wgl:X11DRV_WineGL_InitOpenglInfo Server GLX vendor: : SGI.
trace:wgl:X11DRV_WineGL_InitOpenglInfo Client GLX version : 1.4.
trace:wgl:X11DRV_WineGL_InitOpenglInfo Client GLX vendor: : SGI.
trace:wgl:X11DRV_WineGL_InitOpenglInfo Direct rendering enabled: False
err:winediag:X11DRV_WineGL_InitOpenglInfo Direct rendering is disabled, most likely your OpenGL drivers haven't been installed correctly
and comparable results from glxinfo:
display: :0  screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.2
server glx extensions:
    GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_copy_sub_buffer, GLX_OML_swap_method, GLX_SGI_swap_control, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_visual_select_group
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
client glx extensions:
    GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_allocate_memory, GLX_MESA_copy_sub_buffer, GLX_MESA_swap_control, GLX_MESA_swap_frame_usage, GLX_OML_swap_method, GLX_OML_sync_control, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGI_video_sync, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap, GLX_INTEL_swap_event
GLX version: 1.2
GLX extensions:
    GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_copy_sub_buffer, GLX_MESA_swap_control, GLX_MESA_swap_frame_usage, GLX_OML_swap_method, GLX_SGI_swap_control, GLX_SGI_video_sync, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap
OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: Mesa DRI R600 (RV670 9505) 20090101 TCL DRI2
OpenGL version string: 2.0 Mesa 7.8-devel
OpenGL shading language version string: 1.10
So it looks like the GL support isn't quite sufficient yet for DirectX9 support. However I don't know enough about the MESA extensions listed above and whether they might be able to replace some of the missing functions if the Wine code tried to make use of them.
--- Comment #17 from Stefan Dösinger stefandoesinger@gmx.at 2010-02-01 04:13:22 ---
err:winediag:X11DRV_WineGL_InitOpenglInfo Direct rendering is disabled, most
--- Comment #18 from Cùran debian@carbon-project.org 2010-02-01 07:48:56 --- Created an attachment (id=25997) --> (http://bugs.winehq.org/attachment.cgi?id=25997) WINEDEBUG=+wgl,+winediag for R300.
--- Comment #19 from Cùran debian@carbon-project.org 2010-02-01 07:52:09 --- Created an attachment (id=25998) --> (http://bugs.winehq.org/attachment.cgi?id=25998) glxinfo (with all extensions shown) for R300
Just in case Wine doesn't use all available extensions for hardware acceleration, I've attached the glxinfo output for the R300. I hope this is all the information you might need in conjunction with attachment 25997.
--- Comment #20 from Luca Bennati lucak3@gmail.com 2010-02-01 09:34:05 --- My one-liner has been committed and is in the tree; I'm happy to help, even if it turns out to be just a little reminder for a rework of wined3d_guess_vendor(). Thank you, Stefan and Henri, for paying attention to this problem. I will report back if I get a recent enough ATI card to test.
Trevour bio_tube@yahoo.com changed:
What            |Removed      |Added
----------------------------------------------------------------------------
Status          |UNCONFIRMED  |NEW
Ever Confirmed  |0            |1
--- Comment #21 from Trevour bio_tube@yahoo.com 2010-02-01 19:09:29 --- *** This bug has been confirmed by popular vote. ***
--- Comment #22 from P.Panon ppanon@shaw.ca 2010-02-02 02:49:35 --- Created an attachment (id=26021) --> (http://bugs.winehq.org/attachment.cgi?id=26021) Patch for setting DirectX device impersonation according to Mesa driver renderer string
Since my driver or build installation seems to be messed up and a trace doesn't match up with what I get from glxinfo, I can't test this patch properly right now. Based on the info from Stefan and Luca, I tried to translate the VENDOR_ATI section to use what was available for VENDOR_MESA. Perhaps Cùran or cruiseoveride can test it with WINEDEBUG=+d3dcaps to see if it works better for them?
--- Comment #23 from Cùran debian@carbon-project.org 2010-02-02 10:39:25 --- Created an attachment (id=26029) --> (http://bugs.winehq.org/attachment.cgi?id=26029) 2D application with R300 and patch from comment #22 (WINEDEBUG=+d3dcaps,+wgl,+winediag)
(In reply to comment #22) I've built Wine packages with the patch from attachment 26021 and the resulting binaries can be found at http://dev.carbon-project.org/debian/wine-unstable/. The only line with d3dcaps in it is:
fixme:d3d_caps:init_driver_info Unhandled vendor 0001.
Seems to me like something is missing? For more details, please see the attachment of this comment.
--- Comment #24 from P.Panon ppanon@shaw.ca 2010-02-03 04:45:48 --- Thanks Cùran,
Looks like I made a typo and it should have been +d3d_caps. Currently I only see DirectX8 capability when I run with that trace.
I'm not quite sure how to resolve the fixme for init_driver_info(). It looks like it's necessary to set the driver_info parameters that are passed to Windows apps. However, it requires the vendor code to match the list in driver_version_table[]. While I could hack it to use VENDOR_ATI, at some point somebody is going to add support for the Intel 3D-accelerated Mesa or nouveau drivers as well. The problem is that the device number isn't unique across vendors, but if the vendor is VENDOR_MESA, then we don't know whether it's really ATI or Intel, and the GL vendor and renderer strings aren't available to figure it out like they are in wined3d_guess_vendor() and wined3d_guess_device().
While it would be possible to modify init_driver_info() to also pass in the GL vendor again, it's not good for function orthogonality and there may be other places where that info eventually becomes necessary. I suspect it would actually be better to split VENDOR_MESA into VENDOR_MESA_ATI, VENDOR_MESA_INTEL, VENDOR_MESA_SOFTPIPE(?), etc so that the actual card vendor information is carried through the wined3d_pci_vendor enumeration. Hopefully one of the wine devs will clarify which way to go.
--- Comment #25 from Henri Verbeet hverbeet@gmail.com 2010-02-03 05:11:02 --- (In reply to comment #24)
places where that info eventually becomes necessary. I suspect it would actually be better to split VENDOR_MESA into VENDOR_MESA_ATI, VENDOR_MESA_INTEL, VENDOR_MESA_SOFTPIPE(?), etc so that the actual card vendor information is carried through the wined3d_pci_vendor enumeration. Hopefully one of the wine devs will clarify which way to go.
I already mentioned this in a comment to the patch mentioned, but not here yet. You're mostly right here, but rather than stuffing that all into a single "vendor" field, we should split the field into "card vendor" and "GL vendor" fields. The card vendor would be what we pass to the application, while the GL vendor would be what we use internally to apply quirks etc.
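That split could be sketched as follows; the enum and struct names are illustrative, not the committed Wine code:

```c
#include <string.h>

/* Illustrative sketch of splitting the single "vendor" into a card vendor
 * (reported to the application via the faked PCI id) and a GL vendor
 * (used internally to select driver-specific quirks). */
enum card_vendor { CARD_VENDOR_ATI, CARD_VENDOR_NVIDIA, CARD_VENDOR_WINE };
enum gl_vendor { GL_VENDOR_FGLRX, GL_VENDOR_MESA, GL_VENDOR_NVIDIA, GL_VENDOR_WINE };

struct gpu_description
{
    enum card_vendor card;  /* what the application sees */
    enum gl_vendor gl;      /* what quirk selection keys off */
};

static struct gpu_description describe(const char *gl_vendor_str)
{
    struct gpu_description d = { CARD_VENDOR_WINE, GL_VENDOR_WINE };

    if (strstr(gl_vendor_str, "ATI Technologies"))
    {   /* proprietary fglrx: ATI hardware, fglrx quirks */
        d.card = CARD_VENDOR_ATI;
        d.gl = GL_VENDOR_FGLRX;
    }
    else if (strstr(gl_vendor_str, "Advanced Micro Devices, Inc.")
             || strstr(gl_vendor_str, "DRI R300 Project"))
    {   /* open radeon driver: same hardware, Mesa quirks */
        d.card = CARD_VENDOR_ATI;
        d.gl = GL_VENDOR_MESA;
    }
    else if (strstr(gl_vendor_str, "NVIDIA"))
    {
        d.card = CARD_VENDOR_NVIDIA;
        d.gl = GL_VENDOR_NVIDIA;
    }
    return d;
}
```

With this shape, both fglrx and the open radeon driver report an ATI card to the game, while fglrx-only workarounds stay keyed to GL_VENDOR_FGLRX.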
Cùran debian@carbon-project.org changed:
What                          |Removed |Added
----------------------------------------------------------------------------
Attachment #26029 is obsolete |0       |1
--- Comment #26 from Cùran debian@carbon-project.org 2010-02-03 07:09:47 --- Created an attachment (id=26040) --> (http://bugs.winehq.org/attachment.cgi?id=26040) 3D application with R300 and patch from comment #22 (WINEDEBUG=+d3d_caps)
(In reply to comment #24)
Looks like I made a typo and it should have been +d3d_caps.
Ah, ok. Anyway, the correct output is attached to this comment (and now it shows the/some found OpenGL extensions). Just as a side note, in case it matters (though I doubt it): the application is run in a virtual desktop (video setting > virtual desktop in winecfg), mainly because the application suffers from bug 20467 and is easier to terminate this way.
--- Comment #27 from P.Panon ppanon@shaw.ca 2010-02-03 11:10:25 --- Good point Henri. After getting some sleep I got the same idea but you beat me to posting it.
--- Comment #28 from cruiseoveride cruiseoveride@gmail.com 2010-02-04 22:45:58 --- (In reply to comment #25)
(In reply to comment #24)
places where that info eventually becomes necessary. I suspect it would actually be better to split VENDOR_MESA into VENDOR_MESA_ATI, VENDOR_MESA_INTEL, VENDOR_MESA_SOFTPIPE(?), etc so that the actual card vendor information is carried through the wined3d_pci_vendor enumeration. Hopefully one of the wine devs will clarify which way to go.
I already mentioned this in a comment to the patch mentioned, but not here yet. You're mostly right here, but rather than stuffing that all into a single "vendor" field, we should split the field into "card vendor" and "GL vendor" fields. The card vendor would be what we pass to the application, while the GL vendor would be what we use internally to apply quirks etc.
Different drivers for the same hardware are likely to have fewer differences than different drivers on different hardware, so what's wrong with using VENDOR_ATI with AMD cards regardless of the driver?
Is there a WINE developer working on a patch to add support for the radeon driver?
If not, how can one of us write an acceptable patch? I.e., simply duplicating (more or less) all the strstr calls from under VENDOR_ATI to VENDOR_MESA is unlikely to be an acceptable solution. Should wined3d_guess_vendor() and wined3d_guess_card() be rewritten with something like libpci for detection?
--- Comment #29 from cruiseoveride cruiseoveride@gmail.com 2010-02-04 23:20:27 --- Created an attachment (id=26055) --> (http://bugs.winehq.org/attachment.cgi?id=26055) Makes VENDOR_ATI applicable for both fglrx and radeon
(In reply to comment #28)
Different drivers for the same hardware are likely to have fewer differences than different drivers on different hardware, so whats wrong with using VENDOR_ATI with AMD cards regardless of driver?
A patch example.
--- Comment #30 from Stefan Dösinger stefandoesinger@gmx.at 2010-02-05 03:45:08 ---
Different drivers for the same hardware are likely to have fewer differences than different drivers on different hardware, so whats wrong with using VENDOR_ATI with AMD cards regardless of driver?
We work around a few fglrx bugs, and we don't want to apply those workarounds to Mesa when Mesa doesn't have the bugs, and vice versa. So we want to report VENDOR_ATI to the game, but we also have to keep track of the driver vendor.
If not, how can one of us write an acceptable patch? ie. considering just duplicating (more or less) all the strstr calls from under VENDOR_ATI to VENDOR_MESA is unlikely to be an acceptable solution? Should wined3d_guess_vendor() and wined3d_guess_card() be re-written with something like libpci for detection?
libpci is not an option because it is not portable to e.g. OSX.
--- Comment #31 from P.Panon ppanon@shaw.ca 2010-02-05 15:53:29 --- I'm not a Wine developer, but I've been getting myself up to speed on debugging with Wine and the code relevant to this issue. While the patch I uploaded on Feb 2nd isn't correct, it contains about half the work that needs to be done. Thanks to the feedback from Stefan, I think I can get it done correctly per his recommendations; it's just going to take me longer than it would a regular Wine developer.
That said, since this problem appears to be pretty straightforward to fix while still meeting Stefan's recommendations, I would rather that Stefan and the other Wine developers work on stuff that requires extensive knowledge of DirectX, OpenGL, and Windows internals (i.e. stuff I can't do). So please be a little more patient.
However, cruiseoveride, if you want to help me, since you've also got an ATI card that uses the R600 driver, please capture a debug run with WINEDEBUG=+wgl,+wgldiag,+d3d,+d3dcaps. That will give me an idea of whether some of the funny values I'm getting (see comments #14 and #15) are due to the driver, Ubuntu's PPA package, or something peculiar to my system.
--- Comment #32 from P.Panon ppanon@shaw.ca 2010-02-05 15:55:55 --- Oops, that should be WINEDEBUG=+wgl,+wgldiag,+d3d,+d3d_caps wine ... That underscore is important.
--- Comment #33 from cruiseoveride cruiseoveride@gmail.com 2010-02-05 18:39:31 --- (In reply to comment #32)
Oops, that should be WINEDEBUG=+wgl,+wgldiag,+d3d,+d3d_caps wine ... That underscore is important.
What am I supposed to be running?
--- Comment #34 from P.Panon ppanon@shaw.ca 2010-02-05 21:51:06 --- (In reply to comment #33)
What am I supposed to be running?
Any Windows Direct3D-using application - the 3dmark2001SE that you mentioned in comment #1 would be fine. But please run it from the command line so that you can redirect the trace output to a file and upload the file as an attachment.
i.e., with whatever WINEPREFIX you need already set:

    WINEDEBUG=+wgl,+wgldiag,+d3d,+d3d_caps wine C:[path to the 3dmark2001 executable] 2>~/winetrace.dump
--- Comment #35 from cruiseoveride cruiseoveride@gmail.com 2010-02-06 14:02:08 --- Created an attachment (id=26087) --> (http://bugs.winehq.org/attachment.cgi?id=26087) See comment #34
--- Comment #36 from cruiseoveride cruiseoveride@gmail.com 2010-02-06 14:06:30 --- I hope we haven't forgotten the reason I started this bug ticket, i.e. that a workaround is applied when the combination of VENDOR_ATI and CARD_ATI_RADEON_9500 is used, and that workaround is also required when using the open source drivers.
Whether this is specific to 3dmark2001se or to an HD4870, I don't know. But the "Lobby" demo requires VENDOR_ATI and CARD_ATI_RADEON_9500 to work properly when using the open source radeon drivers (see comment #1).
Casey Jones jonescaseyb@gmail.com changed:
What              |Removed                |Added
----------------------------------------------------------------------------
CC                |                       |jonescaseyb@gmail.com
P.Panon ppanon@shaw.ca changed:
What              |Removed                |Added
----------------------------------------------------------------------------
Attachment #26021 |0                      |1
    is obsolete   |                       |
--- Comment #37 from P.Panon ppanon@shaw.ca 2010-02-07 11:32:27 --- Created an attachment (id=26108) --> (http://bugs.winehq.org/attachment.cgi?id=26108) Patch for setting DirectX device impersonation according to Mesa driver renderer string
There's still more to do, but it should indicate whether I'm on the right track. In addition to the splitting of the card_vendor and gl_vendor previously discussed, I also figured it would make sense to add a new GL_VENDOR_APPLE for GL drivers on the Apple platform, and moved the code from match_apple() into it. That simplifies a lot of the match_...() functions, but it does make wined3d_guess_card() a lot more tricky. I wasn't sure which of the vendor conditions in the latter applied to gl_vendors vs. card_vendors. I figured I would get something out that cruiseoveride can test, but the guess-card tests should probably be broken out into separate card-vendor-based functions, so that the proprietary ATI, NVidia, and Intel card-vendor-based tests can be reused whether it's for a proprietary Linux driver or a proprietary Apple driver (i.e. GL_VENDOR_ATI vs. GL_VENDOR_APPLE), in a way that's more readable than my temporary hack.
Either this way or broken out, the final patch probably needs to be tested on a number of platforms to make sure it hasn't broken detection for those.
Looks like my Mesa/ATI setup is borked, since it doesn't return the same values as cruiseoveride's, so I'll need to figure that out first. Perhaps someone else can clean up my patch in the meantime.
--- Comment #38 from P.Panon ppanon@shaw.ca 2010-02-07 13:31:02 --- One more addendum regarding my last patch: I also wasn't quite sure what to do with some of the match functions like match_geforce5(). There I changed the vendor comparison to a comparison of the gl_vendor; however, I'm not sure if that's correct. Is it a limit of the nvidia driver, or a fundamental limit of the card that should also apply to other implementations (such as Apple's)? The same probably applies to match_ati_r300_to_500(). Hopefully it's enough that one of the Wine developers can pick it up and smooth out any rough edges.
--- Comment #39 from Stefan Dösinger stefandoesinger@gmx.at 2010-02-07 15:42:33 --- Your patch seems to be on the right track, and yeah, GL_VENDOR_APPLE is a good idea since Apple drivers have their own unique set of bugs. We'll probably want GL_VENDOR_APPLE (all OSX drivers), GL_VENDOR_NVIDIA, GL_VENDOR_ATI, and GL_VENDOR_MESA (all Mesa). Probably in the future GL_VENDOR_INTEL for the Windows Intel driver, but that's not something that should be in this patch.
I am afraid we'll need a card detection routine for each GL driver vendor and hardware vendor. At least the fglrx card detection can't be shared with Mesa's, since Mesa just tells you the chip name, like RV580, while fglrx tells you the marketed name (like Radeon X1600). Some may be reusable though, since OSX and the Linux binary drivers seem to have some common GL strings.
You could set up a control table:
{
    {GL_VENDOR_NVIDIA, HARDWARE_VENDOR_NVIDIA, match_nvidia_card_binary},
    {GL_VENDOR_APPLE,  HARDWARE_VENDOR_NVIDIA, match_nvidia_card_binary},
    {GL_VENDOR_MESA,   HARDWARE_VENDOR_NVIDIA, match_nvidia_card_mesa},
    {GL_VENDOR_MESA,   HARDWARE_VENDOR_ATI,    match_ati_card_mesa},
    etc
}
and then search for a matching GL and hardware vendor combination and call the function. If none is found, write a FIXME and guess a generic dx7/dx8/dx9 card depending on the hardware vendor and GL capabilities.
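The table-plus-lookup idea described above can be sketched as follows. This is a minimal illustration, not wined3d code: the enums, match functions, and signatures here are hypothetical stand-ins for whatever the real patch would define.

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-ins for wined3d's real enums and match functions. */
enum gl_vendor { GL_VENDOR_NVIDIA, GL_VENDOR_APPLE, GL_VENDOR_MESA };
enum hw_vendor { HW_VENDOR_NVIDIA, HW_VENDOR_ATI };
enum card { CARD_GENERIC, CARD_NVIDIA_BINARY, CARD_NVIDIA_MESA, CARD_ATI_MESA };

static enum card match_nvidia_card_binary(const char *gl_renderer)
{ (void)gl_renderer; return CARD_NVIDIA_BINARY; }
static enum card match_nvidia_card_mesa(const char *gl_renderer)
{ (void)gl_renderer; return CARD_NVIDIA_MESA; }
static enum card match_ati_card_mesa(const char *gl_renderer)
{ (void)gl_renderer; return CARD_ATI_MESA; }

/* One row per (GL driver vendor, hardware vendor) combination. */
struct card_detect
{
    enum gl_vendor gl;
    enum hw_vendor hw;
    enum card (*match)(const char *gl_renderer);
};

static const struct card_detect card_table[] =
{
    { GL_VENDOR_NVIDIA, HW_VENDOR_NVIDIA, match_nvidia_card_binary },
    { GL_VENDOR_APPLE,  HW_VENDOR_NVIDIA, match_nvidia_card_binary },
    { GL_VENDOR_MESA,   HW_VENDOR_NVIDIA, match_nvidia_card_mesa   },
    { GL_VENDOR_MESA,   HW_VENDOR_ATI,    match_ati_card_mesa      },
};

static enum card guess_card(enum gl_vendor gl, enum hw_vendor hw, const char *gl_renderer)
{
    size_t i;

    /* Linear scan for a matching GL/HW vendor pair, then delegate. */
    for (i = 0; i < sizeof(card_table) / sizeof(card_table[0]); ++i)
        if (card_table[i].gl == gl && card_table[i].hw == hw)
            return card_table[i].match(gl_renderer);

    /* No match: emit a FIXME and fall back to a generic card. */
    fprintf(stderr, "FIXME: no card detector for this GL/HW vendor pair\n");
    return CARD_GENERIC;
}
```

The table keeps each per-driver detection routine small, and adding support for a new driver/hardware pairing is one new row plus one new function.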
--- Comment #40 from Stefan Dösinger stefandoesinger@gmx.at 2010-02-07 15:50:10 ---
One more addendum regarding my last patch, I also wasn't quite sure what to do with some of the match functions like match_geforce5(). There I changed the vendor comparison to a comparison of the gl_vendor, however I'm not sure if that's correct
It depends on where the function is used. The geforce5 one is used mainly for the non power of two quirk, which is a hardware limit, so it would apply regardless of the GL driver vendor. Other functions like match_ati_r300_500_apple are used to work around driver specific issues, so you want to match driver + hw_vendor + card_id
I recommend introducing the separation of HW and driver vendor in small patches, without first adding any new features like detection of new Mesa cards. Once the GL driver vendor is in place, and the HW vendor is untouched, you can discuss and move over the matching functions one by one. For each of those we'll have to look at what the quirk does to decide what to do.
A possible solution is to split up the single matching function into 4, e.g. match_gl_vendor, match_hw_vendor, match_card, match_extra, and only apply the quirk if all 4 return TRUE. This would avoid complex single-quirk match functions. If one field doesn't matter to a specific quirk (e.g. the NP2 GF5 quirk), the quirk could use a function that always returns TRUE, or we add special handling for a NULL function.
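The four-way split could look roughly like the sketch below. All types and names are hypothetical stand-ins (the real wined3d match functions take gl_info and renderer-string parameters as well); NULL is handled as "this field doesn't matter", per the suggestion above.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins; wined3d's real signatures differ. */
enum gl_vendor { GL_VENDOR_NVIDIA, GL_VENDOR_MESA };
enum hw_vendor { HW_VENDOR_NVIDIA, HW_VENDOR_ATI };
enum card { CARD_GEFORCE_5, CARD_OTHER };

typedef int (*match_fn)(enum gl_vendor gl, enum hw_vendor hw, enum card c);

struct quirk
{
    /* NULL means the field doesn't matter for this quirk. */
    match_fn match_gl_vendor;
    match_fn match_hw_vendor;
    match_fn match_card;
    match_fn match_extra;
    const char *description;
};

static int hw_is_nvidia(enum gl_vendor gl, enum hw_vendor hw, enum card c)
{ (void)gl; (void)c; return hw == HW_VENDOR_NVIDIA; }

static int card_is_geforce5(enum gl_vendor gl, enum hw_vendor hw, enum card c)
{ (void)gl; (void)hw; return c == CARD_GEFORCE_5; }

/* The GF5 NP2 quirk cares about HW vendor and card, not the GL vendor,
 * so those two fields are left NULL. */
static const struct quirk quirk_no_np2 =
{ NULL, hw_is_nvidia, card_is_geforce5, NULL, "Geforce 5 NP2 disable" };

static int quirk_applies(const struct quirk *q, enum gl_vendor gl,
                         enum hw_vendor hw, enum card c)
{
    /* Apply the quirk only if all four predicates pass (NULL passes). */
    if (q->match_gl_vendor && !q->match_gl_vendor(gl, hw, c)) return 0;
    if (q->match_hw_vendor && !q->match_hw_vendor(gl, hw, c)) return 0;
    if (q->match_card && !q->match_card(gl, hw, c)) return 0;
    if (q->match_extra && !q->match_extra(gl, hw, c)) return 0;
    return 1;
}
```

The design keeps each predicate reusable across quirks, at the cost of four function-pointer fields per table entry instead of one.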
--- Comment #41 from P.Panon ppanon@shaw.ca 2010-02-07 21:00:13 --- (In reply to comment #40)
One more addendum regarding my last patch, I also wasn't quite sure what to do with some of the match functions like match_geforce5(). There I changed the vendor comparison to a comparison of the gl_vendor, however I'm not sure if that's correct
It depends on where the function is used. The geforce5 one is used mainly for the non power of two quirk, which is a hardware limit, so it would apply regardless of the GL driver vendor. Other functions like match_ati_r300_500_apple are used to work around driver specific issues, so you want to match driver + hw_vendor + card_id
I recommend introducing the separation of HW and driver vendor in small patches, without first adding any new features like detection of new Mesa cards.
Well, that's all very nice, but detection of new Mesa cards is what this bug report is all about, as I'm sure cruiseoveride will be happy to remind us :-) So while I'm sure you would like all this code cleaned up, we would like the detection of the new Mesa cards. And as I pointed out, my configuration doesn't report GLX info properly, so I have a serious problem with testing my results.
Once the GL driver vendor is in place, and the HW vendor is untouched you can discuss and move over the matching functions one by one. For each of those we'll have to look at what the quirk does to decide what to do.
A possible solution is to split up the single matching function into 4, e.g. match_gl_vendor, match_hw_vendor, match_card, match_extra and only apply the quirk if all 4 return TRUE. This would avoid complex single-quirk match functions. If one field doesn't matter to a specific quirk(e.g. the NP2 GF5 quirk) the quirk could use a function that always returns TRUE, or we add special handling for a NULL function.
I guess that depends on how many driver quirks you foresee having to handle. If you have a bunch of card-based quirks (or caps-based tests like the dx10 or spec-alpha tests) that cut across multiple GL_VENDOR values, that's going to really blow up your table size. It might be a bit more scalable in the long run, but it doesn't seem to be a huge issue with the few tests you have now, so that refactoring shouldn't hold up this bug. Right now I would rather just keep the existing setup, with clarification on a few tests as to whether they apply to gl_vendor or card_vendor. Besides match_geforce5(), I think the only other one that needs to be clarified is match_ati_r300_to_500(): should it also match for GL_VENDOR_ATI and GL_VENDOR_APPLE?
I really don't think it should be holding up a solution to this bug. On the other hand, I do appreciate the advantages of the suggestion in comment #39 for code readability, since ...guess_card() is pretty hairy right now, so I'll see if I can do something about that.
P.Panon ppanon@shaw.ca changed:
What              |Removed                |Added
----------------------------------------------------------------------------
Attachment #26108 |0                      |1
    is obsolete   |                       |
--- Comment #42 from P.Panon ppanon@shaw.ca 2010-02-10 00:58:32 --- Created an attachment (id=26174) --> (http://bugs.winehq.org/attachment.cgi?id=26174) Patch for setting DirectX device impersonation according to Mesa driver renderer string
Here's an updated version of the patch that works against 1.1.38. One of the segments was failing due to a new variable declaration in that build.
I'm hoping that curan and/or cruiseoveride would be willing to try rebuilding with the new patch and testing it.
I've updated my system to Lucid Lynx Alpha 2, and that seems to have fixed some of my glxinfo issues, but I'm still not getting DRI detection (possibly due to a continuing 32-bit/64-bit mismatch). So I can verify that the Mesa/ATI detection code is being executed, but it's only detecting caps for DirectX 8. That shouldn't be a problem for cruiseoveride, however: his card should actually detect as a CARD_ATI_RADEON_HD4800 (0x944c, which should allow it to use more video memory for textures and therefore give better performance), and curan's should show as a CARD_ATI_RADEON_9500.
--- Comment #43 from P.Panon ppanon@shaw.ca 2010-02-10 01:12:44 --- BTW Stefan, I didn't get feedback from you on whether match_ati_r300_to_500() should use a GL_VENDOR or a HW_VENDOR match, so I let it default to GL_VENDOR. I did refactor guess_card() according to your recommendation in comment #39. I'm hoping that this patch is close enough that you can fix up match_ati_r300_to_500() and any other minor tweaks that might still be necessary, and promote it for testing.
Don't forget that with ATI removing R300-R500 support from fglrx, you've got a whole bunch of users who are going to be forced to migrate to the Mesa ATI R300 driver. If this support isn't available, then the next time they upgrade to a new OS release, DirectX support in Wine will stop working for them. Feature close-off for Ubuntu Lucid Lynx is Feb 18; I'm not sure about Red Hat and other distributions.
--- Comment #44 from cruiseoveride cruiseoveride@gmail.com 2010-02-10 01:57:36 --- (In reply to comment #42)
Created an attachment (id=26174)
--> (http://bugs.winehq.org/attachment.cgi?id=26174) [details]
Patch for setting DirectX device impersonation according to Mesa driver renderer string
Here's an updated version of the patch that works against 1.1.38. There was one of the segments that was failing due to a new variable declaration in that build.
I'm hoping that curan and/or cruiseoveride would be willing to try rebuilding with the new patch and testing it.
Well, I patched the tree and built it, but I don't really know how to tell what it's meant to be doing. 3dmark2001se now shows the video card as an HD4800 series card. However, the same missing-models problem occurs in the lobby demo.
So something being set for CARD_ATI_RADEON_9500 also needs to be set for CARD_ATI_RADEON_HD4800.
--- Comment #45 from P.Panon ppanon@shaw.ca 2010-02-10 02:39:50 --- Thanks for testing, cruiseoveride.
The two quirks which are turned on for ATI cards and not for the Mesa ATI driver are quirk_ati_dx9() - controlled by match_ati_r300_to_500() - and quirk_one_point_sprite(), which is controlled by match_fglrx(). Without the patch, your card was being misidentified as an R300, so both quirks would be activated.
It looks like quirk_one_point_sprite() is a work-around for a specific bug/crash in the ATI fglrx driver, so that seems less likely to be what's missing.
From the comment on quirk_ati_dx9(), it looks like it's currently applicable to both the fglrx driver and the MacOS ATI driver. So that looks like the best candidate (especially since that's one I had a question about that Stefan and the other Wine developers haven't had a chance to address). It's a texture-related workaround, so perhaps it might give the effect you're seeing. The thing I don't understand is why it should be limited to R300 through R500 on the fglrx driver, since it seems to affect HD cards as well - unless the fglrx driver somehow works around it internally, but only for HD cards.
Anyway, can you please try replacing match_ati_r300_to_500() with:
static BOOL match_ati_r300_to_500(const struct wined3d_gl_info *gl_info, const char *gl_renderer,
        enum wined3d_gl_vendor gl_vendor, enum wined3d_pci_vendor card_vendor,
        enum wined3d_pci_device device)
{
    if (card_vendor != HW_VENDOR_ATI) return FALSE;
    if (device == CARD_ATI_RADEON_9500) return TRUE;
    if (device == CARD_ATI_RADEON_X700) return TRUE;
    if (device == CARD_ATI_RADEON_X1600) return TRUE;
    if (gl_vendor == GL_VENDOR_MESA) return TRUE;
    return FALSE;
}
If that works for you, I'll incorporate it into the patch.
Perhaps the match function should be renamed to something like match_ati_p2_textures_only() and maybe there should also be an if (gl_vendor == GL_VENDOR_ATI) return TRUE; added as well. Hopefully Stefan will let us know.
--- Comment #46 from P.Panon ppanon@shaw.ca 2010-02-10 02:55:51 --- Hmm. Curan, could you please download and set up the 3DMark2001SE demo and try to reproduce cruiseoveride's display issue? Please test it with 1.1.38 and the patch from comment #42, but without the change from comment #45. Existing code comments say that with fglrx, the problem is that the driver reports GL2 support. However, according to your previous trace, the Mesa DRI R300 driver currently reports GL 1.5, not 2.0, so it might not need quirk_ati_dx9() activated, at least for now (I'm not sure the R300 driver will stay limited to GL 1.5, since it's almost capable of GL 2.0).
3dmark2001SE is a free graphics demo downloadable from Futuremark http://www.futuremark.com/download/3dmark2001/
--- Comment #47 from cruiseoveride cruiseoveride@gmail.com 2010-02-10 03:13:25 --- (In reply to comment #45)
Anyways can you please try replacing match_ati_r300_to_500() with:
static BOOL match_ati_r300_to_500(const struct wined3d_gl_info *gl_info, const char *gl_renderer,
        enum wined3d_gl_vendor gl_vendor, enum wined3d_pci_vendor card_vendor,
        enum wined3d_pci_device device)
{
    if (card_vendor != HW_VENDOR_ATI) return FALSE;
    if (device == CARD_ATI_RADEON_9500) return TRUE;
    if (device == CARD_ATI_RADEON_X700) return TRUE;
    if (device == CARD_ATI_RADEON_X1600) return TRUE;
    if (gl_vendor == GL_VENDOR_MESA) return TRUE;
    return FALSE;
}
If that works for you, I'll incorporate it into the patch.
Perhaps the match function should be renamed to something like match_ati_p2_textures_only() and maybe there should also be an if (gl_vendor == GL_VENDOR_ATI) return TRUE; added as well. Hopefully Stefan will let us know.
Just tested that. And yes, that block of code fixes the lobby demo.
--- Comment #48 from Stefan Dösinger stefandoesinger@gmx.at 2010-02-10 08:20:09 --- I suspect the problematic part of the ati_dx9 quirk is this one:
if (gl_info->supported[ARB_TEXTURE_NON_POWER_OF_TWO])
{
    TRACE("GL_ARB_texture_non_power_of_two advertised on R500 or earlier card, removing.\n");
    gl_info->supported[ARB_TEXTURE_NON_POWER_OF_TWO] = FALSE;
    gl_info->supported[WINE_NORMALIZED_TEXRECT] = TRUE;
}
The background is that R500 and earlier cards do not support unconditional NP2 textures (GL_ARB_texture_non_power_of_two). However, both fglrx and OSX advertise OpenGL 2.0, which mandates support for this feature. So the driver doesn't complain if such textures are used, but falls back to software rendering. That is why we disable ARB_TEXTURE_NON_POWER_OF_TWO support.
There is an equivalent extension for Direct3D's *conditional* NP2 support - GL_ARB_texture_rectangle. R500 cards support this just fine, but the extension has a major problem: it uses non-normalized texture coordinates (range 0.0-width and 0.0-height), which makes using them pretty awkward, since D3D's conditional NP2 textures use normalized (range 0.0-1.0) coords.
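The coordinate mismatch described above boils down to a per-axis scale between the two spaces. A minimal illustration (not wined3d code; the helper names here are made up):

```c
#include <assert.h>

/* D3D's conditional NP2 textures and GL_ARB_texture_non_power_of_two use
 * normalized coordinates in [0.0, 1.0].  GL_ARB_texture_rectangle addresses
 * the same texel with coordinates in [0.0, width] x [0.0, height].
 * Converting between the two is a per-axis multiply or divide. */

static float rect_coord(float normalized, int texture_size)
{
    /* Normalized [0,1] coordinate -> texture_rectangle texel coordinate. */
    return normalized * (float)texture_size;
}

static float normalized_coord(float rect, int texture_size)
{
    /* texture_rectangle texel coordinate -> normalized [0,1] coordinate. */
    return rect / (float)texture_size;
}
```

This is why the WINE_normalized_texrect trick is attractive: when the driver renders the emulated NP2 textures in hardware, the normalized coordinates can be passed straight through with no scaling at all.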
However, both fglrx and the OSX drivers are nice enough to render the emulated "unconditional" NP2 textures in hardware as long as all conditions that GL_ARB_texture_rectangle exposes are met. Those textures use normalized texcoords, so we can pass the coords to GL 1:1, and the driver can pass them to the card 1:1 as well.
To signal the rest of the code that it can use the APIs from ARB_texture_non_power_of_two, but has to observe the restrictions imposed by ARB_texture_rectangle, we set a faked extension WINE_normalized_texrect.
This whole thing assumes, however, that the driver emulates unconditional NP2 textures in hardware if possible, rather than falling back to software or not rendering at all. The R300 glxinfo linked above advertises GL 1.5 and does not advertise ARB_texture_non_power_of_two, so the WINE_normalized_texrect fake extension should never be set.
R600+ cards (the dx10 ones) support unconditional NP2 textures (GL_ARB_texture_non_power_of_two) just fine, so no quirk is needed for them. We can just use whatever the driver advertises.
--- Comment #49 from Stefan Dösinger stefandoesinger@gmx.at 2010-02-10 08:41:09 --- (In reply to comment #41)
Well, that's all very nice but detection of new mesa cards is this bug report is all about after all, as I'm sure cruiseoveride will be happy to remind us :-) So while I'm sure you would like all this code cleaned up, we would like the detection of the new mesa cards.
The general rule is that the infrastructure has to be fixed before trying to add in features that the existing infrastructure cannot support cleanly.
That said, the newest version of your patch looks pretty good; it already introduces the separation of HW and GL vendor. Currently it is too big, though, and if it causes a regression it will be hard to spot where exactly it happened. So it needs to be broken up into a few separate patches. For example:
Patch 1: Rename GL_VENDOR to HW_VENDOR
Patch 2: Add a real GL vendor detection
Patch 3.1: Probably split up the quirk detection function if 3.2 can't be done properly otherwise.
Patch 3.2: Adjust the quirks. See below for a few comments on this.
Patch 4: Add the card detection table and move the code
Patch 5: Add Mesa ATI card detection code
And as I pointed out, my configuration doesn't report GLX info properly so I have a serious problem with testing my results.
Probably your 32-bit compat libraries are broken. You'll probably have to compile Mesa and libdrm for 32 bit manually and install them in your /usr/lib32 directory. A 32-bit glxinfo binary can help with debugging.
Wrt the quirks:
{ match_ati_r300_to_500, quirk_ati_dx9, "ATI GLSL constant and normalized texrect quirk" },
Driver dependent. There is a HW aspect to it, but it depends on the driver partially emulating NP2 textures.
{ match_apple, quirk_apple_glsl_constants, "Apple GLSL uniform override" },
Driver dependent, HW independent. Apple's GLSL infrastructure is teh suck.
{ match_geforce5, quirk_no_np2, "Geforce 5 NP2 disable" },
Driver dependent, since the nvidia driver advertises NP2 textures incorrectly. If nouveau ever does the same, we'll have to add a quirk for it as well, but not yet.
{ match_apple_intel, quirk_texcoord_w, "Init texcoord .w for Apple Intel GPU driver" },
That quirk should actually be inverted, to make the quirk code the default and the non-quirky code a quirk. For now leave it as is though. Depends on hw_vendor + card + driver
{ match_apple_nonr500ati, quirk_texcoord_w, "Init texcoord .w for Apple ATI >= r600 GPU driver" },
Same as above.
{ match_fglrx, quirk_one_point_sprite, "Fglrx point sprite crash workaround" },
Driver dependent. It may not depend on the card; that needs testing. But I think Roderick said it's working on dx10 ATI cards.
{ match_dx10_capable, quirk_clip_varying, "Reserved varying for gl_ClipPos" },
Driver independent
{ match_allows_spec_alpha, quirk_allows_specular_alpha, "Allow specular alpha quirk" },
A very specific test for the behavior, doesn't depend at all on the GL identification
{ match_apple_nvts, quirk_apple_nvts, "Apple NV_texture_shader disable" },
Depends on gl_vendor=APPLE and support for NVTS
{ match_broken_nv_clip, quirk_disable_nvvp_clip, "Apple NV_vertex_program clip bug quirk" },
Has a very specific test, shouldn't depend on the card detection
--- Comment #50 from Cùran debian@carbon-project.org 2010-02-10 08:41:42 --- (In reply to comment #46) The build is running atm, I'll report the results ASAP.
As for OpenGL 2 support in the FOSS driver: an X developer recently told me that it's unlikely they'll add it to the r300 through r500 drivers (apart from the fact - as Stefan noted - that the hardware is not fully capable of OpenGL 2, which is also documented at http://wiki.x.org/wiki/RadeonFeature). But the Gallium drivers for r300 onwards already support it (currently that's a mix of hardware and software acceleration, which should be replaced by full hardware acceleration in the future).
Cùran debian@carbon-project.org changed:
What              |Removed                |Added
----------------------------------------------------------------------------
Attachment #26040 |0                      |1
    is obsolete   |                       |
--- Comment #51 from Cùran debian@carbon-project.org 2010-02-10 09:58:48 --- Created an attachment (id=26185) --> (http://bugs.winehq.org/attachment.cgi?id=26185) 3D application with R300 and patch from comment #42 (WINEDEBUG=+d3d_caps)
This log was generated with the same application as was used for attachment 26040, so the results should be comparable. The benchmark will follow.
--- Comment #52 from Cùran debian@carbon-project.org 2010-02-10 10:35:59 --- Created an attachment (id=26187) --> (http://bugs.winehq.org/attachment.cgi?id=26187) Display device discovered by 3DMark 2001 SE (for R300)
3DMark discovers a pretty generic "Direct3D HAL" device, and the Lobby demo looks plain horrible. I see no characters/persons, for example - just the objects they're using/wearing (like sunglasses or weapons), some particle effects, and the lobby itself. The car demo works (though that tests just DirectX 8.1, AFAIK), for example. I didn't check the others.
--- Comment #53 from P.Panon ppanon@shaw.ca 2010-02-10 10:42:31 --- (In reply to comment #48)
R600+ cards(the dx10 ones) support unconditional NP2 textures(GL_ARB_texture_non_power_of_two) just fine, so no quirk is needed for them. We can just use whatever the driver advertises.
Except that according to cruiseoveride, who has an HD4800 running the Mesa driver and so should not require the quirk, his card doesn't work that way. He just tested in comment #44 that rendering was incorrect without the quirk, and in comment #47 that rendering was correct with the quirk. I agree with you about how it should be, but cruiseoveride seems to have determined empirically that the opposite is true.
--- Comment #54 from Stefan Dösinger stefandoesinger@gmx.at 2010-02-10 11:00:28 --- (In reply to comment #53)
Except that according to cruiseoveride, who has an HD4800 running the Mesa driver that should not require the quirk, that his card doesn't work that way.
This smells like a driver bug; it's certainly not a hardware limitation. According to his 3dmark2001 log, the driver advertises GL 2.0 and ARB_texture_non_power_of_two.
I recommend filing a bug against DRI if the issue occurs with the Mesa git tree as well. It will be helpful for the DRI developers if we can isolate the bug into a stand-alone Linux test app that doesn't need Wine (the 3dmark2k1 binary blob and 65k lines of wined3d code don't make debugging easier for the driver devs).
--- Comment #55 from P.Panon ppanon@shaw.ca 2010-02-10 11:10:53 --- After digesting your comments, I was thinking it was a Mesa R600 driver bug too. Simplifying this down to a simple test program for the Mesa DRI R600 driver is well beyond my OpenGL abilities, though. Any chance you could check whether there might be an existing bug report regarding NP2 textures that would be a likely candidate? At least that way I could get the patch in with the quirk active for Mesa/ATI and a TODO comment to disable the quirk for the DRI R600 cards when the driver bug gets fixed.
--- Comment #56 from P.Panon ppanon@shaw.ca 2010-02-10 11:35:44 --- Curan, that sounds like the same problem cruiseoveride was seeing - did you also apply the extra change from #45? When I break the patch up per Stefan's recommendations I'll incorporate that change as well, but I would like to confirm that it fixes things for you as well.
--- Comment #57 from Cùran debian@carbon-project.org 2010-02-10 11:52:01 --- (In reply to comment #56)
[...] did you also apply the extra change from #45?
No; as per your request in comment #46, I only used the patch from comment #42 (attachment 26174). The binaries I used for testing are (again) available from http://dev.carbon-project.org/debian/wine-unstable/. So you want me to confirm that applying the changes from comment #45 fixes the lobby demo for me too, right? I'm on it and will report back as soon as the build is finished.
--- Comment #58 from cruiseoveride cruiseoveride@gmail.com 2010-02-10 13:27:44 --- Just for completeness' sake:
1. Using wine from git (wine-1.1.38-74-gb3b81ab)
   - Ubuntu 9.10 x86_64, 2.3 GHz Phenom X3, 512 MB HD4870
   - 02:00.0 VGA compatible controller: ATI Technologies Inc RV770 [Radeon HD 4870]
   - GL_VERSION: 2.0 Mesa 7.8-devel
   - GL_VENDOR: Advanced Micro Devices, Inc.
   - GL_RENDERER: Mesa DRI R600 (RV770 9440) 20090101 TCL DRI2
2. The patch in comment #42 3. And match_ati_r300_to_500() from comment #45
The lobby demo now works properly (i.e. the models are now non-transparent).
However, with or without any patching:
1. The high-detail Chase demo is extremely slow (2 fps)
2. The high-detail Lobby demo is extremely slow (0 fps)
3. There are missing textures in the Dragothic demo (the people and the dragon itself)
4. Half the screen is just flat blue in the "Advanced Pixel Shader" test
I get 2736 3dmark points (1024x768). Looks like I bought a $200 Riva TNT2 :)
--- Comment #59 from Cùran debian@carbon-project.org 2010-02-10 13:34:17 --- (In reply to comment #56) As announced in comment #57, I've tried the Lobby demo with the modifications from comment #45, but I still don't see the persons/characters. It still looks like what I described in comment #52.
Additionally, I can confirm what cruiseoveride stated in comment #58: the high-quality demos are dead slow. I think the highest FPS value I saw was in the car demo, with 4 FPS.
--- Comment #60 from cruiseoveride cruiseoveride@gmail.com 2010-02-10 14:44:58 --- (In reply to comment #59)
As announced in comment #57 I've tried the Lobby demo with the modifications from comment #45, but I still don't see the persons/characters. It still looks like what I've described in comment #52.
If you have replaced match_ati_r300_to_500() with the version in comment #45 and still don't see the models, that would be really weird. Try make clean, double-check the source files, and then make and make install again.
--- Comment #61 from Cùran debian@carbon-project.org 2010-02-10 14:55:33 --- (In reply to comment #60) No need for such steps; as you might have seen from looking at http://dev.carbon-project.org/debian/wine-unstable/, I'm using a Debian build process involving a clean chroot environment for every build, so no old files are present. Also, an installation of the new packages automatically replaces/removes all old versions. The patches get applied at build time, which I can verify by simply looking at the build logs. But you shouldn't forget that: a) I'm using a different GPU for these tests, b) as Stefan pointed out earlier, this quirk might not work for the FOSS driver, and c) even though the r300 through r500 drivers share a lot of code, not every part is the same. So this quirk might help you but not necessarily me.
--- Comment #62 from P.Panon ppanon@shaw.ca 2010-02-10 15:14:12 --- What Cùran said. :-) BTW, the slow performance on the high-quality demos may indicate use of Gallium3D features that are still only supported via the software pipeline.
There are a limited number of OpenGL applications for Linux for the Mesa developers to work with, and a lot of them are based on a small number of FPS engines. If we get this running, then I would think that they could start running Direct3D apps in Wine to generate OpenGL traces and have some new test scenarios to work with for quality assurance on rendering.
So yeah, right now the open source drivers only give your $200 card the performance of a RIVA TNT2, but a year ago it would have been a 3D doorstop without the fglrx binary drivers. If we get this in place, it's another step towards increasing that effective performance. Maybe we can suggest to this guy - http://free3d.org/ - that he start using Wine & 3DMark2001 to replace glxgears. It seems to me that Quake III Arena was pretty commonly used as a graphics benchmark once upon a time, and it runs natively on Linux, so I'm not sure why he's not using that.
--- Comment #63 from Stefan Dösinger stefandoesinger@gmx.at 2010-02-10 15:16:46 --- I recommend keeping track of the other bugs in a separate bug report to make following each issue easier. Beyond that, I am opposed to putting an np2-disable quirk for Mesa r600+ cards into the Wine git tree, because the Mesa drivers are under heavy development and such quirks tend to hurt in the long run. Peeking around at what changes the driver behavior and documenting it in a bug report is a good thing, though.
There are no written rules about which quirks are accepted into git, but it goes somewhat like this. A quirk is OK if:
* It is using an unspecified driver behavior to our advantage(e.g. the specular color quirk or the normalized texrect quirk) * There's a driver bug that is unlikely to get fixed anytime soon(e.g. legacy proprietary driver, or Apple drivers) and it hurts a wide range of apps(e.g. NVTS disable quirk on OSX) * A genuine hardware limitation that cannot be adequately queried from GL(NP2 texture disable on Geforce FX, NP2 texture disable on r300-r500) * A driver bug that causes a kernel panic or X server crash when running the wine tests(Otherwise people are afraid of running "make test"). E.g. point sprite quirk on fglrx.
Since it is comparatively easy to get an open source driver fixed, we try to avoid quirks for them, except if they're in the last category (people are often running old drivers, working on a non-d3d area of Wine, run "make test" and get a kernel panic).
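To make the mechanism behind the criteria above concrete: a quirk pairs a match predicate with an apply function in a table, and only fires when the predicate recognizes the driver. The sketch below is a simplified, self-contained model; the struct, field, and quirk names are illustrative, not the real wined3d declarations.

```c
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins for the real wined3d types; names are illustrative. */
struct gl_info
{
    const char *gl_vendor;
    const char *gl_renderer;
    unsigned int quirks;
};

#define QUIRK_NP2_DISABLED 0x1

struct driver_quirk
{
    int (*match)(const struct gl_info *info);
    void (*apply)(struct gl_info *info);
    const char *description;
};

/* Hypothetical match predicate: classic Mesa DRI R300-class renderer strings. */
static int match_mesa_r300(const struct gl_info *info)
{
    return strstr(info->gl_renderer, "DRI R300") != NULL;
}

static void quirk_disable_np2(struct gl_info *info)
{
    info->quirks |= QUIRK_NP2_DISABLED;
}

static const struct driver_quirk quirk_table[] =
{
    { match_mesa_r300, quirk_disable_np2, "Disable NP2 textures on Mesa R300" },
};

/* A quirk only takes effect when its match predicate recognizes the driver. */
static void apply_quirks(struct gl_info *info)
{
    size_t i;

    for (i = 0; i < sizeof(quirk_table) / sizeof(*quirk_table); ++i)
    {
        if (!quirk_table[i].match(info)) continue;
        printf("Applying quirk \"%s\".\n", quirk_table[i].description);
        quirk_table[i].apply(info);
    }
}
```

The point of the table shape is that adding or removing a quirk later (as Stefan suggests will be necessary once Mesa improves) is a one-line change.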
http://bugs.winehq.org/show_bug.cgi?id=21515
Stefan Dösinger stefandoesinger@gmx.at changed:
What |Removed |Added ---------------------------------------------------------------------------- CC|stefan@codeweavers.com |
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #64 from Stefan Dösinger stefandoesinger@gmx.at 2010-02-10 15:26:15 --- Fwiw(and getting off-topic), glxgears is a really lousy benchmark, but I guess everyone knows this by now.
Quake 3 is a lousy benchmark for today's cards as well. No shaders, no FBOs, no nothing except a few textured triangles with fog. The game gets quite decent rendering results with that though.
3DMark2000 and 2k1 are pretty outdated as well. They do use shaders though.
For really informative driver benchmarking I recommend Half Life 2, Team Fortress 2, 3Dmark 2003, 3Dmark 2000, 3Dmark 2001, and UT2004 (native if you want to). 3Dmark2k is pointless on its own, but still good for testing legacy features. It points out quite bad fixed function rendering performance on fglrx, for example, that isn't spotted by HL2 and TF2.
http://84.112.174.163/~stefan/imac/halflife2/results.php http://84.112.174.163/~stefan/imac/3dmark2000/results.php (first-gen iMac running Linux)
The HL2 results are pretty competitive for a Radeon X1600 (except the last few, due to a known Wine regression), while the 5500 3dmarks are pretty bad. My Radeon 9000 Mobile got around 14,000 on Windows I think, and 6000 on old Mesa drivers the last time I ran it. (Don't ask me about the recent uptick. I noticed it just now.)
Even more serious benchmarks, though those games run into a few Wine bugs, so they're less useful for Wine developers:
* Left 4 Dead
* Call of Duty: Modern Warfare 2
* Unreal Tournament 3
* 3Dmark 2006
So, I'll try to listen to my own advice from the last post and stay on topic from now on...
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #65 from Cùran debian@carbon-project.org 2010-02-10 15:58:03 --- (In reply to comment #62)
BTW, the slow performance on high quality demos may indicate the use of Gallium3D features that are still only supported via the software pipeline.
Just for the record: I'm not using Gallium (when I mentioned Gallium in comment #50 I merely wanted to point out that Gallium already claims to support a higher OpenGL level, even though part of that is still just a software implementation). I'm using the radeon driver, version 6.12.4 with some updates from Git (see http://packages.debian.org/changelogs/pool/main/x/xserver-xorg-video-ati/current/changelog#versionversion1:6.12.4-3). So I think if it falls back to software on my system, it would be some Mesa rasterizer (though I was of the opinion that everything up to OpenGL 1.5 is hardware accelerated with this driver). But don't ask me, I'm not an expert on X stuff. ;)
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #66 from cruiseoveride cruiseoveride@gmail.com 2010-02-10 16:11:47 --- I too am not using Gallium 3D. R600+ Gallium is still in its infancy.
I can't run 3dmark2003 because it complains about not having hardware that supports DXT1 and DXT3. And HL2 performance on my machine is better measured in SPF than in FPS. Suddenly the antiquated 3dmark2001se seems like a great benchmark :)
If there are any other traces you need from a HD4870 on Linux please let me know. I don't think I'm going to have this card for much longer (going to buy an Nvidia so I can play COD6-MW2).
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #67 from P.Panon ppanon@shaw.ca 2010-02-10 16:34:17 --- Thanks for all your input on attacking this bug, Stefan. Regarding your point in comment #49 about broken 32bit-compat libs, I suspect you are correct. That was also Robert Hooker's suggestion when I filed an Ubuntu bug report (https://bugs.launchpad.net/bugs/515933) and why I switched to Lucid Alpha 2. Since I still have the problem, though, I would prefer to get them to fix the packaging issue so that it's in the distribution and I don't have to maintain it. I therefore want to keep my system as a plain-jane Lucid setup to test any updated packages they might produce, rather than having to clean everything out later and reinstall packages frequently.
Regarding comment #63, I don't suppose that if we opened a new bug for the R600 NP2, you would temporarily accept a separate patch for an np2 workaround for the MESA ATI R600 if it can easily be rolled back when the Mesa ATI developers fix their bug? I figure the answer is going to be no, but I thought I would try. :-)
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #68 from P.Panon ppanon@shaw.ca 2010-02-10 16:46:20 --- Re comment #66, thanks for opening this bug and for your help in testing so far. It made me decide to get involved instead of just waiting for someone else to deal with it. You might still want to hang on to that card in a drawer because, as Stefan put it, the Mesa ATI drivers have been undergoing significant development. I would also expect the D3D->OpenGL translation layer in Wine to exact a significant performance penalty with the NVidia blobs as well.
On the other hand, I think another contributor to this bug report was looking to upgrade from an R300. :-)
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #69 from P.Panon ppanon@shaw.ca 2010-02-10 22:50:20 --- I have opened a bug report for the R600 driver at https://bugs.freedesktop.org/show_bug.cgi?id=26525
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #70 from P.Panon ppanon@shaw.ca 2010-02-11 00:31:50 --- Hi Cruiseoverride,
Since you kindly offered, :-) would you be able to get screenshots of the lobby scene with and without the NP2 quirk active? If you could then upload them as attachments to the R600 driver bug link I put in comment #69, that would help.
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #71 from P.Panon ppanon@shaw.ca 2010-02-11 03:31:19 --- Created an attachment (id=26197) --> (http://bugs.winehq.org/attachment.cgi?id=26197) tarball of 5 sequential patches to 1.1.38
Patch sequence per comment #49
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #72 from Stefan Dösinger stefandoesinger@gmx.at 2010-02-11 04:54:23 --- The patches look pretty good, just a few comments:
Patch 2: HW_VENDOR_MESA isn't needed any longer, just use HW_VENDOR_WINE if the real HW vendor isn't properly detected.
Patch 3:
-    /* if (match_apple(gl_info, gl_renderer, gl_vendor, card_vendor, device)) return FALSE;
-    if (strstr(gl_renderer, "DRI")) return FALSE;    Filter out Mesa DRI drivers. */
     return TRUE;

Yes, shouldn't be necessary, just remove both lines. I'd also invert the gl_vendor check: if (gl_vendor == GL_VENDOR_ATI) return TRUE; else return FALSE;
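The inversion Stefan suggests could look like this minimal sketch (BOOL, the enum, and the function name are stand-ins for the wined3d definitions, with the parameter list reduced to the one value the check uses):

```c
/* Stand-ins for wined3d's BOOL and GL vendor enum; names follow the
 * snippets quoted in this thread, not the exact Wine headers. */
typedef int BOOL;
#define TRUE 1
#define FALSE 0

enum wined3d_gl_vendor { GL_VENDOR_WINE, GL_VENDOR_ATI, GL_VENDOR_APPLE, GL_VENDOR_MESA };

/* Inverted form: match succeeds only for the ATI GL vendor, so every
 * other vendor fails fast without further renderer-string checks. */
static BOOL match_ati_vendor(enum wined3d_gl_vendor gl_vendor)
{
    return gl_vendor == GL_VENDOR_ATI ? TRUE : FALSE;
}
```

The benefit of the inverted form is that new GL vendors added later are rejected by default instead of accidentally matching.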
Patch 4:
-    for (i = 0; i < (sizeof(vendor_card_select_table) / sizeof(*vendor_card_select_table)); ++i)
-    {
         if ((vendor_card_select_table[i].gl_vendor != *gl_vendor)
                 || (vendor_card_select_table[i].card_vendor != *card_vendor))
             continue;
         TRACE_(d3d_caps)("Applying card_selector \"%s\".\n", vendor_card_select_table[i].description);
         return vendor_card_select_table[i].select_card(gl_info, gl_renderer, vidmem);
     }
-    /* Default to generic Nvidia hardware based on the supported OpenGL extensions. The choice
I recommend checking the select_card result, to allow the card detection code to return an unknown card (e.g. 0x0000) if the detection fails for some reason. Then the loop can abort and use the generic nvidia card selection code below.
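The shape Stefan is suggesting can be sketched as a self-contained model rather than the real wined3d code (the selector, enum names, and PCI ids here are illustrative): a selector may return an "unknown" sentinel, in which case the loop stops and the generic fallback runs.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative stand-ins; the real wined3d enums and PCI ids differ. */
enum gl_vendor { VENDOR_WINE, VENDOR_ATI, VENDOR_NVIDIA };
enum pci_device
{
    CARD_UNKNOWN = 0x0000,            /* "detection failed" sentinel */
    CARD_ATI_RADEON_HD4870 = 0x9440,  /* id illustrative */
    CARD_NVIDIA_GEFORCE_6200 = 0x0221 /* id illustrative */
};

/* Hypothetical selector: returns CARD_UNKNOWN instead of guessing when
 * the renderer string is not recognized. */
static enum pci_device select_card_ati_mesa(const char *gl_renderer)
{
    if (strstr(gl_renderer, "RV770")) return CARD_ATI_RADEON_HD4870;
    return CARD_UNKNOWN;
}

struct card_selector
{
    enum gl_vendor gl_vendor;
    enum pci_device (*select_card)(const char *gl_renderer);
    const char *description;
};

static const struct card_selector selector_table[] =
{
    { VENDOR_ATI, select_card_ati_mesa, "Mesa AMD/ATI driver" },
};

static enum pci_device guess_card(enum gl_vendor gl_vendor, const char *gl_renderer)
{
    size_t i;

    for (i = 0; i < sizeof(selector_table) / sizeof(*selector_table); ++i)
    {
        enum pci_device device;

        if (selector_table[i].gl_vendor != gl_vendor) continue;
        device = selector_table[i].select_card(gl_renderer);
        if (device != CARD_UNKNOWN) return device;
        break; /* Selector matched but failed; fall through to the generic guess. */
    }
    /* Generic extension-based fallback would go here. */
    return CARD_NVIDIA_GEFORCE_6200;
}
```

The design point is that "no selector matched" and "a selector matched but couldn't identify the card" both end up at the same generic path, so a FIXME there catches every unhandled driver.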
     /* Geforce6/7 lowend */
     /* If it's GL_VENDOR_APPLE, then it could also be an ATI card, so allow it to fall through */
     *vidmem = 64; /* */
     return CARD_NVIDIA_GEFORCE_6200; /* Geforce 6100/6150/6200/7300/7400/7500 */
Why would we end up in select_card_nvidia_binary on OSX with an ATI card?
Patch 5:
+enum wined3d_pci_device select_card_intel_mesa(const struct wined3d_gl_info *gl_info, const char *gl_renderer,
+        unsigned int *vidmem)
+{
+    FIXME_(d3d_caps)("Card selection not handled for Mesa Intel driver\n");
+    if (WINE_D3D9_CAPABLE(gl_info)) return CARD_NVIDIA_GEFORCEFX_5600;
This will lead to PCI vendor 0x8086, PCI device 0x0312, which doesn't exist. Some other nvidia card id, if interpreted as an Intel device id, might be a SATA RAID controller *gg*
That ties into the suggestion concerning patch 4: allow the detection functions to fail, and add a FIXME to the generic nvidia guessing code, something like FIXME("Implement a card ID detection function for GL vendor %s, HW vendor %s\n", debug_gl_vendor(gl_vendor), debug_hw_vendor(hw_vendor));
General: I don't know how you generated the patches, but the patch format (filename, lack of author info, etc.) is unknown to me. I recommend using git-format-patch; it allows you to add a description of what the patch does into the patch, and adds your name and email address. There are some higher-level tools like stacked git that can do this as well.
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #73 from Cùran debian@carbon-project.org 2010-02-11 08:24:44 --- FYI: I've built new Debian packages for Sid, available from http://dev.carbon-project.org/debian/wine-unstable/ with the patch series from attachment 26197. As the patches weren't -pab formatted and some of them contained lines for the resulting files with superfluous suffixes (like the ".p1" in "dlls/wined3d/directx.c.p1"), I've refreshed the patches (quilt to the rescue) and added DEP-3 compliant headers (http://dep.debian.net/deps/dep3/). The result is in debian/patches in the .debian.tar.bz2 file.
http://bugs.winehq.org/show_bug.cgi?id=21515
P.Panon ppanon@shaw.ca changed:
What |Removed |Added ---------------------------------------------------------------------------- Attachment #26197|0 |1 is obsolete| |
--- Comment #74 from P.Panon ppanon@shaw.ca 2010-02-12 00:44:47 --- Created an attachment (id=26219) --> (http://bugs.winehq.org/attachment.cgi?id=26219) tarball of 5 sequential patches to 1.1.38
(In reply to comment #72)
The patches look pretty good, just a few comments:
Patch 2: HW_VENDOR_MESA isn't needed any longer, just use HW_VENDOR_WINE if the real HW vendor isn't properly detected.
Patch 3:
-    /* if (match_apple(gl_info, gl_renderer, gl_vendor, card_vendor, device)) return FALSE;
-    if (strstr(gl_renderer, "DRI")) return FALSE;    Filter out Mesa DRI drivers. */
     return TRUE;

Yes, shouldn't be necessary, just remove both lines. I'd also invert the gl_vendor check: if (gl_vendor == GL_VENDOR_ATI) return TRUE; else return FALSE;
Patch 4:
-    for (i = 0; i < (sizeof(vendor_card_select_table) / sizeof(*vendor_card_select_table)); ++i)
-    {
         if ((vendor_card_select_table[i].gl_vendor != *gl_vendor)
                 || (vendor_card_select_table[i].card_vendor != *card_vendor))
             continue;
         TRACE_(d3d_caps)("Applying card_selector \"%s\".\n", vendor_card_select_table[i].description);
         return vendor_card_select_table[i].select_card(gl_info, gl_renderer, vidmem);
     }
-    /* Default to generic Nvidia hardware based on the supported OpenGL extensions. The choice
I recommend checking the select_card result, to allow the card detection code to return an unknown card (e.g. 0x0000) if the detection fails for some reason. Then the loop can abort and use the generic nvidia card selection code below.
The original card detection routine had a default Lowest Common Denominator card choice for each hardware vendor. I don't see why that needs to change, since that practice actually makes more sense to me than falling through to a default of an unrelated vendor's product. The change you're asking for is also completely orthogonal to this bug and is not a necessary refactoring. Changing it is more likely to break existing behaviour and cause a regression, which would hold up this time-sensitive patch.
     /* Geforce6/7 lowend */
     /* If it's GL_VENDOR_APPLE, then it could also be an ATI card, so allow it to fall through */
     *vidmem = 64; /* */
     return CARD_NVIDIA_GEFORCE_6200; /* Geforce 6100/6150/6200/7300/7400/7500 */
Why would we end up in select_card_nvidia_binary on OSX with an ATI card?
That was left over from one of my earlier attempts at dealing with card detection before refactoring that function per your recommendation.
Patch 5:
+enum wined3d_pci_device select_card_intel_mesa(const struct wined3d_gl_info *gl_info, const char *gl_renderer,
+        unsigned int *vidmem)
+{
+    FIXME_(d3d_caps)("Card selection not handled for Mesa Intel driver\n");
+    if (WINE_D3D9_CAPABLE(gl_info)) return CARD_NVIDIA_GEFORCEFX_5600;
This will lead to a PCI vendor 0x8086, pci device 0x0312 which doesn't exist. Some other nvidia card if interpreted as an Intel device might be a SATA Raid controller *gg*
Point taken. I've set it to default to the LCD for the existing Intel binary detection. They appear to officially all be DX9 capable in Windows, even if the Linux blobs, mesa drivers, or Apple GL drivers don't support all the necessary capabilities for DX9 emulation, so there's not much difference in the expectations of a windows app.
That ties into the suggestion concerning patch 4: allow the detection functions to fail, and add a FIXME to the generic nvidia guessing code, something like FIXME("Implement a card ID detection function for GL vendor %s, HW vendor %s\n", debug_gl_vendor(gl_vendor), debug_hw_vendor(hw_vendor));
General: I don't know how you generated the patches, but the patch format(filename, lack of author info etc) is unknown to me.
It's called diff -u
I recommend to use git-format-patch, it allows you to add a description of what the patch does into the patch, and adds your name and Email address. There are some higher level tools like stacked git that can do this as well.
Thanks for the recommendation, perhaps at some point in time I'll find it useful, but I'm just not interested in learning your source control tool chain at this time. I just don't have time to set it up and learn it right now. If you're basically satisfied with the technical approach to the patch then to get this patch in I would suggest you or Luca try something like:
for i in `seq 1 5`; do patch -p0 < directx.c.p$i; [whatever check-in command you use]; done
I also no longer have time to play another round or more of "Now please fix this too". This updated patch archive addresses the remaining serious issues you've raised, so there doesn't seem to be any major reason to hold it up. If you want to make some additional minor changes, like your card_detect FIXME suggestion above (which is redundant since it already happens to a certain extent at vendor detection), you're welcome to unroll the above loop and make edits between the patch command and the check-in.
I wanted this to get into Ubuntu by a certain date, and with Cùran's announcement in comment #73 that it's going into Debian unstable, there's a good chance it will be picked up. Banging my head here trying to satisfy you isn't going to change those chances significantly. My itch is scratched. Thanks for your past help. Peace out.
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #75 from Cùran debian@carbon-project.org 2010-02-12 09:43:17 --- (In reply to comment #74)
I wanted this to get into Ubuntu by a certain date, and with Curan's announcement in comment #73 that it's going into Debian unstable, there's a good chance it will be picked up.
Oh, that is a sad misunderstanding: I'm not part of the maintainer team of Wine in Debian (even though I maintain packages for Debian). Therefore my packages for Sid (Unstable) are unofficial and will never enter Debian's archives. The next official Debian package is held up by gcc-mingw32, see http://bugs.debian.org/557783#35. So my packages are primarily for my own consumption (and based on the last official package, with the changes documented in debian/changelog) or other interested parties.
In any case: thank you very much for your work! I'll do another round of (unofficial) packages.
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #76 from P.Panon ppanon@shaw.ca 2010-02-12 10:42:00 --- Ah, too bad. Still, I don't think there's any remaining reason why this patch set can't be incorporated into Wine for testing at this time. It provides needed functionality and I really have run out of time to spend fixing other people's bugs just to get it approved.
http://bugs.winehq.org/show_bug.cgi?id=21515
P.Panon ppanon@shaw.ca changed:
What |Removed |Added ---------------------------------------------------------------------------- Attachment #26219|0 |1 is obsolete| |
--- Comment #77 from P.Panon ppanon@shaw.ca 2010-02-14 03:43:06 --- Created an attachment (id=26247) --> (http://bugs.winehq.org/attachment.cgi?id=26247) tarball of 5 sequential patches to 1.1.38
Sigh. In the attachment for comment #74, I meant to upload some of the fixes I said I would make, but I screwed up the tar command and wound up overwriting what I'd done for the previous version instead of updating it. Here's what I meant to upload for comment #74. I apologize for the accident.
BTW, I got to check firsthand that the patch part of that for loop works, as does the unroll/edit, :-) so I'm pretty sure that, if you added your check in command(s) as indicated, it would work.
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #78 from Cùran debian@carbon-project.org 2010-02-18 10:52:49 --- Created an attachment (id=26299) --> (http://bugs.winehq.org/attachment.cgi?id=26299) Patch series from attachment 26247; slightly edited to be applicable with patch or quilt.
(In reply to comment #77) Is it possible that during your edit you introduced some whitespace into attachment 26247? Because patch complained: "patch: **** Only garbage was found in the patch input." And only after I ran the patches through:
sed -e 's/--- dlls/--- a\/dlls/g' -e 's/+++ dlls/+++ b\/dlls/g' -e 's/--- dlls/--- a\/dlls/g' -e 's/+++ dlls/+++ b\/dlls/g'
(yes, only the first two replace commands were needed to make patch happy), I could apply them. I'm using patch version 2.6, and patch was invoked by quilt.
Anyway, I've refreshed the patches with quilt, added a header to them, and attached them with this comment to the bug. This hopefully helps others apply the patches "out of the box". The patches in this attachment are -p1 formatted. (Just to make sure: this comment isn't intended to give offence!)
http://bugs.winehq.org/show_bug.cgi?id=21515
Cùran debian@carbon-project.org changed:
What |Removed |Added ---------------------------------------------------------------------------- Attachment #26299|application/octet-stream |application/x-bzip2 mime type| |
http://bugs.winehq.org/show_bug.cgi?id=21515
Edward vbgraphix2003@hotmail.com changed:
What |Removed |Added ---------------------------------------------------------------------------- CC| |vbgraphix2003@hotmail.com
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #79 from P.Panon ppanon@shaw.ca 2010-02-25 04:47:49 --- Thanks Cùran.
I must admit that I'm quite surprised that, one week after your cleaned-up patch, nothing seems to have happened with it, even though Stefan had been quite helpful in providing feedback earlier. I'm not sure whether that means it's queued up for testing or not, but it's interesting to note that 1.1.39 has since been released.
Regarding your results on attachments 26185/7, it looks like the card is being treated as a CARD_ATI_RADEON_8500:

trace:d3d_caps:wined3d_guess_card Applying card_selector "Mesa AMD/ATI driver".
trace:d3d_caps:IWineD3DImpl_FillGLCaps FOUND (fake) card: 0x1002 (vendor id), 0x514c (device id)

The renderer string should match and set the device to CARD_ATI_RADEON_9500, so the Mesa R300 driver must be failing one of the tests Wine uses to determine whether DirectX 9 3D support is possible, and is then defaulting to a card model that only supports DirectX 8. The driver_version_table[] only contains entries for ATI and NVidia DirectX 9 cards, so a default value is passed on to the Windows app. This correlates with your observations in comment #52.
#define WINE_D3D9_CAPABLE(gl_info) WINE_D3D8_CAPABLE(gl_info) && (gl_info->supported[ARB_FRAGMENT_PROGRAM] && gl_info->supported[ARB_VERTEX_SHADER])

Since the card passes the D3D8 test and RadeonFeature indicates the R300 driver should support vertex and fragment shaders, the following trace lines make me think those two features should be supported by the driver. Maybe the driver is reporting something wrong in gl_info?

trace:d3d_caps:IWineD3DImpl_FillGLCaps Max ARB_FRAGMENT_PROGRAM float constants: 256.
trace:d3d_caps:IWineD3DImpl_FillGLCaps Max ARB_FRAGMENT_PROGRAM native float constants: 32.
trace:d3d_caps:IWineD3DImpl_FillGLCaps Max ARB_FRAGMENT_PROGRAM native temporaries: 32.
trace:d3d_caps:IWineD3DImpl_FillGLCaps Max ARB_FRAGMENT_PROGRAM native instructions: 96.
trace:d3d_caps:IWineD3DImpl_FillGLCaps Max ARB_FRAGMENT_PROGRAM local parameters: 96.
trace:d3d_caps:IWineD3DImpl_FillGLCaps Max ARB_VERTEX_PROGRAM float constants: 256.
trace:d3d_caps:IWineD3DImpl_FillGLCaps Max ARB_VERTEX_PROGRAM native float constants: 256.
trace:d3d_caps:IWineD3DImpl_FillGLCaps Max ARB_VERTEX_PROGRAM native temporaries: 32.
trace:d3d_caps:IWineD3DImpl_FillGLCaps Max ARB_VERTEX_PROGRAM native instructions: 255.
So I'm not sure why that test is failing. However it shouldn't be due to code I've touched. A bigger question is why the vidmem settings aren't getting passed back. I didn't see a similar screenshot from cruiseoverride, but if his videomemory values are also zero, then those aren't being passed back somehow, which might explain poor rendering performance if the benchmark app thinks it doesn't have any card memory available.
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #80 from P.Panon ppanon@shaw.ca 2010-02-25 04:56:41 --- Hmm, regarding vidmem: since it's defined as an unsigned int, I'm wondering if that could be an issue with int size on amd64.
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #81 from Henri Verbeet hverbeet@gmail.com 2010-02-25 05:05:13 --- (In reply to comment #79)
I must admit that I'm quite surprised that, 1 week after your cleaned up patch, nothing seems to have happened with this patch even though Stefan had been quite helpful in providing feedback earlier. Not sure if that means it's queued up for testing or not but it's interesting to note that 1.1.39 has since been released.
Patches aren't picked up from bugzilla, you should submit them to wine-patches when you think they're ready.
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #82 from Edward vbgraphix2003@hotmail.com 2010-02-25 07:10:08 --- (In reply to comment #79)
The rendering string should be matching to set the device to CARD_ATI_RADEON_9500, so the Mesa R300 driver must be failing one of the tests that Wine is using to determine whether DirectX 9 3D support is possible, and then defaulting to a card model that only supports DirectX 8.
I removed the dx9 test in directx.c just to check and got a fresh new bug to play with... regarding command stream errors in the kernel. That one is a pure driver error... which is almost worse because now all hope is lost until new driver code comes along to fix it. I've seen it reported elsewhere a bunch of times so I'm sure they are well aware.
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #83 from Edward vbgraphix2003@hotmail.com 2010-02-25 07:12:33 --- (In reply to comment #80)
Hmm, regarding vidmem: since it's defined as an unsigned int, I'm wondering if that could be an issue with int size on amd64.
That could well be a confounding factor, and it does seem like all the bug reports are 64-bit specific.
Anyone know how we could go about fixing the unsigned ints?
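For what it's worth, unsigned int is the same width on i386 and amd64 Linux (ILP32 and LP64 both keep int at 32 bits; only long and pointers widen on amd64), so the declared width of the field by itself shouldn't differ between the two architectures. A quick standalone probe (not Wine code) that prints the relevant widths:

```c
#include <stdio.h>

/* Print the type widths relevant to the vidmem question. On Linux,
 * i386 is ILP32 and amd64 is LP64: int stays 32 bits on both, and
 * only long and pointers widen to 64 bits on amd64. */
static void print_type_widths(void)
{
    printf("sizeof(unsigned int)  = %zu\n", sizeof(unsigned int));
    printf("sizeof(unsigned long) = %zu\n", sizeof(unsigned long));
    printf("sizeof(void *)        = %zu\n", sizeof(void *));
}
```

If amd64 behaves differently, a long/int or pointer-size mismatch in a caller would be a more likely culprit than the field's own width.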
http://bugs.winehq.org/show_bug.cgi?id=21515
Cùran debian@carbon-project.org changed:
What |Removed |Added ---------------------------------------------------------------------------- Attachment #26299|0 |1 is obsolete| |
--- Comment #84 from Cùran debian@carbon-project.org 2010-02-25 08:55:12 --- Created an attachment (id=26478) --> (http://bugs.winehq.org/attachment.cgi?id=26478) Patch series from attachment 26299 refreshed for 1.1.39
(In reply to comment #79) I've refreshed the patch series again to apply cleanly on top of 1.1.39. I've attached the current version to this comment. The attached version of the patch series is applied to the Wine packages available from http://dev.carbon-project.org/debian/wine-unstable/ (binary and source packages available in case somebody else likes to try it) and can be found in the debian/patches directory (in wine-unstable_1.1.39-0.1.debian.tar.bz2). With regard to the wrong capabilities detection: Is there something I can do to help get the correct detection into Wine?
(In reply to comment #80)
Hmm, regarding vidmem: since it's defined as an unsigned int, I'm wondering if that could be an issue with int size on amd64.
Just FYI: all R300 tests I've run were done on i386.
http://bugs.winehq.org/show_bug.cgi?id=21515
Cùran debian@carbon-project.org changed:
What |Removed |Added ---------------------------------------------------------------------------- Attachment #26478|text/plain |application/x-bzip2 mime type| | Attachment #26478|1 |0 is patch| |
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #85 from P.Panon ppanon@shaw.ca 2010-02-25 17:51:02 --- Created an attachment (id=26493) --> (http://bugs.winehq.org/attachment.cgi?id=26493) Couple of additional trace statements
Hi. Perhaps Edward or Cùran could try temporarily applying this patch to add a couple of extra trace statements and then upload a d3d_caps trace. This will let us see whether it is ARB_FRAGMENT_PROGRAM or ARB_VERTEX_SHADER that's the issue on the R300. It will also verify what's being set in the gl_info->vidmem field after card detection.
http://bugs.winehq.org/show_bug.cgi?id=21515
Cùran debian@carbon-project.org changed:
What |Removed |Added ---------------------------------------------------------------------------- Attachment #26185|0 |1 is obsolete| |
--- Comment #86 from Cùran debian@carbon-project.org 2010-02-25 20:09:44 --- Created an attachment (id=26497) --> (http://bugs.winehq.org/attachment.cgi?id=26497) 3D application with R300 and patches from attachment 26478 and attachment 26493 (WINEDEBUG=+d3d_caps)
(In reply to comment #85) As can also be seen from attachment 25998, the R300 driver doesn't support ARB_vertex_shader, which isn't surprising given that it was added in OpenGL 2.0 (http://www.opengl.org/registry/doc/glspec32.core.20091207.pdf, I.3.26, page 372) and the radeon driver only claims to support OpenGL 1.5 (see http://wiki.x.org/wiki/RadeonFeature), so it's displayed as "not supported" in the output I've attached to this comment. (Fake) video memory and ARB_fragment_program are detected/supported.
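As a side note, checks against the space-separated GL_EXTENSIONS list have to match whole tokens, or a search for a prefix like "GL_ARB_vertex" would false-positive on "GL_ARB_vertex_program" even when "GL_ARB_vertex_shader" is absent, exactly the R300 situation described above. A small self-contained sketch (not Wine's actual parser):

```c
#include <string.h>

/* Check a space-separated GL_EXTENSIONS-style list for an exact token.
 * A sketch only; wined3d does its own tokenized parsing. */
static int has_extension(const char *extensions, const char *ext)
{
    size_t len = strlen(ext);
    const char *p = extensions;

    while ((p = strstr(p, ext)))
    {
        /* Accept only matches bounded by start/end of string or spaces. */
        if ((p == extensions || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
            return 1;
        p += len;
    }
    return 0;
}
```

With a whole-token check, an R300-style extension list correctly reports ARB_fragment_program as present and ARB_vertex_shader as missing.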
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #87 from Edward vbgraphix2003@hotmail.com 2010-02-25 21:06:50 --- (In reply to comment #84)
Created an attachment (id=26478)
--> (http://bugs.winehq.org/attachment.cgi?id=26478) [details]
Patch series from attachment 26299 [details] refreshed for 1.1.39
I was wondering... I have also been using Gallium with my r300, and modified your patch to detect Gallium rendering, but did it in a way only applicable to my card.
How could we go about detecting Gallium renderers? I'm assuming we just tack it on to the IF tests since you can only return each card once.
This whole business of grabbing the output of glxinfo seems rather hackish, though, especially since they keep changing the text. Wish there were a better way.
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #88 from P.Panon ppanon@shaw.ca 2010-02-25 23:29:27 --- Thanks Cùran.
Edward, I'm afraid I have no idea how to detect Gallium renderers or how to enable them for that matter. I think what I'm going to do is submit the patch(es) as updated by Cùran in attachment 26478. At least that gets Direct3D8 support for R300-R500 and Direct3D9 support for R600-R700. That's better than what you would get for the R3/4/500 with fglrx (since those are no longer supported and maintained in recent revs), and provides open source support for the R6/700 series. It also provides a base for somebody to add Gallium support/detection later so it would be good to get that included into the main trunk so that we don't need to get Cùran to keep on refreshing the patch.
Either someone (else, since I don't have an R3/4/500) can add a new patch in this bug or open a new bug for adding Gallium detection and support.
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #89 from Edward vbgraphix2003@hotmail.com 2010-02-26 07:19:47 --- (In reply to comment #88)
Thanks Cùran.
Edward, I'm afraid I have no idea how to detect Gallium renderers or how to enable them for that matter.
I am assuming that you just use the same format as before, but use the new Gallium glxinfo output.
http://www.phoronix.com/forums/showthread.php?t=21708&page=2
It worked for me anyways... my Morrowind Launcher program detects my card as X1600 after this... which is close enough.
So in the p5 patch, for instance, one part would look like this
/* Radeon R5xx */
if (strstr(gl_renderer, "Gallium 0.4 on R520")
        || strstr(gl_renderer, "RV535")
        || strstr(gl_renderer, "RV560")
        || strstr(gl_renderer, "RV570")
        || strstr(gl_renderer, "RS690")
        || strstr(gl_renderer, "R580"))
{
    *vidmem = 128; /* X1600 uses 128-256MB, >=X1800 uses 256MB */
    return CARD_ATI_RADEON_X1600;
}
This would need to be done for all of the cards, but you would need to add the old-style strings to the same IF test, because when I used separate IF tests I got compile errors complaining that each card can only be returned once.
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #90 from Edward vbgraphix2003@hotmail.com 2010-02-26 07:22:05 --- Ignore the RS690 part in the code I posted, that goes in a different section.
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #91 from Edward vbgraphix2003@hotmail.com 2010-02-26 07:36:30 --- I just looked up the strstr function and it seems it searches the whole string, not just the beginning or end. Perhaps the easiest way to handle this would be to match on just the card ID, and that should work for both classic Mesa and Gallium.
So that would be:
/* Radeon R5xx */
if (strstr(gl_renderer, "R520")
        || strstr(gl_renderer, "RV530")
        || strstr(gl_renderer, "RV535")
        || strstr(gl_renderer, "RV560")
        || strstr(gl_renderer, "RV570")
        || strstr(gl_renderer, "R580"))
{
    *vidmem = 128; /* X1600 uses 128-256MB, >=X1800 uses 256MB */
    return CARD_ATI_RADEON_X1600;
}
I have absolutely no experience making patches, but this should make the patching much easier and future-proof for when Gallium reaches version 0.5 and beyond.
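Edward's point about strstr can be checked directly: since it matches the needle anywhere in the haystack, testing for the bare chip token covers both renderer-string styles. A self-contained sketch (the renderer strings below are illustrative, not verbatim driver output):

```c
#include <string.h>

/* strstr() finds the needle anywhere in the haystack, so checking for the
 * bare chip token covers both the classic Mesa style ("Mesa DRI RV530 ...")
 * and the Gallium style ("Gallium 0.4 on RV530"). */
static int is_r5xx(const char *gl_renderer)
{
    static const char *chips[] = { "R520", "RV530", "RV535", "RV560", "RV570", "R580" };
    size_t i;

    for (i = 0; i < sizeof(chips) / sizeof(*chips); ++i)
        if (strstr(gl_renderer, chips[i]))
            return 1;
    return 0;
}
```

Note the caveat raised in comment #95, though: tokens like "R300" and "R600" also appear in the DRI renderer prefix itself, so those particular chip names need more careful matching than this sketch does.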
http://bugs.winehq.org/show_bug.cgi?id=21515
--- Comment #92 from Cùran debian@carbon-project.org 2010-02-26 09:06:56 --- (In reply to comment #91)
I just looked up the strstr function and it seems it searches the whole string, not just the beginning or end. Perhaps the easiest way to handle this would be to match on just the card ID, and that should work for both classic Mesa and Gallium.
I'm not sure this is a good idea, as Mesa and Gallium aren't the same drivers and may require different quirks in the future. But this is probably better answered by somebody with knowledge of the internals of Wine's D3D implementation and driver handling. In case your proposition is viable, I'm happy to try to work it into the current patch series (or add it on top of that).
An additional question that comes to mind wrt Gallium: is it ready/usable yet? The developers on #radeon told me recently it is currently lacking hardware acceleration for most of its functionality and is doing a lot in software.
--- Comment #93 from Edward vbgraphix2003@hotmail.com 2010-02-26 09:23:56 --- (In reply to comment #92)
An additional question that comes to mind wrt Gallium: is it ready/usable yet? The developers on #radeon told me recently it is currently lacking hardware acceleration for most of its functionality and is doing a lot in software.
It seems to work for my X1800 and provides me with surprisingly stable compositing... can't tell the difference between Gallium and regular Mesa for this.
Unfortunately, I can't get most games to run properly, even native ones. Nexuiz has some really bad glitching.
Progress is rapid though. These are exciting times for open source graphics. I practically have come to expect daily improvements, which is why I am eager to have Wine working.
--- Comment #94 from Henri Verbeet hverbeet@gmail.com 2010-02-26 09:25:16 --- (In reply to comment #92)
I'm not sure this is a good idea, as Mesa and Gallium aren't the same drivers and may require different quirks in the future. But this is probably better answered by somebody with knowledge of the internals of Wine's D3D implementation and driver handling. In case your proposition is viable, I'm happy to try to work it into the current patch series (or add it on top of that).
We can split classic Mesa and Gallium if needed, but the main use of this code should be for reporting the right card and driver info back to the application. We also have plenty of quirks that use this information, but there's a strong preference to test actual behaviour for quirks instead of depending on the detected card/driver.
--- Comment #95 from P.Panon ppanon@shaw.ca 2010-03-01 03:28:39 --- The reason I used "(..." in the chip-model matching is that two of the chip names are also included in the renderer driver name (DRI R300 and R600). While it might be possible to avoid the open parenthesis by judicious ordering of the tests, I figured doing so would make the code more brittle: even with comments, someone might later rearrange the code and break the tests so that a whole bunch of cards get inappropriately branded as R300 or R600.
Henri's latest post would seem to indicate a preference for avoiding a GL_VENDOR_MESA_GALLIUM addition. Is there likely to be any improved ARB_VERTEX_SHADER support in the DRI R300 driver? My impression from the web site is that any additional R300 support will be provided through Gallium. If that's so, then the current D3D9 R300-R500 tests would never execute, and those cards would always default to D3D8/CARD_ATI_RADEON_9500. So it might be possible to move those comparisons out of the D3D9 block into a separate section that first tests for Gallium in the renderer string and then tests for the chip model without a parenthesis. The R600-R700 comparisons would probably need to be duplicated as well for completeness.
P.Panon ppanon@shaw.ca changed:
What |Removed |Added ---------------------------------------------------------------------------- Attachment #26247|0 |1 is obsolete| |
--- Comment #96 from P.Panon ppanon@shaw.ca 2010-03-01 04:44:13 --- Created an attachment (id=26553) --> (http://bugs.winehq.org/attachment.cgi?id=26553) 1.1.39 patch series for adding Mesa ATI support - added Gallium checks and cleanups suggested by Henri
Hi Cùran,
I'm hoping I can impose on you to check this latest patch set. Unfortunately I've been able to do even less testing than before, because Lucid Alpha3's boot is broken on my machine. With Alpha3 out, Alpha2 no longer installs either, so I've had to roll all the way back to Karmic, which doesn't compile the 1.1.39 package properly. Apart from the new Gallium changes to patch 5, the changes are minor though. Since it includes Henri's suggested corrections, I'm hoping it will get accepted into Wine if it passes your testing.
This feels like working with punch card decks way back when :-)
Cùran debian@carbon-project.org changed:
What |Removed |Added ---------------------------------------------------------------------------- Attachment #26497|0 |1 is obsolete| |
--- Comment #97 from Cùran debian@carbon-project.org 2010-03-01 07:29:00 --- Created an attachment (id=26560) --> (http://bugs.winehq.org/attachment.cgi?id=26560) 3D application with R300 and patches from attachment 26553 and attachment 26493 (WINEDEBUG=+d3d_caps)
(In reply to comment #96)
Unfortunately I've been able to do even less testing than before, because Lucid Alpha3's boot is broken on my machine. With Alpha3 out, Alpha2 no longer installs either, so I've had to roll all the way back to Karmic, which doesn't compile the 1.1.39 package properly.
You should use Debian. (SCNR)
Apart from the new Gallium changes to patch 5, the changes are minor though. Since it includes Henri's suggested corrections, I'm hoping it will get accepted into Wine if it passes your testing. This feels like working with punch card decks way back when :-)
I investigated the Gallium stuff too (as announced earlier in comment #92), but came to the same point you described in comment #95 and therefore wanted to ask here first again; then a busy weekend didn't leave me enough time on my hands. So you beat me and posted the updated patch series, which I've gladly tested. As far as I can tell from my tests (and from the attached log), everything is as before for me with the radeon driver. Therefore I'd say: send it to the Wine patches mailing list, so the patch series gets included.
--- Comment #98 from P.Panon ppanon@shaw.ca 2010-03-01 12:18:42 --- Ah, it just occurred to me that the problem with D3D9 glxinfo matching on the Gallium drivers might just be a temporary glitch, in which case it might succeed at some point in the future and no longer match the cards properly. I'll flip the Gallium tests to come first.
--- Comment #99 from Edward vbgraphix2003@hotmail.com 2010-03-02 06:10:21 --- Great, once that last thing is done we can get this included! What are the odds this will get included in 1.1.40?
--- Comment #100 from P.Panon ppanon@shaw.ca 2010-03-02 12:57:31 --- Henri found one last thing that would cause a problem with the gl_vendor detection, due to my moving the gl_info-based Apple detection code into it before that information is obtained. So I have to figure out how to fix that correctly. If this really is the last thing, then I expect it will get committed. I don't know when 1.1.40 is scheduled, though, so I don't know if it will make it in time (I won't have time to get to it until tonight, so it probably won't be processed until tomorrow at the earliest). I would prefer if it could make it in.
--- Comment #101 from Austin English austinenglish@gmail.com 2010-03-02 13:29:36 --- (In reply to comment #100)
I don't know when 1.1.40 is scheduled though, so I don't know if it will make it in time (I won't have time to get to it until tonight, and so it probably won't be processed until tomorrow at the earliest). I would prefer if it could make it in.
Should be this Friday.
--- Comment #102 from P.Panon ppanon@shaw.ca 2010-03-03 13:30:28 --- Looks like the patch has been accepted, so it should be in 1.1.40. That means we can probably close this bug, since any remaining rendering/display issues appear to be due to the Mesa driver's GL API support level and bugs.
On the other hand, someone else might want to file a bug on the vidmem issue discussed in comments #79 -> #84 so that it gets looked at further. That value appears to be set correctly in Wine's internal data structures by ..._guess_card(), but not used/reported by whatever Direct3D API call 3DMark2001SE is using. That probably significantly affects performance, since the application likely wouldn't attempt to load textures and 3D data into video memory as a result, causing them to be loaded more slowly from main memory instead.
I was thinking that maybe the issue is that the D3D API used is supposed to return dynamic values instead of static maximums, and perhaps those functions haven't been implemented yet. If so, then hopefully they have FIXME/WARN entries, and someone can rerun a trace after 1.1.40 comes out and look for FIXME output indicating which functions they might be.
--- Comment #103 from Stefan Dösinger stefandoesinger@gmx.at 2010-03-03 14:02:35 ---
That probably significantly affects performance since the application likely wouldn't attempt to load textures and 3D data into video memory as a result, causing them to be loaded more slowly from main memory instead.
Reporting too little video memory is unlikely to impact performance negatively, since the location of a texture is entirely up to OpenGL. We don't claim support for texturing from system memory (since Windows drivers can't do that either), and even if we did, OpenGL would still do optimizations behind our back.
What a too-low value usually does is make the app tell you that you have to get a better video card, because it *thinks* it can't run. Or the app might cut back graphics quality.
The only case where a wrong vidmem value negatively affected performance was when we reported too much vidmem: the app tried to use more textures than the card could hold, resulting in the driver constantly purging and reloading textures.
Austin English austinenglish@gmail.com changed:
What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |RESOLVED Resolution| |FIXED
--- Comment #104 from Austin English austinenglish@gmail.com 2010-03-03 16:53:34 --- Patches are in => fixed.
--- Comment #105 from Stefan Dösinger stefandoesinger@gmx.at 2010-03-03 17:03:28 --- By the way, thanks a lot for the hard work to all who have been involved!
--- Comment #106 from cruiseoveride cruiseoveride@gmail.com 2010-03-03 17:36:09 --- Graphics driver developers are reluctant (at least in my experience) to use WINE as a testing platform for improving their drivers, on the basis that it is too time-consuming to distinguish and isolate faults in the graphics stack when so many levels of indirection and hacks are required to make 3D Windows applications run on Linux.
Considering that the major Linux distributions coming out this summer will ship out-of-the-box basic 3D acceleration for both Nvidia and ATi cards through open source drivers, the barrier to entry for gaming on Linux through WINE has come down significantly for the average n00b00n2.
Therefore, can someone provide a guide for the average desktop user on how to distinguish faults between WINE's hackery and the open source graphics stack, in order to debug, trace and document them appropriately in the respective bug trackers and development channels?
--- Comment #107 from cruiseoveride cruiseoveride@gmail.com 2010-03-03 17:46:44 --- By the way, should I be seeing things like:
Mesa: User error: GL_INVALID_OPERATION in glProgramStringARB(relative address offset too large (65))
Mesa: User error: GL_INVALID_OPERATION in glProgramString(bad program)
fixme:d3d_caps:wined3d_guess_card No card selector available for GL vendor 4 and card vendor 0000.
fixme:win:EnumDisplayDevicesW ((null),0,0x33d6d8,0x00000000), stub!
with the patch applied?
My hardware hasn't changed.
GL_VERSION: 2.0 Mesa 7.8-devel
GL_VENDOR: Advanced Micro Devices, Inc.
GL_RENDERER: Mesa DRI R600 (RV770 9440) 20090101 TCL DRI2
wine-1.1.39-288-g74a059d
--- Comment #108 from P.Panon ppanon@shaw.ca 2010-03-03 19:12:43 --- That's interesting. It looks like it's setting gl_vendor correctly to GL_VENDOR_MESA, but card_vendor is being set to HW_VENDOR_WINE instead of HW_VENDOR_ATI. According to http://source.winehq.org/git/wine.git/?a=tree;f=dlls/wined3d;h=efa79e974c711... the git head for directx.c contains
static enum wined3d_pci_vendor wined3d_guess_card_vendor(const char *gl_vendor_string, const char *gl_renderer)
{
    if (strstr(gl_vendor_string, "NVIDIA"))
        return HW_VENDOR_NVIDIA;

    if (strstr(gl_vendor_string, "ATI")
            || strstr(gl_vendor_string, "Advanced Micro Devices, Inc.")
            || strstr(gl_vendor_string, "DRI R300 Project"))
        return HW_VENDOR_ATI;

    if (strstr(gl_vendor_string, "Intel(R)")
            || strstr(gl_renderer, "Intel(R)")
            || strstr(gl_vendor_string, "Intel Inc."))
        return HW_VENDOR_INTEL;

    if (strstr(gl_vendor_string, "Mesa")
            || strstr(gl_vendor_string, "Tungsten Graphics, Inc")
            || strstr(gl_vendor_string, "VMware, Inc."))
        return HW_VENDOR_WINE;

    FIXME_(d3d_caps)("Received unrecognized GL_VENDOR %s. Returning HW_VENDOR_NVIDIA.\n", debugstr_a(gl_vendor_string));

    return HW_VENDOR_NVIDIA;
}
so it should be matching "Advanced Micro Devices, Inc." in the vendor string and using HW_VENDOR_ATI. You could confirm that with a d3d_caps trace and see what "found GL_VENDOR ("... shows. If it doesn't show (Advanced Micro Devices, Inc.)->(0x0004/0x1002), then maybe there's been a source update error somehow. In that case, check the code above against the same function in your copy of directx.c.
--- Comment #109 from P.Panon ppanon@shaw.ca 2010-03-03 19:25:01 --- It does look like I missed an update to the card_vendor detection for Gallium, though. wined3d_guess_card_vendor() should have
if (strstr(gl_vendor_string, "ATI")
        || strstr(gl_vendor_string, "Advanced Micro Devices, Inc.")
        || strstr(gl_vendor_string, "DRI R300 Project")
        || strstr(gl_vendor_string, "X.Org R300 Project"))
    return HW_VENDOR_ATI;
to properly match an R300 ATI driver with Gallium enabled. However, that doesn't seem to be your problem based on your GL vendor/renderer string values.
--- Comment #110 from P.Panon ppanon@shaw.ca 2010-03-03 19:39:28 --- Oh, and yeah, the first Mesa "user"/offset error is probably a result of the capabilities testing done in test_arb_vs_offset_limit(). Wine deliberately tries something that may fail and triggers different behaviour based on the result. I expect the second Mesa error is the same sort of thing.
--- Comment #111 from cruiseoveride cruiseoveride@gmail.com 2010-03-04 23:36:52 --- I double-checked the source and rebuilt everything. Still getting the same message, and the card is showing up as an Nvidia FX5600.
--- Comment #112 from cruiseoveride cruiseoveride@gmail.com 2010-03-04 23:39:00 --- Created an attachment (id=26613) --> (http://bugs.winehq.org/attachment.cgi?id=26613) d3d_caps trace while opening and then closing 3dmark2001se
--- Comment #113 from P.Panon ppanon@shaw.ca 2010-03-05 02:37:04 --- Created an attachment (id=26615) --> (http://bugs.winehq.org/attachment.cgi?id=26615) fix to last minute bug insertion
Sigh, I added a bug while trying to fix the one Henri pointed out. guess_gl_vendor() and guess_card_vendor() used to be called after the GL_VENDOR request, but moving them later (so that the supported-functions table is initialized for detection of GL_VENDOR_APPLE) also moved them after gl_string was overwritten by the GL_VERSION request. This patch swaps the GL_VERSION and GL_VENDOR queries so that gl_string still contains the GL_VENDOR value during the guess function calls.
P.Panon ppanon@shaw.ca changed:
What |Removed |Added ---------------------------------------------------------------------------- Attachment #26615|0 |1 is obsolete| |
--- Comment #114 from P.Panon ppanon@shaw.ca 2010-03-05 03:11:30 --- Created an attachment (id=26618) --> (http://bugs.winehq.org/attachment.cgi?id=26618) change previous patch fix to keep copy of GL_VENDOR string value; separate patch for Gallium card_vendor check
Changes the previous fix to keep a copy of the GL_VENDOR string value instead of shuffling code around, which is more robust; also includes a separate patch for the guess_card_vendor check to work with the R300 Gallium driver.
--- Comment #115 from P.Panon ppanon@shaw.ca 2010-03-05 04:45:36 --- After submitting that correction, as I was looking at its status, I noticed that Kusanagi Kouichi already submitted an effectively similar patch at 59001 which appears to do a better job of fixing the GL_VENDOR problem. So hopefully that one will go through to HEAD tomorrow.
--- Comment #116 from P.Panon ppanon@shaw.ca 2010-03-05 05:24:09 --- RE: debugging drivers vs. Wine.
I've been wondering that a bit myself. It looks like, if you want to debug Direct3D, the following trace categories are available:
d3d, d3d8, d3d9, d3d10, d3d10core, d3d_caps, d3d_constants, d3d_decl, d3d_draw, d3drm, d3d_shader, d3d_surface, d3d_texture, d3dx, d3dxof, d3dxof_parsing, fps, gl_compat
If you're debugging, you probably only want to turn on the channels you think are useful, or else you'll get huge logs and won't be able to find anything in the flood of results. So if you think the problem is likely to be shader-related, you could select d3d_shader; trace d3d_texture if you think the problem is with texture manipulation. A modern app is unlikely to use D3D Retained Mode, and d3d_cap(abilitie)s appears to be important mainly for DirectX initialization (detecting which D3D functions can be supported, etc.).
--- Comment #117 from P.Panon ppanon@shaw.ca 2010-03-05 12:09:44 --- Kusanagi Kouichi's patch was accepted and is in HEAD, so updating should fix your issue, cruise. It looks like it should all make it into 1.1.40, since that doesn't appear to have been released yet.
Alexandre Julliard julliard@winehq.org changed:
What |Removed |Added ---------------------------------------------------------------------------- Status|RESOLVED |CLOSED
--- Comment #118 from Alexandre Julliard julliard@winehq.org 2010-03-05 12:43:20 --- Closing bugs fixed in 1.1.40.
--- Comment #119 from P.Panon ppanon@shaw.ca 2010-03-06 22:37:57 --- Heh, finally figured out what I needed to do to get it to work. I got 3004 3DMarks with my 3850. That said, there appear to be a lot of missing figures, and not just in the Lobby scene, so I don't know whether my results are higher than cruiseoveride's because the card is doing less work.
--- Comment #120 from cruiseoveride cruiseoveride@gmail.com 2010-03-06 23:11:13 --- Well, I'm getting about 6,500 3DMark2001SE points now (at any resolution :) ). It's pretty much pegged at 75fps throughout all the tests, which is coincidentally my monitor's refresh rate.
Anyways at least a few baby steps forward.
--- Comment #121 from P.Panon ppanon@shaw.ca 2010-03-09 03:55:06 --- Looking at the DRI Radeon feature page, it's pretty impressive how many rectangles are now green after the Feb. 28 update and how quickly the remaining blocks are cycling from red through orange to yellow.
I did notice a bunch of the following entries in the d3d_caps trace I got when trying to run 3DMark2001:
trace:d3d_caps:IWineD3DImpl_CheckDeviceFormat (0x169e88)-> (STUB) (Adptr:0, DevType:(2,WINED3DDEVTYPE_REF), AdptFmt:(112,WINED3DFMT_B5G6R5_UNORM), Use:(0,0,0), ResTyp:(3,WINED3DRTYPE_TEXTURE), CheckFmt:(827611204,WINED3DFMT_DXT1))
trace:d3d_caps:IWineD3DImpl_GetAdapterCount (0x169e88): Reporting 1 adapters
trace:d3d_caps:CheckTextureCapability [FAILED]
trace:d3d_caps:IWineD3DImpl_CheckDeviceFormat [FAILED] - Texture format not supported
trace:d3d_caps:IWineD3DImpl_CheckDeviceFormat (0x169e88)-> (STUB) (Adptr:0, DevType:(2,WINED3DDEVTYPE_REF), AdptFmt:(112,WINED3DFMT_B5G6R5_UNORM), Use:(0,0,0), ResTyp:(3,WINED3DRTYPE_TEXTURE), CheckFmt:(827611204,WINED3DFMT_DXT1))
trace:d3d_caps:IWineD3DImpl_GetAdapterCount (0x169e88): Reporting 1 adapters
trace:d3d_caps:CheckTextureCapability [FAILED]
trace:d3d_caps:IWineD3DImpl_CheckDeviceFormat [FAILED] - Texture format not supported
fixme:d3d:debug_d3dusagequery Unrecognized usage query flag(s) 0x2
trace:d3d_caps:IWineD3DImpl_CheckDeviceFormat (0x169e88)-> (STUB) (Adptr:0, DevType:(2,WINED3DDEVTYPE_REF), AdptFmt:(112,WINED3DFMT_B5G6R5_UNORM), Use:(2,WINED3DUSAGE_DEPTHSTENCIL,0), ResTyp:(1,WINED3DRTYPE_SURFACE), CheckFmt:(72,WINED3DFMT_D24_UNORM_S8_UINT))
trace:d3d_caps:IWineD3DImpl_GetAdapterCount (0x169e88): Reporting 1 adapters
warn:d3d_caps:IWineD3DImpl_CheckDepthStencilMatch (0x169e88)-> (STUB) (Adptr:0, DevType:(2,WINED3DDEVTYPE_REF), AdptFmt:(70,WINED3DFMT_B5G6R5_UNORM), RendrTgtFmt:(70,WINED3DFMT_B5G6R5_UNORM), DepthStencilFmt:(48,WINED3DFMT_D24_UNORM_S8_UINT))
trace:d3d_caps:IWineD3DImpl_GetAdapterCount (0x169e88): Reporting 1 adapters
trace:d3d_caps:IWineD3DImpl_CheckDepthStencilMatch (0x169e88) : Formats matched
fixme:d3d:debug_d3dusagequery Unrecognized usage query flag(s) 0x2
trace:d3d_caps:IWineD3DImpl_CheckDeviceFormat (0x169e88)-> (STUB) (Adptr:0, DevType:(2,WINED3DDEVTYPE_REF), AdptFmt:(112,WINED3DFMT_B5G6R5_UNORM), Use:(2,WINED3DUSAGE_DEPTHSTENCIL,0), ResTyp:(1,WINED3DRTYPE_SURFACE), CheckFmt:(19,WINED3DFMT_D32_UNORM))
trace:d3d_caps:IWineD3DImpl_GetAdapterCount (0x169e88): Reporting 1 adapters
trace:d3d_caps:IWineD3DImpl_CheckDeviceFormat [FAILED] - No depthstencil support
fixme:d3d:debug_d3dusagequery Unrecognized usage query flag(s) 0x2
According to http://www.gamasutra.com/features/20051228/sherrod_01.shtml, DXT1 is a texture compression format that is supposed to be available in OpenGL, so perhaps it's just not implemented in Wine yet. I said earlier that I thought those missing figures could be due to textures not being loaded, and this looks like a possible reason.
--- Comment #122 from Jeff Zaroyko jeffz@jeffz.name 2010-03-09 03:59:56 --- (In reply to comment #121)
According to http://www.gamasutra.com/features/20051228/sherrod_01.shtml, DXT1 is a texture compression format that is supposed to be available in OpenGL, so perhaps that's just not implemented in Wine yet. I did say earlier that I thought those missing figures could be due to textures not being loaded and this looks like a possible reason.
Have you read the following? http://dri.freedesktop.org/wiki/S3TC
--- Comment #123 from P.Panon ppanon@shaw.ca 2010-03-09 04:03:43 --- Oops. I said perhaps DXT1 compression wasn't implemented in Wine, but what I really meant was that perhaps it isn't implemented in the Mesa Radeon driver yet. Thanks for the link, Jeff; I'll check it out. I've been rebuilding my Karmic ia32libs, so I'll see whether I can incorporate the build info from that link. Cruiseoverride, you might want to check it out too.
--- Comment #124 from cruiseoveride cruiseoveride@gmail.com 2010-03-09 09:27:46 --- That dxtn library is more for educational purposes than anything else. As far as I know, no open driver (radeon included) will ever have S3TC implemented, as it needs to be licensed from either S3 or a subvendor like Microsoft.
All my missing textures come back with that mp2 quirk enabled.
--- Comment #125 from Stefan Dösinger stefandoesinger@gmx.at 2010-03-09 12:37:04 --- You need this libdxtn library to enable s3tc support in Mesa, or force it on in driconf.
In general, the radeon driver has no issue with the S3TC patent, because it just passes the compressed data through from the app to the card and doesn't compress or decompress anything itself. However, apps can request S3TC compression and decompression from OpenGL (upload uncompressed data to an S3TC texture, or vice versa). The Mesa people were considering a special "only pass through S3TC" extension for Wine, but that won't work, because Mesa might still have to decompress the texture in case it hits a software fallback somewhere.
Saulius K. saulius2@gmail.com changed:
What |Removed |Added ---------------------------------------------------------------------------- CC| |saulius2@gmail.com