FWIW, I have such an NVIDIA GeForce GT 750M card (under OS X 10.9.5), and CrossOver 13.2 returned 768MB of VRAM for it. CrossOver 14/14.0.1 returns 128MB instead, which resulted in a number of games suddenly crashing because they ran out of VRAM. The issue was quickly diagnosed by a helpful CodeWeavers support person, and resolved by adding the registry entry that overrides the video memory size.
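For reference, in case others hit this: if I remember the key correctly, the override in question is the VideoMemorySize string value under HKCU\Software\Wine\Direct3D, with the amount given in MB. A minimal .reg sketch (1024 matches this card; adjust as needed):

REGEDIT4

[HKEY_CURRENT_USER\Software\Wine\Direct3D]
"VideoMemorySize"="1024"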
Looking at the current source code, I assume that's because shader_arb_get_caps() can set shader_caps->vs_version to at most 3, while d3d_level_from_caps() only returns WINED3D_D3D_LEVEL_10 for shader_caps->vs_version == 4. As a result, d3d_level_from_caps() now returns WINED3D_D3D_LEVEL_9_SM3, which for an unknown NVIDIA card translates into CARD_NVIDIA_GEFORCE_6800. That card is indeed defined as having 128MB of VRAM.
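To make the failure mode concrete, this is how I read that path (a condensed sketch using the names from the Wine source, not the verbatim code):

/* Condensed sketch of the relevant part of d3d_level_from_caps(); simplified. */
if (shader_caps->vs_version >= 4)
    return WINED3D_D3D_LEVEL_10;    /* unreachable through the ARB backend */
if (shader_caps->vs_version == 3)
    return WINED3D_D3D_LEVEL_9_SM3; /* unknown NVIDIA card -> CARD_NVIDIA_GEFORCE_6800 */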
On 2014-11-14 01:30, Jonas Maebe wrote:
FWIW, I have such an NVIDIA GeForce GT 750M card (under OS X 10.9.5), and CrossOver 13.2 returned 768MB of VRAM for it. CrossOver 14/14.0.1 returns 128MB instead, which resulted in a number of games suddenly crashing because they ran out of VRAM.
CrossOver isn't really the concern of this mailing list, but I'll have a look. CrossOver used to use ARB shaders for performance reasons, but in 13.x (and 14.x) we changed this on OS X, because OS X supports only 128 fragment program env parameters, which isn't enough for many modern Shader Model 3 games (they expect 256). So I don't think shader_arb_get_caps() is to blame.
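For reference, that limit can be queried directly. A minimal sketch, assuming a current legacy GL context and that glGetProgramivARB has been resolved through the platform's GetProcAddress mechanism:

GLint max_env = 0;
glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                  GL_MAX_PROGRAM_ENV_PARAMETERS_ARB, &max_env);
/* OS X drivers report 128 here; many Shader Model 3 games expect 256. */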
What does Wine return for your card? If it also misidentifies the card, it's something we should fix.
On 14/11/14 11:47, Stefan Dösinger wrote:
CrossOver isn't really the concern of this mailing list, but I'll have a look. CrossOver used to use ARB shaders for performance reasons, but in 13.x (and 14.x) we changed this on OS X, because OS X supports only 128 fragment program env parameters, which isn't enough for many modern Shader Model 3 games (they expect 256). So I don't think shader_arb_get_caps() is to blame.
Sorry, I wasn't clear: I initially discovered the issue because CrossOver 14.x started crashing in games that worked fine before. Once the source of the crashes was identified by CodeWeavers support, I looked at the Wine source code to see what could have caused them. The result of that investigation is what I wrote down in my original mail.
What does Wine return for your card? If it also misidentifies the card, it's something we should fix.
Wine indeed fails to identify the card in exactly the same way CrossOver does, but the patch I sent to wine-patches with the same subject line as this mail fixes that.
However, it does not fix the general issue that, since commit e9b0a0e18, unknown video cards are auto-detected as supporting at most SM3, because shader_arb_get_caps() never sets shader_caps->vs_version to 4 (before that commit, the detection of the supported shader level in d3d_level_from_gl_info() was based on some ad hoc properties).
Jonas
On 14 November 2014 12:58, Jonas Maebe jonas.maebe@elis.ugent.be wrote:
However, it does not fix the general issue that, since commit e9b0a0e18, unknown video cards are auto-detected as supporting at most SM3, because shader_arb_get_caps() never sets shader_caps->vs_version to 4 (before that commit, the detection of the supported shader level in d3d_level_from_gl_info() was based on some ad hoc properties).
shader_arb_get_caps() should never get called for any hardware/driver that can do GLSL, unless you explicitly disable GLSL in the registry. (At which point you basically get to keep the pieces.)
On 14/11/14 13:21, Henri Verbeet wrote:
On 14 November 2014 12:58, Jonas Maebe jonas.maebe@elis.ugent.be wrote:
However, it does not fix the general issue that, since commit e9b0a0e18, unknown video cards are auto-detected as supporting at most SM3, because shader_arb_get_caps() never sets shader_caps->vs_version to 4 (before that commit, the detection of the supported shader level in d3d_level_from_gl_info() was based on some ad hoc properties).
shader_arb_get_caps() should never get called for any hardware/driver that can do GLSL, unless you explicitly disable GLSL in the registry. (At which point you basically get to keep the pieces.)
I did not disable anything. From what I understood from the code, if a video card is not known to Wine, then:
* d3d_level_from_gl_info() will be used to determine the supported DirectX level
* since e9b0a0e18, d3d_level_from_gl_info()'s return value is purely based on shader_caps->vs_version
I did miss that this version can also be set in glsl_shader.c, and there it's indeed not limited to 3.
In any case: without my patch, Wine emulates my GeForce GT 750M 1GB under OS X 10.9.5 as a 128MB card, which (I assume) means that it detects it as an SM3 rather than an SM4 card. I don't have access to that machine right now, but if you want some +d3d or other debug logs, let me know and I can post them tonight.
Jonas
On Friday 14 November 2014 14:22:49 Jonas Maebe wrote:
- since e9b0a0e18, d3d_level_from_gl_info()'s return value is purely based on shader_caps->vs_version
I did miss that this version can also be set in glsl_shader.c, and there it's indeed not limited to 3.
In any case: without my patch, Wine emulates my GeForce GT 750M 1GB under OS X 10.9.5 as a 128MB card, which (I assume) means that it detects it as an SM3 rather than an SM4 card. I don't have access to that machine right now, but if you want some +d3d or other debug logs, let me know and I can post them tonight.
Hi Jonas,
I'm the author of that change, and indeed, if the driver does not expose at least OpenGL 3.0 in a compatibility profile, the card will be considered DX9-era. I know OS X is stuck on OpenGL 2.1 for compatibility contexts and only exposes higher GL versions for the core profile, and Wine is only capable of using compatibility contexts for now. Can you provide the list of extensions exposed by the driver? glxinfo should do that, if it exists on OS X.
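If glxinfo isn't available, a small program like the sketch below would do; it uses GLUT only to make a legacy (compatibility) context current, so the strings reflect what wined3d would see:

#include <stdio.h>
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif

int main(int argc, char **argv)
{
    /* GLUT gives us a current legacy GL context with no extra setup. */
    glutInit(&argc, argv);
    glutCreateWindow("glcaps");
    printf("version:    %s\n", (const char *)glGetString(GL_VERSION));
    printf("GLSL:       %s\n", (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION));
    printf("renderer:   %s\n", (const char *)glGetString(GL_RENDERER));
    printf("extensions: %s\n", (const char *)glGetString(GL_EXTENSIONS));
    return 0;
}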
On 14/11/14 14:40, Andrei Slavoiu wrote:
I'm the author of that change, and indeed, if the driver does not expose at least OpenGL 3.0 in a compatibility profile, the card will be considered DX9-era. I know OS X is stuck on OpenGL 2.1 for compatibility contexts and only exposes higher GL versions for the core profile, and Wine is only capable of using compatibility contexts for now. Can you provide the list of extensions exposed by the driver? glxinfo should do that, if it exists on OS X.
Hi Andrei,
Below you can find the glxinfo output. Thanks,
Jonas
name of display: /tmp/launch-LWUEse/org.macosforge.xquartz:0
display: /tmp/launch-LWUEse/org.macosforge.xquartz:0
screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
server glx extensions:
    GLX_SGIX_fbconfig, GLX_SGIS_multisample, GLX_ARB_multisample,
    GLX_EXT_visual_info, GLX_EXT_import_context
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
client glx extensions:
    GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context,
    GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_framebuffer_sRGB,
    GLX_MESA_copy_sub_buffer, GLX_MESA_multithread_makecurrent,
    GLX_MESA_swap_control, GLX_OML_swap_method, GLX_OML_sync_control,
    GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGI_video_sync,
    GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer,
    GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap,
    GLX_INTEL_swap_event
GLX version: 1.4
GLX extensions:
    GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context,
    GLX_EXT_visual_info, GLX_MESA_multithread_makecurrent,
    GLX_SGIS_multisample, GLX_SGIX_fbconfig
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: NVIDIA GeForce GT 750M OpenGL Engine
OpenGL version string: 2.1 NVIDIA-8.26.28 310.40.55b01
OpenGL shading language version string: 1.20
OpenGL extensions:
    GL_ARB_color_buffer_float, GL_ARB_depth_buffer_float, GL_ARB_depth_clamp,
    GL_ARB_depth_texture, GL_ARB_draw_buffers, GL_ARB_draw_elements_base_vertex,
    GL_ARB_draw_instanced, GL_ARB_fragment_program,
    GL_ARB_fragment_program_shadow, GL_ARB_fragment_shader,
    GL_ARB_framebuffer_object, GL_ARB_framebuffer_sRGB,
    GL_ARB_half_float_pixel, GL_ARB_half_float_vertex, GL_ARB_imaging,
    GL_ARB_instanced_arrays, GL_ARB_multisample, GL_ARB_multitexture,
    GL_ARB_occlusion_query, GL_ARB_pixel_buffer_object,
    GL_ARB_point_parameters, GL_ARB_point_sprite, GL_ARB_provoking_vertex,
    GL_ARB_seamless_cube_map, GL_ARB_shader_objects, GL_ARB_shader_texture_lod,
    GL_ARB_shading_language_100, GL_ARB_shadow, GL_ARB_sync,
    GL_ARB_texture_border_clamp, GL_ARB_texture_compression,
    GL_ARB_texture_compression_rgtc, GL_ARB_texture_cube_map,
    GL_ARB_texture_env_add, GL_ARB_texture_env_combine,
    GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3, GL_ARB_texture_float,
    GL_ARB_texture_mirrored_repeat, GL_ARB_texture_non_power_of_two,
    GL_ARB_texture_rectangle, GL_ARB_texture_rg, GL_ARB_transpose_matrix,
    GL_ARB_vertex_array_bgra, GL_ARB_vertex_blend, GL_ARB_vertex_buffer_object,
    GL_ARB_vertex_program, GL_ARB_vertex_shader, GL_ARB_window_pos,
    GL_EXT_abgr, GL_EXT_bgra, GL_EXT_bindable_uniform, GL_EXT_blend_color,
    GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate,
    GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_clip_volume_hint,
    GL_EXT_debug_label, GL_EXT_debug_marker, GL_EXT_depth_bounds_test,
    GL_EXT_draw_buffers2, GL_EXT_draw_range_elements, GL_EXT_fog_coord,
    GL_EXT_framebuffer_blit, GL_EXT_framebuffer_multisample,
    GL_EXT_framebuffer_multisample_blit_scaled, GL_EXT_framebuffer_object,
    GL_EXT_framebuffer_sRGB, GL_EXT_geometry_shader4,
    GL_EXT_gpu_program_parameters, GL_EXT_gpu_shader4,
    GL_EXT_multi_draw_arrays, GL_EXT_packed_depth_stencil, GL_EXT_packed_float,
    GL_EXT_provoking_vertex, GL_EXT_rescale_normal, GL_EXT_secondary_color,
    GL_EXT_separate_specular_color, GL_EXT_shadow_funcs,
    GL_EXT_stencil_two_side, GL_EXT_stencil_wrap, GL_EXT_texture_array,
    GL_EXT_texture_compression_dxt1, GL_EXT_texture_compression_s3tc,
    GL_EXT_texture_env_add, GL_EXT_texture_filter_anisotropic,
    GL_EXT_texture_integer, GL_EXT_texture_lod_bias,
    GL_EXT_texture_mirror_clamp, GL_EXT_texture_rectangle,
    GL_EXT_texture_shared_exponent, GL_EXT_texture_sRGB,
    GL_EXT_texture_sRGB_decode, GL_EXT_timer_query, GL_EXT_transform_feedback,
    GL_EXT_vertex_array_bgra, GL_APPLE_aux_depth_stencil,
    GL_APPLE_client_storage, GL_APPLE_element_array, GL_APPLE_fence,
    GL_APPLE_float_pixels, GL_APPLE_flush_buffer_range, GL_APPLE_flush_render,
    GL_APPLE_object_purgeable, GL_APPLE_packed_pixels, GL_APPLE_pixel_buffer,
    GL_APPLE_rgb_422, GL_APPLE_row_bytes, GL_APPLE_specular_vector,
    GL_APPLE_texture_range, GL_APPLE_transform_hint,
    GL_APPLE_vertex_array_object, GL_APPLE_vertex_array_range,
    GL_APPLE_vertex_point_size, GL_APPLE_vertex_program_evaluators,
    GL_APPLE_ycbcr_422, GL_ATI_separate_stencil, GL_ATI_texture_env_combine3,
    GL_ATI_texture_float, GL_ATI_texture_mirror_once, GL_IBM_rasterpos_clip,
    GL_NV_blend_square, GL_NV_conditional_render, GL_NV_depth_clamp,
    GL_NV_fog_distance, GL_NV_fragment_program_option, GL_NV_fragment_program2,
    GL_NV_light_max_exponent, GL_NV_multisample_filter_hint,
    GL_NV_point_sprite, GL_NV_texgen_reflection, GL_NV_texture_barrier,
    GL_NV_vertex_program2_option, GL_NV_vertex_program3,
    GL_SGIS_generate_mipmap, GL_SGIS_texture_edge_clamp, GL_SGIS_texture_lod
On Fri, 14 Nov 2014, at 19:26:01, Jonas Maebe wrote:
OpenGL extensions: GL_EXT_framebuffer_sRGB, GL_EXT_geometry_shader4, GL_EXT_gpu_program_parameters, GL_EXT_gpu_shader4,
Previously, GL_EXT_gpu_shader4 was used to differentiate SM3 cards from SM4 ones. I remember somebody saying that this is not enough to ensure the card is SM4-capable; how about GL_EXT_geometry_shader4, then?
On 14 Nov 2014, at 22:14, Andrei Slăvoiu wrote:
On Fri, 14 Nov 2014, at 19:26:01, Jonas Maebe wrote:
OpenGL extensions: GL_EXT_framebuffer_sRGB, GL_EXT_geometry_shader4, GL_EXT_gpu_program_parameters, GL_EXT_gpu_shader4,
Previously, GL_EXT_gpu_shader4 was used to differentiate SM3 cards from SM4 ones. I remember somebody saying that this is not enough to ensure the card is SM4-capable; how about GL_EXT_geometry_shader4, then?
According to http://shiben.blogspot.be/2007/01/shader-model-40-examples.html , that is indeed the case:
"Now run glxinfo on xterm (linux), Run wglinfo.exe on cmd.exe, get wglinfo from http://www.cg.tuwien.ac.at/~wimmer/wglinfo/ (windows) See if you can see GL_EXT_geometry_shader4 in the clutter of text. If you can, you system is ready to compile and run SM4 codes."
and
"By the way, Shader Model 4.0 is also called the Direct X 10 Architecture"
Jonas
2014-11-14 22:14 GMT+01:00 Andrei Slăvoiu andrei.slavoiu@gmail.com:
On Fri, 14 Nov 2014, at 19:26:01, Jonas Maebe wrote:
OpenGL extensions: GL_EXT_framebuffer_sRGB, GL_EXT_geometry_shader4, GL_EXT_gpu_program_parameters, GL_EXT_gpu_shader4,
Previously, GL_EXT_gpu_shader4 was used to differentiate SM3 cards from SM4 ones. I remember somebody saying that this is not enough to ensure the card is SM4-capable; how about GL_EXT_geometry_shader4, then?
Neither extension is exposed in the non-core profile by the Mesa drivers, but apparently they are on OS X, in both legacy and core profiles. ARB_shader_bit_encoding, though, is not exposed by OS X, so the SM4 check in shader_glsl_get_caps() fails. More importantly, Mesa and OS X only support core profiles for GL 3.2+, so GLSL 1.50 is not available in the legacy contexts wined3d is using right now.
I introduced that check back then, and it is somewhat overzealous (e.g. GLSL version 1.50 pretty much implies EXT_gpu_shader4 and geometry shaders), but we will actually require those kinds of features for D3D10 support. The way forward is to use 3.2+ core profiles and GLSL 1.50+ in wined3d instead of those missing extensions, but that is going to take a while.
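For clarity, the gate under discussion is shaped roughly like this (a sketch in wined3d style, not the verbatim shader_glsl_get_caps() code; the GLSL version and extension names are the ones mentioned in this thread):

if (gl_info->glsl_version >= MAKEDWORD_VERSION(1, 50)
        && gl_info->supported[EXT_GPU_SHADER4]
        && gl_info->supported[ARB_SHADER_BIT_ENCODING]
        /* ... plus the other SM4-level requirements ... */)
    caps->vs_version = 4;  /* card is treated as D3D10-era */
else
    caps->vs_version = 3;  /* what OS X currently ends up with */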
Modifying that SM4 check is fine, but we'd still require GLSL 1.50, which means the check is still going to fail with those drivers. In theory we could also tone it down and e.g. only look for the SM4-level features supported by all the drivers, but I'm not sure the end result would make a lot of sense.
On 18/11/14 17:22, Matteo Bruni wrote:
Modifying that SM4 check is fine, but we'd still require GLSL 1.50, which means the check is still going to fail with those drivers. In theory we could also tone it down and e.g. only look for the SM4-level features supported by all the drivers, but I'm not sure the end result would make a lot of sense.
The main issue right now (as far as I'm concerned) is that pretty much all modern cards that are not explicitly supported by Wine will be wrongly detected on OS X and hence have their VRAM set to a ridiculously low amount.
Maybe the Wine project doesn't really care about this, but I guess at least CrossOver should add a workaround. At least I'm not the only one who has bumped into this issue: https://www.codeweavers.com/support/forums/general/?t=27;msg=167632#c35
Jonas
On Tue, 18 Nov 2014, at 17:22:35, Matteo Bruni wrote:
Modifying that SM4 check is fine, but we'd still require GLSL 1.50, which means the check is still going to fail with those drivers. In theory we could also tone it down and e.g. only look for the SM4-level features supported by all the drivers, but I'm not sure the end result would make a lot of sense.
I'm not saying to modify the SM4 check in the GLSL backend; that one needs to make sure the driver exposes everything needed to implement all DX10 features. I was thinking of adding another check to d3d_level_from_caps(), similar to the one that checks for GLSL 1.30 for the Mesa drivers.
But since that function no longer has access to gl_info, and it would be very ugly to add it back just for this extension, how about setting a new flag (let's call it WINED3D_SHADER_CAP_GEOMETRY_SHADER4) in wined3d_caps when this extension is found but not everything that is required for true SM4 support?
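A rough sketch of what I have in mind (the flag name is from this mail; the value and the exact placement are made up for illustration):

/* Hypothetical: set by the GLSL backend when EXT_geometry_shader4 is there
 * but the full SM4 requirements are not met. */
#define WINED3D_SHADER_CAP_GEOMETRY_SHADER4 0x00000008

/* In shader_glsl_get_caps(): */
if (gl_info->supported[EXT_GEOMETRY_SHADER4] && caps->vs_version < 4)
    caps->wined3d_caps |= WINED3D_SHADER_CAP_GEOMETRY_SHADER4;

/* In d3d_level_from_caps(), so such cards get DX10-era PCI IDs: */
if (caps->vs_version == 3
        && (caps->wined3d_caps & WINED3D_SHADER_CAP_GEOMETRY_SHADER4))
    return WINED3D_D3D_LEVEL_10;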
On Tue, 18 Nov 2014, at 23:32:47, Andrei Slăvoiu wrote:
I was thinking of adding another check to d3d_level_from_caps(), similar to the one that checks for GLSL 1.30 for the Mesa drivers.
But since that function no longer has access to gl_info, and it would be very ugly to add it back just for this extension, how about setting a new flag (let's call it WINED3D_SHADER_CAP_GEOMETRY_SHADER4) in wined3d_caps when this extension is found but not everything that is required for true SM4 support?
Since nobody replied, I just went ahead and wrote the code. See my latest series on the patches list.
On 14 November 2014 14:22, Jonas Maebe jonas.maebe@elis.ugent.be wrote:
I did not disable anything. From what I understood from the code, if a video card is not known to Wine, then:
- d3d_level_from_gl_info() will be used to determine the supported DirectX level
- since e9b0a0e18, d3d_level_from_gl_info()'s return value is purely based on shader_caps->vs_version
I did miss that this version can also be set in glsl_shader.c, and there it's indeed not limited to 3.
Right, it's possible/likely that shader_glsl_get_caps() returns shader model 3 on OS X because it's missing some of the extensions we're using for shader model 4, either in non-core contexts or at all. We're working on using core GL 3.2 / GLSL 1.50 instead, but that will take a while.
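For reference, on GLX that would mean creating the context along these lines (a sketch assuming GLX_ARB_create_context_profile is available and dpy/fbconfig have already been set up; the OS X backend would use the corresponding CGL pixel format attributes):

static const int attribs[] =
{
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 2,
    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    0
};
GLXContext ctx = glXCreateContextAttribsARB(dpy, fbconfig, NULL, True, attribs);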