--- On Tue, 20/7/10, Henri Verbeet hverbeet@gmail.com wrote:
Negative on using experimental extensions, I'm afraid. Writing a proof of concept patch is ok, of course, and I'd be willing to review that, but it can't go in until the extension is finalized. Note that this wouldn't allow you to get rid of the current code though; you'd still need a fallback. It would just allow for more accurate reporting on cards that do support that extension.
The attached uses that experimental extension [1] to report the correct total memory for cards that support it, while sticking with the existing hard coded values and code layout for those that don't.
I would appreciate any comments.
Thanks
Luke Bratch
[1] http://developer.download.nvidia.com/opengl/specs/GL_NVX_gpu_memory_info.txt
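For reference, the query the patch relies on is very small. Here is a minimal sketch of the general approach, not the actual patch: the token and its kilobyte units come from the spec in [1], while the helper name and the fallback value are placeholders.

#include <string.h>
#include <GL/gl.h>

/* Token from GL_NVX_gpu_memory_info; the driver reports the value in kilobytes. */
#ifndef GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX
#define GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX 0x9047
#endif

/* Total video memory in bytes, falling back to a hard-coded guess when the
 * extension is absent.  Requires a current GL context. */
static unsigned int query_total_vidmem(void)
{
    const char *extensions = (const char *)glGetString(GL_EXTENSIONS);

    if (extensions && strstr(extensions, "GL_NVX_gpu_memory_info"))
    {
        GLint dedicated_kb = 0;
        glGetIntegerv(GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX, &dedicated_kb);
        if (dedicated_kb > 0)
            return (unsigned int)dedicated_kb * 1024u;
    }

    return 64 * 1024 * 1024; /* placeholder for the existing hard-coded default */
}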
For ATI there is also GL_ATI_meminfo [1] or WGL_AMD_gpu_association [2]. I believe [1] is now considered obsolete; from reading the wine-devel mailing list [3] and some other sites, it seems AMD is now pushing the use of WGL_AMD_gpu_association instead. A patch could be written for ATI in exactly the same manner as your Nvidia patch. There doesn't appear to be an Intel or VIA extension for querying video memory, although it's much less of a problem for their integrated graphics.
One thing to note about your patch: the extension was only introduced with the 190.x drivers, so any card older than the 6xxx series will never support it. That means you could remove the code for the FX cards and below. Besides that, I think your implementation is good, but I'm no dev, so I'll let Roderick and Henri make the real comments.
[1] http://www.opengl.org/registry/specs/ATI/meminfo.txt
[2] http://www.opengl.org/registry/specs/AMD/wgl_gpu_association.txt
[3] http://www.mail-archive.com/wine-devel@winehq.org/msg57060.html
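The ATI-side query would look much the same; a sketch under the same caveats (the token and its four kilobyte values are from the GL_ATI_meminfo spec, everything else is made up), keeping in mind that this extension only reports free memory, not the total:

#include <string.h>
#include <GL/gl.h>

/* Token from GL_ATI_meminfo; the query fills four GLints, all in kilobytes:
 * [0] total free, [1] largest free block, [2] free auxiliary, [3] largest aux block. */
#ifndef GL_TEXTURE_FREE_MEMORY_ATI
#define GL_TEXTURE_FREE_MEMORY_ATI 0x87FC
#endif

/* Currently free texture memory in bytes, or 0 when the extension is not
 * exposed.  Requires a current GL context. */
static unsigned int query_free_texture_memory_ati(void)
{
    const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
    GLint info[4] = {0, 0, 0, 0};

    if (!extensions || !strstr(extensions, "GL_ATI_meminfo"))
        return 0;

    glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, info);
    return (unsigned int)info[0] * 1024u;
}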
The WGL/GLX_AMD_gpu_association extension exports the amount of video memory, but the extension itself is really meant for dividing the load between multiple (even different) GPUs. The GL_ATI_meminfo extension is fancier; like the Nvidia one, it also exports the amount of free video memory. The downside is that older drivers don't support the (undocumented) query for the total amount of video memory.
Looking at the past, most Nvidia extensions at some point become ARB extensions or serve as the start of one, so I wouldn't be surprised if an EXT/ARB specification eventually appears. The few GL games out there right now all use hacks to get this information (WoW, for instance, uses ddraw when running in OpenGL mode on Windows; recent id Software games use extensions like NV-CONTROL on Linux).
Roderick
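For the WGL route, the total-memory query goes through the gpu association entry points rather than glGetIntegerv. A sketch, assuming only what the WGL_AMD_gpu_association spec documents (the wrapper name is invented):

#include <windows.h>
#include <GL/gl.h>

/* Property from WGL_AMD_gpu_association; the value is reported in megabytes. */
#ifndef WGL_GPU_RAM_AMD
#define WGL_GPU_RAM_AMD 0x21A3
#endif

typedef UINT (WINAPI *PFNWGLGETGPUIDSAMDPROC)(UINT max_count, UINT *ids);
typedef INT  (WINAPI *PFNWGLGETGPUINFOAMDPROC)(UINT id, INT property,
                                               GLenum data_type, UINT size, void *data);

/* Total video memory of the first GPU in bytes, or 0 when the extension
 * (or a GPU) is not available.  Requires a current WGL context. */
static SIZE_T query_total_vidmem_amd(void)
{
    PFNWGLGETGPUIDSAMDPROC get_ids =
        (PFNWGLGETGPUIDSAMDPROC)wglGetProcAddress("wglGetGPUIDsAMD");
    PFNWGLGETGPUINFOAMDPROC get_info =
        (PFNWGLGETGPUINFOAMDPROC)wglGetProcAddress("wglGetGPUInfoAMD");
    UINT id = 0, ram_mb = 0;

    if (!get_ids || !get_info || !get_ids(1, &id))
        return 0;

    if (get_info(id, WGL_GPU_RAM_AMD, GL_UNSIGNED_INT, sizeof(ram_mb), &ram_mb) < 1)
        return 0;

    return (SIZE_T)ram_mb * 1024 * 1024;
}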
Actually, I wrote a GL_NVX_gpu_memory_info patch as well, but I didn't have time to send it to the list yet. The patch I wrote is a bit different, and I think that's how it should work: you have to keep the current code as it is. After the GPU and the amount of video memory have been selected, and only if no GPU memory override is set in the registry (there can be good reasons for not exposing all video memory), the amount of video memory should be overridden if the Nvidia extension is around. Let me dig up the stuff I made.
Roderick
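The ordering Roderick describes boils down to something like the sketch below; the function and parameter names are invented, not the real wined3d identifiers, and the driver-reported value would come from a query like the earlier GL_NVX_gpu_memory_info sketch:

#include <windows.h>

/* Pick the amount of video memory to expose: registry override first,
 * then the driver-reported value, then the hard-coded table default. */
static SIZE_T choose_video_memory(SIZE_T table_default,     /* hard-coded guess */
                                  SIZE_T registry_override, /* 0 = not set */
                                  SIZE_T gl_reported)       /* 0 = extension absent */
{
    /* A registry override always wins; there can be good reasons for
     * deliberately exposing less memory than the card actually has. */
    if (registry_override)
        return registry_override;

    /* Only when no override is set does the value reported through
     * GL_NVX_gpu_memory_info replace the hard-coded default. */
    if (gl_reported)
        return gl_reported;

    return table_default;
}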
I just had a look at the old stuff I made, but it was quite hacky and not what I remembered (I thought it was a bit nicer).
After thinking about it a bit, I don't think it is an easy 'few lines' patch; it requires more changes. First, the issues and what I don't like about adding it to the current mechanism. You could argue that we should query the amount of video memory from within each 'guess_card' function. The problem right now is that each function returns as soon as the right card has been found, whereas we would like to apply the override after the card has been found but before returning from the call. The code could be changed to do this, but I find it nicer to make some bigger changes instead.
I think 'select_card_*' should just return the PCI device id, and the default amount of video memory should be stored in, let's say, the driver version table (a different name for the table might make sense). It would also keep us from duplicating some of the logic for the open source Nvidia/ATI drivers (if they ever become useful for 3D gaming apart from Quake3). I think I was against such a change a few months ago when some of the code was restructured; if so, it was likely because it made it harder to add the comments which can be useful for memory info.
Once all the video memory defaults are in the table, 'wined3d_guess_card' could grab the default amount of video memory from there. To override it, we could add an (optional) 'select_nvidia_gpu_memory' entry to 'vendor_card_select_table'. It would then be easy to add other GL_*_memory_info extensions as selectors.
Before the memory info can be moved to the table, Intel driver information has to be added there (there is none right now, even though we do have memory info in the selector functions). After that, all the memory info can be moved.
I'm a little tired now, so I might have missed something. I will likely play with this, since I have a need for it as well. It is not that hard, but Henri, I promise that the next WineD3D thing I hack on is the blit_shader... Hopefully I'll have time for that by the end of this week.
Roderick
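The table layout Roderick sketches might look roughly like the following; every name and value is an illustrative stand-in, not the real wined3d structures or driver data:

#include <windows.h>

/* Per-card data keyed by PCI device id: description, driver version info and
 * now also the default amount of video memory. */
struct card_info
{
    unsigned int device_id;     /* PCI device id returned by select_card_* */
    const char *description;
    const char *driver_version; /* placeholder for the existing version data */
    SIZE_T default_vidmem;
};

static const struct card_info nvidia_cards[] =
{
    {0x0401, "NVIDIA GeForce 8600 GT", "x.y.z", 256 * 1024 * 1024},
    {0x0640, "NVIDIA GeForce 9500 GT", "x.y.z", 512 * 1024 * 1024},
    /* ... */
};

/* Per-vendor selectors: card selection only returns a device id; the memory
 * callback is optional and can be backed by any GL_*_memory_info extension. */
struct vendor_card_selector
{
    unsigned int (*select_card)(const char *gl_renderer);
    SIZE_T (*select_gpu_memory)(void); /* NULL = use the table default */
};

With something like this in place, wined3d_guess_card would do a single table lookup by device id and only consult select_gpu_memory when it is present and no registry override is set.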
On 21 July 2010 05:08, Roderick Colenbrander thunderbird2k@gmail.com wrote:
I think 'select_card_*' should just return the PCI device id, and the default amount of video memory should be stored in, let's say, the driver version table (a different name for the table might make sense).
Yeah, this should be based on the card id, like the description string. You can probably also do much of the gl_renderer matching based on a table. Ultimately you may want to move it out of the source code completely, into a file, or perhaps the registry. The card db could then be updated independently of Wine itself, though at that point you'll have to start worrying about compatibility between Wine versions.
Also note that while accurately detecting the total amount of video memory is nice, GetAvailableTextureMem() is probably at least as important.
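Extending the same idea to the matching Henri mentions, the gl_renderer checks could also become data; again a sketch with invented names and entries:

#include <string.h>

/* Map GL_RENDERER substrings to PCI device ids; more specific strings first. */
struct renderer_match
{
    const char *substring;
    unsigned int device_id;
};

static const struct renderer_match nvidia_renderer_table[] =
{
    {"GeForce 9500 GT", 0x0640},
    {"GeForce 8600 GT", 0x0401},
    /* ... */
};

static unsigned int select_card_nvidia(const char *gl_renderer)
{
    unsigned int i;

    for (i = 0; i < sizeof(nvidia_renderer_table) / sizeof(nvidia_renderer_table[0]); ++i)
    {
        if (strstr(gl_renderer, nvidia_renderer_table[i].substring))
            return nvidia_renderer_table[i].device_id;
    }
    return 0; /* caller falls back to a default card */
}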
On Wed, Jul 21, 2010 at 4:02 AM, Henri Verbeet hverbeet@gmail.com wrote:
Also note that while accurately detecting the total amount of video memory is nice, GetAvailableTextureMem() is probably at least as important.
Sure, GL_*_memory_info extensions could help there, but I think we should be cautious about just adding such support. Over the past few months I have seen various reports of users running games which claim to require, say, 128MB of video memory, and they just don't work on Wine unless you select a higher amount of video memory in the registry. I haven't looked into this at all yet, but perhaps the current code has some tracking bugs, or perhaps the fact that we don't track the video memory of managed resources confuses some games. It could also be something else, but perhaps you have seen things like this.
Roderick
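If the free-memory side of these extensions were wired into GetAvailableTextureMem(), one possible shape would be the sketch below; all names are invented, and the fallback only illustrates the "total minus tracked usage" style of accounting the thread is discussing, not how wined3d actually tracks resources:

#include <windows.h>
#include <GL/gl.h>

/* Token from GL_NVX_gpu_memory_info; the value is reported in kilobytes. */
#ifndef GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
#endif

/* Prefer the driver-reported free memory when the extension is present,
 * otherwise fall back to "total minus what our own accounting thinks is used". */
static SIZE_T available_texture_mem(SIZE_T total_vidmem, SIZE_T tracked_usage,
                                    BOOL have_nvx_memory_info)
{
    if (have_nvx_memory_info)
    {
        GLint free_kb = 0;
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &free_kb);
        if (free_kb > 0)
            return (SIZE_T)free_kb * 1024;
    }

    return tracked_usage < total_vidmem ? total_vidmem - tracked_usage : 0;
}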
On 21 July 2010 17:30, Roderick Colenbrander thunderbird2k@gmail.com wrote:
I haven't looked into this at all yet, but perhaps the current code has some tracking bugs, or perhaps the fact that we don't track the video memory of managed resources confuses some games. It could also be something else, but perhaps you have seen things like this.
Yeah. There are some flaws in our resource management and associated memory tracking, although it's also certainly possible those reports turn out to be completely unrelated to that. I'd expect that using the actual info from the card would help rather than hurt there though.
On Tuesday, July 20, 2010 2:54:24 pm Luke Bratch wrote:
The attached uses that experimental extension [1] to report the correct total memory for cards that support it, while sticking with the existing hard coded values and code layout for those that don't.
I think the extension should be checked regardless of the card type. Especially with the open source drivers, there's no reason ATi or Intel cards can't pick it up. And with the Nouveau driver, even older nVidia cards can get it.
I'd also think the wrapping function should be generically named, so it can be extended to use other extensions, in case alternative ones come up.
On a technical note, it looks like you're checking the extension incorrectly. You're checking the address of the extension array entry, instead of the array entry value.
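To make the last two points concrete, here is a sketch assuming wined3d-style per-extension support flags; the enum, struct and helper names are stand-ins. The helper gets a generic name so other GL_*_memory_info extensions can slot in later, and it tests the flag's value rather than its address:

#include <windows.h>
#include <GL/gl.h>

#ifndef GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX
#define GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX 0x9047 /* kilobytes */
#endif

/* Stand-ins for the per-extension support flags kept in the GL info structure. */
enum ext_id { NVX_GPU_MEMORY_INFO, EXT_ID_COUNT };
struct gl_info { BOOL supported[EXT_ID_COUNT]; };

/* Generically named wrapper: try whichever memory-info extension is available. */
static BOOL query_gpu_memory(const struct gl_info *gl_info, SIZE_T *vidmem)
{
    /* Buggy version: "&gl_info->supported[NVX_GPU_MEMORY_INFO]" is the address
     * of the array slot, which is never NULL, so the test passes on every card.
     * The stored value is what has to be checked: */
    if (gl_info->supported[NVX_GPU_MEMORY_INFO])
    {
        GLint kb = 0;
        glGetIntegerv(GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX, &kb);
        if (kb > 0)
        {
            *vidmem = (SIZE_T)kb * 1024;
            return TRUE;
        }
    }

    return FALSE; /* other GL_*_memory_info extensions could be tried here */
}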