I'm not sure whether this is appropriate during a code freeze, but I think it is. We just ran into an issue with the Folding@home GPU client in Wine with someone. It turns out it's because Wine reports the card as an 8300, not an 8500. Looking at directx.c, I see that this is probably because the 8500 is not listed. I've got a patch to add the single line for the 8500, but I was wondering: wouldn't it be beneficial to add a list of all graphics cards dating back to a certain point? There are even fewer ATI cards listed than Nvidia ones. All that is required is going to, say, Wikipedia or nvidia.com/ati.com and going through the list of cards.
Thanks,
Seth Shelnutt
On 10 July 2010 16:22, Seth Shelnutt shelnutt2@gmail.com wrote:
I'm not sure whether this is appropriate during a code freeze, but I think it is. We just ran into an issue with the Folding@home GPU client in Wine with someone. It turns out it's because Wine reports the card as an 8300, not an 8500.
What kind of issue? (Just curious.)
Looking at directx.c, I see that this is probably because the 8500 is not listed. I've got a patch to add the single line for the 8500, but I was wondering: wouldn't it be beneficial to add a list of all graphics cards dating back to a certain point? There are even fewer ATI cards listed than Nvidia ones. All that is required is going to, say, Wikipedia or nvidia.com/ati.com and going through the list of cards.
Maybe. It's not quite as trivial as you make it sound though. You'd need a good mechanism to keep the list up-to-date, and note that we match against the GL_RENDERER string returned by the OpenGL driver. That isn't necessarily the same as the card's product name, and may differ between different drivers for the same card. (E.g. Mesa r300 vs. Mesa r300g vs. fglrx vs. OS X).
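For illustration, here is the kind of substring matching this implies; a minimal sketch in C, where the renderer strings and the helper name are made up for the example, not taken from directx.c:

    #include <string.h>

    /* The same card can report different GL_RENDERER strings depending on
     * the driver: an RV350, for example, may show up as "Mesa DRI R300 ..."
     * (classic Mesa), "Gallium 0.4 on ATI RV350" (r300g), or "ATI Radeon
     * 9600 ..." (fglrx). Matching therefore has to try driver-specific
     * substrings rather than one canonical product name. */
    static int renderer_is_rv350(const char *gl_renderer)
    {
        return strstr(gl_renderer, "R300")
            || strstr(gl_renderer, "RV350")
            || strstr(gl_renderer, "Radeon 9600");
    }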
On Sat, Jul 10, 2010 at 3:57 PM, Henri Verbeet hverbeet@gmail.com wrote:
What kind of issue? (Just curious.)
Well, an 8300 isn't CUDA-capable, and even with the forcegpu nvidia_g80 flag, Wine still makes the client think it's an 8300, not an 8500. So then fah errors out with:
Initializing Nvidia gpu library
Error setting CUDA device
CUDA version is insufficient for CUDART version.
Maybe. It's not quite as trivial as you make it sound though. You'd need a good mechanism to keep the list up-to-date, and note that we match against the GL_RENDERER string returned by the OpenGL driver. That isn't necessarily the same as the card's product name, and may differ between different drivers for the same card. (E.g. Mesa r300 vs. Mesa r300g vs. fglrx vs. OS X).
Looking through the whole directx.c file, I see what you mean. I'm not really sure there is any better way to do it than what is already done. It seems like it should be made more dynamic, so that everyone doesn't have to hardcode all the different graphics cards. Instead of having the driver_version_table hardcode everything, why not just add the driver versions to the existing select_card_nvidia/ati/intel_binary? The init_driver_info function takes the vendor name and the renderer name and parses the table to get the driver version. To me the table seems like more work than we need.
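Roughly, the current mechanism is a static table that init_driver_info() walks; the struct layout and the entry below are a made-up illustration of the shape, not lines copied from directx.c (the 15/11/9745 figures are the ones reused in the pseudocode below):

    struct driver_version_information
    {
        unsigned short vendor;      /* PCI vendor id, e.g. 0x10de for NVIDIA */
        unsigned short card;        /* PCI device id */
        const char *description;    /* card name reported to applications */
        unsigned short d3d_level;   /* driver version fields */
        unsigned short lopart, hipart;
    };

    static const struct driver_version_information driver_version_table[] =
    {
        /* hypothetical entry for illustration */
        {0x10de, 0x0421, "NVIDIA GeForce 8500 GT", 15, 11, 9745},
    };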
What I propose is something like this:
    case HW_VENDOR_NVIDIA:
        if (strstr(gl_renderer, "card lower than 6100"))
        {
            <insert hardcoded description, video memory and appropriate driver versions>
        }
        else
        {
            description = "NVIDIA GeForce " + gl_renderer;
            driver_version_table[] = {15, 11, 9745};
            if (GL_SUPPORT(NVX_gpu_memory_info))
                glGetIntegerv(0x9048, &vidmem);
            else
                <use hardcoded video memories already available>
        }
That is just quick pseudocode for what I am thinking. The same can be used for ATI and Intel. The GL_ATI_meminfo extension has been around longer than the Nvidia extension, so it should have an even greater presence (it has been shipping since Catalyst 9.2). Anyone who has a 3xxx or newer from ATI or a 4xx from Nvidia is guaranteed to have the extension for detecting video memory. I know those are small numbers, but there is also a large user base with updated drivers. The old values will have to be hardcoded like they are now, and even new cards will have to be hardcoded in case they are not running the proprietary drivers. However, for immediate support, having this dynamic option will work well.
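A slightly fuller sketch of the dynamic query with a hardcoded fallback, assuming a current GL context; the token values come from the published GL_NVX_gpu_memory_info and GL_ATI_meminfo specs (both report kilobytes), everything else is illustrative:

    #include <GL/gl.h>

    #define GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX 0x9048
    #define TEXTURE_FREE_MEMORY_ATI                    0x87FC

    static unsigned int query_vidmem_kb(int has_nvx, int has_ati,
                                        unsigned int hardcoded_kb)
    {
        GLint info[4] = {0};

        if (has_nvx)
        {
            glGetIntegerv(GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, info);
            return info[0];
        }
        if (has_ati)
        {
            /* GL_ATI_meminfo reports *free* memory (four values), not the
             * total, so this can only approximate the card's size. */
            glGetIntegerv(TEXTURE_FREE_MEMORY_ATI, info);
            return info[0];
        }
        return hardcoded_kb; /* fall back to the existing hardcoded values */
    }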
Overall, with this, everything will be returned the way it is supposed to be, as it is now. There is no loss of functionality, and while, yes, this new code will be added and a lot of the hardcoded values will remain, it at least provides some level of automatic determination of these values, and support for newer cards or cards that have not been hardcoded in.
What are your thoughts?
-Seth Shelnutt
On 11 July 2010 11:30, Seth Shelnutt shelnutt2@gmail.com wrote:
Looking through the whole directx.c file, I see what you mean. I'm not really sure there is any better way to do it than what is already done. It seems like it should be made more dynamic, so that everyone doesn't have to hardcode all the different graphics cards.
The way I understand the card detection stuff is that Wine "detects" card *capabilities* (as opposed to specific hardware variants) and reports a compatible card. If your 8500 is CUDA-compatible and that is not being detected/used, then this is an issue with the way Wine selects which card to report.
There are far too many individual GPUs to be able to include a list of every (or nearly every) one in Wine. A dynamic model that reports whatever the GPU "says" it is would, IIRC, be unsatisfactory for most brands of GPU, as the information is not detailed enough or is simply unreliable (I think Intel is particularly bad), and it would make Mesa support virtually impossible.
This is just stuff I've gathered from reading wine-devel though; a more experienced dev might be able to give you a clearer picture.
On 11 July 2010 11:30, Seth Shelnutt shelnutt2@gmail.com wrote:
The GL_ATI_meminfo extension has been around longer than the Nvidia extension, so it should have an even greater presence (it has been shipping since Catalyst 9.2). Anyone who has a 3xxx or newer from ATI or a 4xx from Nvidia is guaranteed to have the extension for detecting video memory.
Just re-read this. What? Are you suggesting that Wine should only support the latest video cards? I'm not sure there was a Radeon 3000 before the X### series, and Nvidia's only 4## series before GT1## etc. was the GeForce4 MX 440 etc.
even new cards will have to be hardcoded in case they are not running the proprietary drivers.
This is a good reason for keeping the system the way it is, isn't it? How many different AMD/ATI drivers are there now that have 3D support and support the same subsets of cards?
On Sat, Jul 10, 2010 at 6:30 PM, Seth Shelnutt shelnutt2@gmail.com wrote:
On Sat, Jul 10, 2010 at 3:57 PM, Henri Verbeet hverbeet@gmail.com wrote:
What kind of issue? (Just curious.)
Well, an 8300 isn't CUDA-capable, and even with the forcegpu nvidia_g80 flag, Wine still makes the client think it's an 8300, not an 8500. So then fah errors out with:
Initializing Nvidia gpu library
Error setting CUDA device
CUDA version is insufficient for CUDART version.
So when you select an 8500, the CUDA library accepts the card? If that's the case, apparently the CUDA library calls Direct3D for GPU detection (I would find this a little strange, though, since it can directly access the display driver).
According to http://www.nvidia.com/object/cuda_gpus.html, all GeForce 8 GPUs support CUDA if the card ships with 256MB or more. You could test this by increasing the amount of video memory in the registry or in directx.c. If you want to fix Wine for this case, a short-term solution for this specific case is to separate the 8500 from the 8300/8400: give it its own PCI id and set the amount of video memory to 256MB (there are cards with more than 256MB, but 256MB is the minimum according to Wikipedia).
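Roughly the shape of that short-term change, with made-up identifiers (the enum name is hypothetical, and 0x0421 is the PCI device id commonly listed for the 8500 GT, worth double-checking against the tree):

    /* In the NVIDIA card selection: stop folding the 8500 into the
     * 8300/8400 entry, and report it with its own PCI id and with the
     * 256MB minimum shipping configuration. */
    if (strstr(gl_renderer, "8500"))
    {
        *vidmem = 256; /* MB; minimum shipped configuration per Wikipedia */
        return CARD_NVIDIA_GEFORCE_8500GT; /* hypothetical, maps to 0x0421 */
    }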
Maybe. It's not quite as trivial as you make it sound though. You'd need a good mechanism to keep the list up-to-date, and note that we match against the GL_RENDERER string returned by the OpenGL driver. That isn't necessarily the same as the card's product name, and may differ between different drivers for the same card. (E.g. Mesa r300 vs. Mesa r300g vs. fglrx vs. OS X).
Looking through the whole directx.c file, I see what you mean. I'm not really sure there is any better way to do it than what is already done. It seems like it should be made more dynamic, so that everyone doesn't have to hardcode all the different graphics cards. Instead of having the driver_version_table hardcode everything, why not just add the driver versions to the existing select_card_nvidia/ati/intel_binary? The init_driver_info function takes the vendor name and the renderer name and parses the table to get the driver version. To me the table seems like more work than we need.
What I propose is something like this:
    case HW_VENDOR_NVIDIA:
        if (strstr(gl_renderer, "card lower than 6100"))
        {
            <insert hardcoded description, video memory and appropriate driver versions>
        }
        else
        {
            description = "NVIDIA GeForce " + gl_renderer;
            driver_version_table[] = {15, 11, 9745};
            if (GL_SUPPORT(NVX_gpu_memory_info))
                glGetIntegerv(0x9048, &vidmem);
            else
                <use hardcoded video memories already available>
        }
The issue is more complicated than that. We also need the PCI id, and that's one of the reasons why the code is the way it is right now. GL_NVX_gpu_memory_info / GL_ATI_meminfo only provide the amount of video memory. In order to retrieve the PCI id you would need NV-CONTROL or ATIFGLEXTENSION (the latter is undocumented these days). We should likely fake the cards and use the GL memory_info extensions in the future once they are more mature. For instance, GL_NVX_gpu_memory_info is still experimental (though it worked fine for the cases I needed it for).
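In other words, something like the following split: the device id stays hardcoded per detected card, while the memory amount can come from the memory_info extensions when they are available (the struct and the ids here are illustrative, not Wine's actual types):

    struct gpu_description
    {
        unsigned short vendor_id;   /* e.g. 0x10de for NVIDIA */
        unsigned short device_id;   /* still from a hardcoded table */
        unsigned int vidmem_kb;     /* queried at runtime when possible */
    };

    static void describe_gpu(struct gpu_description *desc,
                             unsigned int queried_kb, unsigned int fallback_kb)
    {
        desc->vendor_id = 0x10de;
        desc->device_id = 0x0421;   /* hypothetical faked 8500 GT */
        desc->vidmem_kb = queried_kb ? queried_kb : fallback_kb;
    }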
Roderick
On 07/10/2010 09:27 PM, Roderick Colenbrander wrote:
The issue is more complicated than that. We also need the PCI id, and that's one of the reasons why the code is the way it is right now. GL_NVX_gpu_memory_info / GL_ATI_meminfo only provide the amount of video memory. In order to retrieve the PCI id you would need NV-CONTROL or ATIFGLEXTENSION (the latter is undocumented these days). We should likely fake the cards and use the GL memory_info extensions in the future once they are more mature. For instance, GL_NVX_gpu_memory_info is still experimental (though it worked fine for the cases I needed it for).
Have we made a coherent attempt to document which GL extensions we care about or are having trouble with, particularly ones that aren't yet part of the OpenGL spec?
If the powers that be knew this was an issue for us, it might become easier. We should at least throw it up on a wiki page as a want list - stuff on http://wiki.winehq.org/FromOtherProjects has a tendency to happen, eventually.
Thanks, Scott Ritchie