On Sat, Jul 10, 2010 at 6:30 PM, Seth Shelnutt <shelnutt2@gmail.com> wrote:
On Sat, Jul 10, 2010 at 3:57 PM, Henri Verbeet <hverbeet@gmail.com> wrote:
What kind of issue? (Just curious.)
Well, an 8300 isn't CUDA-capable, and even with the forcegpu nvidia_g80 flag, Wine still makes the client think it's an 8300, not an 8500. So then FAH errors out with:
Initializing Nvidia gpu library
Error setting CUDA device
CUDA version is insufficient for CUDART version.
So when you select an 8500, the CUDA library accepts the card? If that's the case, the CUDA library apparently calls Direct3D for GPU detection (I would find this a little strange, though, since it can access the display driver directly).
According to http://www.nvidia.com/object/cuda_gpus.html, all GeForce 8 GPUs support CUDA as long as the card ships with 256MB of video memory or more. You could test this by increasing the amount of video memory in the registry or in directx.c. If you want to fix Wine for this case, a short-term solution is to separate the 8500 from the 8300/8400: give it its own PCI id and set the amount of video memory to 256MB (there are cards with more than 256MB, but 256MB is the minimum according to Wikipedia).
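As a rough illustration, a split along those lines in select_card_nvidia_binary() could look like the sketch below. The CARD_NVIDIA_GEFORCE_8500GT constant, the exact strings matched and the 8300/8400 memory size are assumptions on my part rather than the current wined3d code, and the new constant would still need a matching PCI device id in the card description table.

/* Sketch only: give the 8500 its own entry instead of letting it fall
 * through to the 8300/8400 case, so it reports 256MB and its own PCI id. */
if (strstr(gl_renderer, "8500"))
{
    *vidmem = 256; /* 256MB is the minimum an 8500 ships with */
    return CARD_NVIDIA_GEFORCE_8500GT; /* assumed new enum value */
}
if (strstr(gl_renderer, "8400") || strstr(gl_renderer, "8300"))
{
    *vidmem = 128;
    return CARD_NVIDIA_GEFORCE_8300GS;
}

For a quick test without rebuilding Wine, the VideoMemorySize string value under HKEY_CURRENT_USER\Software\Wine\Direct3D should also let you override the detected amount.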
Maybe. It's not quite as trivial as you make it sound though. You'd need a good mechanism to keep the list up-to-date, and note that we match against the GL_RENDERER string returned by the OpenGL driver. That isn't necessarily the same as the card's product name, and may differ between different drivers for the same card. (E.g. Mesa r300 vs. Mesa r300g vs. fglrx vs. OS X).
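To make that concrete, detection ultimately comes down to substring matches on whatever the driver reports. The renderer strings in the comment below are only illustrative of the kind of variation you can get for one and the same card, not a verified list.

const char *gl_renderer = (const char *)glGetString(GL_RENDERER);
/* The same card can show up as e.g. "Mesa DRI R300 (RV515 ...)" on the
 * classic Mesa driver, something like "Gallium 0.4 on ATI RV515" on r300g,
 * or a marketing name such as "ATI Radeon X1300 Series" on fglrx, so a
 * single pattern rarely covers every driver. */
if (strstr(gl_renderer, "RV515") || strstr(gl_renderer, "X1300"))
    /* handle this card */;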
Looking through the whole directx.c file, I see what you mean. I'm not really sure there is a better way to do it than what is already done. It seems like it should be made more dynamic, so that everyone doesn't have to hardcode all the different graphics cards. Instead of having the driver_version_table hardcode everything, why not just add the driver versions to the existing select_card_nvidia/ati/intel_binary functions? The init_driver_info function takes the vendor name and the renderer name and parses the table to get the driver version. To me the table seems like more work than we need.
What I propose is something like:

case HW_VENDOR_NVIDIA:
    if (strstr(gl_renderer, "<card lower than a 6100>"))
    {
        /* insert hardcoded description, video memory and appropriate driver versions */
    }
    else
    {
        snprintf(description, sizeof(description), "NVIDIA GeForce %s", gl_renderer);
        /* driver version table entry: {15, 11, 9745} */
        if (GL_SUPPORT(NVX_gpu_memory_info))
            glGetIntegerv(0x9048 /* GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX */, &vidmem);
        else
        {
            /* use the hardcoded video memory amounts already available */
        }
    }
    break;
The issue is more complicated than that. We also need the PCI id, and that's one of the reasons why the code is the way it is right now. GL_NVX_gpu_memory_info / GL_ATI_meminfo only provide the amount of video memory. In order to retrieve the PCI id you would need NV-CONTROL or ATIFGLEXTENSION (the latter is undocumented these days). We should likely keep faking the cards and use the GL memory_info extensions in the future once they are more mature. For instance GL_NVX_gpu_memory_info is still experimental (it worked fine for the cases where I needed it).
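For reference, a rough sketch of what querying those two extensions looks like. The token values are from the public extension specs, the helper itself is mine rather than wined3d code, and note that GL_ATI_meminfo reports free memory per pool (in KB) rather than a total.

#include <string.h>
#include <GL/gl.h>

/* Token values from the GL_NVX_gpu_memory_info and GL_ATI_meminfo specs. */
#define GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX 0x9048
#define TEXTURE_FREE_MEMORY_ATI                    0x87FC

/* Rough estimate of video memory in KB, or 0 if neither extension is present.
 * Neither extension tells us the PCI id of the card. */
static GLint query_vidmem_kb(const char *gl_extensions)
{
    GLint info[4] = {0, 0, 0, 0};

    if (strstr(gl_extensions, "GL_NVX_gpu_memory_info"))
        glGetIntegerv(GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, info); /* total, KB */
    else if (strstr(gl_extensions, "GL_ATI_meminfo"))
        glGetIntegerv(TEXTURE_FREE_MEMORY_ATI, info); /* currently free, not total, KB */

    return info[0];
}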
Roderick