On 5/24/2019 12:39 AM, Józef Kucia wrote:
On Wed, May 22, 2019 at 11:17 AM Zhiyi Zhang <zzhang@codeweavers.com> wrote:
+static BOOL xinerama_get_gpus( struct x11drv_gpu **new_gpus, int *count )
+{
+    static const WCHAR wine_gpuW[] = {'W','i','n','e',' ','G','P','U',0};
+    struct x11drv_gpu *gpus;
+
+    /* Xinerama has no support for GPU, faking one */
+    gpus = heap_calloc( 1, sizeof(*gpus) );
+    if (!gpus)
+        return FALSE;
+
+    strcpyW( gpus[0].name, wine_gpuW );
+
+    *new_gpus = gpus;
+    *count = 1;
+
+    return TRUE;
+}
Is it really required to create fake GPU data? Do you see a path forward to improve this? In order to match the Windows behavior, the GPU name should be consistent with the name returned by other APIs, i.e. OpenGL, Vulkan and Direct3D. Ideally, the GPU name should use the GPU database from wined3d. In the long term, it might be necessary to move the GPU database outside wined3d.
Yes. This GPU data is needed to initialize the PCI GPU registry keys, and each adapter registry key links to a GPU registry key.
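
Roughly, the link looks like the sketch below. This is just an illustration, not the actual patch code: the paths, value names, GUID and helper name are simplified examples I made up, and error handling is omitted.

#include <windows.h>

static void create_gpu_and_adapter_keys( const WCHAR *gpu_name )
{
    static const WCHAR gpu_idW[] = L"PCI\\VEN_0000&DEV_0000";
    HKEY gpu_key, adapter_key;

    /* GPU key under the PCI enumerator tree, carrying the GPU name. */
    RegCreateKeyExW( HKEY_LOCAL_MACHINE,
                     L"System\\CurrentControlSet\\Enum\\PCI\\VEN_0000&DEV_0000",
                     0, NULL, 0, KEY_ALL_ACCESS, NULL, &gpu_key, NULL );
    RegSetValueExW( gpu_key, L"DeviceDesc", 0, REG_SZ, (const BYTE *)gpu_name,
                    (lstrlenW( gpu_name ) + 1) * sizeof(WCHAR) );

    /* Adapter key that links back to the GPU key via an id value. */
    RegCreateKeyExW( HKEY_LOCAL_MACHINE,
                     L"System\\CurrentControlSet\\Control\\Video\\{00000000-0000-0000-0000-000000000000}\\0000",
                     0, NULL, 0, KEY_ALL_ACCESS, NULL, &adapter_key, NULL );
    RegSetValueExW( adapter_key, L"GPUID", 0, REG_SZ, (const BYTE *)gpu_idW,
                    sizeof(gpu_idW) );

    RegCloseKey( adapter_key );
    RegCloseKey( gpu_key );
}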
In the future, we can use the GPU database and an OpenGL context to guess the primary GPU name for Xinerama. XRandR should report GPU names correctly, although I am not sure they will always be consistent with the other APIs. Also, Xinerama will only be used in desktop mode or when XRandR 1.4 is unavailable.
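
The OpenGL side of that guess would be something like the sketch below. The helper name is made up, and it assumes a GL context is already current; the returned renderer string would then be matched against the GPU database.

#include <stdio.h>
#include <GL/gl.h>

static void print_gl_renderer(void)
{
    /* glGetString() returns NULL when no context is current. */
    const GLubyte *vendor   = glGetString( GL_VENDOR );
    const GLubyte *renderer = glGetString( GL_RENDERER );

    if (vendor && renderer)
        printf( "%s / %s\n", vendor, renderer );
}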
To reach the ideal state where all GPU names are consistent, we run into the same problem as with implementing LUID support: there is currently no way to identify a GPU across the different APIs. Anyway, this can't be worse than the current implementation.
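
For what it's worth, one candidate for such an identifier is the deviceUUID that Vulkan 1.1 exposes through VkPhysicalDeviceIDProperties, which I believe GL_EXT_memory_object also exposes on the OpenGL side; but Xinerama itself reports nothing comparable. A rough sketch of the Vulkan query, illustration only and not part of this patch:

#include <string.h>
#include <vulkan/vulkan.h>

static void get_device_uuid( VkPhysicalDevice gpu, uint8_t uuid[VK_UUID_SIZE] )
{
    VkPhysicalDeviceIDProperties id_props = {0};
    VkPhysicalDeviceProperties2 props2 = {0};

    id_props.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES;
    props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
    props2.pNext = &id_props;

    /* Requires Vulkan 1.1; the UUID is stable for a given device and
     * driver, so it could be used to match GPUs across APIs. */
    vkGetPhysicalDeviceProperties2( gpu, &props2 );
    memcpy( uuid, id_props.deviceUUID, VK_UUID_SIZE );
}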