On 29 Jul 2021, at 16:24, Henri Verbeet <hverbeet@gmail.com> wrote:
> On Thu, 29 Jul 2021 at 13:52, Jan Sikorski <jsikorski@codeweavers.com> wrote:
>> @@ -1410,12 +1410,12 @@ static BOOL wined3d_buffer_vk_create_buffer_object(struct wined3d_buffer_vk *buf
>>          FIXME("Ignoring some bind flags %#x.\n", bind_flags);
>>
>>      memory_type = 0;
>> -    if (!(resource->usage & WINED3DUSAGE_DYNAMIC))
>> -        memory_type |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
>> -    else if (resource->access & WINED3D_RESOURCE_ACCESS_MAP_R)
>> +    if (resource->access & WINED3D_RESOURCE_ACCESS_MAP_R)
>>          memory_type |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
>>      else if (resource->access & WINED3D_RESOURCE_ACCESS_MAP_W)
>>          memory_type |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
>> +    else if (!(resource->usage & WINED3DUSAGE_DYNAMIC))
>> +        memory_type |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
> If I understand correctly, we'd get here for DEFAULT resources with CPU read/write access. I wonder if there would be any advantage in using DEVICE_LOCAL | HOST_VISIBLE on GPUs that do in fact support that memory type, perhaps using a scheme similar to vkd3d's vkd3d_select_memory_type().
We could just try to allocate it and retry without DEVICE_LOCAL if that fails. That would also cover the case where the type is supported but its heap is out of space.
- Jan