On Thu, Nov 4, 2021 at 5:49 PM Henri Verbeet <hverbeet@gmail.com> wrote:
On Thu, 4 Nov 2021 at 17:31, Zebediah Figura <zfigura@codeweavers.com> wrote:
On 11/4/21 11:19 AM, Matteo Bruni wrote:
That was more or less the idea, although the whole mechanism was a bit different (and GL poses more constraints). Specifically, we can't / don't want to make GL calls from non-CS threads, but at the same time we want to be able to create new BOs for DISCARD maps (either because we use a separate BO for each wined3d buffer DISCARD map, or because we're suballocating and there is no free space and we want to trigger the allocation of a new BO to suballocate from - basically the same as the non-slab case of wined3d_context_vk_create_bo()). So yeah, I don't think you have to care about that case here in the VK callback, and it's probably nicer to do what Henri suggested, i.e. go explicitly through the CS for a "slow" alloc, since that way the fallback is in generic code.
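(For illustration, a minimal sketch of what the "slow alloc through the CS" path could look like; all of the names below are hypothetical simplifications rather than actual wined3d identifiers, and malloc() stands in for the real BO allocation.)

#include <stddef.h>
#include <stdlib.h>

struct client_bo_pool
{
    void *map_ptr;   /* persistently mapped upload memory, if any */
    size_t size;
};

/* Runs on the CS thread; in wined3d this is where the GL/VK BO would be
 * created and mapped.  malloc() just stands in for that here. */
static void cs_op_create_bo(struct client_bo_pool *pool, size_t size)
{
    pool->map_ptr = malloc(size);
    pool->size = pool->map_ptr ? size : 0;
}

/* In wined3d this would queue the operation on the CS thread and wait for
 * it to be executed; calling it directly keeps the sketch self-contained. */
static void cs_run_blocking(void (*op)(struct client_bo_pool *, size_t),
                            struct client_bo_pool *pool, size_t size)
{
    op(pool, size);
}

void *client_map_discard(struct client_bo_pool *pool, size_t size)
{
    /* Fast path: the client thread already has accessible memory to return. */
    if (pool->map_ptr && pool->size >= size)
        return pool->map_ptr;

    /* Slow path: go through the CS to allocate a new BO, so GL calls stay
     * on the CS thread and the fallback lives in generic code. */
    cs_run_blocking(cs_op_create_bo, pool, size);
    return pool->map_ptr;
}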
Assuming I understood the whole thing correctly, I'm not up to speed with this as much as I'd like...
For Vulkan I think it doesn't matter, since we can just map from the client thread. For GL we have to map from the CS thread, but if we just map the old resource via wined3d_resource_map(), we'll use &wined3d_buffer_gl.bo instead of allocating new memory, which means the client will continue to have no accessible memory to return for a discard map. Repeat ad infinitum.
Allocating sysmem would help, but my understanding is that expanding the available GPU memory pool would be better.
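(Again purely illustrative and with made-up names, a rough sketch of the two options being weighed here: the sysmem path keeps the client unblocked but still needs a later copy into the BO on the CS thread, whereas creating another mapped BO expands the GPU-visible memory the client can write into directly.)

#include <stddef.h>
#include <stdlib.h>

struct discard_alloc
{
    void *ptr;       /* memory the client thread can write into */
    size_t size;
    int is_sysmem;   /* nonzero: plain system memory that still has to be
                      * copied into the BO on the CS thread later */
};

/* Preferred option: create and persistently map another BO, expanding the
 * GPU-visible memory pool.  In wined3d the GL calls (glBufferStorage() /
 * glMapBufferRange()) would happen on the CS thread; malloc() stands in here. */
static int discard_alloc_mapped_bo(struct discard_alloc *alloc, size_t size)
{
    if (!(alloc->ptr = malloc(size)))
        return 0;
    alloc->size = size;
    alloc->is_sysmem = 0;
    return 1;
}

/* Fallback option: plain sysmem.  The client gets writable memory right
 * away, at the cost of an extra copy into the BO later. */
static int discard_alloc_sysmem(struct discard_alloc *alloc, size_t size)
{
    if (!(alloc->ptr = malloc(size)))
        return 0;
    alloc->size = size;
    alloc->is_sysmem = 1;
    return 1;
}

int discard_alloc(struct discard_alloc *alloc, size_t size)
{
    return discard_alloc_mapped_bo(alloc, size)
            || discard_alloc_sysmem(alloc, size);
}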
Yeah. My idea for that, although I never worked it out all the way, would be to keep a mapped bo for uploads on the application side of the CS. Then, if that runs low, we'd send a request through the CS to allocate more, but without waiting for that to complete. Ideally that request would have completed before the original upload space actually runs out. If we did run out though, we'd use a CPU allocation to avoid stalling those requests. I.e., the basic premise is that we'd like to avoid stalling even for those requests.
Yeah, that sounds great in principle.
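(A rough sketch of the scheme Henri outlines, with entirely hypothetical names and the CS request reduced to a flag; the point is just the control flow: request more memory asynchronously when the pool runs low, and fall back to a CPU allocation rather than wait if the pool runs out before the request completes.)

#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

#define POOL_LOW_WATERMARK (64 * 1024)

struct upload_pool
{
    void *map_ptr;        /* persistently mapped BO memory, client-visible */
    size_t size, used;
    bool grow_requested;  /* a grow request is already in flight on the CS */
};

/* Hypothetical: queue a request on the CS thread to allocate and map a new
 * BO.  It returns immediately; the client thread never waits for it. */
static void cs_request_pool_grow(struct upload_pool *pool, size_t size)
{
    (void)size;
    pool->grow_requested = true;
}

/* Allocate upload space on the client thread without ever stalling. */
void *upload_pool_alloc(struct upload_pool *pool, size_t size)
{
    size_t remaining = pool->size - pool->used;

    /* Running low: ask the CS for more memory now, so that it's hopefully
     * ready before we actually run out. */
    if (remaining < size + POOL_LOW_WATERMARK && !pool->grow_requested)
    {
        size_t new_size = pool->size * 2;

        if (new_size < size + POOL_LOW_WATERMARK)
            new_size = size + POOL_LOW_WATERMARK;
        cs_request_pool_grow(pool, new_size);
    }

    if (remaining >= size)
    {
        void *ptr = (char *)pool->map_ptr + pool->used;

        pool->used += size;
        return ptr;
    }

    /* We did run out before the grow request completed: fall back to a CPU
     * allocation instead of waiting, so this path doesn't stall either. */
    return malloc(size);
}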